March 6, 2024 5:26 pm

admin2

Do you want to learn AI image generation but don't know where to start? Today we will take you step by step, from the very basics, through the most popular AI drawing bot, Midjourney. This is the second episode of our in-depth AI tutorial series: we have already covered how to use ChatGPT, so today let's talk about Midjourney. These two bots really complement each other; one handles text, the other handles images.

Besides introducing Midjourney, in the second half we will also bring in ChatGPT. Used together, these two AIs spark a lot of ideas and cover a wide range of uses. So today we will look at what Midjourney is, how to use it, and what its main features are. Let's briefly run through a few of the bigger commercial uses: graphic design, game art, film or animation concept art, and realistic photography. For commercial design work such as logos, website design, and packaging design, Midjourney works well for producing a first draft. You may notice that some of the pictures on screen contain something that looks very much like English words.

It is hard to say what language they are supposed to be, and that is because Midjourney cannot generate meaningful text. It can digest the text commands we send it, but it cannot produce written content the way ChatGPT does. This part is easy to work around, though: just touch up the lettering afterwards with a third-party app or Photoshop. Next, the game art side: many game studios are now using AI to generate game concept art.

What you see on screen are examples such as character designs and game environment designs. If you also use ChatGPT to write the game code, the whole process of building a game really speeds up. In addition, Midjourney can generate animation-style or film-style images; for example, the pictures on screen were generated with nothing more than a short "movie scene" keyword.

Of course, some of those pictures also mix in other elements to get that effect. You can also use Midjourney to generate creative images, such as Victorians scrolling their phones by the roadside, or monkeys making a mess of an office: fairly dramatic scenes. So from now on, even if you have no artistic talent, you can still draw the ideas in your head with a few simple instructions. Next we will show you, starting from the most basic operations, how to use Midjourney to generate pictures, how to give instructions to the AI bot, the required format, and what we need to type.

We will also cover some very important Midjourney settings, such as the creation mode and the image aspect ratio; these are key elements. And at the end we will show how to combine it with ChatGPT to make Midjourney's output even better. First, a word about Midjourney's AI model: what it actually does is convert the text we type into pictures. The whole process feels less like drawing and more like entering keywords or writing a small program. The Prompt field you see on screen contains the keywords I typed; Midjourney takes whatever is written in that field and turns it into a picture. For example, the prompt I gave here is monkeys dancing in the office, and the 1980 after it is the year I specified.

After sending that, it produces the picture below. Let's take another example: in the Prompt field I write Victorians using smartphones, and it gives me the following picture. Of course, the AI cannot instantly turn a few simple words into the perfect image we have in mind, so besides the quick basics, today we will also teach you some more advanced Midjourney techniques. Next, let's look at the official Midjourney website. Like ChatGPT, it is just a website: as long as your phone or computer has a web browser, you can use it directly. The Midjourney homepage links to two very important places. The first is the generation area where you actually create with the AI, which is the button on the right; the second is the showcase area on the left.

Any pictures you generate with the AI are also displayed there. After clicking in, the grid on the left is the showcase area from the homepage. You can browse your own works and other people's works and enjoy everyone's results side by side; it is a bit like a gallery. As for the generation area on the right side of the homepage, clicking it takes you into a Discord community like this. If you don't know what Discord is, here is a quick explanation: it is a platform that combines chat software and forums. One of Discord's biggest features is that it hosts a huge number of bots. Everyone can interact with these bots on the platform, and you can even build your own.

The Midjourney AI we are covering today is in fact one of those Discord bots. Here is some bad news: Midjourney has currently suspended its free trial. According to the official statement this is because the trial was being abused, and they have not said when it will resume. We will go into Midjourney's pricing in more detail later.

Now back to actually using Midjourney. When we join the Discord server, we see a column of channels on the left; this is called the channel list. Back when there was a free trial, free users could only create in the channels whose names start with newbies. Normally you will sit in one of these channels, and it keeps scrolling because everyone is creating in the same place. With the paid version you can have your own space instead of being crowded in with everyone else; we will cover how to set up a private space later.

To start drawing, go to the input field at the bottom and talk to the bot. To address the Midjourney bot, type a slash in the input field and a menu like this pops up; this menu lists the bot's functions. The first option is /imagine, which is exactly the AI drawing function we want. You can click it directly, or press Enter on your keyboard, and it is pulled into the input field. Once it is there, we can start typing what we want to create in the space after the word prompt.
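To make this concrete, here is what a finished command in the Discord input box can look like, using the monkey example from earlier typed out in full (treat it as an illustration):

/imagine prompt: monkeys dancing in the office, 1980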

What can we type there? That is a huge field of knowledge in itself; there is even a profession now called prompt engineer, because you have to start understanding how the AI digests text, and which words come first and which come last really does affect the final result. So let's look at the general structure of a prompt and go through it piece by piece. First, the subject and its background must come first. Suppose I want to design a new Disney princess: I would describe her appearance, and beyond the appearance, what she is doing and the scene and environment she is in. You will notice everything here is written in English, because Midjourney cannot actually understand Chinese input. Later we will mention Midjourney's "twin", which does understand Chinese but is used in exactly the same way, so just keep this in mind for now. The second part is style and medium, which describes the style of the painting or picture. You can directly fill in the names of one or more artists, manga artists, directors, or illustrators, such as Disney, Hayao Miyazaki, Van Gogh, or Junji Ito.

Or you can fill in broader style terms such as Impressionist painting or abstract art; those kinds of nouns work too. You can also specify the art medium. What does that mean? Whether you want a hand-drawn illustration, a graphic design, or a photo. The photo option is the most interesting part: you can even give it a camera model or a photographer's name as a reference. Finally there is color and light, or adjectives for the atmosphere, such as afternoon sun, cinematic lighting, or product lighting.

Atmosphere adjectives are things like nostalgic, 1980s, or futuristic. You can also name a specific color, but I think colors should be used with caution, because sometimes the specified color does not show up where you want it. And you do not have to fill in something for each of the three big categories above; they are just for reference. Let's walk through an example. Here I want to generate a new Disney princess, so I simply describe her appearance, set the style to a Spanish portrait, and at the end emphasize once more that I want the Disney Pixar style. Note that each prompt element must be separated by a comma and a space. I have not used any color or atmosphere adjectives here; it is just a simple demonstration. Besides the keywords, you will also see this -- at the end. These are some basic settings: --ar 2:3 sets the aspect ratio of the picture, where ar is short for Aspect Ratio. I will mention more settings of this kind later.
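Putting the three categories together, a full prompt in this structure might read something like the following (the appearance details are invented for illustration and are not the exact wording used in the video):

a new Disney princess, long dark hair, elegant red gown, Spanish portrait, Disney Pixar style --ar 2:3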

With the prompt written, let's test it and send out this Disney princess prompt; just press Enter. Generating a picture in Midjourney takes a little time, about a minute, and the progress percentage is shown on screen. It starts from a blur like the one on the left and slowly reveals your subject; say this is 10% progress and this is 30%. When it finishes, you get four pictures like this. At this point you may have a question: if you send the exact same prompt again, will it generate exactly the same pictures? The answer is no.

Even when it is sent exactly the same keywords, Midjourney will not generate the same pictures again; every batch of commands you send produces different results. Now let's look at the four pictures we just generated. You will notice their styles differ slightly, and below them there are eight buttons for performing actions on these four pictures. You will see the numbers 1 to 4, which correspond to pictures 1, 2, 3, and 4 above. Let's take a closer look at what U and V mean.

U stands for Upscale: clicking it upgrades one of pictures 1 to 4 into a complete, full-size image. If you like all four pictures, you can press U1 through U4 to upscale them all. The V buttons below stand for Variation: pressing one generates pictures very similar to the chosen one of the four, so the composition and colors will likely stay roughly the same.

Only details such as the face or an accessory may change. Let's demonstrate. Upscaling gives me a complete, full-size picture, which can be considered our final product; if you want to download it, click into it, right-click, and save it to your computer. Going back to the eight buttons: suppose I press V1 to generate variations of the first picture. What happens? After pressing V1, it generates four pictures that are very similar to the first one, but with slightly different faces or accessories. On the far right of the eight buttons there is a redo button. If you are not satisfied with the four pictures, click it to start over: it takes the exact same prompt and generates four completely new images.
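For reference, the buttons under a finished four-picture grid are laid out roughly like this (a sketch of the layout, not a screenshot):

U1  U2  U3  U4  (redo)
V1  V2  V3  V4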

That was a complete run through the basic Midjourney workflow. Usually we just keep repeating these steps until it produces something we are satisfied with. As mentioned, generation takes a little time, so you can also send the exact same prompt several times at the start (2-3 times is recommended) and let it run on its own. Now suppose we have finished our works for today; let's look at the gallery on the official website we mentioned earlier. All finished works are also placed in the official gallery. Inside it, one area holds your own works and another holds everyone else's. If you see someone else's picture in the gallery and think it is beautiful, you can hover the cursor over it to see which prompts they used, copy them, and generate pictures with the same prompts. Next, let's talk about some more advanced ways to optimize the images Midjourney generates.

Midjourney actually has a lot of settings. We won't go through all of them today, just a few of the more important ones. To bring up the settings interface, type a slash in the input field and the menu pops up; this time go to Settings. If you can't see Settings, type an S after the slash and it should appear. Click it to pull it into the input field and press Enter to send it.

A panel then pops up; this panel is Midjourney's settings interface. First let's look at the top two rows. These rows are the creation modes, and you can only choose one at a time. The ones we use most often are MJ version 4 and MJ version 5, abbreviated V4 and V5. Choosing one means using a different AI model to generate images. The relatively new V5 model is the one most people use now, and it is best at making ultra-realistic photos. Another thing worth mentioning: before Midjourney released the V5 model last month, it was actually not very good at drawing eyes and hands. After V5 came out, everyone found that its ability to draw eyes and fingers had improved a great deal. In my own use it still sometimes draws them a little strangely, but it is much better than before. V4, on the other hand, cannot make very realistic photos, but it is very good at creating a sense of atmosphere.

It has a bit more personality than V5. Here I want to add a note on how these settings work: whatever we adjust in the settings panel is automatically appended to the prompts we send. For example, after we clicked the MJ version 5 creation mode just now, the bot automatically adds --v 5 after our prompt whenever we send one. You can also see this in the settings panel itself, in the Suffix field shown on screen. For example, if I click the MJ version 4 button and Style low below it, those two options appear in the suffix automatically, and when I send a prompt they are appended after it in exactly that form.
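As an illustration, with MJ version 4 and Style low selected, the prompt as actually sent might end up looking something like this (the --s 50 value is my assumption of what Style low maps to, and the prompt wording is just the earlier princess example):

a new Disney princess, Spanish portrait, Disney Pixar style --ar 2:3 --v 4 --s 50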

The next row of the settings holds some other commonly used modes, for example the recently popular Niji mode. Niji is made specifically for drawing anime. Say we switch to Niji version 5 now and send the exact same Spanish Disney princess keywords: you will see that it automatically adds --niji 5 at the end, and the result turns into Japanese anime style. The effect is very good, so the creation mode has a big influence; when working on the same theme, you can switch between modes and try them out, and the results will differ quite a lot.
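So with Niji version 5 selected, sending the same princess keywords effectively becomes something like this (wording illustrative):

a new Disney princess, Spanish portrait, Disney Pixar style --ar 2:3 --niji 5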

Besides the creation mode, let's look at another setting below, called the Style (stylize) value. This value roughly controls how far the AI diverges creatively: the lower the Style value, the more conservatively it sticks to the prompt you gave when generating images. Let's look at an example, testing the Spanish Disney princess prompt with high and low Style values. When the Style value on the right is raised, it stops sticking to the Disney character and different styles start to appear, sometimes escaping the Disney look almost completely.

The background also seems to be an area where it improvises, because our original prompt said nothing about what kind of background we wanted, so on the high-Style-value side it fills that in freely. On the low-Style-value side on the left, you can see there is almost nothing in the background; it stays relatively simple, and the characters remain much closer to the Disney Pixar setting we gave it.
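As a rough sketch, the two ends of that comparison correspond to suffixes like these (the numbers are typical low and high stylize values, not necessarily the exact ones used here):

a new Disney princess, Spanish portrait, Disney Pixar style --s 50 (low Style value: stays close to the prompt)
a new Disney princess, Spanish portrait, Disney Pixar style --s 750 (high Style value: much freer interpretation)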

With that, the basic Midjourney tutorial is almost at an end, but you may still have one big question: where do you even start when writing prompts? For example, if you want to make something in a logo style, how do you approach it, and which words can you use? This is where we bring in another very powerful tool: ChatGPT. If you haven't watched the video where we introduced ChatGPT last episode, you can click the link in the upper-left corner to learn how it works. When we don't know which prompts to write, we can turn to ChatGPT to help us write high-quality prompts. Remember the role-playing technique? Next we will combine ChatGPT and Midjourney in two steps.

The first step is to have ChatGPT write Midjourney prompts, and the second is to throw those prompts into Midjourney to generate pictures. How does this work in practice? First, open the ChatGPT page. We are going to build a Midjourney prompt generator. Open a new chat and assign it a role: helping us write prompts that Midjourney can use. You can adjust the wording of this role-play freely; it doesn't have to match my script. The image here shows the first instruction I gave to this ChatGPT chat, so let's break down its structure. Because ChatGPT does not know how Midjourney works (they are two completely separate, independent AI bots), I first explain that I am using an image-generation AI called Midjourney, and that I want it to act as a machine that generates Midjourney prompts. In the next sentence I start to lay out my rules: whenever I put a slash in front of a topic, it has to generate a prompt that fits that theme, adapted to the situation. Then I wrote an example, assuming I input a slash followed by a picture of a shoe product.

The demonstration that follows is a long prompt I wrote that Midjourney could use, with the same structure as before: the subject, plus some atmosphere words; the style here is product photography; and at the end you can also add a camera model or a way of shooting. That list is just an example for ChatGPT. ChatGPT then replied that it is ready.
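A rough paraphrase of the kind of role-play instruction described above might read as follows (this is not the exact script from the video, only the same structure):

"I am using an image-generation AI called Midjourney. Act as a Midjourney prompt generator for me. Whenever I type a slash followed by a topic, reply with a prompt for that topic: subject first, then atmosphere words, then style and medium, and optionally a camera model or way of shooting. For example, if I type '/ a shoe product photo', reply with something like 'a single white sneaker on a reflective surface, soft studio lighting, product photography, shot with an 85mm lens'."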

From that point on, our creative workflow is to have ChatGPT generate text prompts and then paste them into the Midjourney page to generate pictures. The examples you will see next, whether web design or realistic photos, were all made with prompts produced by this generator. Now I can just use a slash plus a simple topic and let ChatGPT handle the rest. For example, I say (you can type this in Chinese) that I want to make a logo for a new company, with Apple as the theme. It then gave me five sets of prompts. If you find it only gives you one or three sets when you try this, you can simply ask it for five.

Then we can pick the combination we like best. For example, if I like the first set of prompts, I copy it and paste it into Midjourney to generate a picture. After sending it, you get something like the screen above. Did you notice the problem with the logo? Clearly Midjourney cannot grasp the idea that Apple is a company. So at this point we have to give feedback to ChatGPT and tell it that the picture I want is not a literal apple. ChatGPT apologized and then generated another five sets of prompts below that do not mention the word apple at all.

This time I copy one of those and paste it into Midjourney, and the literal apple no longer appears; the resulting logos look much more normal. So here are the finished logos, all made in V4 mode. Sometimes one mode simply happens to look better than the other for no obvious reason, so it is worth trying different styles. And as you have seen, whenever you run into a problem, just tell ChatGPT directly, the way I complained to it when Midjourney couldn't handle the word Apple. Now for the next example, let's look at web page design: pictures generated by Midjourney that can serve as templates for designing web pages, as art direction, and so on.

As before, we first discuss our theme with ChatGPT and have it provide some usable prompts. Continuing in the same ChatGPT chat where the role has already been set, we start with a slash and then add our theme. If you prefer, you can type the theme in Chinese and ask it to output in English; it just depends on what you are used to. The theme this time is web design. After testing it myself, I found that if the word "website" appears in a Midjourney prompt, there is a 99% chance the generated picture will show a real device, like the computer and phone pictures in the lower-left corner of the screen. But we don't want a picture of a device today; we only want the web page itself. I later found that you have to use the words "web page" so that pictures of physical devices do not appear.
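So once ChatGPT turns the theme into a Midjourney prompt, it might look roughly like this (the solar-system subject comes from this example; the rest of the wording and the aspect ratio are illustrative):

web page design for a solar system education site, clean modern layout, flat illustration style --ar 3:2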

So when describing my needs to ChatGPT, I first specified that I want to use the words "web page" when referring to the site. ChatGPT then gave me three sets of prompts, and I threw them all into Midjourney to try. You can also switch between V4 and V5 to see which works better, and pick the layout or style you like best. Suppose I like the one in the lower-right corner of the V5 results: I press U4 to get the full-size image, and with that the sketch of the solar system web page is complete. Now suppose we want to reuse this website sketch to make a mobile app version of the picture, in the same style but laid out for a phone.

How do we get Midjourney to reference the finished image we already have? Here we introduce another way of using Midjourney: adding the URL of a JPG or PNG image file at the front of your prompt, so that Midjourney uses that image as a reference when generating similar images. For this we need two ingredients: first, the URL of the web page sketch; second, the prompt for the app version of the page. So first we ask ChatGPT to generate the prompt for the app design. In fact, I simply pasted the original web-version prompt into my question as a reference and asked it to write a prompt with similar wording but for an app version. Then comes the second step.

We need the image URL of the web page sketch. Click on the image we just generated, and in the lower-left corner there is Open in browser. Click in, and once the picture is expanded to full size you can click its URL and copy it. Pay attention to the ending here: it must end in .png or .jpg. Once we have both ingredients, we go back to Midjourney to generate the picture. In the prompt field, first paste the URL of the reference image, then press space, then paste the app-version prompt copied from ChatGPT, and send the whole thing.
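Assembled, the command looks roughly like this (the URL and prompt wording are placeholders for illustration; the real URL is whatever you copied from Open in browser):

/imagine prompt: https://cdn.example.com/solar-system-webpage.png mobile app version of the solar system web page, same design language and color palette, phone layout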

Here are two app-version sketches whose final results I think turned out quite well. They share the same design language and color tones as the original web version, and pictures like these can actually be used at the design-proposal stage; they are pretty handy. On to the next example: generating realistic photos. The V5 mode performs best here; the pictures on screen, and the ones you will see next, were all generated with V5 together with ChatGPT. At the start, you can brief ChatGPT on what kind of photos you want to shoot, for example street photography or movie stills.

For the first one, let's make a movie still with a certain mood. First, ask the ChatGPT character for a shot featuring an old lady, and then add some keywords to embellish the mood and round it out. To keep the screen tidy I'm not showing everything here; otherwise there would be two different sets of prompts below. In the end we pick a set of prompts we are happy with and throw it into Midjourney, which generates the four pictures on the left. They are similar in style, and if you look closely they even seem to be the same person. Isn't it very cinematic? We can then upscale the one we like into the large picture on the right. Now suppose the character and scene stay the same and we switch directly to street photography. As before, we first use ChatGPT to generate a prompt. Note that here I ask it to freely choose a camera model and a photographer; the photographer is used as a style reference, so the result imitates that photographer's style.

That does not mean the picture was actually taken by that photographer; these are prompts ChatGPT produced freely. Because each set of prompts combines a different camera model and photographer, the results pasted into Midjourney differ quite a lot, and the compositions differ too. This approach of not pinning down the camera and photographer yourself gives you a lot of room to play, and it is not limited to cameras and photographers: for other kinds of pictures you can also let ChatGPT choose freely, for example film directors, illustrators, or cartoonists. We just did a solo portrait; now let's try a street photo of a group of Taiwanese teenagers. Again we ask ChatGPT to choose the camera and photographer freely, then I pick one set and post it to Midjourney to generate pictures. It produced the group of pictures on the left, which is quite interesting, though for some reason the clothes have a somewhat dated look.

So I went back to ChatGPT and asked it to adjust their clothing, that is, to put them in more fashionable outfits, and that gave the picture on the right, which looks more modern. The effect is also quite good, but there is one small drawback: in V5 mode the characters' gaze still sometimes drifts randomly. For example, in the picture on the right, the boy on the far left is looking who-knows-where, and if you look carefully the girl second from the right is not looking at the camera either. So Midjourney currently has this small shortcoming; just be aware of it. After people, let's look at natural landscape photos. You can specify the location yourself or just tell it what kind of natural landscape you want to see. One small regret here is that the resolution of the pictures Midjourney V5 generates is not great, but if you need to, you can run them through a third-party app to boost the resolution.

As for the landscape part, we first have ChatGPT polish our prompt, asking for a landscape photo of Taiwan with both mountains and sea. In Midjourney we then generated the same prompt with the two different models; see which one you prefer. The V4 result is very beautiful, but if you look closely it feels slightly fake.

It is as if the filter has been laid on too thick, or the picture has been over-edited; it feels more like something you would use as desktop wallpaper. The V5 one on the right, on the other hand, really looks like a real photo, as if it were taken at some hidden spot in Taiwan. (Correction: the last two pictures are both V5.) If these two pictures were posted online and you didn't tell me, I would genuinely believe a human took them.

Finally, let's try generating a natural landscape without specifying a location: I just use a very general phrase and tell it I want a natural landscape. Here you can see these four pictures are the V4 results; they are a bit more dramatic, but the effect honestly deserves full marks, as each picture feels like it has a story. The next four pictures were made with the V5 model and really do look like actual photographs. Of course, the prompts used for these four pictures are different. So with the combination of ChatGPT and Midjourney, the room for development is really huge.

The only real limit is our own creativity. With that, we have pretty much finished explaining the main ways to use Midjourney (there is still more to come). Let's review what we have covered so far. To generate pictures with Midjourney, we type text and let it convert the prompts we provide into pictures. Prompts are a bit like ingredients for cooking, and they fall into three types of content: the first is subject and background, the second is style and medium, and the third is color and atmosphere.

More of these ingredients is not necessarily better; you have to keep experimenting to find the combination that suits your taste. Some people advocate fewer prompt words to give Midjourney more room to improvise, while others prefer more prompt words to give the picture more detail, so it comes down to personal preference. Then there is the Midjourney settings interface we mentioned. The most important thing there is switching creation modes, since each mode is good at a different style; it depends on whether you want, say, an ultra-realistic look or an anime look, and you can switch between modes to see which works best. Finally, we also mentioned that you can pair it with ChatGPT to quickly polish your prompts, and you can even discuss with ChatGPT what final result you want Midjourney to present; that speeds up the whole image-generation process. Now let's talk about pricing, starting with the monthly plans. You can see three main plans on screen.

The first is 10 US dollars a month, roughly NT$300. After subscribing you can generate about 200 pictures a month. Here we should also explain how Midjourney counts the quota: it is actually based on the generation time you use, so it does not have to be exactly 200 pictures; if your pictures are relatively simple, with relatively small files, you can squeeze out more than 200.

So I suggest everyone start playing with this 10-dollar plan. The second point is that it allows commercial use: apart from the free trial, all Midjourney pictures can be used commercially. Next, the 30-dollar-a-month plan. Its biggest feature is that it does not limit how many pictures you can generate per month, but there is a proviso: the 15 hours of fast generation time. Midjourney has a special mechanism called Fast hours and Relax hours. Fast hours, of course, produce the pictures you need faster; once you have used up the 15 hours of fast generation time, you switch to Relax hours, and the pictures you want are generated more slowly. Still, 15 hours is actually quite a lot, and if it is not enough you can buy more. The last plan, Pro, costs 60 dollars. Its biggest difference is an incognito (stealth) feature: you can set the pictures you generate to private, so that only you can see your own works. Remember we said Midjourney's finished works all end up in the official gallery, and anyone with your user ID can actually find your work? With this incognito mode you can keep your work private.

If you generate relatively confidential content, you may well need such a feature; for example, if you use Midjourney at a game studio, you will definitely want it. Other benefits of the Pro plan: it comes with more Fast hours, a 30-hour quota, and you can purchase more if that is not enough. One more thing I didn't mention earlier: the first two plans can run at most three jobs at the same time, that is, generate three pictures simultaneously, while the Pro plan can run up to 12 jobs at once. So overall, Pro's biggest differences are the private mode, more Fast hours, and up to 12 concurrent jobs. Next, the annual plans: each plan is basically 20% off, but you pay for the whole year at once. Now, back to the private space we mentioned earlier: how do you create your own private creation space on Midjourney? The first step is to create a server.

There is a plus sign on the left side of Discord for creating your own server; under Join a server you can create one. After creating it, go back to Midjourney's server, click the member list in the upper-right corner, find Midjourney Bot, and use the Add to Server option there, choosing the server we just created. Once it is added, we can start generating the pictures we want directly in the input field of our own channel. The second way is to find Midjourney Bot directly and start talking to it in a direct message, which becomes a private DM conversation.

In that DM you can start generating pictures directly; that is the second method. Next, let's mention Midjourney's other twin, called niji journey. niji journey is an AI bot specializing in anime: type any text and it will generate Japanese anime-style pictures for you. Most importantly, it accepts Chinese input. As you can see on screen, I fed it the same Spanish Disney princess prompt and these four pictures appeared; the effect is very good, really like Disney Pixar. As for how it differs from Midjourney: after you search for niji journey online you land on this website, and as you scroll down it asks you to join another Discord community.

Joining works exactly the same way: it takes you into the server, where you can start generating pictures, or you can talk directly to the niji journey bot. The usage is almost identical; the biggest differences are that it can be used in Chinese and that the pictures it produces are in Japanese anime style. Finally, let's talk about free alternatives. If you really can't bring yourself to spend the 10 dollars to play with Midjourney, you can try other third-party websites. The usage of these two websites is much like Midjourney's: they each have a prompt field where you can enter the keywords and topic you want.

The third alternative is Bing Image Creator. On the Bing website you can find Image Creator, and likewise, after you enter a prompt it generates pictures for you; but after testing it myself, I find Bing's generation speed honestly a bit slow. Well, that is our complete Midjourney tutorial for today. I hope you all have fun; this software is addictive, because I really think the pictures it generates are gorgeous, every single one, and I want to try it in every field. Honestly, I find it even more fun to use than ChatGPT.

If there are any updates related to Midjourney or other useful AI tools, I will make a video and upload it to our channel, so if you don't want to miss it, remember to subscribe and hit the little bell to receive notifications. I also post the latest news and life-related content on Instagram; if you are interested, go follow us there. Well, that's about it for today's video. See you in the next one. Bye bye.
