Corridor Crew Workflow For Consistent Stable Diffusion Videos
Creating Consistent AI Videos with Stable Diffusion
In this video, the speaker discusses how to create consistent AI videos using Stable Diffusion. They explain their workflow and techniques for achieving a Corridor Crew look, and provide tips on choosing footage and removing backgrounds.
Using Green Screen
- Corridor Crew uses green screens to separate the subject from the background.
- The background can be removed without a green screen, but the results may not be as clean.
- The right footage depends on the technique used, such as face tracking or stabilization.
Training AI
- Corridor Crew trained their AI on images of themselves wearing the costumes they wanted to generate.
- They also trained on a specific style, and stabilized the head while zooming in to reduce noise changes between frames.
- Reverse stabilization was used in some of their videos but not all.
Post-processing
- After generating sequences, Corridor Crew removed the backgrounds and used deflickering tools such as those in DaVinci Resolve Studio.
- Renders of 3D models were used to generate backgrounds in the same style as the training data.
- Other effects, such as glow, were also added.
Conclusion
Stable Diffusion is a powerful tool for creating consistent AI videos. By following these techniques, users can achieve high-quality results similar to those produced by Corridor Crew.
Editing Software Options
In this section, the speaker discusses options for editing software, including DaVinci Resolve's free version.
DaVinci Resolve Free Version
- DaVinci Resolve has a free version that works as editing software.
- Check out the speaker's video on using it for some of the steps discussed here.
Choosing and Cutting Footage
In this section, the speaker explains how to choose and cut footage using Adobe Premiere or Adobe After Effects.
Choosing Footage
- Choose the section of footage you want to use.
- Cut it to where you want it.
Cutting Footage
- Press "I" to set an in point at the start of the clip and "O" to set an out point at the end.
- Cut out unwanted parts by selecting them and deleting them.
Exporting Footage
In this section, the speaker explains how to export footage using Adobe Premiere or Adobe After Effects.
Exporting Process
- Export your footage as 16:9, or change the dimensions if needed.
- Open the export dialog by pressing Ctrl+M.
- Set the range to "Source In/Out".
- Name your file and set the frame size if it is not already correct.
- Export your file.
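The same trim-and-export step can also be done outside Premiere with ffmpeg. This is a hedged sketch that only builds the command line (file names are placeholders, and ffmpeg must be installed to actually run it):

```python
import subprocess

def build_export_cmd(src, start, end, out_pattern, width=None, height=None):
    """Build an ffmpeg command that trims src between start and end (in
    seconds) and writes a numbered image sequence such as frame_0001.png."""
    cmd = ["ffmpeg", "-i", src, "-ss", str(start), "-to", str(end)]
    if width and height:
        # Optional resize, e.g. to force a 16:9 frame size.
        cmd += ["-vf", f"scale={width}:{height}"]
    cmd += [out_pattern]
    return cmd

# Hypothetical usage; run with subprocess.run(cmd) once ffmpeg is available.
cmd = build_export_cmd("clip.mp4", 2.0, 7.5, "frames/frame_%04d.png", 1920, 1080)
print(" ".join(cmd))
```

Trimming after `-i` decodes from the start of the file, which is slower but frame-accurate, matching the in/out-point workflow above.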
Using Runway Green Screen
In this section, the speaker explains how to use Runway green screen for removing backgrounds from videos.
Removing Backgrounds with Runway Green Screen
- Use Runway's Green Screen tool (a free tier is available).
- Click on the subject and make corrections as needed.
- Preview the results before exporting.
Using After Effects for Face Stabilizing
In this section, the speaker explains how to use After Effects for face stabilizing.
Face Stabilizing with After Effects
- Use After Effects to stabilize the face.
- Track the face and export the frames as a JPEG or PNG sequence.
Locking the Face in Place
In this section, the speaker explains why it is important to lock the face in place when using stable diffusion.
Importance of Locking the Face in Place
- When an image is fed into Stable Diffusion, the model sees a noise pattern along with the image.
- When the subject moves, that noise pattern changes drastically from frame to frame.
- By tracking and locking the face in place, the image stays aligned to the same noise pattern, which produces very consistent results.
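The idea above can be illustrated with a toy sketch in plain Python. This is an analogy only, not Stable Diffusion's actual internals: the random sequence stands in for the seeded noise pattern.

```python
import random

def noise_pattern(seed, n=16):
    """Toy stand-in for the seeded noise img2img applies: the same
    seed always yields the same pattern."""
    rng = random.Random(seed)
    return [rng.random() for _ in range(n)]

# Two frames rendered with the same seed see an identical noise pattern.
frame_a = noise_pattern(seed=42)
frame_b = noise_pattern(seed=42)
assert frame_a == frame_b

# But if the subject shifts within the frame, its pixels now line up with
# different values of that pattern, which reads as flicker between frames.
subject_pixels = list(range(4, 8))                 # where the face sits in frame 1
shifted_pixels = [i + 2 for i in subject_pixels]   # the face moved in frame 2
noise_under_face_1 = [frame_a[i] for i in subject_pixels]
noise_under_face_2 = [frame_b[i] for i in shifted_pixels]
assert noise_under_face_1 != noise_under_face_2
```

Locking the face in place is the equivalent of keeping `subject_pixels` fixed, so the same noise values sit under the face in every frame.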
Tracking and Exporting Footage
In this section, the speaker explains how to track and export footage using Mocha AE.
Tracking and Exporting Process
- Use Mocha AE to track the face.
- Select the layer being tracked and press OK.
- Apply the tracking data as an effect.
Stabilizing Footage
In this section, the speaker explains how to stabilize footage using Adobe After Effects.
Steps for Stabilizing Footage
- Use the pick whip to link the top right corner of the frame to the top right corner of the footage, then link the bottom left corner of the frame to the bottom left corner of the footage.
- Alt+click the bottom right corner of the frame and use the pick whip to link it to the bottom right corner of the footage.
- Click the "Invert" button.
- Press "S" to bring up Scale and scale up until the face fills the frame.
Making Changes in Mocha
In this section, the speaker explains how to make changes in Mocha for better stabilization results.
Steps for Making Changes in Mocha
- Press "S" to bring up Scale and reposition the footage so the face stays in frame.
- If it is not stabilizing as desired, go back into Mocha and adjust the track.
- Create the track data again after making changes.
Exporting Stabilized Footage
In this section, the speaker explains how to export stabilized footage from Adobe After Effects.
Steps for Exporting Stabilized Footage
- To export, use Media Encoder or add the composition to the render queue.
- In the Output Module, choose a JPEG or PNG sequence.
- Save the images into a folder.
Training AI Models
In this section, the speaker discusses training AI models using specific characters or styles.
Tips for Training AI Models
- Train on specific characters or styles for best results.
- Use LoRA training to add the likeness of a person or a style to an existing model.
- Training is key to achieving the desired results.
Challenges with Training AI Models
In this section, the speaker discusses challenges faced when training AI models.
Challenges with Training AI Models
- It can be difficult to find artwork for specific characters or styles.
- Art styles can differ drastically, making the desired result hard to achieve.
- Training takes time and investment.
Results of Trained AI Models
In this section, the speaker shares the results of trained AI models.
Results of Trained AI Models
- Face locking combined with training produced good results.
- Using consistent 3D models helped achieve better results.
- Taking screenshots from different angles and training a LoRA on them helped achieve the desired results.
Face Locking and Training Overview
In this section, the speaker discusses face locking and how it allows for a more anime-like feel in animations. They also provide an overview of the training process.
Face Locking
- Face locking allows for bigger eyes and exaggerated expressions, giving animations a more anime-like feel.
- Aitrepreneur has a tutorial on this that is worth checking out for more technical information.
Training Process
- To begin the training process, Python, Git, and Visual Studio must be installed.
- Once installed, open PowerShell as an administrator and follow the commands from the GitHub page to install the required dependencies.
- Running accelerate config asks several questions about your compute environment and machine type. Choose "no distributed training" and "fp16" when prompted.
- If you have a 30- or 40-series Nvidia graphics card, an additional optional download can give faster training speeds.
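The exact prompts from accelerate config vary by version; a session consistent with the choices mentioned above might look roughly like this (answers other than "No distributed training" and "fp16" depend on your setup):

```text
$ accelerate config
In which compute environment are you running?           This machine
Which type of machine are you using?                    No distributed training
Do you want to run your training on CPU only?           NO
Do you wish to optimize your script with torch dynamo?  NO
Do you want to use DeepSpeed?                           NO
What GPU(s) (by id) should be used for training?        all
Do you wish to use FP16 or BF16 (mixed precision)?      fp16
```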
Running upgrade.ps1 and Getting High-Quality Images
In this section, the speaker explains how to update the installation with upgrade.ps1 and how to get high-quality images for use in a video.
Running upgrade.ps1
- To update the installation, right-click upgrade.ps1 and run it with PowerShell.
- This updates it automatically.
Getting High-Quality Images
- Gather high-quality images from different angles.
- Use a 3D model in Blender to take screenshots from different angles.
- Use birme.net to crop and align the images so they are focused on the desired subject.
- Caption each image using Kohya, or manually by describing what is in the image.
- Create a new folder with three subfolders: an image folder, a model folder, and a log folder.
- Name the image folder based on the number of images you have.
Automatically Captioning Images
In this section, the speaker explains how to automatically caption images using Kohya.
Automatically Captioning Images
- Copy the path of your high-quality images into Kohya's BLIP captioning tool.
- Click "Caption Images" to generate text files inside your image folder.
- Edit each text file to describe what is in each image (e.g., character name, outfit color).
- Save each edited text file with Ctrl+S.
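The caption-editing step can be partly automated. One common practice, not shown in the video, is to prepend a trigger word to every caption file so the LoRA learns to associate that word with the subject. The folder and trigger word below are hypothetical examples:

```python
from pathlib import Path

def prepend_trigger(image_dir, trigger):
    """Prepend a trigger word to every caption (.txt) file in image_dir.
    Skips files that already start with the trigger, so it is safe to rerun."""
    for txt in Path(image_dir).glob("*.txt"):
        caption = txt.read_text(encoding="utf-8").strip()
        if not caption.startswith(trigger):
            txt.write_text(f"{trigger}, {caption}", encoding="utf-8")

# Hypothetical usage:
# prepend_trigger("train/40_dudley", "dudley person")
```

After running it, you would still edit each caption by hand to mention outfit colors and other details, as described above.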
Creating Folders and Configuring Settings
In this section, the speaker explains how to create folders and configure settings for Dreambooth LoRA.
Creating Folders
- Create a new folder named after your character (e.g., Dudley LoRA).
- Within that folder, create three subfolders: an image folder, a model folder, and a log folder.
- Name the image folder based on the number of images you have.
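The folder layout above can be scripted. This is a sketch under assumptions: the subject name and repeat count are placeholders, and the numeric prefix on the image subfolder is what Kohya reads as the per-image repeat count during training.

```python
from pathlib import Path

def make_lora_folders(root, subject="dudley", repeats=40):
    """Create the three folders Kohya expects: image, model (output), log.
    The image subfolder is named with a numeric prefix, e.g. 40_dudley."""
    root = Path(root)
    image_dir = root / "image" / f"{repeats}_{subject}"
    model_dir = root / "model"
    log_dir = root / "log"
    for d in (image_dir, model_dir, log_dir):
        d.mkdir(parents=True, exist_ok=True)
    return image_dir, model_dir, log_dir

# Hypothetical usage:
# make_lora_folders("DudleyLora")
# creates DudleyLora/image/40_dudley, DudleyLora/model, DudleyLora/log
```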
Configuring Settings
- Download the basic or low-VRAM settings file, depending on your system.
- Open the Dreambooth LoRA tab and go to Configuration File.
- Load the settings from the downloaded file.
Training a LoRA Model
In this section, the speaker explains how to train a LoRA model using safetensors and the folder setup.
Training the Model
- To train the model, select safetensors as the save format, then open the Folders tab.
- Select the folder where you created the three subfolders, and choose the output folder under Model Folder.
- Name your model and click Train Model. This process may take a few minutes.
- Once training is complete, go to the model folder and copy your trained file.
Adding the Trained File to the LoRA Files
In this section, the speaker explains how to add your trained file to the LoRA files.
Adding the Trained File
- In Stable Diffusion, go to Extensions, then Available, then Load from.
- Install the Kohya-ss Additional Networks extension.
- Go to the Installed tab, then click Apply and Restart UI.
- Click the Show Extra Networks icon, go to the Lora tab, select your trained file, and activate it.
Using the LoRA for Image Creation
In this section, the speaker explains how to use the LoRA for image creation.
Creating Images with Prompts
- Enter any prompt you want into the prompt box with the LoRA active.
- The model will create an image based on that prompt.
- Lower the CFG scale for less influence from the trained model.
- Reuse a good seed number to reproduce high-quality images.
Tips for Better Results with the LoRA
In this section, the speaker provides tips for better results when using a LoRA.
Tips for Better Results
- Use images of the actual person wearing the outfit, if possible.
- Reduce the strength of the LoRA by lowering its weight or the CFG scale.
- Experiment with different models until you find one that works best for your needs.
Tips for Using Two Different LoRAs
In this section, the speaker provides tips on how to combine two different LoRAs.
Experimenting with Layers
- It takes a lot of tries before getting the desired result.
- Use two different LoRAs and split the weight between them. The number at the end of each LoRA tag determines how much it influences the final image.
- Try different numbers and see what looks good.
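The "number at the end" refers to the weight in the LoRA tag syntax of the AUTOMATIC1111 web UI. The LoRA names below are hypothetical examples:

```text
a portrait of dudley person, city street background
<lora:dudley_character:0.7> <lora:anime_style:0.4>
```

A typical experiment is to trade the two weights off against each other: raising one and lowering the other shifts how much each LoRA influences the final image.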
ControlNet Settings
- Pay attention to the denoising strength; keep it at a reasonable level.
- Locking the face to the noise pattern lets you bring the denoising strength and style up fairly strong while still getting good results.
- ControlNet is where most of the magic happens. Enable Canny or HED/normal map. Don't push them too hard or you'll get bad results.
- Bring one of them down if it is too strong.
Noise Multiplier for img2img
- Go to Settings, then User Interface, add initial_noise_multiplier, apply the settings, and raise it to add more style.
Final Results
- Check the ControlNet settings if you are not getting good results.
- Experiment with the width and height, but be careful not to trigger errors.
Stabilizing the Head and Body
In this section, the speaker explains how to stabilize the head and body of a subject in order to get consistent results when using AI to modify facial features.
Stabilizing the Head
- The speaker ran the footage through the AI twice, once with the subject's eyes closed and once without, to ensure consistency.
- Closing the subject's eyes during stabilization can help achieve the desired results.
- When stabilizing the body, similar settings can be used as for the head.
- Without stabilizing there may still be decent results, but fine-tuning may be necessary.
Exporting Images
- Consistent images are important for batch processing.
- Zooming in on an image can also give good results without stabilizing.
- Batch processing is done by going to the "Batch" tab and selecting an input directory and an output directory.
Importing Sequences
- Import the generated frames as an image sequence.
- Make sure the frame rate is correct when interpreting the footage.
Reverse Stabilization
- Reverse stabilization is necessary after modifying facial features.
Reverse Stabilization
In this section, the speaker explains how to reverse-stabilize a video using Mocha and Power Pin in After Effects.
Steps for Reverse Stabilization
- Duplicate the original video.
- Select the generated sequences and pre-compose them.
- Apply Mocha and Power Pin to the original clip.
- Mask the face with a pen tool and feather it.
- Generate the body sequence and place it on top of the masked layer.
- Add a background using Stable Diffusion's text-to-image feature.
Removing Jitter and Flicker
In this section, the speaker explains how to remove jitter and flicker from stabilized videos using DaVinci Resolve Studio or Fusion.
Steps for Removing Jitter and Flicker
- Use DaVinci Resolve Studio's Deflicker plug-in, or Fusion's dirt removal and deflicker nodes.
- Let Fusion finish rendering before delivering.
- Add it to the render queue; it renders in seconds.
3D Tracking
In this section, the speaker offers to make a video about implementing 3D elements into AI-generated videos using Blender's 3D tracker.
Steps for Implementing 3D Elements
- Use Blender's free 3D tracker, or After Effects and Blender together.
Appreciation and Signing Off
In this section, the speaker thanks viewers for watching and subscribing, and encourages those who haven't subscribed yet to do so and to like the video.
- Watching and subscribing helps him make more videos and tutorials.
- Liking the video also helps him out a lot.
- He signs off by thanking everyone again and wishing them well.