Animated AI background video generated in Deforum Stable Diffusion. Compositing, lyrics and audio reactive effects edited in After Effects.
Music
Grey - RAVEN (feat. Virtual Riot)
Playlists
Lyric Videos: https://youtube.com/playlist?list=PLVFGz67j2v_951Lo7kt9aVx8ZyoNZabnA
Lyrics
Tools
Background made with the Deforum extension for Automatic1111 Stable Diffusion, with help from ChatGPT.
Edited in Adobe Premiere Pro and After Effects.
Process
1. Finding fitting Stable Diffusion prompts for the music video.
I ask ChatGPT for prompts that fit the lyrics I feed it, then test the outputs and make changes based on img2img generations in Stable Diffusion. I also try out different models and LoRAs to find outputs I like.
2. Getting music-reactive keyframes for the animation.
I feed the song to this audio keyframe tool: https://www.chigozie.co.uk/audio-keyframe-generator/ and give it suitable functions for the values I want. I work at 25 frames per second, using something like "0.60 - x^6" for the strength schedule and "1 + x*10" for Translation Z in 3D mode. I often play around with these settings and also give the 3D rotations their own functions to react with.
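A minimal sketch of what that step produces: per-frame audio amplitudes (normalized 0 to 1) are run through the chosen functions and formatted as Deforum-style keyframe schedule strings. The amplitude list here is a synthetic placeholder; the real values would come from analyzing the song at 25 fps.

```python
# Hedged sketch: turn per-frame audio amplitudes into Deforum-style
# keyframe schedules, mimicking what the audio keyframe generator outputs.

def schedule(amplitudes, fn):
    """Format "frame: (value)" pairs the way Deforum schedule fields expect."""
    return ", ".join(f"{frame}: ({fn(x):.4f})" for frame, x in enumerate(amplitudes))

# The two example functions from the description:
strength = lambda x: 0.60 - x**6       # "0.60 - x^6" for the strength schedule
translation_z = lambda x: 1 + x * 10   # "1 + x*10" for Translation Z in 3D mode

amps = [0.1, 0.8, 0.3, 1.0]  # placeholder amplitudes, one per frame (not real audio)
print(schedule(amps, strength))
print(schedule(amps, translation_z))
```

Note how the exponent in "0.60 - x^6" makes strength drop only on loud hits, so the image changes most on the beat, while quiet passages stay coherent.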
3. Setting up a render in Deforum.
Now I enter the keyframes and prompts into Deforum to start generating an animation. I usually do this 4 times with slightly tweaked settings and prompts to get different results to work with later.
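To illustrate where those schedules and prompts end up, here is a hedged sketch of a Deforum-style settings fragment. The field names are illustrative and may differ across Deforum versions; the prompt text is a placeholder, not the actual prompt used.

```python
# Hedged sketch of a Deforum-style settings fragment (keys are illustrative).
import json

settings = {
    "animation_mode": "3D",
    "fps": 25,
    "max_frames": 4,
    # Schedule strings as produced by the audio keyframe step:
    "strength_schedule": "0: (0.60), 1: (0.34), 2: (0.60), 3: (-0.40)",
    "translation_z": "0: (2.0), 1: (9.0), 2: (4.0), 3: (11.0)",
    # Prompts keyed by frame number, refined earlier via img2img tests:
    "prompts": {"0": "placeholder prompt suggested by ChatGPT"},
}
print(json.dumps(settings, indent=2))
```

Rendering the same song several times with small variations in these values is what gives the multiple takes to cut between later.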
4. Editing in Premiere Pro.
To get a more dynamic video, I cut between the different animations in a multicam sequence, picking the best parts and cutting to the music.
5. Finishing up in After Effects.
In After Effects I add the lyrics manually, timing and dividing them as I see fit. I also add effects such as additional 3D camera movement, shakes, overlays, particles, blur, and chromatic aberration. Most of these react to the music through generated audio keyframes.
#lyrics #lyricvideo #lyricsvideo