AnimateDiff: Adding Motion to Generated Images and How We Trained Our Own Motion Model
In this talk, we will review AnimateDiff (Guo et al.), a novel and effective approach for fine-tuning Stable Diffusion to generate short, high-quality animations. We will present the model and its training paradigm, and compare it to other fine-tuning approaches aimed at adapting diffusion models to specific tasks. Additionally, we will introduce LongAnimateDiff, a motion diffusion model trained at Lightricks and recently released as open source, in which we scaled up the training of AnimateDiff to generate longer animations. We will discuss the challenges in training such models and present our solutions.
Sapir Weissbuch is a Computer Vision and Generative AI researcher at Lightricks, where she trains large multi-modal models for creative editing of images and videos. Sapir holds an M.Sc. in Computer Science from the Hebrew University, where she focused on NLP, and a B.Sc. in Mathematics and Physics from Bar-Ilan University.