The LoRa Library in Cinema Motion stores your trained LoRAs (avatars, styles, and products) so you can apply them directly to video generation. This is what sets WAN 2.2 apart from other models: the character stays consistent in the video output without needing a reference image from Picture Lab.
Select WAN 2.2
Open Cinema Motion and choose WAN 2.2 from the model selector. It is the model with a dedicated LoRa Library control, which appears as a button on the left of the Cinema Motion bar once the model is selected.
Open the LoRa Library
Click LoRa Library. A panel opens on the left side with three tabs: Avatar, Style, and Product.

- Avatar — trained characters (people or consistent personas)
- Style — trained visual styles
- Product — trained product appearances
Apply a LoRA
Click any trained avatar in the Avatar tab. The panel closes and the LoRA is loaded into the generation context. Write a prompt describing the scene, then click Generate. The character from your LoRA appears in the generated video.
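Conceptually, the steps above assemble three inputs: a model, a LoRA selected from the library, and a prompt. A minimal sketch of that state as data (the field names here are illustrative only, not a real Cinema Motion API):

```python
# Hypothetical sketch of the generation context built by the UI flow.
# None of these names are Cinema Motion's actual interface; they only
# mirror the three choices the walkthrough describes.
request = {
    "model": "WAN 2.2",                       # chosen in the model selector
    "lora": {
        "tab": "Avatar",                      # LoRa Library tab
        "name": "my_trained_avatar",          # placeholder avatar name
    },
    "prompt": "the character walks through a rain-soaked street at night",
}

# Generate is only meaningful once all three parts are set.
ready = all([request["model"], request["lora"]["name"], request["prompt"]])
```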
Why video LoRAs are different
In Picture Lab, LoRAs are used to generate a reference image, which is then animated in Cinema Motion. That two-step process introduces variation at the animation stage. WAN 2.2 with a LoRA skips the intermediate image entirely — the character identity is applied directly to the video model, which gives more consistent results across multiple generations.