Use the LoRa Library in Cinema Motion

Updated April 22, 2026

The LoRa Library in Cinema Motion stores your trained LoRAs (avatars, styles, and products) so you can apply them directly to video generation. This is what sets WAN 2.2 apart from other models: the character stays consistent in the video output without a reference image from Picture Lab.

Select WAN 2.2

Open Cinema Motion and choose WAN 2.2 from the model selector. WAN 2.2 is the model with a dedicated LoRa Library control; the button appears on the left of the Cinema Motion control bar.
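Cinema Motion does not document an API for this workflow, but purely as an illustration, pinning the model in a hypothetical REST client might look like the sketch below. The base URL, the /models endpoint, the field names, and the "wan-2.2" identifier are all assumptions, not documented behavior.

```python
import requests

API_BASE = "https://api.example.com/cinema-motion"  # hypothetical base URL

# Hypothetical endpoint: list available video models, then pick WAN 2.2.
models = requests.get(f"{API_BASE}/models", timeout=30).json()
wan = next(m for m in models if m["id"] == "wan-2.2")  # assumed identifier
print(wan)
```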

Open the LoRa Library

Click LoRa Library. A panel opens on the left side with three tabs: Avatar, Style, and Product.

LoRa Library modal with Avatar, Style, and Product tabs open in Cinema Motion alongside the WAN 2.2 interface
  • Avatar — trained characters (people or consistent personas)
  • Style — trained visual styles
  • Product — trained product appearances
If you haven't trained anything yet, the tabs will show "No scenes found". See Train an avatar LoRA to create your first avatar.
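If it helps to picture the three tabs programmatically, a hypothetical client might group trained LoRAs by category as sketched below. The /loras endpoint and the "category" and "name" fields are invented for illustration, not a documented API.

```python
import requests

API_BASE = "https://api.example.com/cinema-motion"  # hypothetical base URL

# Hypothetical endpoint: fetch all trained LoRAs for the account.
loras = requests.get(f"{API_BASE}/loras", timeout=30).json()

# Group by the three library tabs: Avatar, Style, Product.
by_tab = {"avatar": [], "style": [], "product": []}
for lora in loras:
    by_tab.setdefault(lora["category"], []).append(lora["name"])

for tab, names in by_tab.items():
    # An empty tab corresponds to the "No scenes found" state in the UI.
    print(tab, names or "nothing trained yet")
```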

Apply a LoRA

Click any trained avatar in the Avatar tab. The panel closes and the LoRA loads into the generation context. Write a prompt describing the scene, then click Generate. The character from your LoRA appears in the generated video.
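Expressed in the same hypothetical REST sketch, applying a LoRA amounts to attaching its identifier to the generation request. The "lora" parameter, the "avatar/..." ID format, the /generate endpoint, and the response field are all assumptions made for illustration.

```python
import requests

API_BASE = "https://api.example.com/cinema-motion"  # hypothetical base URL

payload = {
    "model": "wan-2.2",                # assumed model identifier
    "lora": "avatar/my-character-v1",  # hypothetical trained-avatar ID
    # Describe the scene and action only; the LoRA carries the identity.
    "prompt": "walking through a rainy night market, handheld camera",
}
resp = requests.post(f"{API_BASE}/generate", json=payload, timeout=300)
resp.raise_for_status()
print(resp.json().get("video_url"))  # assumed response field
```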

Reference the character naturally in your prompt: describe the scene and action, not the person's appearance. The LoRA handles consistency automatically. For example, write "dancing through a crowded night market in the rain" rather than "a woman with long brown hair and green eyes dancing".

Why video LoRAs are different

In Picture Lab, LoRAs are used to generate a reference image, which is then animated in Cinema Motion. That two-step process introduces variation at the animation stage. WAN 2.2 with a LoRA skips the intermediate image entirely — the character identity is applied directly to the video model, which gives more consistent results across multiple generations.
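To make the contrast concrete, here is a toy sketch of the two pipelines. Every function is a stub standing in for a real generation call, so this shows only the shape of the flows, not actual Cinema Motion code.

```python
# Toy stubs; each stands in for a real generation call.
def generate_image(lora: str, prompt: str) -> str:
    return f"still[{lora}|{prompt}]"

def animate(image: str, prompt: str) -> str:
    # Re-interprets the still image; this is where identity can drift.
    return f"video-from({image})"

def generate_video(lora: str, prompt: str) -> str:
    # The LoRA conditions the video model directly, on every frame.
    return f"video[{lora}|{prompt}]"

# Two-step: Picture Lab still, then animation in Cinema Motion.
still = generate_image("avatar/my-character-v1", "night market scene")
clip_a = animate(still, "night market scene")

# Direct: WAN 2.2 with the LoRA applied to the video model itself.
clip_b = generate_video("avatar/my-character-v1", "night market scene")

print(clip_a)
print(clip_b)
```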