Turn an Image into a Video
Use this guide when you need to add image-to-video generation where an image becomes the first or last frame of a generated video.
By the end, your implementation should submit an image-to-video job with
frame_images and download the finished clip.
For reusable agent knowledge across projects, install the openrouter-video skill.
Before you start
You need:
- An OpenRouter API key available as OPENROUTER_API_KEY
- Node.js 20 or newer
- A public HTTPS image URL available as FIRST_FRAME_URL
- A model that supports frame_images, confirmed with GET /api/v1/videos/models
If you have not chosen a model yet, read Choose a Video Generation Model so you can select one based on your clip duration, output shape, input type, audio, provider controls, and cost requirements.
Use the API reference pages as the source of truth for exact fields:
- Create video generation request
- List video generation models
- TypeScript SDK video generation reference
Submitting POST /api/v1/videos starts a real video generation job and may
spend OpenRouter credits.
frame_images is for exact frame control. If you provide both frame_images and input_references, OpenRouter treats the request as image-to-video.
Use a stable, directly downloadable image URL. Some providers cannot fetch image URLs that require cookies, redirects through HTML pages, bot checks, or unusual headers.
Before submitting, check that your image URL returns 200 with an image
content type:
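A minimal preflight sketch, assuming Node.js 20's global fetch and the FIRST_FRAME_URL variable from the prerequisites:

```typescript
// Preflight check: the image URL must return 200 with an image content type.
// Assumes Node.js 20+ (global fetch) and FIRST_FRAME_URL in the environment.
function isUsableImageResponse(status: number, contentType: string | null): boolean {
  return status === 200 && (contentType ?? "").startsWith("image/");
}

async function checkFirstFrameUrl(): Promise<void> {
  const url = process.env.FIRST_FRAME_URL;
  if (!url) throw new Error("FIRST_FRAME_URL is not set");

  // HEAD is enough to see the status and content type without downloading.
  const res = await fetch(url, { method: "HEAD", redirect: "follow" });
  const contentType = res.headers.get("content-type");
  if (!isUsableImageResponse(res.status, contentType)) {
    throw new Error(`Image URL check failed: ${res.status} ${contentType ?? "no content type"}`);
  }
  console.log(`OK: ${res.status} ${contentType}`);
}
```

If the check fails, host the image somewhere that serves it directly before submitting the job.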
Step 1: Choose a model with frame-image support
Fetch the model list and choose a model whose supported_frame_images includes
the frame type you want:
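A sketch of the lookup, assuming the model list exposes supported_frame_images as described in this guide; treat the List video generation models reference as the source of truth for exact field names:

```typescript
// List video models and keep the ones that support a given frame type.
// The supported_frame_images field name follows this guide; verify it against
// the List video generation models API reference.
interface VideoModel {
  id: string;
  supported_frame_images?: string[];
}

function supportsFrameType(model: VideoModel, frameType: string): boolean {
  return (model.supported_frame_images ?? []).includes(frameType);
}

async function listFrameCapableModels(frameType = "first_frame"): Promise<VideoModel[]> {
  const res = await fetch("https://openrouter.ai/api/v1/videos/models", {
    headers: { Authorization: `Bearer ${process.env.OPENROUTER_API_KEY}` },
  });
  if (!res.ok) throw new Error(`Model list failed: ${res.status}`);
  const body = await res.json();
  const models: VideoModel[] = body.data ?? body; // response envelope is an assumption
  return models.filter((m) => supportsFrameType(m, frameType));
}
```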
For first-frame and last-frame control, look for supported_frame_images
containing first_frame and last_frame.
Step 2: Submit the image-to-video job
Build the video request with frame_images when the image should anchor an
exact frame. This example uses a first frame; the same request shape belongs
in whatever server route, queue, or worker owns video generation in your app.
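A submission sketch; the body fields (model, prompt, and frame_images entries with url and type) are assumptions to confirm against the Create video generation request reference:

```typescript
// Submit an image-to-video job with a first frame.
// Field names are illustrative; the Create video generation request
// reference is the source of truth.
function buildImageToVideoBody(modelId: string, prompt: string, firstFrameUrl: string) {
  return {
    model: modelId,
    prompt,
    frame_images: [{ url: firstFrameUrl, type: "first_frame" }],
  };
}

async function submitImageToVideo(modelId: string, prompt: string): Promise<{ id: string }> {
  const firstFrameUrl = process.env.FIRST_FRAME_URL;
  if (!firstFrameUrl) throw new Error("FIRST_FRAME_URL is not set");

  const res = await fetch("https://openrouter.ai/api/v1/videos", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.OPENROUTER_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(buildImageToVideoBody(modelId, prompt, firstFrameUrl)),
  });
  if (!res.ok) throw new Error(`Submit failed: ${res.status} ${await res.text()}`);
  return res.json();
}
```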
The submit call returns the job fields immediately; the clip itself finishes in the background, which is why Step 4 polls before downloading.
Step 3: Use a last frame when you need a transition
If the selected model supports last_frame, add both frames so the model can
move from a known starting composition to a known ending composition:
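A sketch of the two-frame array, assuming a LAST_FRAME_URL variable alongside FIRST_FRAME_URL and the frame type names used above:

```typescript
// Anchor both ends of the clip. Assumes FIRST_FRAME_URL and LAST_FRAME_URL
// are set; the type values mirror the supported_frame_images names above.
const frameImages = [
  { url: process.env.FIRST_FRAME_URL, type: "first_frame" },
  { url: process.env.LAST_FRAME_URL, type: "last_frame" },
];
```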
Then set frame_images in the request body to frameImages.
Request shape for the optional last-frame path:
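One possible shape, with illustrative field names and a placeholder model id; confirm the exact fields against the Create video generation request reference:

```typescript
// Request body for the optional last-frame path. The model id is a
// placeholder; pick one whose supported_frame_images includes both types.
const requestBody = {
  model: "vendor/video-model",
  prompt: "A slow push-in that resolves to the final composition",
  frame_images: [
    { url: process.env.FIRST_FRAME_URL, type: "first_frame" },
    { url: process.env.LAST_FRAME_URL, type: "last_frame" },
  ],
};
```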
Step 4: Poll and download
After submission, poll from a server route, worker, or job runner instead of the browser. Keep the flow explicit: poll with a limit, stop on terminal failure, then download the completed video.
Example polling and download helper:
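A sketch of the loop, assuming a GET endpoint per job id and status / video URL fields on the job record; verify both against the API reference pages:

```typescript
// Poll a submitted job until it reaches a terminal state, then save the MP4.
// Endpoint path and the status / video_url field names are assumptions; the
// API reference pages are the source of truth.
import { writeFile } from "node:fs/promises";

function isTerminalStatus(status: string): boolean {
  return status === "completed" || status === "failed";
}

async function pollAndDownload(jobId: string, outPath: string, maxAttempts = 60): Promise<void> {
  const headers = { Authorization: `Bearer ${process.env.OPENROUTER_API_KEY}` };

  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const res = await fetch(`https://openrouter.ai/api/v1/videos/${jobId}`, { headers });
    if (!res.ok) throw new Error(`Poll failed: ${res.status}`);
    const job = await res.json();

    if (isTerminalStatus(job.status)) {
      if (job.status === "failed") {
        throw new Error(`Job failed: ${job.error ?? "unknown error"}`);
      }
      // Assumed field: a downloadable URL on the completed job.
      const video = await fetch(job.video_url);
      if (!video.ok) throw new Error(`Download failed: ${video.status}`);
      await writeFile(outPath, Buffer.from(await video.arrayBuffer()));
      return;
    }
    await new Promise((resolve) => setTimeout(resolve, 5000)); // wait before next poll
  }
  throw new Error(`Job ${jobId} did not finish within ${maxAttempts} polls`);
}
```

The attempt limit keeps the loop from polling forever if a job stalls; tune the interval and cap to your model's typical generation time.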
Check your work
The first frame of the resulting video should closely match the image you
provided as first_frame. If you also supplied last_frame, the clip should
resolve toward that image. The implementation should produce a playable MP4
from the completed job.