Turn an Image into a Video

Use frame images to control the first or last frame of an OpenRouter video

Use this guide when you need to add image-to-video generation where an image becomes the first or last frame of a generated video.

By the end, your implementation should submit an image-to-video job with frame_images and download the finished clip.

For reusable agent knowledge across projects, install the openrouter-video skill.

Before you start

You need:

  • An OpenRouter API key available as OPENROUTER_API_KEY
  • Node.js 20 or newer
  • A public HTTPS image URL available as FIRST_FRAME_URL
  • A model that supports frame_images, confirmed with GET /api/v1/videos/models

If you have not chosen a model yet, read Choose a Video Generation Model so you can select one based on your clip duration, output shape, input type, audio, provider controls, and cost requirements.

Use the API reference pages as the source of truth for exact fields.

Submitting POST /api/v1/videos starts a real video generation job and may spend OpenRouter credits.

frame_images is for exact frame control. If you provide both frame_images and input_references, OpenRouter treats the request as image-to-video.

Use a stable, directly downloadable image URL. Some providers cannot fetch image URLs that require cookies, redirects through HTML pages, bot checks, or unusual headers.

Before submitting, check that your image URL returns 200 with an image content type:

```shell
curl -I "$FIRST_FRAME_URL"
```

Example output:

```
HTTP/2 200
content-type: image/jpeg
```
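If you prefer to run the same preflight check from Node instead of curl, a minimal sketch looks like this. The `looksLikeImage` helper name is hypothetical, not part of the OpenRouter API; it only encodes the "200 plus an image content type" rule described above:

```javascript
// Hypothetical helper: decide whether a HEAD response looks like a
// directly downloadable image (status 200 and an image/* content type).
function looksLikeImage(status, contentType) {
  return (
    status === 200 &&
    typeof contentType === "string" &&
    contentType.startsWith("image/")
  );
}

// Usage against the live URL (requires FIRST_FRAME_URL and network access):
// const res = await fetch(process.env.FIRST_FRAME_URL, { method: "HEAD" });
// if (!looksLikeImage(res.status, res.headers.get("content-type"))) {
//   throw new Error("FIRST_FRAME_URL is not a directly downloadable image.");
// }
```

Note that a HEAD request is cheaper than a full download, but a few hosts answer HEAD differently from GET; if the check fails unexpectedly, retry it with a GET.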

Step 1: Choose a model with frame-image support

Fetch the model list and choose a model whose supported_frame_images includes the frame type you want:

```shell
curl https://openrouter.ai/api/v1/videos/models
```

Example model output excerpt:

```json
{
  "id": "google/veo-3.1-lite",
  "supported_durations": [8, 4, 6],
  "supported_resolutions": ["720p", "1080p"],
  "supported_aspect_ratios": ["16:9", "9:16"],
  "supported_frame_images": ["first_frame", "last_frame"]
}
```

For first-frame and last-frame control, look for supported_frame_images containing first_frame and last_frame.
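That selection step can be sketched as a small filter over the model list. The `supportsFrames` helper is hypothetical; the `supported_frame_images` field name follows the excerpt above, and the assumption that the list endpoint wraps models in a `data` array should be confirmed against the API reference:

```javascript
// Hypothetical helper: check that a model advertises every frame type
// you plan to send, using the supported_frame_images field shown above.
function supportsFrames(model, requiredFrameTypes) {
  const supported = model.supported_frame_images ?? [];
  return requiredFrameTypes.every((frameType) => supported.includes(frameType));
}

// Usage (requires network access; assumes the response wraps models in `data`):
// const body = await (await fetch("https://openrouter.ai/api/v1/videos/models")).json();
// const candidates = body.data.filter((model) =>
//   supportsFrames(model, ["first_frame", "last_frame"]),
// );
```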

Step 2: Submit the image-to-video job

Build the video request with frame_images when the image should anchor an exact frame. This example uses a first frame, but the same request shape belongs in whatever server route, queue, or worker owns video generation in your app.

```javascript
const apiKey = process.env.OPENROUTER_API_KEY;
const firstFrameUrl = process.env.FIRST_FRAME_URL;

if (!apiKey) {
  throw new Error("Set OPENROUTER_API_KEY first.");
}

if (!firstFrameUrl) {
  throw new Error("Set FIRST_FRAME_URL to a directly downloadable image URL.");
}

const response = await fetch("https://openrouter.ai/api/v1/videos", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${apiKey}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    model: "google/veo-3.1-lite",
    prompt:
      "The camera slowly pushes in as the subject turns toward warm window light, cinematic, realistic motion",
    duration: 4,
    resolution: "720p",
    aspect_ratio: "16:9",
    generate_audio: false,
    frame_images: [
      {
        type: "image_url",
        image_url: {
          url: firstFrameUrl,
        },
        frame_type: "first_frame",
      },
    ],
  }),
});

if (!response.ok) {
  throw new Error(await response.text());
}

const job = await response.json();
console.log(job);
```

The submit call returns the job fields immediately. In the QA run, the submitted job later completed and was downloaded, producing this final summary:

```json
{
  "id": "kBJZL5kI6gO33dfKN76A",
  "status": "completed",
  "output_path": "image-video.mp4",
  "bytes": 1515304
}
```

Step 3: Use a last frame when you need a transition

If the selected model supports last_frame, add both frames so the model can move from a known starting composition to a known ending composition:

```javascript
const lastFrameUrl = process.env.LAST_FRAME_URL;

if (!lastFrameUrl) {
  throw new Error("Set LAST_FRAME_URL to a directly downloadable image URL.");
}

// Before submitting, confirm this URL returns 200 with an image content type:
// curl -I "$LAST_FRAME_URL"
const frameImages = [
  {
    type: "image_url",
    image_url: { url: firstFrameUrl },
    frame_type: "first_frame",
  },
  {
    type: "image_url",
    image_url: { url: lastFrameUrl },
    frame_type: "last_frame",
  },
];
```

Then set frame_images in the request body to frameImages.

Request shape for the optional last-frame path:

```json
[
  {
    "type": "image_url",
    "image_url": { "url": "https://your-domain.example/first-frame.jpg" },
    "frame_type": "first_frame"
  },
  {
    "type": "image_url",
    "image_url": { "url": "https://your-domain.example/last-frame.jpg" },
    "frame_type": "last_frame"
  }
]
```
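Since not every model supports `last_frame`, one defensive pattern is to attach it only when the chosen model advertises support. The `buildFrameImages` helper below is a hypothetical sketch; it reuses the `supported_frame_images` field from Step 1 and the frame entry shape shown above:

```javascript
// Hypothetical helper: always anchor the first frame, and append a
// last_frame entry only when the model advertises support for it.
function buildFrameImages(model, firstFrameUrl, lastFrameUrl) {
  const frames = [
    {
      type: "image_url",
      image_url: { url: firstFrameUrl },
      frame_type: "first_frame",
    },
  ];

  const supported = model.supported_frame_images ?? [];
  if (lastFrameUrl && supported.includes("last_frame")) {
    frames.push({
      type: "image_url",
      image_url: { url: lastFrameUrl },
      frame_type: "last_frame",
    });
  }

  return frames;
}
```

This keeps the submit path identical for both models: pass the returned array as `frame_images` and the request degrades gracefully to first-frame-only when `last_frame` is unavailable.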


Step 4: Poll and download

After submission, poll from a server route, worker, or job runner instead of the browser. Keep the flow explicit: poll with a limit, stop on terminal failure, then download the completed video.

Example polling and download helper:

```javascript
import { writeFile } from "node:fs/promises";

async function waitForVideo(job) {
  let current = job;

  for (let attempt = 1; attempt <= 60; attempt += 1) {
    if (current.status === "completed") {
      return current;
    }

    if (current.status === "failed") {
      throw new Error(current.error ?? "Video generation failed.");
    }

    if (["cancelled", "expired"].includes(current.status)) {
      throw new Error(current.error ?? `Video generation ${current.status}.`);
    }

    await new Promise((resolve) => setTimeout(resolve, 30_000));

    if (!current.polling_url) {
      throw new Error("Video job did not include a polling_url.");
    }

    const pollingUrl = new URL(current.polling_url, "https://openrouter.ai");
    const response = await fetch(pollingUrl, {
      headers: {
        Authorization: `Bearer ${apiKey}`,
      },
    });

    if (!response.ok) {
      throw new Error(await response.text());
    }

    current = await response.json();
  }

  throw new Error("Video generation did not complete after 60 attempts.");
}

async function downloadVideo(job) {
  const videoUrl =
    job.unsigned_urls?.[0] ??
    `https://openrouter.ai/api/v1/videos/${job.id}/content?index=0`;

  const response = await fetch(videoUrl, {
    headers: videoUrl.startsWith("https://openrouter.ai/api/")
      ? { Authorization: `Bearer ${apiKey}` }
      : undefined,
  });

  if (!response.ok) {
    throw new Error(await response.text());
  }

  return Buffer.from(await response.arrayBuffer());
}

const completedJob = await waitForVideo(job);
const videoBuffer = await downloadVideo(completedJob);
await writeFile("image-video.mp4", videoBuffer);
console.log("Saved image-video.mp4");
```

The QA run saved the finished video after polling completed:

```
Saved image-video.mp4
```

Check your work

The first frame of the resulting video should closely match the image you provided as first_frame. If you also supplied last_frame, the clip should resolve toward that image. The implementation should produce a playable MP4 from the completed job.
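A lightweight automated check for the last point is to confirm the saved file starts with an MP4 `ftyp` box (bytes 4 through 7 of a valid MP4 spell `ftyp`). The `looksLikeMp4` helper is a hypothetical sketch, a quick sanity check rather than full container validation:

```javascript
import { readFile } from "node:fs/promises";

// Hypothetical helper: a valid MP4 begins with a box whose type field
// (bytes 4-7) is the ASCII string "ftyp". Cheap sanity check only.
function looksLikeMp4(bytes) {
  return bytes.length > 8 && bytes.subarray(4, 8).toString("ascii") === "ftyp";
}

// Usage after the download step:
// if (!looksLikeMp4(await readFile("image-video.mp4"))) {
//   throw new Error("image-video.mp4 does not look like an MP4 file.");
// }
```

For frame-level verification, opening the file in a player or extracting the first frame with a tool such as ffmpeg and comparing it against your `first_frame` image remains the most reliable check.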