
Basic Information

  • Endpoint: https://api.laozhang.ai/v1/chat/completions
  • With Watermark: https://api.laozhang.ai/v1/chat/completions?watermark=true
  • Method: POST
  • Authentication: Bearer Token (API Key)
  • Content Type: application/json
10/18 New Feature: Watermark Option. By default, generated videos have no watermark; to generate videos with the native Sora watermark, append the parameter ?watermark=true to the URL.
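As a minimal sketch, the watermark flag can be toggled by appending the documented query string. The helper name build_endpoint is illustrative, not part of the API:

```python
# Illustrative helper (not part of the API): toggle the watermark flag
# by appending the documented query parameter to the endpoint URL.
BASE_ENDPOINT = "https://api.laozhang.ai/v1/chat/completions"

def build_endpoint(watermark: bool = False) -> str:
    """Return the endpoint, with ?watermark=true appended when requested."""
    return BASE_ENDPOINT + ("?watermark=true" if watermark else "")
```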

Authentication

Include your API key in the request header:
Authorization: Bearer YOUR_API_KEY
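For raw HTTP calls (for example with requests), the headers can be assembled as below; YOUR_API_KEY is a placeholder:

```python
# Minimal sketch of the required headers for a raw HTTP request.
API_KEY = "YOUR_API_KEY"  # placeholder: substitute your real key

headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}
```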

Request Format

Basic Structure

{
  "model": "sora_video2",
  "messages": [
    {
      "role": "user",
      "content": [...]
    }
  ],
  "stream": false
}

Request Parameters

| Parameter | Type | Required | Description |
|---|---|---|---|
| model | string | Yes | Model name, see Supported Models |
| messages | array | Yes | Message array |
| stream | boolean | No | Whether to enable streaming output; default false |

Messages Array

Each message object contains:
| Field | Type | Required | Description |
|---|---|---|---|
| role | string | Yes | Fixed as "user" |
| content | array | Yes | Content array, containing text or images |

Content Array

Supports two types of content:

Text Content

{
  "type": "text",
  "text": "Video description text"
}
| Field | Type | Required | Description |
|---|---|---|---|
| type | string | Yes | Fixed as "text" |
| text | string | Yes | Video generation prompt |

Image Content (Optional)

{
  "type": "image_url",
  "image_url": {
    "url": "https://example.com/image.png"
  }
}
| Field | Type | Required | Description |
|---|---|---|---|
| type | string | Yes | Fixed as "image_url" |
| image_url.url | string | Yes | Image URL or Base64 |
Image Restrictions
  • Maximum 1 image
  • Supports URL or Base64 format
  • Recommended resolution not exceeding 2048×2048
  • Real person photos not supported

Supported Models

| Model Name | Resolution | Duration | Price |
|---|---|---|---|
| sora_video2 | 704×1280 (Portrait) | 10s | $0.15 |
| sora_video2-landscape | 1280×704 (Landscape) | 10s | $0.15 |
Due to OpenAI compute capacity limitations, the -hd and -15s variants are temporarily unavailable.

Response Format

Non-streaming Response

{
  "id": "foaicmpl-xxx",
  "object": "chat.completion",
  "created": 1759759480,
  "model": "sora_video2",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "```json\n{\n    \"prompt\": \"...\",\n    \"mode\": \"Portrait Mode\"\n}\n```\n\n> ✅ Video generated successfully, [click here](https://sora.gptkey.asia/assets/sora/xxx.mp4) to view video~~~\n\n"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 17,
    "completion_tokens": 244,
    "total_tokens": 261
  }
}

Response Field Description

| Field | Type | Description |
|---|---|---|
| id | string | Unique request identifier |
| object | string | Object type |
| created | integer | Creation timestamp |
| model | string | Model used |
| choices[].message.content | string | Content containing the video link |
| choices[].finish_reason | string | Completion reason; "stop" indicates success |
| usage | object | Token usage statistics |

Streaming Response (SSE)

When "stream": true is set, the API returns Server-Sent Events (SSE):
data: {"id":"foaicmpl-xxx","object":"chat.completion.chunk","created":1759759480,"model":"sora_video2","choices":[{"index":0,"delta":{"role":"assistant"},"finish_reason":null}]}

data: {"id":"foaicmpl-xxx","object":"chat.completion.chunk","created":1759759480,"model":"sora_video2","choices":[{"index":0,"delta":{"content":"```json\n{\n    \"prompt\": \"...\"\n}\n```\n\n"},"finish_reason":null}]}

data: {"id":"foaicmpl-xxx","object":"chat.completion.chunk","created":1759759480,"model":"sora_video2","choices":[{"index":0,"delta":{"content":"> ⌛️ Task is in queue, please wait patiently...\n\n"},"finish_reason":null}]}

data: {"id":"foaicmpl-xxx","object":"chat.completion.chunk","created":1759759480,"model":"sora_video2","choices":[{"index":0,"delta":{"content":"> 🏃 Progress: 36.0%\n\n"},"finish_reason":null}]}

data: {"id":"foaicmpl-xxx","object":"chat.completion.chunk","created":1759759480,"model":"sora_video2","choices":[{"index":0,"delta":{"content":"> ✅ Video generated successfully, [click here](https://sora.gptkey.asia/assets/sora/xxx.mp4) to view video~~~\n\n"},"finish_reason":null}]}

data: {"id":"foaicmpl-xxx","object":"chat.completion.chunk","created":1759759480,"model":"sora_video2","choices":[{"index":0,"delta":{},"finish_reason":"stop"}],"usage":{"prompt_tokens":17,"completion_tokens":244,"total_tokens":261}}

data: [DONE]

Streaming Response Fields

| Field | Type | Description |
|---|---|---|
| choices[].delta.role | string | Role; only included in the first chunk |
| choices[].delta.content | string | Incremental content (progress updates or the video link) |
| choices[].finish_reason | string | "stop" indicates completion |
| usage | object | The final chunk includes token usage statistics |

Complete Examples

Text-to-Video

curl -X POST "https://api.laozhang.ai/v1/chat/completions" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "sora_video2",
    "messages": [
      {
        "role": "user",
        "content": [
          {
            "type": "text",
            "text": "A cute cat playing with a ball in a sunny garden"
          }
        ]
      }
    ]
  }'

Image-to-Video (URL)

curl -X POST "https://api.laozhang.ai/v1/chat/completions" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "sora_video2",
    "messages": [
      {
        "role": "user",
        "content": [
          {
            "type": "text",
            "text": "Generate video: Make this figurine jump out from the desk and become a living person~"
          },
          {
            "type": "image_url",
            "image_url": {
              "url": "https://filesystem.site/cdn/download/20250407/OhFd8JofOAJCsNOCsM1Y794qnkNO3L.png"
            }
          }
        ]
      }
    ]
  }'

Image-to-Video (Base64)

import openai
import base64

client = openai.OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://api.laozhang.ai/v1"
)

# Read local image
def encode_image(image_path):
    with open(image_path, "rb") as image_file:
        return base64.b64encode(image_file.read()).decode('utf-8')

base64_image = encode_image("/path/to/image.png")

response = client.chat.completions.create(
    model="sora_video2",
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "Make this scene come alive"
                },
                {
                    "type": "image_url",
                    "image_url": {
                        "url": f"data:image/png;base64,{base64_image}"
                    }
                }
            ]
        }
    ]
)

print(response.choices[0].message.content)

Streaming Output

import openai

client = openai.OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://api.laozhang.ai/v1"
)

stream = client.chat.completions.create(
    model="sora_video2",
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "A cute cat playing with a ball in a sunny garden"
                }
            ]
        }
    ],
    stream=True
)

for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end='', flush=True)

Error Codes

| HTTP Status Code | Error Type | Description |
|---|---|---|
| 400 | Bad Request | Invalid request parameters |
| 401 | Unauthorized | Invalid or missing API Key |
| 402 | Payment Required | Insufficient balance |
| 429 | Too Many Requests | Too many requests in a short period |
| 500 | Internal Server Error | Internal server error |
| 503 | Service Unavailable | Service temporarily unavailable |

Error Response Format

{
  "error": {
    "message": "Error description",
    "type": "invalid_request_error",
    "code": "invalid_api_key"
  }
}
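A client can split the codes above into retryable and fatal cases and surface the error object's fields. These helpers are a sketch, not part of any SDK:

```python
# Sketch: classify status codes from the table above and format
# the error object. Helper names are illustrative.
RETRYABLE = {429, 500, 503}   # transient: back off and retry
FATAL = {400, 401, 402}       # fix the request, key, or balance first

def is_retryable(status_code: int) -> bool:
    return status_code in RETRYABLE

def describe_error(body: dict) -> str:
    err = body.get("error", {})
    return f"{err.get('type', 'unknown')}: {err.get('message', '')}"
```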

Rate Limits

There are currently no strict rate limits, but we recommend:
  • Limiting concurrency when batch generating (2-3 concurrent requests)
  • Avoiding large bursts of requests in a short time
  • Setting reasonable retry intervals
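The 2-3 concurrency recommendation can be enforced with a bounded thread pool; generate below is a hypothetical stand-in for the actual API call:

```python
from concurrent.futures import ThreadPoolExecutor

def generate(prompt: str) -> str:
    # Hypothetical stand-in: a real implementation would POST to
    # /v1/chat/completions here and return the video URL.
    return f"video for: {prompt}"

prompts = ["a cat in a garden", "waves at sunset", "city timelapse"]
with ThreadPoolExecutor(max_workers=3) as pool:  # cap at 3 concurrent
    results = list(pool.map(generate, prompts))
```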

Best Practices

Video generation takes 2-4 minutes, so set the client timeout to 5-10 minutes:
import httpx
import openai

client = openai.OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://api.laozhang.ai/v1",
    http_client=httpx.Client(timeout=300.0)  # 5 minutes
)
Add retry logic to handle temporary errors:
import time

max_retries = 3
for i in range(max_retries):
    try:
        response = client.chat.completions.create(...)
        break
    except Exception as e:
        if i < max_retries - 1:
            print(f"Error: {e}, retrying in 30 seconds...")
            time.sleep(30)
        else:
            raise
Download immediately after generation (valid for 1 day):
import requests
import re

# Extract link (content is the message content returned by the API)
video_url = re.search(r'https://[^\s\)]+\.mp4', content).group(0)

# Download
response = requests.get(video_url, stream=True)
with open('video.mp4', 'wb') as f:
    for chunk in response.iter_content(chunk_size=8192):
        f.write(chunk)
Use streaming output to view progress in real-time:
stream = client.chat.completions.create(
    model="sora_video2",
    messages=[...],
    stream=True
)

for chunk in stream:
    if chunk.choices[0].delta.content:
        content = chunk.choices[0].delta.content
        print(content, end='', flush=True)

SDK Support

Official SDK

  • Python: openai >= 1.0.0
  • Node.js: openai >= 4.0.0

Third-party SDK

Any SDK compatible with the OpenAI API format can be used; just change base_url.

Technical Specifications

| Specification | Value |
|---|---|
| Video Encoding | H.264 |
| Audio Encoding | AAC |
| Frame Rate | 24 fps |
| Format | MP4 |
| Watermark | None (by default) |
| Audio | Supported |
| Max File Size | ~50MB (depends on duration and quality) |

Next Steps
