Overview

Text Generation (Chat Completions) is one of the core capabilities of the LaoZhang API, supporting 200+ popular AI models for intelligent conversations and text generation. Through a unified OpenAI-compatible interface, you can easily implement:
  • Intelligent Conversations: Build chatbots and virtual assistants
  • Content Creation: Article writing, creative generation, copywriting
  • Code Assistance: Code generation, debugging, refactoring suggestions
  • Knowledge Q&A: Answer questions, knowledge retrieval, information extraction
  • Role Playing: Customized AI characters, scenario simulation
A single API key gives access to GPT-5, Claude 4, Gemini 2.5, DeepSeek, Qwen, and 200+ other mainstream models.

Quick Start

Basic Example

from openai import OpenAI

client = OpenAI(
    api_key="sk-your-laozhang-api-key",
    base_url="https://api.laozhang.ai/v1"
)

response = client.chat.completions.create(
    model="gpt-4.1",
    messages=[
        {"role": "user", "content": "Explain the history of artificial intelligence"}
    ]
)

print(response.choices[0].message.content)

Multi-turn Conversation

messages = [
    {"role": "system", "content": "You are a professional Python programming assistant"},
    {"role": "user", "content": "How to read a CSV file?"},
    {"role": "assistant", "content": "You can use pandas library's read_csv() function..."},
    {"role": "user", "content": "How to filter specific column data?"}
]

response = client.chat.completions.create(
    model="gpt-4.1",
    messages=messages
)

print(response.choices[0].message.content)
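To keep the conversation going, append the assistant's reply and the next user turn to the history before the next request. A minimal helper for this (illustrative, not part of the API):

```python
def extend_history(messages, assistant_reply, next_user_message):
    """Append the model's reply and the next user turn,
    preserving full context for the next request."""
    return messages + [
        {"role": "assistant", "content": assistant_reply},
        {"role": "user", "content": next_user_message},
    ]

# The returned list is passed as `messages` in the next
# client.chat.completions.create() call.
```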

Core Parameters

model (Required)

Specify the model name. See Model Info for details.
model="gpt-5"              # GPT-5 Latest
model="gpt-4.1"            # GPT-4.1 Fast
model="claude-sonnet-4-20250514"  # Claude 4 Sonnet
model="gemini-2.5-pro"     # Gemini 2.5 Pro
model="deepseek-chat"      # DeepSeek Chat

messages (Required)

Array of conversation messages; each message has role and content fields:
  • system: System prompt defining the AI's behavior and role
  • user: User message representing the user's input
  • assistant: Assistant message representing a previous AI response

temperature (Optional)

Controls output randomness, range 0.0 ~ 2.0, default 1.0:

Range     | Characteristics    | Use Cases
0.0 ~ 0.3 | More deterministic | Translation, summarization, code
0.7 ~ 1.0 | Balanced           | Daily conversations
1.0 ~ 2.0 | More creative      | Creative writing, brainstorming
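These ranges can be encoded as per-task defaults. A sketch with illustrative names (the task labels and helper are assumptions, not part of the API; only temperature itself is a real parameter):

```python
# Illustrative task-to-temperature mapping based on the ranges above
TASK_TEMPERATURE = {
    "translation": 0.2,    # deterministic
    "chat": 0.8,           # balanced
    "brainstorming": 1.3,  # creative
}

def build_request(prompt, task="chat", model="gpt-4.1"):
    """Build kwargs for client.chat.completions.create()."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": TASK_TEMPERATURE.get(task, 1.0),  # API default is 1.0
    }
```

Usage: `client.chat.completions.create(**build_request("Translate this...", task="translation"))`.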

stream (Optional)

Enable streaming output for better user experience:
response = client.chat.completions.create(
    model="gpt-4.1",
    messages=[{"role": "user", "content": "Write an article about AI"}],
    stream=True
)

for chunk in response:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)

Model Selection

Scenario          | Recommended Model                        | Reason
Daily Chat        | gpt-4.1-mini, deepseek-chat              | Fast, low cost
Complex Reasoning | gpt-5, claude-sonnet-4-20250514          | Powerful, accurate
Code Generation   | claude-sonnet-4-20250514, deepseek-coder | Excellent coding
Long Text         | gemini-2.5-pro, claude-3-opus            | Ultra-long context
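One way to use these recommendations programmatically is a simple lookup. The scenario keys and helper below are illustrative; the model names come from the table (first recommendation per row):

```python
# Illustrative scenario-to-model lookup; first recommendation from each row
SCENARIO_MODEL = {
    "daily_chat": "gpt-4.1-mini",
    "complex_reasoning": "gpt-5",
    "code_generation": "claude-sonnet-4-20250514",
    "long_text": "gemini-2.5-pro",
}

def pick_model(scenario, default="gpt-4.1"):
    """Return a recommended model name for a scenario."""
    return SCENARIO_MODEL.get(scenario, default)
```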

Best Practices

Optimize Prompts

# ❌ Poor prompt
"Write an article"

# ✅ Good prompt
"""Write a popular science article about AI applications in healthcare.

Requirements:
- Length: 800-1000 words
- Audience: General readers
- Structure: Introduction, use cases, case studies, future outlook
- Include 2-3 real examples"""
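Requirements like these can be generated from a template so they stay consistent across requests. A sketch; the builder and its field names are assumptions for illustration:

```python
def build_article_prompt(topic, length, audience, structure, n_examples):
    """Assemble a structured article prompt like the one above."""
    sections = ", ".join(structure)
    return (
        f"Write a popular science article about {topic}.\n\n"
        "Requirements:\n"
        f"- Length: {length} words\n"
        f"- Audience: {audience}\n"
        f"- Structure: {sections}\n"
        f"- Include {n_examples} real examples"
    )
```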

Error Handling

from openai import OpenAI, OpenAIError
import time

client = OpenAI(
    api_key="sk-your-laozhang-api-key",
    base_url="https://api.laozhang.ai/v1"
)

def chat_with_retry(messages, max_retries=3):
    for attempt in range(max_retries):
        try:
            response = client.chat.completions.create(
                model="gpt-4.1",
                messages=messages
            )
            return response.choices[0].message.content
        except OpenAIError:
            if attempt < max_retries - 1:
                time.sleep(2 ** attempt)  # exponential backoff: 1s, 2s, 4s...
                continue
            raise
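The same backoff pattern generalizes to any callable. A sketch (the `with_retry` helper and its `base_delay` knob are assumptions, not part of the API):

```python
import time

def with_retry(fn, max_retries=3, base_delay=1.0):
    """Call fn(), retrying on failure with exponential backoff
    (waits base_delay * 1, 2, 4, ... seconds between attempts)."""
    for attempt in range(max_retries):
        try:
            return fn()
        except Exception:
            if attempt == max_retries - 1:
                raise  # out of retries: propagate the last error
            time.sleep(base_delay * (2 ** attempt))

# Usage sketch:
# with_retry(lambda: client.chat.completions.create(
#     model="gpt-4.1", messages=messages))
```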