GLM-4.7 (Z.AI) — Quick Start

Send a small test chat request to GLM-4.7 through Tay’s internal adapter and copy the returned JSON.

What it does

  • Sends a chat/completions request to Z.AI with model glm-4.7 (the request shape is sketched after this list).
  • Returns the provider JSON response (OpenAI-like shape).
  • Lets you copy the returned JSON for inspection or debugging.
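
A rough sketch of what the adapter sends upstream, in TypeScript. The endpoint URL and environment variable names are assumptions for illustration, not Tay's actual configuration:

// Sketch only: the endpoint URL and env var names below are assumed, not Tay's real config.
const ZAI_URL = process.env.ZAI_BASE_URL ?? "https://api.z.ai/v1/chat/completions";

async function sendGlmTest(message: string) {
  const res = await fetch(ZAI_URL, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.ZAI_API_KEY}`, // assumed env var name
    },
    // OpenAI-like chat/completions body; no streaming in this slice.
    body: JSON.stringify({
      model: "glm-4.7",
      messages: [{ role: "user", content: message }],
      stream: false,
    }),
  });
  if (!res.ok) throw new Error(`glm_upstream_error (${res.status})`);
  return res.json(); // provider JSON in an OpenAI-like shape
}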

What it does not do

  • It does not change Prompt Dictator or any existing tool behavior.
  • It does not store your raw prompts in Tay.
  • It does not stream responses in this slice.

Step-by-step

  1. Sign in (the internal adapter route requires an authenticated session).
  2. Go to /throne.
  3. In “LLM Providers → GLM-4.7 (Z.AI)”, paste a small test message.
  4. Press “Send test (GLM)” (roughly equivalent to the call sketched after these steps).
  5. If it succeeds, copy the JSON result.
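
Under the hood, pressing “Send test (GLM)” is roughly equivalent to the call below. The route path and request body fields are assumptions for illustration; only /throne is the documented entry point.

// Illustrative only: the adapter route path and body field name are assumed.
async function sendTestFromThrone(message: string) {
  const res = await fetch("/api/llm/glm", {      // assumed route path
    method: "POST",
    headers: { "Content-Type": "application/json" },
    credentials: "include",                      // the route requires a signed-in session
    body: JSON.stringify({ message }),           // assumed body shape
  });
  const json = await res.json();
  if (!res.ok) throw new Error(json.error ?? "glm_upstream_error");
  return json; // the provider JSON you can copy in step 5
}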

Example

Example input

Hello. Reply with a single sentence.

Example output snippet

{
  "id": "...",
  "object": "chat.completion",
  "model": "glm-4.7",
  "choices": [ ... ]
}
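
Because the response follows an OpenAI-like shape, pulling the assistant's text out of the copied JSON can look like the sketch below. The structure inside choices is assumed from that convention, since the snippet above elides it.

// Assumed shape: only the top-level keys are confirmed by the snippet above.
type ChatCompletion = {
  id: string;
  object: "chat.completion";
  model: string;
  choices: { message: { role: string; content: string } }[];
};

function firstReply(completion: ChatCompletion): string {
  // Returns the assistant's text, e.g. the single sentence from the example input.
  return completion.choices[0]?.message?.content ?? "";
}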

Error meanings

  • unauthorized — you’re signed out. Sign in and try again.
  • missing_zai_api_key — GLM isn’t configured on this deployment.
  • glm_timeout — the Z.AI API did not respond in time.
  • glm_upstream_error — Z.AI returned a non-2xx response.
  • llm_provider_not_configured — provider disabled or unknown.

Privacy note: No raw prompts are stored by Tay.
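
If you script against the adapter, branching on these error codes might look like the sketch below. The assumption that the code arrives in an error field of the JSON body is illustrative, not confirmed.

// Assumption: the adapter reports these codes in an "error" field of the JSON body.
type GlmTestError =
  | "unauthorized"
  | "missing_zai_api_key"
  | "glm_timeout"
  | "glm_upstream_error"
  | "llm_provider_not_configured";

function explain(error: GlmTestError): string {
  switch (error) {
    case "unauthorized":
      return "Signed out. Sign in and try again.";
    case "missing_zai_api_key":
      return "GLM is not configured on this deployment.";
    case "glm_timeout":
      return "The Z.AI API did not respond in time.";
    case "glm_upstream_error":
      return "Z.AI returned a non-2xx response.";
    case "llm_provider_not_configured":
      return "Provider disabled or unknown.";
  }
}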

When to use it

  • When you need a quick sanity check that GLM wiring works.
  • When you want to inspect the raw provider response shape.
  • When you want a provider call that’s isolated from core tools.
  • When you’re debugging timeouts or configuration safely.

When not to use it

  • When you need deterministic output (this is a model call).
  • When you don’t want data sent to an external provider.
  • When the provider isn’t configured or you’re signed out.

Next steps