What Happened

Two Chinese large-model providers — Zhipu AI (智谱AI) and MiniMax — have published developer-facing integration guides targeting engineers locked out of Anthropic and OpenAI APIs by regional access restrictions, as documented in a technical walkthrough published on Juejin (掘金) in July 2025. Both providers expose endpoints compatible with the Anthropic SDK and the Claude Code toolchain, requiring only a base_url swap to redirect traffic from Anthropic's servers to domestic infrastructure.

Zhipu AI's GLM-5 and GLM-5-Turbo models are available via https://open.bigmodel.cn/api/coding/paas/v4, configured through Claude Code's ~/.claude/settings.json by overriding ANTHROPIC_BASE_URL and ANTHROPIC_AUTH_TOKEN. MiniMax routes through https://api.minimax.io/anthropic, accepting standard anthropic Python SDK calls with only the base_url and api_key changed.

Why It Matters

The practical implication for engineering teams in China — or any organization with procurement constraints on U.S. model providers — is a drop-in substitution path. The Anthropic SDK compatibility layer means no code refactoring: teams already using client.messages.create() patterns can redirect to either provider without touching application logic.
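
To make the "no refactoring" claim concrete: the Anthropic Python SDK simply appends /v1/messages to whatever base_url it is given and passes the key in the x-api-key header, so the substitution reduces to two constructor arguments. The sketch below builds the request shape by hand to make that visible — it is an illustration of the wire format under those assumptions, not a tested client, and the key values are placeholders.

```python
import json

def build_messages_request(base_url: str, api_key: str, model: str, prompt: str):
    """Illustrative only: construct the Anthropic Messages API request that
    the SDK would send, showing that only base_url, key, and model differ."""
    url = base_url.rstrip("/") + "/v1/messages"
    headers = {
        "x-api-key": api_key,               # the SDK passes the key in this header
        "anthropic-version": "2023-06-01",  # standard Messages API version header
        "content-type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "max_tokens": 1024,
        "messages": [{"role": "user", "content": prompt}],
    })
    return url, headers, body

# Same application logic, two providers: only endpoint, key, and model change.
zhipu = build_messages_request(
    "https://open.bigmodel.cn/api/coding/paas/v4", "zhipu-key", "glm-5", "hi")
minimax = build_messages_request(
    "https://api.minimax.io/anthropic", "minimax-key", "MiniMax-M2.7", "hi")
```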

This also signals a maturation in the Chinese LLM ecosystem. Both providers are competing not just on model capability but on API surface compatibility — effectively treating Anthropic's API design as the de facto standard worth cloning. That is a meaningful market signal about Anthropic's developer mindshare, even in markets where it cannot directly sell.

For CTOs evaluating supply chain risk in AI infrastructure, these providers represent a documented fallback path. The pricing structures are also notably different from Anthropic's, with a GLM-4.7-Flash 200K tier listed as free in the source article's pricing table.

The Technical Detail

Zhipu AI: GLM Model Line and Pricing

According to the source, Zhipu's current model lineup includes:

  • GLM-5-Turbo: ¥5/M input tokens (0–32K context), ¥22/M output; ¥7/M input (32K+), ¥26/M output
  • GLM-5: ¥4/M input (0–32K), ¥18/M output; ¥6/M input (32K+), ¥22/M output
  • GLM-4.7-Flash: ¥0.5/M input, ¥3/M output (up to 200K context)
  • GLM-4.7-Flash 200K: Listed as free in the source's pricing table

Zhipu also offers a CodingPlan subscription marketed specifically at coding workloads, described in the source as providing 60%+ cost savings versus pay-per-token rates. The plan sells out daily at 10:00 local time according to the article, suggesting constrained supply.
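
The claimed 60%+ saving is easy to sanity-check against the pay-per-token table above. The sketch below uses the GLM-5 0–32K rates from the source (¥4/M input, ¥18/M output); the monthly token volume is a hypothetical assumption for illustration, since the source does not publish the CodingPlan price itself.

```python
# GLM-5 pay-per-token rates (0-32K context tier), in yuan per million
# tokens, taken from the source's pricing table.
INPUT_RATE = 4.0
OUTPUT_RATE = 18.0

def monthly_cost_yuan(input_m_tokens: float, output_m_tokens: float) -> float:
    """Pay-per-token monthly spend for a given token volume (in millions)."""
    return input_m_tokens * INPUT_RATE + output_m_tokens * OUTPUT_RATE

# Hypothetical team volume: 300M input + 60M output tokens per month.
pay_per_token = monthly_cost_yuan(300, 60)  # 300*4 + 60*18 = 2280.0 yuan
break_even_plan = pay_per_token * 0.4       # a plan at or below this price
                                            # delivers the claimed 60%+ saving
```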

Claude Code configuration via settings.json:

```json
{
  "env": {
    "ANTHROPIC_BASE_URL": "https://open.bigmodel.cn/api/coding/paas/v4",
    "ANTHROPIC_AUTH_TOKEN": "your-api-key",
    "ANTHROPIC_DEFAULT_OPUS_MODEL": "glm-5.1",
    "ANTHROPIC_DEFAULT_SONNET_MODEL": "glm-5-turbo"
  }
}
```

Zhipu also ships an official CLI helper (npx @z_ai/coding-helper) that automates CodingPlan loading into IDEs, MCP service configuration, and usage monitoring — reducing manual configuration overhead for teams onboarding multiple developers.

MiniMax: Call-Count Billing Model

MiniMax takes a structurally different approach: rather than per-token billing, it sells annual subscriptions priced by call volume with a 5-hour refresh window. According to the source's pricing table:

  • Starter: ¥290/year — 600 calls per 5-hour window, 50 TPS
  • Plus: ¥490/year — 1,500 calls per 5-hour window, 50 TPS
  • Max: ¥1,190/year — 4,500 calls per 5-hour window, 50 TPS
  • Max Speed: ¥1,990/year — 4,500 calls per 5-hour window, 100 TPS
  • Ultra Speed: ¥8,990/year — 30,000 calls per 5-hour window, 100 TPS
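
Call-count billing with a rolling refresh changes how clients should throttle: the budget is calls per 5 hours, not tokens per minute. A minimal client-side tracker might look like the following — note this assumes a sliding window, since the source does not specify whether MiniMax's 5-hour window is sliding or fixed.

```python
import time
from collections import deque

class CallWindowBudget:
    """Track calls against a per-window quota (e.g. 600 calls / 5 hours).

    Assumption: a sliding window. MiniMax's actual reset semantics are not
    specified in the source and may be a fixed window instead."""

    def __init__(self, max_calls: int, window_seconds: float = 5 * 3600):
        self.max_calls = max_calls
        self.window_seconds = window_seconds
        self._calls = deque()  # timestamps of calls still inside the window

    def try_acquire(self, now=None) -> bool:
        """Record a call if quota remains in the current window."""
        now = time.time() if now is None else now
        # Drop timestamps that have aged out of the window.
        while self._calls and now - self._calls[0] >= self.window_seconds:
            self._calls.popleft()
        if len(self._calls) >= self.max_calls:
            return False
        self._calls.append(now)
        return True

# Starter plan: 600 calls per 5-hour window.
budget = CallWindowBudget(max_calls=600)
```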

The Anthropic SDK integration for MiniMax's MiniMax-M2.7 model requires only two parameter changes:

```python
import anthropic

client = anthropic.Anthropic(
    base_url="https://api.minimax.io/anthropic",
    api_key="your-token-plan-key",
)

response = client.messages.create(
    model="MiniMax-M2.7",
    max_tokens=4096,
    messages=[{"role": "user", "content": "write a quicksort"}],
)
```

What To Watch

  • Pricing stability: The source notes Zhipu's CodingPlan has already increased in price three times since launch — teams locking in annual contracts should factor in repricing risk at renewal.
  • Model capability benchmarks: Neither provider's coding benchmark scores appear in the source article. Independent evaluations of GLM-5 and MiniMax-M2.7 on SWE-bench or HumanEval would clarify where these models sit relative to Claude Sonnet or GPT-4o — watch for third-party evals in the next 30 days.
  • OpenCode integration: Zhipu explicitly supports OpenCode (opencode-ai) in addition to Claude Code. As OpenCode gains adoption among developers avoiding Anthropic's pricing, this integration path will matter more.
  • Anthropic's China strategy: The existence of drop-in API clones targeting Anthropic's toolchain — and the developer demand driving them — is a signal Anthropic may need to address with regional pricing or partnerships, particularly if enterprise procurement teams in Asia-Pacific start standardizing on domestic alternatives.