GLM-4.7 Now on SiliconFlow: Advanced Coding, Reasoning & Tool Use Capabilities
Source: Dev.to
Overview
GLM‑4.7, Z.ai’s latest flagship model, is now available on SiliconFlow with Day 0 support. Compared with its predecessor GLM‑4.6, this release brings significant advancements across coding, complex reasoning, and tool utilization—delivering performance that rivals or even outperforms industry leaders like Claude Sonnet 4.5 and GPT‑5.1.
SiliconFlow currently supports the entire GLM series, including GLM‑4.5, GLM‑4.5‑Air, GLM‑4.5V, GLM‑4.6, GLM‑4.6V, and now GLM‑4.7.
SiliconFlow Day 0 Support
- Competitive Pricing: GLM‑4.7 at $0.6 per million input tokens and $2.2 per million output tokens
- 205K Context Window: Tackle complex coding tasks, deep document analysis, and extended agentic workflows.
- Anthropic & OpenAI‑compatible APIs: Deploy via SiliconFlow and integrate seamlessly into Claude Code, Kilo Code, Cline, Roo Code, and other mainstream agent workflows.
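At those per-token rates, estimating request cost is simple arithmetic. A small illustrative helper (the function and its name are mine, not part of the SiliconFlow API):

```python
def glm47_cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Estimate GLM-4.7 cost at $0.6/M input and $2.2/M output tokens."""
    return input_tokens / 1e6 * 0.6 + output_tokens / 1e6 * 2.2

# e.g. a prompt that fills much of the 205K context with a 4,096-token reply:
print(round(glm47_cost_usd(205_000, 4_096), 4))  # → 0.132
```

Even a near-context-limit request comes in at roughly 13 cents at these rates.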
What Makes GLM‑4.7 Special
Core Coding Excellence
GLM‑4.7 sets a new standard for multilingual, agentic coding and terminal‑based tasks. Compared to GLM‑4.6, the improvements are substantial:
- 73.8 % (+5.8 %) on SWE‑bench Verified
- 66.7 % (+12.9 %) on SWE‑bench Multilingual
- 41 % (+16.5 %) on Terminal Bench 2.0
The model now supports “thinking before acting,” enabling more reliable performance on complex tasks across mainstream agent frameworks.
Vibe Coding
GLM‑4.7 makes a major leap forward in UI quality. It produces cleaner, more modern webpages and generates better‑looking slides with more accurate layout and sizing—ideal for prototyping interfaces or creating presentations.
Advanced Tool Using
Tool utilization has been significantly enhanced. On multi‑step benchmarks like τ²‑Bench and web‑browsing tasks via BrowseComp, GLM‑4.7 surpasses both Claude Sonnet 4.5 and GPT‑5.1 High, demonstrating superior capability for complex, real‑world workflows.
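In OpenAI-compatible APIs, tool use is typically driven by a `tools` array of JSON-schema function definitions in the request body. A hypothetical sketch of such a payload (the `get_weather` tool is invented for illustration and is not a SiliconFlow built-in):

```python
# Hypothetical tool definition in the OpenAI function-calling format;
# get_weather is illustrative only, not a SiliconFlow-provided tool.
payload = {
    "model": "zai-org/GLM-4.7",
    "messages": [{"role": "user", "content": "What's the weather in Berlin?"}],
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Look up current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
    "tool_choice": "auto",  # let the model decide whether to call the tool
}
print(payload["tools"][0]["function"]["name"])  # → get_weather
```

When the model decides to call a tool, the response carries the chosen function name and JSON arguments, which your agent executes and feeds back as a `tool` message.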
Complex Reasoning Capabilities
Mathematical and reasoning abilities see a substantial boost, with GLM‑4.7 achieving 42.8 % (+12.4 %) on the HLE (Humanity’s Last Exam) benchmark compared to GLM‑4.6. Significant improvements are also observed in chat, creative writing, and role‑play scenarios.
Get Started Immediately
- Explore – Try GLM‑4.7 in the SiliconFlow playground.
- Integrate – Use the OpenAI/Anthropic‑compatible API. See the full specifications in the SiliconFlow API documentation.
```python
import os

import requests

url = "https://api.siliconflow.com/v1/chat/completions"

payload = {
    "model": "zai-org/GLM-4.7",
    "messages": [
        {"role": "system", "content": "You are an assistant"},
        {"role": "user", "content": "What's the weather like in America?"}
    ],
    "stream": True,           # stream tokens back as they are generated
    "max_tokens": 4096,
    "enable_thinking": True,  # enable the model's "thinking before acting" mode
    "temperature": 1,
    "top_p": 0.95
}

headers = {
    # Read the key from the environment rather than hard-coding it
    "Authorization": f"Bearer {os.environ['SILICONFLOW_API_KEY']}",
    "Content-Type": "application/json"
}

response = requests.post(url, json=payload, headers=headers)
print(response.text)
```
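Because the request sets `"stream": True`, the endpoint returns server-sent events rather than a single JSON object, so `response.text` prints raw `data:` lines. A minimal sketch of assembling the assistant's reply from such a stream, assuming the OpenAI-style `data: {json}` chunk format with a `[DONE]` sentinel (the chunk lines below are fabricated for illustration):

```python
import json

def extract_stream_text(sse_lines):
    """Collect assistant text from OpenAI-style SSE chunk lines."""
    pieces = []
    for line in sse_lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip keep-alives and blank separator lines
        data = line[len("data:"):].strip()
        if data == "[DONE]":  # sentinel that ends the stream
            break
        chunk = json.loads(data)
        delta = chunk["choices"][0].get("delta", {})
        pieces.append(delta.get("content") or "")
    return "".join(pieces)

# Fabricated example chunks in the OpenAI streaming format:
sample = [
    'data: {"choices": [{"delta": {"content": "Hello"}}]}',
    'data: {"choices": [{"delta": {"content": ", world"}}]}',
    'data: [DONE]',
]
print(extract_stream_text(sample))  # → Hello, world
```

In a real client you would iterate `response.iter_lines()` instead of a prepared list, but the parsing logic is the same.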