llm adds OpenAI /v1/responses endpoint support (0.32a2)
TL;DR
llm 0.32a2 adds support for OpenAI's /v1/responses endpoint, used by reasoning-capable models. It enables interleaved reasoning across tool calls and displays summarized reasoning tokens in a distinct color.
What changed
The llm CLI released version 0.32a2. For reasoning-capable models it now uses OpenAI's /v1/responses endpoint instead of /v1/chat/completions. Summarized reasoning tokens appear in a different color during prompts; the display can be suppressed with the -R or --hide-reasoning flag.
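A rough sketch of what interleaved rendering can look like. This is illustrative Python, not llm's actual implementation; the item shapes (`reasoning`, `function_call`, `message`) follow the general structure of Responses API output, and the `hide_reasoning` parameter is a stand-in for the --hide-reasoning flag.

```python
# Sketch: render interleaved /v1/responses output items, dimming
# reasoning summaries the way a CLI might. Item shapes are simplified
# assumptions, not llm's internals.

DIM = "\x1b[2m"    # ANSI "dim" style for reasoning text
RESET = "\x1b[0m"

def render_output(items, hide_reasoning=False):
    """Return printable lines for a list of Responses-style output items."""
    lines = []
    for item in items:
        if item["type"] == "reasoning":
            if hide_reasoning:
                continue  # mimic a --hide-reasoning style toggle
            for summary in item.get("summary", []):
                lines.append(f"{DIM}{summary['text']}{RESET}")
        elif item["type"] == "function_call":
            lines.append(f"tool call: {item['name']}({item['arguments']})")
        elif item["type"] == "message":
            for part in item["content"]:
                lines.append(part["text"])
    return lines

# Example: reasoning interleaved between a tool call and the final answer.
sample = [
    {"type": "reasoning", "summary": [{"text": "Need the file list first."}]},
    {"type": "function_call", "name": "ls", "arguments": '{"path": "."}'},
    {"type": "reasoning", "summary": [{"text": "Now summarize the results."}]},
    {"type": "message", "content": [{"text": "Here are your files."}]},
]

for line in render_output(sample):
    print(line)
```

With the older /v1/chat/completions endpoint there are no reasoning items to walk, which is the gap this release closes.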
Specs
- Vendor docs: https://simonwillison.net/2026/May/12/llm/#atom-everything
Why it matters
Developers gain visibility into interleaved reasoning across tool calls for OpenAI's GPT-5-class models via the new endpoint. Unlike /v1/chat/completions, /v1/responses surfaces summarized reasoning tokens, which helps when debugging prompts that involve tool use, such as code generation or multi-step analysis.
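One way this visibility pays off in debugging: because reasoning items arrive interleaved with tool calls, you can pair each call with the summary that preceded it. A minimal sketch, again assuming simplified Responses-style item shapes (not any specific library's API):

```python
# Hypothetical debugging helper: pair each tool call in a /v1/responses
# output list with the reasoning summary that immediately preceded it.

def explain_tool_calls(items):
    """Return (tool_name, reasoning_text) pairs; None if no reasoning preceded."""
    explanations = []
    last_reasoning = None
    for item in items:
        if item["type"] == "reasoning":
            last_reasoning = " ".join(s["text"] for s in item.get("summary", []))
        elif item["type"] == "function_call":
            explanations.append((item["name"], last_reasoning))
            last_reasoning = None  # each summary explains at most one call here
    return explanations

sample = [
    {"type": "reasoning", "summary": [{"text": "Check the schema before querying."}]},
    {"type": "function_call", "name": "describe_table", "arguments": "{}"},
    {"type": "function_call", "name": "run_query", "arguments": "{}"},
]

print(explain_tool_calls(sample))
```

A pairing like this makes it easy to spot tool calls the model made without any stated rationale.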
What to watch for
Test reasoning display on OpenAI's reasoning-capable models using llm prompts. Compare output against the official OpenAI Python SDK to verify endpoint compatibility. Monitor flag behavior in automated workflows.
Who this matters for
- Vibe Builders: Use the new reasoning visibility to debug complex multi-step agentic workflows.
- Basic Users: Toggle the -R flag to clean up your terminal output when running reasoning-capable models.
Harsh’s take
The shift to the /v1/responses endpoint marks a necessary evolution in how we interact with reasoning models. By surfacing interleaved reasoning tokens, the llm tool provides a transparent view into the internal logic of GPT-5 class models during tool execution. This is a practical win for anyone building complex agentic loops who needs to verify why a model chose a specific path.
Developers should prioritize testing this update against their existing automated pipelines to ensure flag compatibility. The ability to toggle reasoning visibility is a simple but effective quality-of-life improvement for CLI users. Stop treating model outputs as black boxes and start using these visibility tools to refine your prompt engineering and tool-calling strategies.
by Harsh Desai
More AI news
- Feature: PitchDrop.ai adds a feature to turn pitches into live branded URLs
PitchDrop.ai launches a feature that converts pitches into live, branded URLs.
- Feature: Vercel launches Trusted Sources to secure your deployments
Vercel introduces Trusted Sources, letting protected deployments accept short-lived OIDC tokens from authorized Vercel projects and external services instead of long-lived secrets. Callers attach tokens in the x-vercel-trusted-oidc-idp-token header for Vercel to verify signatures and claims.
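For callers, the mechanics reduce to sending the token in that header. A hedged sketch in Python; the URL and token value are placeholders, and only the header name comes from the Trusted Sources description above:

```python
# Sketch: attach a short-lived OIDC token in the
# x-vercel-trusted-oidc-idp-token header when calling a protected
# deployment. URL and token are hypothetical placeholders.
import urllib.request

def build_trusted_request(url, oidc_token):
    """Build a request carrying the OIDC token for Vercel to verify."""
    return urllib.request.Request(
        url,
        headers={"x-vercel-trusted-oidc-idp-token": oidc_token},
    )

req = build_trusted_request(
    "https://example-project.vercel.app/api/health",  # placeholder deployment
    "eyJ...",  # short-lived OIDC token from an authorized issuer
)
# urllib normalizes header capitalization; HTTP headers are case-insensitive.
print(req.get_header("X-vercel-trusted-oidc-idp-token"))
```

The point of the design is that this token is minted per-request by a trusted issuer, so nothing long-lived needs to live in the caller's environment.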
- Feature: BossHogg launches agent-first CLI for PostHog analytics and flags
BossHogg releases an agent-first CLI for PostHog analytics and feature flags.