Baidu launches Qianfan-OCR-Fast on OpenRouter (66k context, $0.68/M in, $2.81/M out)
TL;DR
Baidu launches Qianfan-OCR-Fast, a multimodal OCR model, on OpenRouter. It supports 66k context at $0.68/M input tokens and $2.81/M output tokens.
What changed
Baidu launched Qianfan-OCR-Fast on OpenRouter. This multimodal model handles 66k context at $0.68 per million input tokens and $2.81 per million output tokens. It boosts OCR performance over Qianfan-OCR through specialized training data.
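At those rates, per-document cost is easy to estimate. A minimal sketch, assuming flat per-token pricing with no caching or batch discounts:

```python
# Listed OpenRouter rates for Qianfan-OCR-Fast (USD per token).
IN_RATE = 0.68 / 1_000_000   # $0.68 per million input tokens
OUT_RATE = 2.81 / 1_000_000  # $2.81 per million output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the dollar cost of one request at the listed rates."""
    return input_tokens * IN_RATE + output_tokens * OUT_RATE

# e.g. a scanned document near the 66k context limit, producing
# a few thousand tokens of extracted text:
print(estimate_cost(60_000, 5_000))
```

A request that fills most of the 66k window still lands around a nickel, which is the "aggressive pricing" in concrete terms.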
Why it matters
Developers integrating OCR into apps gain a targeted upgrade from Qianfan-OCR. The 66k context length suits long-form document processing. Vibe Builders can apply it to extract text from stylized visuals.
What to watch for
Compare Qianfan-OCR-Fast against Qianfan-OCR on OpenRouter. Run a prompt with a multi-page scanned PDF to check extraction speed and fidelity.
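OpenRouter exposes an OpenAI-compatible chat completions endpoint, so a fidelity test can be scripted directly. A minimal sketch, assuming the model slug is `baidu/qianfan-ocr-fast` (check the model page for the real identifier) and that pages are rendered to images and attached as base64 data URLs:

```python
import base64
import json
import os
import urllib.request

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"
MODEL = "baidu/qianfan-ocr-fast"  # assumed slug; verify on openrouter.ai

def build_ocr_request(image_bytes: bytes,
                      prompt: str = "Extract all text from this page.") -> dict:
    """Build a chat-completions payload with one page image attached."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": MODEL,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    }

def send(payload: dict) -> dict:
    """POST the payload; requires OPENROUTER_API_KEY in the environment."""
    req = urllib.request.Request(
        OPENROUTER_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

To benchmark, build one payload per rendered PDF page, time each `send` call, and diff the returned text against ground truth; swap `MODEL` to the base Qianfan-OCR slug for the comparison run.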
Who this matters for
- Vibe Builders: Use the model to extract text from stylized brand visuals for creative social media assets.
- Developers: Integrate the 66k context window to process multi-page scanned documents with higher fidelity.
Harsh’s take
Baidu is pushing specialized multimodal models into the open market to compete with generalist giants. By offering Qianfan-OCR-Fast on OpenRouter, they provide a clear alternative for developers who need high-performance document parsing without the overhead of massive general-purpose models. The pricing is aggressive enough to force a rethink of current document processing pipelines.
This release highlights the shift toward domain-specific intelligence. Builders should stop relying on generic models for structured data extraction tasks. Test this model against your current OCR stack to check whether the specialized training data actually improves your specific document workflows.
If the fidelity gains hold up, the cost efficiency makes this a mandatory addition to your production toolkit.
by Harsh Desai
More from OpenRouter
- Model Release: DeepSeek V4 Flash: a free, long-context AI model now on OpenRouter
OpenRouter adds DeepSeek V4 Flash for free with 256k context. The efficiency-optimized MoE model has 284B total parameters, 13B active, and supports 1M-token context for fast inference.
- Launch: OpenRouter launches Perceptron Mk1 with 33k context at $0.15/M input, $1.50/M output
OpenRouter launched Perceptron Mk1, a vision-language model for video and embodied reasoning. It processes images and videos with 33k context at $0.15/M input and $1.50/M output.
- Launch: OpenRouter adds Tencent Hunyuan model with 262K context window
OpenRouter adds Tencent Hunyuan Hy3 preview, a mixture-of-experts model with 262K context window. Pricing is $0.07/M input tokens and $0.26/M output tokens with configurable reasoning levels.