After months of testing various AI models with Claude Code - from my positive experiences with DeepSeek v3.1 and GLM-4.5 to the frustrating encounter with Qwen3-Coder-Plus - I’ve developed a practical solution for developers who want to easily access multiple AI providers through a unified interface.
I’m excited to share this multi-model launcher. This isn’t just another configuration - it’s a streamlined approach that addresses the core challenges I faced when working with different models.
The Problem: The Hassle of Multiple Configurations
When I started testing different models with Claude Code, I quickly encountered several frustrating issues:
- Configuration complexity - Each model required different setup processes
- Environment management - Juggling multiple API keys and endpoints
After my negative experience with Qwen3-Coder-Plus (which you can read about in my previous post), I realized I needed a more systematic approach - one that would let me access different models without the configuration overhead.
The Solution: A Smart Launcher Script
The result is a shell script that acts as a smart launcher for Claude Code, setting up the right environment variables for your chosen AI provider with a simple `--model` parameter.
How It Works
The `.claude.sh` script works by:
- Defining a function - It creates a `claude` function that sets up the environment
- Parsing the `--model` parameter - It supports both `--model model_name` and `--model=model_name` formats
- Setting environment variables - Based on the model name, it configures the appropriate variables
- Running the real command - It executes the actual `claude` binary with the proper settings
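The steps above can be sketched as a single shell function. This is illustrative only: the real `.claude.sh` is more complete, and the `ANTHROPIC_BASE_URL`/`ANTHROPIC_AUTH_TOKEN` variable names are my assumption about what Claude Code reads, not something this post specifies.

```shell
# Sketch of the wrapper pattern: a shell function shadows the real
# `claude` binary, configures the environment, then would invoke the
# binary via `command claude`, which bypasses the function itself.
claude() {
  # 1. Parse --model (simplified: only handles `--model name` up front)
  local model=""
  if [ "$1" = "--model" ]; then
    model="$2"
  fi

  # 2. Configure provider settings based on the model name (one example)
  if [ "$model" = "deepseek-chat" ]; then
    export ANTHROPIC_BASE_URL="https://api.deepseek.com/anthropic"
    export ANTHROPIC_AUTH_TOKEN="$DEEPSEEK_API_KEY"
  fi

  # 3. Hand off to the real binary; echoed here instead of executed
  echo "would run: command claude $*"
}

claude --model deepseek-chat
```

The key trick is `command claude`, which tells the shell to skip the function and run the actual binary, so the wrapper can share the binary's name without recursing into itself.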
Supported Providers
The script currently supports four major Chinese AI providers:
| Provider | Environment Variable | Base URL | Supported Models |
|---|---|---|---|
| Zhipu (智谱) | `ZHIPU_API_KEY` | `https://open.bigmodel.cn/api/anthropic` | `glm-4.5` |
| Alibaba (阿里) | `DASHSCOPE_API_KEY` | `https://dashscope.aliyuncs.com/api/v2/apps/claude-code-proxy` | `qwen3-coder-plus` |
| DeepSeek | `DEEPSEEK_API_KEY` | `https://api.deepseek.com/anthropic` | `deepseek-chat` |
| Moonshot (月之暗面) | `MOONSHOT_API_KEY` | `https://api.moonshot.cn/anthropic` | `kimi-k2-0905-preview` |
Installation: Three Simple Steps
1. Install Claude Code (if not already installed)
npm install -g @anthropic-ai/claude-code
2. Add the Script to Your Shell Configuration
Add this line to your `.zshrc` or `.bashrc`:
source /path/to/your/repo/.claude.sh
3. Set Up Your API Keys
Configure the environment variables for the providers you want to use:
export ZHIPU_API_KEY=your_zhipu_key
export DASHSCOPE_API_KEY=your_alibaba_key
export DEEPSEEK_API_KEY=your_deepseek_key
export MOONSHOT_API_KEY=your_moonshot_key
And that’s it! You’re ready to start using any supported model.
Usage: Simple Model Selection
The beauty of this setup is its simplicity. Instead of managing multiple configurations, you can select your model with a single parameter:
# Use Zhipu's GLM-4.5 for complex coding tasks
claude --model glm-4.5
# Use DeepSeek for cost-effective development
claude --model deepseek-chat
# Try Kimi for creative problem-solving
claude --model kimi-k2-0905-preview
# Use default Claude model (bypasses the script)
claude
The script provides helpful feedback:
🚀 Intercepted model 'glm-4.5'. Routing to zhipu provider...
If you forget to set an API key, it gives you a clear error:
❌ ZHIPU_API_KEY environment variable not set!
Please set it with: export ZHIPU_API_KEY=your_api_key
Key Features That Address Real Challenges
1. Automatic Provider Selection
The script uses a mapping system that automatically selects the correct provider based on the model name. No more memorizing which endpoint goes with which model.
2. Environment Variable Validation
One of the most frustrating experiences is realizing mid-task that you forgot to set an API key. The script validates the required environment variable up front, before making any requests.
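A sketch of that validation pattern, using bash-style `${!var}` indirect expansion (the real script is zsh, where the equivalent is `${(P)var}`; `require_api_key` is an illustrative name, not the script's):

```shell
# Sketch: fail fast if the API key for the chosen provider is missing.
require_api_key() {
  local var="$1"
  # ${!var} reads the variable whose *name* is stored in $var
  if [ -z "${!var}" ]; then
    echo "❌ $var environment variable not set!"
    echo "Please set it with: export $var=your_api_key"
    return 1
  fi
}

export ZHIPU_API_KEY=demo_key
require_api_key ZHIPU_API_KEY && echo "ZHIPU_API_KEY looks good"
```

Because the check runs before any request is made, a missing key fails immediately with an actionable message instead of a cryptic HTTP error mid-session.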
3. Full Compatibility
The script maintains 100% compatibility with the standard Claude Code interface. All parameters, flags, and functionality work exactly as expected.
4. Flexible Model Parameter
The script supports both `--model model_name` and `--model=model_name` formats, giving you the flexibility to use whichever style you prefer.
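Both spellings can be handled with a single `case` statement. A minimal sketch, assuming a `parse_model` helper (an illustrative name, not part of the actual script):

```shell
# Sketch: extract the model name from either `--model name` or `--model=name`.
parse_model() {
  local model=""
  while [ $# -gt 0 ]; do
    case "$1" in
      --model=*) model="${1#--model=}"; shift ;;  # --model=glm-4.5
      --model)   model="$2"; shift 2 ;;           # --model glm-4.5
      *)         shift ;;                         # ignore other args here
    esac
  done
  echo "$model"
}

parse_model --model glm-4.5         # → glm-4.5
parse_model --model=deepseek-chat   # → deepseek-chat
```

The `${1#--model=}` expansion strips the `--model=` prefix, which is what makes the single-token form work without any external tools.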
5. Long Timeout for Complex Tasks
Requests use a 10-minute timeout (`API_TIMEOUT_MS=600000`) to accommodate the longer processing times needed for complex coding tasks.
The Technical Implementation
Provider Configuration System
The script uses an associative array to configure provider mappings. Each entry contains:
- Environment variable name for the API key
- Base URL for the provider’s Anthropic-compatible API
- Comma-separated list of supported models
typeset -A PROVIDERS=(
zhipu "ZHIPU_API_KEY|https://open.bigmodel.cn/api/anthropic|glm-4.5"
alibaba "DASHSCOPE_API_KEY|https://dashscope.aliyuncs.com/api/v2/apps/claude-code-proxy|qwen3-coder-plus"
deepseek "DEEPSEEK_API_KEY|https://api.deepseek.com/anthropic|deepseek-chat"
moonshot "MOONSHOT_API_KEY|https://api.moonshot.cn/anthropic|kimi-k2-0905-preview"
)
Provider Selection
The script matches the requested model against each provider's model list, then sets the appropriate environment variables for the selected provider.
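That lookup can be sketched as a loop over the provider table (bash syntax with `declare -A`; the post's script uses zsh's `typeset -A`, but the `KEY_VAR|base_url|model1,model2` entry layout is the same, and `find_provider` is an illustrative name):

```shell
# Sketch of model→provider lookup over the "KEY_VAR|base_url|models" table.
declare -A PROVIDERS=(
  [zhipu]="ZHIPU_API_KEY|https://open.bigmodel.cn/api/anthropic|glm-4.5"
  [deepseek]="DEEPSEEK_API_KEY|https://api.deepseek.com/anthropic|deepseek-chat"
)

find_provider() {
  local model="$1" provider key_var base_url models
  for provider in "${!PROVIDERS[@]}"; do
    # Split "KEY_VAR|base_url|models" on the pipe separators
    IFS='|' read -r key_var base_url models <<<"${PROVIDERS[$provider]}"
    # Exact match against the comma-separated model list
    case ",$models," in
      *",$model,"*) echo "$provider"; return 0 ;;
    esac
  done
  return 1
}

find_provider glm-4.5         # → zhipu
find_provider deepseek-chat   # → deepseek
```

Wrapping the model list in commas (`,$models,`) before matching keeps the comparison exact, so `glm-4` would not accidentally match an entry containing `glm-4.5`.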
Error Handling
Comprehensive error handling provides clear feedback for common issues:
- CLI not found (with installation instructions)
- Missing API key (with the specific environment variable needed)
- Provider routing confirmation
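The CLI-not-found case is the standard `command -v` probe. A sketch (with `check_cli` as an illustrative helper name):

```shell
# Sketch: verify a CLI exists on PATH before trying to run it.
check_cli() {
  if ! command -v "$1" >/dev/null 2>&1; then
    echo "❌ $1 CLI not found."
    echo "Install it with: npm install -g @anthropic-ai/claude-code"
    return 1
  fi
}

check_cli sh && echo "sh found on PATH"
```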
Practical Benefits: Why This Approach Works
For Model Comparison
This setup makes it incredibly easy to compare different models. No more switching configurations or remembering different endpoints - just change the `--model` parameter and you're testing a different AI.
For Cost Optimization
You can easily choose between more expensive, high-quality models for complex tasks and more cost-effective models for routine development.
For Redundancy
If one provider has downtime or rate limits, you can instantly switch to another without changing anything but the model parameter.
For Specialized Use Cases
Different models excel at different tasks. With this setup, you can:
- Use GLM-4.5 for complex architectural decisions
- Switch to DeepSeek for cost-sensitive production work
- Try Kimi for creative problem-solving
Getting Started Today
The full code and setup instructions are available in my GitHub repository: https://github.com/myimilo/my-vibe-coding-setup.
This launcher has significantly improved my development workflow with AI models. Instead of fighting with configurations, I can focus on choosing the right tool for each task - and that makes all the difference in practical, day-to-day coding work.