Advanced Setup - This integration requires running a local proxy. For simpler setups, consider Aider or OpenCode.
Claude Code is Anthropic’s official CLI for AI-assisted coding. While it’s designed for Claude models, you can route requests to Infercom using ccproxy, a LiteLLM-based proxy.
How It Works
ccproxy intercepts Claude Code’s API calls and routes them to your configured provider (Infercom). This allows you to use Claude Code’s interface while running inference on EU sovereign infrastructure.
Claude Code CLI -> ccproxy (localhost) -> Infercom API
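Conceptually, the proxy's job is to match the model name Claude Code sends and rewrite the request for the configured backend. A minimal sketch of that routing step (illustrative only; ccproxy/LiteLLM's real implementation differs):

```python
# Illustrative sketch of the proxy's routing step: match the incoming
# model name and rewrite the request for the configured backend.
ROUTES = {
    # model name Claude Code sends -> backend model + base URL
    "claude-sonnet-4-6": {
        "model": "openai/MiniMax-M2.5",
        "api_base": "https://api.infercom.ai/v1",
    },
}

def route(request: dict) -> dict:
    """Rewrite an incoming request according to the routing table."""
    target = ROUTES[request["model"]]
    return {**request, "model": target["model"], "api_base": target["api_base"]}

req = {"model": "claude-sonnet-4-6", "messages": [{"role": "user", "content": "hi"}]}
print(route(req)["model"])  # openai/MiniMax-M2.5
```

Claude Code never talks to Infercom directly; it only sees a local Anthropic-shaped endpoint.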
Prerequisites
Installation
Step 1: Install ccproxy
# Using npm
npm install -g ccproxy
# Or using Bun
bun install -g ccproxy
Step 2: Create Configuration
Create ~/.ccproxy/config.yaml:
model_list:
  # Route Sonnet requests to Infercom MiniMax
  - model_name: infercom
    litellm_params:
      model: openai/MiniMax-M2.5
      api_base: https://api.infercom.ai/v1
      api_key: "your-infercom-api-key"
  # Keep Claude models for complex tasks (optional)
  - model_name: claude-sonnet-4-6
    litellm_params:
      model: anthropic/claude-sonnet-4-6
      api_base: https://api.anthropic.com
      api_key: "your-anthropic-api-key"

litellm_settings:
  drop_params: true  # Required - drops unsupported params like reasoning_effort
Create ~/.ccproxy/ccproxy.yaml for routing rules:
ccproxy:
  rules:
    # Route Sonnet to Infercom
    - name: infercom
      rule: ccproxy.rules.MatchModelRule
      params:
        - model_name: claude-sonnet-4-6

litellm:
  host: 127.0.0.1
  port: 4000
drop_params: true is required because MiniMax doesn’t support all Claude parameters like reasoning_effort.
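The effect of drop_params can be sketched like this (illustrative Python, not LiteLLM's actual code; the supported-parameter set below is an assumption for the example): parameters the backend doesn't recognize are removed before the request is forwarded.

```python
# Sketch of what drop_params does: strip request parameters the target
# backend doesn't support before forwarding. Illustrative only.
SUPPORTED = {"model", "messages", "temperature", "max_tokens", "stream"}

def drop_unsupported(params: dict) -> dict:
    """Return a copy of the request with unsupported parameters removed."""
    return {k: v for k, v in params.items() if k in SUPPORTED}

req = {"model": "openai/MiniMax-M2.5", "messages": [], "reasoning_effort": "high"}
forwarded = drop_unsupported(req)
print("reasoning_effort" in forwarded)  # False
```

Without this, the backend would reject the whole request instead of ignoring the one parameter it doesn't know.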
Map multiple Claude model names to ensure compatibility with different Claude Code versions.
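For example, extra entries in ~/.ccproxy/config.yaml can alias several Claude model names to the same Infercom backend (the second name below is the dated variant mentioned under Troubleshooting; match whatever your Claude Code version actually requests):

```yaml
model_list:
  - model_name: claude-sonnet-4-6
    litellm_params:
      model: openai/MiniMax-M2.5
      api_base: https://api.infercom.ai/v1
      api_key: "your-infercom-api-key"
  - model_name: claude-sonnet-4-20250514
    litellm_params:
      model: openai/MiniMax-M2.5
      api_base: https://api.infercom.ai/v1
      api_key: "your-infercom-api-key"
```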
Step 3: Start ccproxy
cd ~/.ccproxy && ccproxy start
You should see:
ccproxy running on http://127.0.0.1:4000
Keep this terminal open while using Claude Code.
Usage
In a new terminal, set the API endpoint:
export ANTHROPIC_BASE_URL="http://127.0.0.1:4000"
Run Claude Code
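With the proxy running and the variable exported, start the CLI from the same shell (assuming the standard claude binary name):

```shell
claude
```

Claude Code now sends its requests to ccproxy instead of api.anthropic.com.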
Switch Models
Use the /model command in Claude Code to select a model. The model name will show as “Claude” but requests are routed to MiniMax-M2.5.
Running ccproxy as a Service
macOS (launchd)
Create ~/Library/LaunchAgents/com.ccproxy.plist:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.ccproxy</string>
    <key>ProgramArguments</key>
    <array>
        <string>/usr/local/bin/ccproxy</string>
        <string>start</string>
    </array>
    <key>WorkingDirectory</key>
    <string>/Users/your-username/.ccproxy</string>
    <key>RunAtLoad</key>
    <true/>
    <key>KeepAlive</key>
    <true/>
</dict>
</plist>
Load the service:
launchctl load ~/Library/LaunchAgents/com.ccproxy.plist
Linux (systemd)
Create ~/.config/systemd/user/ccproxy.service:
[Unit]
Description=ccproxy for Claude Code
After=network.target

[Service]
WorkingDirectory=%h/.ccproxy
ExecStart=/usr/local/bin/ccproxy start
Restart=always

[Install]
WantedBy=default.target
Enable and start:
systemctl --user enable ccproxy
systemctl --user start ccproxy
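To confirm the service is up and follow its output, use the standard systemd user-service commands:

```shell
systemctl --user status ccproxy
journalctl --user -u ccproxy -f
```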
Limitations
Known Limitations:
- Model name displays as “Claude” in the interface even when using Infercom
- Some Claude-specific features may not work (vision, computer use)
- Requires keeping proxy running
- VS Code extension is less reliable than CLI
Troubleshooting
Connection Refused
Ensure ccproxy is running:
curl http://127.0.0.1:4000/health
Authentication Errors
Verify the Infercom API key set in ~/.ccproxy/config.yaml by calling the Infercom API with it directly:
curl -s https://api.infercom.ai/v1/models \
-H "Authorization: Bearer your-infercom-api-key"
Model Not Found
Check that your configuration maps the Claude model names your Claude Code version actually requests (in both config.yaml's model_list and the ccproxy.yaml routing rules):
model_name: "claude-sonnet-4-20250514" # Must match what Claude Code requests
Slow Responses
ccproxy adds minimal latency. If responses are slow, check:
- Your network connection to api.infercom.ai
- Model load times (first request may be slower)
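To see how much of the delay is local, curl's timing variables can measure the round trip to the proxy's health endpoint:

```shell
curl -o /dev/null -s -w 'total: %{time_total}s\n' http://127.0.0.1:4000/health
```

If this is fast but model responses are slow, the bottleneck is upstream, not the proxy.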
Security Considerations
- ccproxy runs locally and doesn’t expose your API key to external services
- All traffic to Infercom is encrypted (HTTPS)
- Your code stays within the EU infrastructure
Alternative: Direct API
For simpler setups without the proxy overhead, consider a client with native OpenAI-compatible endpoint support, such as Aider or OpenCode.
Next Steps