Setup time: ~1 minute
Prerequisites
- VS Code installed
- Cline extension installed
- Infercom API key
Configuration
Step 1: Open Cline Settings
- Open VS Code
- Click the Cline icon in the sidebar
- Click the gear icon to open settings
Step 2: Select Provider
- In the API Provider dropdown, select OpenAI Compatible
- Enter the base URL: `https://api.infercom.ai/v1`
- Enter your Infercom API key
- Enter the model name: `MiniMax-M2.5`
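To confirm the base URL and API key work outside of Cline, you can hit the standard OpenAI-compatible model-listing endpoint. This is a minimal sketch assuming the provider exposes `/v1/models` (the usual OpenAI-compatible convention); the API key is a placeholder you must replace:

```python
import urllib.request

# Values from Step 2; the API key is a placeholder, not a real key.
BASE_URL = "https://api.infercom.ai/v1"
API_KEY = "YOUR_INFERCOM_API_KEY"

# Build (but don't yet send) a request to the model-listing endpoint.
req = urllib.request.Request(
    f"{BASE_URL}/models",
    headers={"Authorization": f"Bearer {API_KEY}"},
)

print(req.full_url)  # https://api.infercom.ai/v1/models
# Uncomment to actually call the API (requires a valid key and network access):
# with urllib.request.urlopen(req) as resp:
#     print(resp.read().decode())
```

If the call returns a model list that includes `MiniMax-M2.5`, both the URL and the key are correct.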
Step 3: Model Configuration
Under Model Configuration:

| Setting | Value |
|---|---|
| Context Window Size | 163840 |
| Support Images | Disabled |
| Enable R1 Format | Disabled |
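The table above can be captured as a small settings mapping for reference. The key names here are descriptive, not Cline's internal setting identifiers (those are an implementation detail of the extension):

```python
# Descriptive mirror of the Model Configuration table above.
model_config = {
    "context_window_size": 163840,   # exactly 160 * 1024 tokens
    "support_images": False,         # Disabled
    "enable_r1_format": False,       # Disabled
}

# Sanity check: 163840 is the 160K context window mentioned below.
assert model_config["context_window_size"] == 160 * 1024
```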
Step 4: Verify Connection
Click Test Connection or start a conversation to verify the setup works.
Model
Use MiniMax-M2.5 - optimized for agentic coding, with a 160K context window and a 75.8% SWE-bench score.
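If the Test Connection button is unavailable, you can verify the full round trip with a minimal chat request. This sketch assumes the standard OpenAI-compatible `/chat/completions` endpoint; the API key is a placeholder:

```python
import json
import urllib.request

BASE_URL = "https://api.infercom.ai/v1"
API_KEY = "YOUR_INFERCOM_API_KEY"  # placeholder

# Minimal OpenAI-compatible chat request; the model name must match exactly.
payload = {
    "model": "MiniMax-M2.5",
    "messages": [{"role": "user", "content": "Reply with the word 'ok'."}],
}
req = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)
# Uncomment to send (needs a valid key and network access):
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["choices"][0]["message"]["content"])
```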
Usage
With Cline configured:
- Click the Cline icon in the VS Code sidebar
- Type your request in the chat
- Cline will autonomously:
- Read relevant files
- Write or edit code
- Run terminal commands
- Iterate until the task is complete
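The autonomous loop above can be sketched as pseudocode. This is a toy illustration of the read/edit/run/iterate cycle, not Cline's actual implementation; all three tool functions are stubs:

```python
# Stubbed tools standing in for Cline's real file, editor, and terminal access.
def read_files(task):
    return {"app.py": "def login(): ..."}

def edit_code(task, files):
    return {"app.py": files["app.py"] + "  # edited"}

def run_checks(files):
    # Stand-in for running tests/terminal commands to judge completion.
    return True

def agent_loop(task, max_iters=5):
    files = read_files(task)
    for _ in range(max_iters):
        files = edit_code(task, files)
        if run_checks(files):  # iterate until the task is complete
            return files
    return files

result = agent_loop("Add error handling to the login function")
```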
Example Tasks
- “Add error handling to the login function”
- “Write unit tests for the User class”
- “Refactor this file to use async/await”
Troubleshooting
Connection Failed
Verify your configuration:
- Base URL: `https://api.infercom.ai/v1`
- API key is valid
- Model name is exact: `MiniMax-M2.5`
Model Not Found
Ensure the model name matches exactly (case-sensitive): `MiniMax-M2.5`
Slow Responses
MiniMax-M2.5 runs at 400+ tokens/sec. If responses seem slow:
- Check your network connection
- Large context (many files) increases processing time
- First request may be slower due to model loading
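As a back-of-the-envelope check against the quoted 400 tokens/sec, you can estimate how long a response of a given length should take. Real latency will be higher once network round-trips and prompt processing are included:

```python
# Rough generation-time estimate at the quoted throughput.
def estimated_seconds(output_tokens, tokens_per_sec=400):
    return output_tokens / tokens_per_sec

# A 2,000-token response should stream in roughly 5 seconds:
print(estimated_seconds(2000))  # 5.0
```

If observed times are far above this estimate, the bottleneck is likely context size or the network rather than the model itself.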
Next Steps
- Cursor - Full AI-native IDE
- Continue - Open-source alternative
- Choosing a Tool - Compare all options