Aider is a terminal-based AI coding assistant that works directly with your git repository. It can make changes across multiple files, understand your codebase context, and commit changes automatically.
Verified working with MiniMax-M2.5 on Infercom.
Prerequisites
- Python with pip
- A git repository to work in
- An Infercom API key
Installation
Install Aider with pip:
python -m pip install aider-chat
Verify the installation:
aider --version
Configuration
Option 1: Environment Variables (Recommended)
Set these environment variables:
export OPENAI_API_BASE="https://api.infercom.ai/v1"
export OPENAI_API_KEY="your-infercom-api-key"
Then run Aider:
aider --model openai/MiniMax-M2.5
Add the export commands to your shell profile (~/.bashrc, ~/.zshrc) for persistence.
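As a sketch, appending the exports to ~/.bashrc (adjust the path for your shell, e.g. ~/.zshrc for zsh) makes them permanent:

```shell
# Append the Infercom settings to your bash profile so every new
# shell session picks them up automatically.
cat >> ~/.bashrc <<'EOF'
export OPENAI_API_BASE="https://api.infercom.ai/v1"
export OPENAI_API_KEY="your-infercom-api-key"
EOF
```

Open a new terminal (or source the file) for the change to take effect, and replace the placeholder key with your real one.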
Option 2: Config File
Create ~/.aider.conf.yml:
openai-api-base: https://api.infercom.ai/v1
openai-api-key: your-infercom-api-key
model: openai/MiniMax-M2.5
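The file can also be written in one step from the shell; the quoted 'EOF' keeps the shell from expanding anything inside the heredoc:

```shell
# Write the Aider config file with the Infercom endpoint, key, and model.
cat > ~/.aider.conf.yml <<'EOF'
openai-api-base: https://api.infercom.ai/v1
openai-api-key: your-infercom-api-key
model: openai/MiniMax-M2.5
EOF
```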
Then simply run aider with no flags; the endpoint, key, and model are all read from the config file.
Option 3: Command Line
Pass everything on the command line:
OPENAI_API_BASE="https://api.infercom.ai/v1" \
OPENAI_API_KEY="your-infercom-api-key" \
aider --model openai/MiniMax-M2.5
Model
Use MiniMax-M2.5 with the openai/ prefix:
openai/MiniMax-M2.5
The openai/ prefix is required; it tells Aider to reach the endpoint through its OpenAI-compatible client. Using just MiniMax-M2.5 will not work.
Example Usage
Interactive Mode
cd your-project
aider --model openai/MiniMax-M2.5
Then chat with Aider:
> Add error handling to the main function in app.py
Non-Interactive Mode
aider --model openai/MiniMax-M2.5 \
--message "Add a hello world function" \
--yes \
main.py
Disable Auto-Commits
By default, Aider commits every change it makes. To review and commit changes yourself:
aider --model openai/MiniMax-M2.5 --no-auto-commits
Reasoning Output
MiniMax-M2.5 has built-in reasoning capabilities. To see the model’s thinking process, disable streaming:
aider --model openai/MiniMax-M2.5 --no-stream
You’ll see a THINKING section before each response showing the model’s reasoning.
Controlling Reasoning Effort
You can adjust reasoning intensity with --reasoning-effort:
aider --model openai/MiniMax-M2.5 \
--no-stream \
--reasoning-effort medium \
--no-check-model-accepts-settings
Valid values: low, medium, high
--no-stream is required to display reasoning output. The --no-check-model-accepts-settings flag is needed because Aider doesn’t recognize MiniMax’s reasoning support by default.
Troubleshooting
Unknown Model Warning
You may see this warning:
Warning for openai/MiniMax-M2.5: Unknown context window size and costs,
using sane defaults.
Fix: Create ~/.aider.model.metadata.json with MiniMax-M2.5 specs:
{
  "openai/MiniMax-M2.5": {
    "max_tokens": 16384,
    "max_input_tokens": 160000,
    "max_output_tokens": 16384,
    "input_cost_per_token": 0.0000003,
    "output_cost_per_token": 0.0000012,
    "litellm_provider": "openai",
    "mode": "chat",
    "accepts_settings": ["reasoning_effort"]
  }
}
This enables proper context window handling and cost tracking.
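One way to create the file and sanity-check it from the shell (the python3 call just confirms the JSON parses):

```shell
# Write the metadata file, then verify it is syntactically valid JSON.
cat > ~/.aider.model.metadata.json <<'EOF'
{
  "openai/MiniMax-M2.5": {
    "max_tokens": 16384,
    "max_input_tokens": 160000,
    "max_output_tokens": 16384,
    "input_cost_per_token": 0.0000003,
    "output_cost_per_token": 0.0000012,
    "litellm_provider": "openai",
    "mode": "chat",
    "accepts_settings": ["reasoning_effort"]
  }
}
EOF
python3 -m json.tool ~/.aider.model.metadata.json > /dev/null && echo "metadata OK"
```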
Alternative: Suppress the warning with --no-show-model-warnings:
aider --model openai/MiniMax-M2.5 --no-show-model-warnings
Connection Errors
Verify your configuration:
curl -s https://api.infercom.ai/v1/models \
-H "Authorization: Bearer $OPENAI_API_KEY" | head -20
You should see a list of available models.
Model Not Found
Ensure you’re using the correct model name with the openai/ prefix:
# Correct
aider --model openai/MiniMax-M2.5
# Wrong
aider --model MiniMax-M2.5
Performance
- Token throughput: 400+ tokens/sec with MiniMax-M2.5
- Context window: 160K tokens (163,840), large enough for big codebases
- Quality: 75.8% on SWE-bench Verified
Next Steps