

Codex CLI is OpenAI’s open-source agentic coding assistant that runs in your terminal. It can read files, write code, run commands, and iterate on its work using the Responses API.
Requires the Responses API: Codex CLI uses the /v1/responses endpoint, which Infercom supports with MiniMax-M2.5.

Prerequisites

  • Node.js with npm (used to install the CLI)
  • An Infercom API key

Installation

npm install -g @openai/codex
Verify installation:
codex --version
macOS users: If macOS blocks the binary as “malware”, go to System Settings > Privacy & Security (System Preferences > Security & Privacy on older macOS) and click “Allow Anyway”, or run:
xattr -d com.apple.quarantine $(which codex)

Configuration

Codex CLI supports custom providers via a TOML configuration file.

Step 1: Set Environment Variable

export INFERCOM_API_KEY="your-infercom-api-key"
Add this to your shell profile (~/.bashrc, ~/.zshrc) for persistence.
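For example, the export line can be appended idempotently (zsh profile shown; use ~/.bashrc for bash — the grep guard avoids duplicate entries):

```shell
# Append the export line to ~/.zshrc only if it is not already there.
profile="$HOME/.zshrc"
line='export INFERCOM_API_KEY="your-infercom-api-key"'
grep -qxF "$line" "$profile" 2>/dev/null || echo "$line" >> "$profile"
```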

Step 2: Create Config File

Create ~/.codex/config.toml:
# Default profile using MiniMax-M2.5
# (top-level keys must come before any [table] header in TOML)
model_provider = "infercom"
model = "MiniMax-M2.5"

# Define Infercom as a provider
[model_providers.infercom]
name = "Infercom (EU Sovereign)"
base_url = "https://api.infercom.ai/v1"
env_key = "INFERCOM_API_KEY"
wire_api = "responses"
For complex tasks requiring deeper reasoning, Codex CLI supports profiles with different models: you can use a frontier model (Claude, GPT, Gemini) for planning and MiniMax-M2.5 for execution. See the Codex documentation for profile configuration.
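The planning/execution split described above might be sketched like this (hypothetical: the second provider, its base_url and env_key, the profile names, and the frontier model name are all placeholders — check the Codex documentation for the exact profile schema):

```toml
# Hypothetical sketch of profiles in ~/.codex/config.toml.
# Everything about the "frontier" provider below is a placeholder.
[model_providers.frontier]
name = "Frontier provider (placeholder)"
base_url = "https://api.example.com/v1"
env_key = "FRONTIER_API_KEY"
wire_api = "responses"

[profiles.plan]
model_provider = "frontier"
model = "frontier-model-name"  # placeholder for a Claude/GPT/Gemini model

[profiles.execute]
model_provider = "infercom"
model = "MiniMax-M2.5"
```

A profile would then be selected at launch, e.g. codex --profile plan (verify the flag name against the Codex docs).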

Step 3: Verify Setup

Run Codex in any project directory:
codex
You should see Codex start with:
model: MiniMax-M2.5
provider: infercom

Model

Use MiniMax-M2.5 - optimized for agentic coding with a 160K-token context window, built-in reasoning, and a 75.8% SWE-bench score.

Usage

Interactive Mode

Start Codex in your project directory:
cd your-project
codex
Type your request and Codex will:
  • Read relevant files
  • Write or edit code
  • Run terminal commands
  • Iterate until the task is complete

Example Tasks

  • “Add error handling to the login function”
  • “Write unit tests for the User class”
  • “Refactor this file to use async/await”
  • “Find and fix the bug causing the test to fail”

Non-Interactive Mode

For scripted or one-shot runs, use the exec subcommand:
codex exec "Add input validation to auth.py"
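For batch automation, codex exec can be called in a loop. A minimal sketch (the task list is illustrative; the script continues past failures rather than aborting):

```shell
# Run a list of one-shot tasks sequentially, counting attempts and
# continuing past failures. Task strings are illustrative.
tasks=(
  "Add input validation to auth.py"
  "Write unit tests for the User class"
)
attempted=0
for t in "${tasks[@]}"; do
  echo "running: $t"
  codex exec "$t" || echo "task failed: $t"
  attempted=$((attempted + 1))
done
echo "attempted $attempted tasks"
```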

Configuration Options

Full ~/.codex/config.toml reference:
# Default model to use
model = "MiniMax-M2.5"

# Provider to use
model_provider = "infercom"

# Approval mode: "suggest" (default), "auto-edit", or "full-auto"
approval_mode = "suggest"

# Sandbox mode for command execution
sandbox = "docker"  # or "none" to disable

[model_providers.infercom]
name = "Infercom (EU Sovereign)"
base_url = "https://api.infercom.ai/v1"
env_key = "INFERCOM_API_KEY"
wire_api = "responses"

Approval Modes

Mode        Behavior
suggest     Shows diffs and asks before applying (default)
auto-edit   Applies file changes automatically, asks before running commands
full-auto   Applies edits and runs commands automatically

Troubleshooting

Connection Errors

Verify your configuration:
# Check API key is set
echo $INFERCOM_API_KEY

# Test API directly
curl -s https://api.infercom.ai/v1/responses \
  -H "Authorization: Bearer $INFERCOM_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model":"MiniMax-M2.5","input":"Hello"}' | jq .status
Expected output: "completed"

Model Not Found

Ensure the model name is exact (case-sensitive): MiniMax-M2.5

Slow Responses

MiniMax-M2.5 runs at 400+ tokens/sec. If responses seem slow:
  1. Check your network connection
  2. Reduce context where possible; large contexts (many files) increase processing time
  3. Expect the first request to be slower while the model loads

Why Codex CLI with Infercom?

Feature            Benefit
EU Sovereign       Data processed in Germany, GDPR compliant
Responses API      Native support for Codex’s agentic architecture
Fast inference     400+ tokens/sec with MiniMax-M2.5
No vendor lock-in  Open-source tool, standard API

Next Steps

  • Aider - Alternative terminal-based tool
  • OpenCode - Modern TUI with similar features
  • Responses API - API documentation for custom integrations