The Infercom Inference Service provides access to a broad selection of AI models through the Global Model Catalog. Models are available in two categories: EU-hosted models running on Infercom’s sovereign infrastructure in Germany, and additional models available via global infrastructure.
EU-hosted models
The following models run on Infercom’s EU infrastructure in Germany. For these models, all data processing happens entirely within the EU: no data leaves EU jurisdiction, ensuring full GDPR compliance and no US CLOUD Act exposure.
| Developer | Model ID | Context length | Region | View on Hugging Face |
|---|---|---|---|---|
| DeepSeek | DeepSeek-V3.1 | 128k tokens | EU | Model card |
| Meta | Meta-Llama-3.3-70B-Instruct | 128k tokens | EU | Model card |
| OpenAI | gpt-oss-120b | 128k tokens | EU | Model card |
Global Model Catalog
In addition to EU-hosted models, the Infercom Inference Service provides access to a broader selection of models through the Global Model Catalog. Models not hosted in our EU datacenters are served via global infrastructure. Every model is clearly labeled with its hosting region, both in the API response and in the Playground, so you always know where your data is being processed.
| Developer | Model ID | Context length | Region | View on Hugging Face |
|---|---|---|---|---|
| Alibaba | Qwen3-32B | 32k tokens | Global | Model card |
| Alibaba | Qwen3-235B | 64k tokens | Global | Model card |
| DeepSeek | DeepSeek-R1-0528 | 128k tokens | Global | Model card |
| DeepSeek | DeepSeek-R1-Distill-Llama-70B | 128k tokens | Global | Model card |
| DeepSeek | DeepSeek-V3-0324 | 128k tokens | Global | Model card |
| DeepSeek | DeepSeek-V3.1-Terminus | 128k tokens | Global | Model card |
| DeepSeek | DeepSeek-V3.2 | 8k tokens | Global | Model card |
| Meta | Llama-4-Maverick-17B-128E-Instruct | 128k tokens | Global | Model card |
| Meta | Meta-Llama-3.1-8B-Instruct | 16k tokens | Global | Model card |
See Identifying model regions below for how to check where each model runs.
Identifying model regions
You can identify where a model is hosted through the API or the Playground.
Via the API
The `/v1/models` endpoint includes an `sn_metadata` object for each model. Use the `region` field to determine where a model is hosted: `"EU"` for sovereign models on Infercom’s EU infrastructure, or a non-EU region (e.g. `"US"`, `"JP"`) for models on global infrastructure.

Use the `?verbose=true` query parameter to retrieve detailed model metadata, including sovereignty information:
```shell
curl -s "https://api.infercom.ai/v1/models?verbose=true" \
  -H "Authorization: Bearer $INFERCOM_API_KEY" | \
  jq '.data[] | {id, region: .sn_metadata.region}'
```
Example response for an EU-hosted model:
```json
{
  "id": "DeepSeek-V3.1",
  "object": "model",
  "sn_metadata": {
    "is_external": false,
    "region": "EU"
  }
}
```
Example response for a globally-routed model:
```json
{
  "id": "DeepSeek-R1-0528",
  "object": "model",
  "sn_metadata": {
    "is_external": true,
    "region": "US"
  }
}
```
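The same check is easy to do in code. The Python sketch below partitions a models list by the `sn_metadata.region` field, using the response shape shown above; the sample payload is inlined for illustration, and the `is_eu_hosted` helper name is our own, not part of the Infercom API.

```python
# Sketch: partition a /v1/models response by hosting region.
# The payload below mirrors the example responses above; in practice
# you would fetch it from the API with your preferred HTTP client.
models = [
    {"id": "DeepSeek-V3.1", "object": "model",
     "sn_metadata": {"is_external": False, "region": "EU"}},
    {"id": "DeepSeek-R1-0528", "object": "model",
     "sn_metadata": {"is_external": True, "region": "US"}},
]

def is_eu_hosted(model: dict) -> bool:
    """True if the model runs on Infercom's sovereign EU infrastructure."""
    return model.get("sn_metadata", {}).get("region") == "EU"

eu_models = [m["id"] for m in models if is_eu_hosted(m)]
global_models = [m["id"] for m in models if not is_eu_hosted(m)]
print(eu_models)      # ['DeepSeek-V3.1']
print(global_models)  # ['DeepSeek-R1-0528']
```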
Via the Playground
In the Infercom Playground, region flags are displayed next to each model name, letting you see at a glance where each model runs.
Data sovereignty
EU sovereignty applies to EU-hosted models only. When using models from the Global Model Catalog that are not hosted on EU infrastructure, requests are processed on global infrastructure outside the EU. Always check the model’s region before processing sensitive or regulated data.
For EU-hosted models, Infercom provides:
- EU data residency — inference runs in our EU datacenters
- GDPR compliance — full compliance with EU data protection regulations
- No US CLOUD Act exposure — your inference data is not subject to US jurisdiction
- AI Act readiness — designed for compliance with the EU AI Act
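One way to enforce the region check before handling regulated data is a small guard in your client code. This is a sketch only: the `require_eu_region` function and `RegionError` exception are illustrative names, not part of the Infercom API.

```python
# Sketch: refuse to send sensitive data to a model outside the EU.
# Names here (require_eu_region, RegionError) are illustrative only.
class RegionError(RuntimeError):
    """Raised when a model is not hosted on EU infrastructure."""

def require_eu_region(model: dict) -> None:
    """Check the sn_metadata returned by /v1/models before inference."""
    region = model.get("sn_metadata", {}).get("region")
    if region != "EU":
        raise RegionError(
            f"{model.get('id')} is hosted in {region!r}; "
            "refusing to process regulated data outside the EU."
        )

# EU-hosted model: the guard passes silently.
eu_model = {"id": "DeepSeek-V3.1", "sn_metadata": {"region": "EU"}}
require_eu_region(eu_model)

# Globally-routed model: the guard raises before any data is sent.
global_model = {"id": "DeepSeek-R1-0528", "sn_metadata": {"region": "US"}}
try:
    require_eu_region(global_model)
except RegionError as e:
    print(e)
```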