1. Get your API key.
To generate an API key, go to the API keys and URLs page. Be sure to save each key securely when you generate it, as keys can't be viewed again.
You can generate and use up to 25 API keys.
2. Pick a model.
View the available models and details on the Infercom Inference Service models page.
3. Make an API request.
You can make an inference request in several ways. Three options are shown below:
- SambaNova SDK – Use the SambaNova SDK in JavaScript or Python for a more flexible integration.
- OpenAI client library – Use the OpenAI client in JavaScript or Python if you prefer an OpenAI-compatible interface.
- cURL command – Send a request directly from the command line.
SambaNova SDK
To get started, choose your preferred programming language. Next, open a terminal and run the command to install the SambaNova SDK. Copy the example code into a file and replace "your-infercom-api-key" with your actual API key. Then run the file in a terminal using the command shown below.
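The snippet below is a minimal Python sketch, not the SDK's documented interface: the install command, package name, client class, and method names are assumptions (written here to mirror an OpenAI-style chat completions call), and the model ID is a placeholder. Check the SDK reference and the models page for the exact names.

```python
# Minimal sketch of a chat request with the SambaNova SDK.
# Assumptions: the SDK installs with `pip install sambanova`, exposes a
# `SambaNova` client class, and follows an OpenAI-style chat completions
# interface. Verify the real package and method names in the SDK reference.
import os

from sambanova import SambaNova  # assumed import path

# Read the key from the environment, or replace the fallback string.
client = SambaNova(api_key=os.environ.get("INFERCOM_API_KEY", "your-infercom-api-key"))

response = client.chat.completions.create(
    model="your-model-id",  # placeholder; pick a model ID from the models page
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```

Save the snippet as, for example, quickstart.py and run it with python quickstart.py.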
OpenAI client library
To get started, select your preferred programming language. Then, open a terminal window and run the command to install the OpenAI library. Copy the example code into a file and replace "your-infercom-api-key" with your API key. Then run the file from a terminal using the command shown below.
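The snippet below is a minimal Python sketch using the official openai package (installed with pip install openai). The base URL and model ID are placeholders, not values from this guide; substitute the endpoint from the API keys and URLs page and a model ID from the models page.

```python
# Minimal chat request through the OpenAI Python client.
import os

from openai import OpenAI

client = OpenAI(
    # Replace the fallback string or set the INFERCOM_API_KEY environment variable.
    api_key=os.environ.get("INFERCOM_API_KEY", "your-infercom-api-key"),
    # Placeholder endpoint; use the URL from the API keys and URLs page.
    base_url="https://api.example.com/v1",
)

response = client.chat.completions.create(
    model="your-model-id",  # placeholder; pick a model ID from the models page
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
)
print(response.choices[0].message.content)
```

Save the snippet as, for example, quickstart.py and run it with python quickstart.py.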