Download the library
Run the command below to download the library.
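The examples below use the OpenAI Python client; assuming a pip-based environment, the command is:

```bash
pip install openai
```

Use Infercom APIs with OpenAI client libraries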
Configuring your OpenAI client libraries to use Infercom inference APIs is as simple as setting two values: the base_url and your api_key, as shown below.
Don’t have an Infercom API key? Get yours from the API keys and URLs page.
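A minimal configuration sketch; the base URL here is a hypothetical placeholder, so substitute the endpoint listed on the API keys and URLs page:

```python
from openai import OpenAI

client = OpenAI(
    # Hypothetical endpoint; use the base URL from the API keys and URLs page.
    base_url="https://api.infercom.example/v1",
    api_key="YOUR_INFERCOM_API_KEY",
)
```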
Non-streaming example
The following code demonstrates using the OpenAI Python client for non-streaming completions.
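A minimal sketch, using the same placeholder endpoint as above; the model name is also a placeholder:

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.infercom.example/v1",  # hypothetical endpoint
    api_key="YOUR_INFERCOM_API_KEY",
)

# Non-streaming: the full completion arrives in a single response.
response = client.chat.completions.create(
    model="example-model",  # placeholder model name
    messages=[{"role": "user", "content": "Why is fast inference important?"}],
)

print(response.choices[0].message.content)
```

Streaming example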
The following code demonstrates using the OpenAI Python client for streaming completions.

In streaming mode, the API returns chunks that contain multiple tokens. When calculating metrics like tokens per second or time per output token, ensure that you account for all tokens in each chunk.
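A sketch of the streaming loop, under the same assumptions as above (placeholder endpoint and model name):

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.infercom.example/v1",  # hypothetical endpoint
    api_key="YOUR_INFERCOM_API_KEY",
)

# Streaming: tokens arrive incrementally in chunks.
stream = client.chat.completions.create(
    model="example-model",  # placeholder model name
    messages=[{"role": "user", "content": "Why is fast inference important?"}],
    stream=True,
)

for chunk in stream:
    # Some chunks carry no text (e.g. the role-only first chunk), so guard.
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()
```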
Currently unsupported OpenAI features
The following features are not yet supported and will be ignored:
- logprobs
- top_logprobs
- n
- presence_penalty
- frequency_penalty
- logit_bias
- seed
Feature differences
temperature: The Infercom API supports values between 0 and 1, whereas OpenAI supports values between 0 and 2.
Infercom API features not supported by OpenAI clients
The Infercom API supports the top_k parameter, which is not supported by the OpenAI client libraries.
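One way to pass top_k anyway is the OpenAI Python client's extra_body argument, which injects additional fields into the request body. This sketch assumes, rather than confirms, that the Infercom API reads top_k from the body:

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.infercom.example/v1",  # hypothetical endpoint
    api_key="YOUR_INFERCOM_API_KEY",
)

response = client.chat.completions.create(
    model="example-model",  # placeholder model name
    messages=[{"role": "user", "content": "Why is fast inference important?"}],
    # Assumption: Infercom accepts top_k as an extra request-body field.
    extra_body={"top_k": 40},
)
```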