Make is a no-code automation platform that lets you connect over 2,700 apps, including Slack, Google Drive, JIRA, and AWS S3, into automated workflows. With SambaNova's high-speed LLM inference engine integrated into Make, you can bring Generative AI into your workflows without writing a single line of code.
SambaNova frequently updates its model offerings. This module helps you dynamically pull the current list of supported models, ensuring that your workflows don’t rely on outdated or deprecated models.
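For reference, the sketch below shows the kind of request this module makes on your behalf. It assumes SambaNova Cloud's OpenAI-compatible `/v1/models` endpoint and an API key stored in an environment variable; check the SambaNova API reference for the exact paths available to your deployment.

```python
# Minimal sketch of what the List Cloud Models module does behind the scenes,
# assuming an OpenAI-compatible /v1/models endpoint on SambaNova Cloud.
import os
import requests

BASE_URL = "https://api.sambanova.ai/v1"   # your SambaCloud base URL
API_KEY = os.environ["SAMBANOVA_API_KEY"]  # your SambaNova API key

response = requests.get(
    f"{BASE_URL}/models",
    headers={"Authorization": f"Bearer {API_KEY}"},
)
response.raise_for_status()

# Print the IDs of the currently supported models.
for model in response.json().get("data", []):
    print(model["id"])
```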
Chat completions power a wide range of GenAI use cases, from chatbots to code assistants and content generation. This module lets you use SambaNova's chat completions endpoint for these tasks at industry-leading speed.
Add the module to your Make scenario and open its configuration panel.
Create a SambaNova Connection using your base URL and API key, as described in the Prerequisites section above.
Leave the default headers as-is and select a Model ID:
For SambaCloud, use the List Cloud Models module.
For SambaStack, contact your SambaNova representative.
The messages field contains prompts for different roles (system, user, etc.) and is mappable, meaning you can populate it dynamically from previous modules (e.g., a Slack or Facebook module) instead of entering it manually.
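The sketch below illustrates the expected shape of the messages field, assuming the OpenAI-style chat format used by SambaNova's chat completions endpoint. In the Make module you would enter this as JSON or map the user content from a previous module; it is shown here as an equivalent Python structure for illustration, and the mapped placeholder is hypothetical.

```python
# Illustrative shape of the messages field (OpenAI-style chat format).
# In Make, the user content would typically be mapped from a previous
# module's output instead of hard-coded.
messages = [
    {"role": "system", "content": "You are a helpful assistant that summarizes incoming messages."},
    {"role": "user", "content": "Summarize this thread: {{text mapped from a previous module}}"},
]
```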
Click Show advanced settings to view and adjust generation parameters such as the following (an example request using these parameters appears after this list):
temperature
top_p
top_k
stop sequences
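The following is a minimal sketch of a chat completion request that sets these advanced parameters, assuming SambaNova's OpenAI-compatible `/v1/chat/completions` endpoint. The model ID and parameter values are illustrative only; use a model ID returned by the List Cloud Models module and consult the API reference for which parameters your deployment supports.

```python
# Example chat completion request with advanced generation parameters,
# assuming an OpenAI-compatible /v1/chat/completions endpoint.
import os
import requests

BASE_URL = "https://api.sambanova.ai/v1"
API_KEY = os.environ["SAMBANOVA_API_KEY"]

payload = {
    "model": "Meta-Llama-3.1-8B-Instruct",  # example model ID
    "messages": [
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Write a one-line status update."},
    ],
    "temperature": 0.7,  # randomness of sampling
    "top_p": 0.9,        # nucleus sampling threshold
    "top_k": 40,         # sample only from the top-k candidate tokens
    "stop": ["\n\n"],    # stop generating at the first blank line
}

response = requests.post(
    f"{BASE_URL}/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```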
Connect the module to the other Make modules in your workflow and run your scenario.
If your application requires advanced functionality beyond the chat completions framework, this module is your go-to. It supports all endpoints described in the SambaNova documentation, including:
Function calling
Structured outputs
Voice transcription/translation
Image understanding
You just need to specify the appropriate request headers and body, as described in the SambaNova API reference.
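As a rough sketch of the kind of raw request this module sends, the example below builds a structured-output call: you supply the endpoint path, headers, and body yourself. It assumes an OpenAI-style `response_format` with a JSON schema and an illustrative model ID and schema; verify the exact endpoint paths and body fields in the SambaNova API reference before using them.

```python
# Sketch of a raw API call with custom headers and body, here requesting
# structured output. Endpoint, model ID, and schema are illustrative.
import os
import requests

BASE_URL = "https://api.sambanova.ai/v1"
API_KEY = os.environ["SAMBANOVA_API_KEY"]

body = {
    "model": "Meta-Llama-3.1-8B-Instruct",  # example model ID
    "messages": [
        {"role": "user", "content": "Extract the city and date from: 'Meet me in Paris on May 3.'"},
    ],
    "response_format": {
        "type": "json_schema",
        "json_schema": {
            "name": "meeting_info",
            "schema": {
                "type": "object",
                "properties": {"city": {"type": "string"}, "date": {"type": "string"}},
                "required": ["city", "date"],
            },
        },
    },
}

response = requests.post(
    f"{BASE_URL}/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json"},
    json=body,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```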