Starter Kits Help You Build Fast – They bootstrap application development for common AI use cases with open-source Python code on SambaNova GitHub. They let you see how the code works and customize it to your needs, so you can prove the business value of AI.
How to get started with SambaNova Cloud – A comprehensive guide to help you begin your journey with SambaNova Cloud.
Meet Your Personal AI Content Agent – Transform your video content with AI-powered analysis, transcription, and insights. Get started in seconds.
Seamless LLM Integration in Google Workspace – Enhance your productivity by integrating powerful language models directly into Google Docs and Sheets via App Scripts.
This example features a voice agent using LiveKit for real-time audio. The agent runs an STT/LLM pipeline with SambaNova's low-latency inference and is equipped with Model Context Protocol (MCP) capabilities for advanced function calling.
Tool calling implementation and generic function calling module – The Agents application is a multi-agent AI system that provides deep research, automated artifact production, and advanced data science capabilities. It leverages a team of specialized agents to execute complex tasks and deliver comprehensive results.
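A generic function calling module typically pairs a registry of callables with a dispatcher that executes model-emitted tool calls. The sketch below is an illustrative pattern, not the kit's actual code; the `tool` decorator, registry, and call format are assumptions.

```python
import json
from typing import Callable

# Hypothetical tool registry: maps tool names to Python callables.
TOOLS: dict[str, Callable] = {}

def tool(fn: Callable) -> Callable:
    """Register a function so a model can invoke it by name."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def add(a: float, b: float) -> float:
    """Example tool the model is allowed to call."""
    return a + b

def dispatch(tool_call: str) -> str:
    """Execute a model-emitted call of the form
    {"name": ..., "arguments": {...}} and return a JSON result."""
    call = json.loads(tool_call)
    fn = TOOLS[call["name"]]
    result = fn(**call["arguments"])
    return json.dumps({"name": call["name"], "result": result})

print(dispatch('{"name": "add", "arguments": {"a": 2, "b": 3}}'))
```

In a full agent loop, the JSON result would be appended to the conversation so the model can incorporate it into its next turn.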
Video2Game is an application that takes educational videos (such as lectures, tutorials, or other explainer videos) and transforms them into interactive games.
Example application for running multiple evaluation tests on LLMs using MCP.
Document Q&A on PDF, TXT, DOC, and more – Bootstrap your document Q&A application with this sample implementation of a Retrieval Augmented Generation semantic search workflow using the SambaNova platform, built with Python and a Streamlit UI.
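The heart of a Retrieval Augmented Generation workflow is retrieving the document chunks most relevant to a question and folding them into the prompt. A minimal, dependency-free sketch follows; the real kit uses embeddings on the SambaNova platform, while here similarity is approximated with a bag-of-words cosine score purely for illustration.

```python
import math
import re
from collections import Counter

def tokenize(text: str) -> Counter:
    """Lowercase bag-of-words token counts."""
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two token-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Return the k chunks most similar to the query."""
    q = tokenize(query)
    return sorted(chunks, key=lambda c: cosine(q, tokenize(c)), reverse=True)[:k]

chunks = [
    "Refunds are processed within 5 business days.",
    "Our office is open Monday to Friday.",
    "To request a refund, email support with your order number.",
]
question = "how do I get a refund"
top = retrieve(question, chunks)
prompt = "Answer using only this context:\n" + "\n".join(top) + "\n\nQuestion: " + question
```

The constructed `prompt` is then sent to the LLM, which answers grounded in the retrieved context rather than from memory alone.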
Include web search results in responses – Expand your application's knowledge with this implementation of the semantic search workflow and prompt construction strategies, with configurable integrations with multiple SERP APIs.
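Once SERP results are fetched, the prompt construction step turns them into numbered, citable context. The sketch below is one plausible shape for that step; the result fields (`title`, `snippet`, `link`) are assumptions, since real SERP APIs differ and the kit supports several via configuration.

```python
def build_prompt(question: str, results: list[dict]) -> str:
    """Fold web search results into a prompt with numbered citations.
    Each result is assumed to carry 'title', 'snippet', and 'link' keys."""
    context = "\n".join(
        f"[{i + 1}] {r['title']}: {r['snippet']} ({r['link']})"
        for i, r in enumerate(results)
    )
    return (
        "Using the web results below, answer the question and cite sources by number.\n\n"
        + context
        + f"\n\nQuestion: {question}"
    )

results = [
    {"title": "Refund policy", "snippet": "Refunds within 5 days.", "link": "https://example.com/refunds"},
]
prompt = build_prompt("How long do refunds take?", results)
```

Numbering the sources lets the model cite them inline (e.g. "[1]"), which makes the final answer auditable.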
Compare model performance – Quickly determine which models meet your speed and quality needs by comparing model outputs, Time to First Token, End-to-End Latency, Throughput, and more with configuration options in a chat interface.
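The latency metrics above can all be derived from the arrival timestamps of a streamed completion. The helper below is a sketch under that assumption (timestamps in seconds, one per streamed token); it is not the kit's actual measurement code.

```python
def latency_metrics(request_start: float, token_times: list[float]) -> dict:
    """Compute common streaming-latency metrics from per-token arrival times."""
    ttft = token_times[0] - request_start        # Time to First Token
    e2e = token_times[-1] - request_start        # End-to-End Latency
    decode = token_times[-1] - token_times[0]    # generation time after the first token
    # Throughput over the decode phase, in tokens per second.
    throughput = (len(token_times) - 1) / decode if decode else 0.0
    return {"ttft_s": ttft, "e2e_s": e2e, "throughput_tok_s": throughput}

m = latency_metrics(0.0, [0.5, 0.6, 0.7, 0.8])
```

For this example, TTFT is 0.5 s, end-to-end latency is 0.8 s, and decode throughput is about 10 tokens/s; comparing these numbers across models under the same prompt is exactly what the kit's chat interface automates.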