curl -H "Authorization: Bearer $API_KEY" \
     -H "Content-Type: application/json" \
     -d '{
        "stream": true,
        "model": "Llama-4-Maverick-17B-128E-Instruct",
        "messages": [
           {
              "role": "user",
              "content": [
                 {
                    "type": "text",
                    "text": "What do you see in this image"
                 },
                 {
                    "type": "image_url",
                    "image_url": {
                       "url": "<image_in_base_64>"
                    }
                 }
              ]
           }
        ]
     }' \
     -X POST https://api.sambanova.ai/v1/chat/completions
from openai import OpenAI

# SambaNova Cloud exposes an OpenAI-compatible endpoint, so the standard OpenAI client works as-is.
client = OpenAI(
    api_key="<YOUR API KEY>",
    base_url="https://api.sambanova.ai/v1",
)

response = client.chat.completions.create(
    model="Llama-4-Maverick-17B-128E-Instruct",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What do you see in this image"},
                # Replace the placeholder with your base64-encoded image.
                {"type": "image_url", "image_url": {"url": "<image_in_base_64>"}},
            ],
        }
    ],
    temperature=0.1,
    top_p=0.1,
)

print(response.choices[0].message.content)
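Both requests above expect the actual image to be supplied in place of the <image_in_base_64> placeholder. The snippet below is a minimal sketch of one way to build that value in Python, assuming a local JPEG named example.jpg (a hypothetical path) and the data-URL convention used by OpenAI-compatible vision endpoints; adjust the path and MIME type for your image.

import base64

# Read a local image (example path) and base64-encode it.
with open("example.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

# Data-URL form commonly accepted by OpenAI-compatible vision APIs.
image_data_url = f"data:image/jpeg;base64,{image_b64}"

# Use image_data_url as the "url" value in the image_url content part above.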
import gradio as gr
import sambanova_gradio

gr.load("Llama-4-Maverick-17B-128E-Instruct", src=sambanova_gradio.registry, accept_token=True, multi_model=True).launch()
Experience the best open-source models powered by SambaNova RDUs. Learn more about each of them in our documentation.
* Preview models in SambaNova Cloud are available as early-access offerings intended primarily for evaluation purposes. During the preview phase, these models have limited capacity and are still being optimized for production. Please provide feedback in the SambaNova Community to help our teams tune these models for your production applications.