Perplexity Sonar¶
1. Overview¶
Perplexity AI is an AI-powered conversational search engine built to provide direct and accurate answers using natural-language interaction.
2. Request¶
- Method: POST
- Endpoint: https://gateway.serevixai.ai/v1/chat/completions
3. Parameters¶
3.1 Header Parameters¶
| Parameter | Type | Required | Description | Example |
|---|---|---|---|---|
| Content-Type | string | Yes | Sets the request content type. It must be application/json. | application/json |
| Accept | string | Yes | Sets the response content type. The recommended value is application/json. | application/json |
| Authorization | string | Yes | API key used for authentication, in the format Bearer $YOUR_API_KEY. | Bearer $YOUR_API_KEY |
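The three required headers can be assembled in code before making a request; a minimal sketch in Python, where YOUR_API_KEY is a placeholder to substitute with a real key:

```python
# Build the required headers for the chat completions endpoint.
# YOUR_API_KEY is a placeholder, not a real credential.
API_KEY = "YOUR_API_KEY"

headers = {
    "Content-Type": "application/json",    # request body is JSON
    "Accept": "application/json",          # ask for a JSON response
    "Authorization": f"Bearer {API_KEY}",  # bearer-token authentication
}

print(headers["Authorization"])
```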
3.2 Body Parameters (application/json)¶
| Parameter | Type | Required | Description | Example |
|---|---|---|---|---|
| model | string | Yes | The model ID to use. See the Model List for available versions, such as sonar. | sonar |
| messages | array | Yes | A list of chat messages in an OpenAI-compatible format. Each object contains role and content. | [{"role": "user", "content": "Hello"}] |
| role | string | No | Message role. Supported values: system, user, and assistant. | user |
| content | string | No | The message content. | Hello, tell me a joke. |
| temperature | number | No | Sampling temperature in the range 0-2. Higher values make the output more random; lower values make it more focused and deterministic. | 0.7 |
| top_p | number | No | Nucleus-sampling control over the output distribution, in the range 0-1. It is usually used instead of temperature, not together with it. | 0.9 |
| n | number | No | How many completions to generate for each input message. | 1 |
| stream | boolean | No | Whether to enable streaming output. When set to true, the API returns OpenAI-style streamed chunks. | false |
| max_tokens | number | No | The maximum number of tokens that can be generated in a single reply, subject to the model context window. | 1024 |
| presence_penalty | number | No | -2.0 to 2.0. Positive values encourage the model to introduce new topics; negative values reduce that tendency. | 0 |
| frequency_penalty | number | No | -2.0 to 2.0. Positive values reduce repetition; negative values increase it. | 0 |
| search_recency_filter | string | No | Constrains search results to a recent time window. Supported values: month, week, day, and hour. | month |
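Putting the body parameters together, a complete request payload can be built and serialized before sending; a minimal sketch in Python, where the parameter values are illustrative rather than recommendations:

```python
import json

# Illustrative request body using the parameters described above.
payload = {
    "model": "sonar",
    "messages": [
        {"role": "system", "content": "Be precise and concise."},
        {"role": "user", "content": "Hello, tell me a joke."},
    ],
    # Optional sampling control; use temperature or top_p, not both.
    "temperature": 0.7,
    "max_tokens": 1024,
    "stream": False,
    # Constrain web search results to the past week.
    "search_recency_filter": "week",
}

body = json.dumps(payload)
print(body)
```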
4. Request Examples¶
```http
POST /v1/chat/completions
Content-Type: application/json
Accept: application/json
Authorization: Bearer $YOUR_API_KEY

{
  "model": "sonar",
  "messages": [
    {
      "role": "system",
      "content": "Be precise and concise."
    },
    {
      "role": "user",
      "content": "How many stars are there in our galaxy?"
    }
  ]
}
```
```shell
curl https://gateway.serevixai.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Accept: application/json" \
  -H "Authorization: Bearer $YOUR_API_KEY" \
  -d '{
    "model": "sonar",
    "messages": [
      {"role": "system", "content": "Be precise and concise."},
      {"role": "user", "content": "How many stars are there in our galaxy?"}
    ]
  }'
```
```go
package main

import (
	"context"
	"fmt"

	"github.com/openai/openai-go"
	"github.com/openai/openai-go/option"
)

func main() {
	// Sample key for illustration; replace with your own API key.
	apiKey := "sk-123456789012345678901234567890123456789012345678"
	client := openai.NewClient(
		option.WithAPIKey(apiKey),
		option.WithBaseURL("https://gateway.serevixai.ai/v1"),
	)

	resp, err := client.Chat.Completions.New(
		context.Background(),
		openai.ChatCompletionNewParams{
			Model: "sonar",
			Messages: []openai.ChatCompletionMessageParamUnion{
				openai.SystemMessage("Be precise and concise."),
				openai.UserMessage("How many stars are there in our galaxy?"),
			},
		},
	)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println(resp.Choices[0].Message.Content)
}
```
```python
#!/usr/bin/env python3
from openai import OpenAI


def main():
    # Sample key for illustration; replace with your own API key.
    api_key = "sk-123456789012345678901234567890123456789012345678"
    client = OpenAI(
        api_key=api_key,
        base_url="https://gateway.serevixai.ai/v1",
    )
    response = client.chat.completions.create(
        model="sonar",
        messages=[
            {"role": "system", "content": "Be precise and concise."},
            {"role": "user", "content": "How many stars are there in our galaxy?"},
        ],
    )
    print(response.choices[0].message.content)


if __name__ == "__main__":
    main()
```
5. Response Example¶
```json
{
  "id": "3c90c3cc-0d44-4b50-8888-8dd25736052a",
  "model": "sonar",
  "object": "chat.completion",
  "created": 1724369245,
  "citations": [
    "https://www.astronomy.com/science/astro-for-kids-how-many-stars-are-there-in-space/",
    "https://www.esa.int/Science_Exploration/Space_Science/Herschel/How_many_stars_are_there_in_the_Universe",
    "https://www.space.com/25959-how-many-stars-are-in-the-milky-way.html",
    "https://www.space.com/26078-how-many-stars-are-there.html",
    "https://en.wikipedia.org/wiki/Milky_Way",
    "https://www.littlepassports.com/blog/space/how-many-stars-are-in-the-universe/?srsltid=AfmBOoqWVymRloolU4KZBI9-LotDIoTnzhKYKCw7vVkaIifhjrEU66_5"
  ],
  "choices": [
    {
      "index": 0,
      "finish_reason": "stop",
      "message": {
        "role": "assistant",
        "content": "The number of stars in the Milky Way galaxy is estimated to be between 100 billion and 400 billion stars. The most recent estimates from the Gaia mission suggest that there are approximately 100 to 400 billion stars in the Milky Way, with significant uncertainties remaining due to the difficulty in detecting faint red dwarfs and brown dwarfs."
      },
      "delta": {
        "role": "assistant",
        "content": ""
      }
    }
  ],
  "usage": {
    "prompt_tokens": 14,
    "completion_tokens": 70,
    "total_tokens": 84
  }
}
```
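A caller typically reads the answer from choices[0].message.content and the supporting links from the citations array; a minimal sketch in Python against an abbreviated copy of the example response above:

```python
import json

# Abbreviated copy of the response example, limited to the fields read below.
raw = """
{
  "id": "3c90c3cc-0d44-4b50-8888-8dd25736052a",
  "model": "sonar",
  "citations": ["https://en.wikipedia.org/wiki/Milky_Way"],
  "choices": [
    {
      "index": 0,
      "finish_reason": "stop",
      "message": {"role": "assistant", "content": "Between 100 and 400 billion stars."}
    }
  ],
  "usage": {"prompt_tokens": 14, "completion_tokens": 70, "total_tokens": 84}
}
"""

data = json.loads(raw)
answer = data["choices"][0]["message"]["content"]  # the model's reply text
sources = data["citations"]                        # URLs backing the answer

print(answer)
for url in sources:
    print("-", url)
```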