Start Building Smarter Chatbots Using OpenAI’s chat/completions API
Learn how to use OpenAI’s chat/completions API to build smart chatbots in Go, control tone with parameters, and stream real-time replies.
In this section, you will learn how to use OpenAI’s chat/completions endpoint to build intelligent, conversational bots. Whether you’re creating a customer support assistant, a knowledge-based query bot, or just experimenting with AI-driven dialogue systems, understanding the anatomy of a conversation request is essential.
We will walk through:
- The structure of system, user, and assistant messages.
- Parameters like `temperature`, `top_p`, and `max_tokens`, and how they affect response behavior.
- Streaming responses in real time for more interactive UIs.
- Golang code examples using `net/http` and `encoding/json`.
🧠 What Is the chat/completions Endpoint?
The chat/completions endpoint is designed for structured conversation. Each interaction consists of a list of messages in a specific format:
```json
[
  { "role": "system", "content": "You are a helpful assistant." },
  { "role": "user", "content": "Tell me a joke." }
]
```
- **System**: Defines the assistant's personality or rules.
- **User**: Represents the human speaking to the bot.
- **Assistant**: Used when you want to include previous bot replies in the conversation history.
🔧 Basic Golang Example to Make a Chat Request
Here’s how you can make a simple chat completion request in Golang:
```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"os"
)

const openaiURL = "https://api.openai.com/v1/chat/completions"

type Message struct {
	Role    string `json:"role"`
	Content string `json:"content"`
}

type ChatRequest struct {
	Model       string    `json:"model"`
	Messages    []Message `json:"messages"`
	MaxTokens   int       `json:"max_tokens"`
	Temperature float64   `json:"temperature"`
}

type Choice struct {
	Message Message `json:"message"`
}

type ChatResponse struct {
	Choices []Choice `json:"choices"`
}

func main() {
	apiKey := os.Getenv("OPENAI_API_KEY")

	messages := []Message{
		{Role: "system", Content: "You are a helpful assistant."},
		{Role: "user", Content: "What's the capital of France?"},
	}

	reqBody := ChatRequest{
		Model:       "gpt-3.5-turbo",
		Messages:    messages,
		MaxTokens:   100,
		Temperature: 0.7,
	}

	bodyBytes, err := json.Marshal(reqBody)
	if err != nil {
		fmt.Println("Error encoding request:", err)
		return
	}

	req, err := http.NewRequest("POST", openaiURL, bytes.NewBuffer(bodyBytes))
	if err != nil {
		fmt.Println("Error creating request:", err)
		return
	}
	req.Header.Set("Authorization", "Bearer "+apiKey)
	req.Header.Set("Content-Type", "application/json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		fmt.Println("Error:", err)
		return
	}
	defer resp.Body.Close()

	// io.ReadAll replaces the deprecated ioutil.ReadAll.
	respBody, err := io.ReadAll(resp.Body)
	if err != nil {
		fmt.Println("Error reading response:", err)
		return
	}

	var chatResp ChatResponse
	if err := json.Unmarshal(respBody, &chatResp); err != nil {
		fmt.Println("Error decoding response:", err)
		return
	}
	if len(chatResp.Choices) == 0 {
		fmt.Println("No choices returned; raw response:", string(respBody))
		return
	}
	fmt.Println("Assistant:", chatResp.Choices[0].Message.Content)
}
```
🎛️ Controlling the Chatbot’s Behavior
You can control the tone and creativity of the chatbot using:
| Parameter | Purpose |
|---|---|
| `temperature` | Controls randomness (0 = predictable, 1 = creative) |
| `top_p` | Nucleus sampling for randomness control |
| `max_tokens` | Limits the length of the response |
| `presence_penalty` | Encourages new topics |
| `frequency_penalty` | Reduces repetition |
Try tweaking the temperature in the Golang code above from 0.2 (serious) to 0.9 (casual and creative).
🔄 Streaming Chat Responses in Real-Time (Server-Sent Events)
For real-time UX, streaming is supported. Here is a simplified Golang snippet to consume the stream:
```go
// Ask for a streamed response. The ChatRequest struct needs an extra
// field for this: Stream bool `json:"stream"`.
reqBody.Stream = true
req.Header.Set("Accept", "text/event-stream")
// Then use bufio.Scanner to read the stream line by line.
```

Full streaming with `net/http` requires manually parsing the `data:` chunks of the server-sent event stream. You may also consider a library such as sashabaranov's go-openai for easier handling.
✅ Summary
By the end of this module, you will:
- Understand how to format chat messages with different roles.
- Control your chatbot's tone and behavior with `temperature` and other parameters.
- Send requests and receive intelligent responses using Golang.
- Build the foundation for streaming and real-time AI conversations.