Image Generation with DALL·E and Stable Diffusion Using APIs
Learn how to generate AI images from text using DALL·E or Stable Diffusion APIs. Includes prompt tips and web or mobile integration steps.
AI-generated images are no longer science fiction. With models like DALL·E and Stable Diffusion, you can turn plain text into stunning visuals using simple API calls. Whether you’re building a web app, mobile project, or creative tool, these models give you immense flexibility.
In this guide, I’ll walk you through:
- How to use DALL·E and Stable Diffusion via APIs (OpenAI, Replicate, Stability.ai)
- Writing prompts that give you clean, accurate outputs
- Displaying or handling the generated images in your frontend project
Prerequisites
Before you begin, make sure you have:
- A developer account on OpenAI, Replicate, or Stability.ai
- API keys for your preferred provider
- Basic knowledge of JavaScript or Python
- A working frontend (React/Angular/Vue) or backend (Node/Go/Python) project
Step 1: Choosing Between DALL·E and Stable Diffusion
| Model | Provider | Notes |
|---|---|---|
| DALL·E 3 | OpenAI | High-quality, better for complex scenes |
| Stable Diffusion 1.5 / 2.1 / XL | Stability.ai, Replicate | Faster, customizable, open-source |
For quick experiments or fine-grained control, Stable Diffusion via Replicate is often a good choice. For commercial projects, DALL·E 3 through OpenAI’s API tends to give cleaner results.
Step 2: Designing Better Prompts
Prompt quality affects the output significantly. Here are some examples:
✅ Good Prompts:
- “A cyberpunk city at night, neon lights, 4K, cinematic lighting”
- “A hand-drawn sketch of a flying car, white background, top view”
🚫 Avoid Vague Prompts:
- “A nice image” ❌
- “Something cool” ❌
Use descriptive keywords like style, lighting, composition, and camera angle.
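Those keyword categories can also be assembled programmatically. Here is a minimal sketch of that idea; the `buildPrompt` helper is illustrative, not part of any SDK:

```go
package main

import (
	"fmt"
	"strings"
)

// buildPrompt joins a subject with optional style/lighting/composition
// modifiers into a single comma-separated prompt, skipping empty entries.
func buildPrompt(subject string, modifiers ...string) string {
	parts := []string{subject}
	for _, m := range modifiers {
		if m != "" {
			parts = append(parts, m)
		}
	}
	return strings.Join(parts, ", ")
}

func main() {
	prompt := buildPrompt(
		"A cyberpunk city at night",
		"neon lights",
		"4K",
		"cinematic lighting",
	)
	fmt.Println(prompt)
	// A cyberpunk city at night, neon lights, 4K, cinematic lighting
}
```

Keeping the modifiers as separate values makes it easy to let users toggle style options in your UI without string surgery.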
Step 3: Generating Images via API in Go
Using OpenAI’s DALL·E API (Go Example)
Install the OpenAI Go SDK (if not already done):
```bash
go get github.com/sashabaranov/go-openai
```
```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	openai "github.com/sashabaranov/go-openai"
)

func main() {
	client := openai.NewClient(os.Getenv("OPENAI_API_KEY"))

	req := openai.ImageRequest{
		Model:          openai.CreateImageModelDallE3, // defaults to DALL·E 2 if omitted
		Prompt:         "A medieval castle on a cliff during sunset",
		N:              1,
		Size:           openai.CreateImageSize1024x1024,
		ResponseFormat: openai.CreateImageResponseFormatURL,
	}

	resp, err := client.CreateImage(context.Background(), req)
	if err != nil {
		log.Fatalf("Failed to generate image: %v", err)
	}

	fmt.Println("Generated image URL:", resp.Data[0].URL)
}
```
Make sure you set the OPENAI_API_KEY environment variable.
Using Replicate API (Stable Diffusion via Go HTTP client)
```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
)

func main() {
	apiKey := os.Getenv("REPLICATE_API_TOKEN")
	url := "https://api.replicate.com/v1/predictions"

	payload := map[string]interface{}{
		"version": "a9758cb3...your_model_version_id...",
		"input": map[string]string{
			"prompt": "A futuristic train station, cyberpunk style",
		},
	}

	data, err := json.Marshal(payload)
	if err != nil {
		log.Fatal("Marshal error:", err)
	}

	req, err := http.NewRequest("POST", url, bytes.NewBuffer(data))
	if err != nil {
		log.Fatal("Request error:", err)
	}
	req.Header.Set("Authorization", "Token "+apiKey)
	req.Header.Set("Content-Type", "application/json")

	res, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal("Request error:", err)
	}
	defer res.Body.Close()

	body, err := io.ReadAll(res.Body)
	if err != nil {
		log.Fatal("Read error:", err)
	}
	fmt.Println("Response:", string(body))
}
```
This returns a JSON prediction object. Because Replicate processes predictions asynchronously, the image URL appears in the response’s `output` field only once processing completes; poll the GET endpoint returned under `urls.get` until the `status` becomes `succeeded`.
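That polling loop can be sketched as follows. The `status`, `output`, and `urls.get` field names follow Replicate’s prediction schema; the 2-second interval and the `parsePrediction` helper are illustrative choices:

```go
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
	"time"
)

// prediction mirrors the fields of Replicate's prediction object
// that we need for polling.
type prediction struct {
	Status string   `json:"status"`
	Output []string `json:"output"`
	URLs   struct {
		Get string `json:"get"`
	} `json:"urls"`
}

// parsePrediction decodes a prediction JSON body.
func parsePrediction(body []byte) (prediction, error) {
	var p prediction
	err := json.Unmarshal(body, &p)
	return p, err
}

func main() {
	apiKey := os.Getenv("REPLICATE_API_TOKEN")
	if apiKey == "" {
		fmt.Println("set REPLICATE_API_TOKEN to run the polling loop")
		return
	}
	// Use the urls.get value from the creation response here.
	getURL := "https://api.replicate.com/v1/predictions/..."

	for {
		req, err := http.NewRequest("GET", getURL, nil)
		if err != nil {
			log.Fatal(err)
		}
		req.Header.Set("Authorization", "Token "+apiKey)

		res, err := http.DefaultClient.Do(req)
		if err != nil {
			log.Fatal(err)
		}
		body, err := io.ReadAll(res.Body)
		res.Body.Close()
		if err != nil {
			log.Fatal(err)
		}

		p, err := parsePrediction(body)
		if err != nil {
			log.Fatal(err)
		}
		switch p.Status {
		case "succeeded":
			fmt.Println("Image URL:", p.Output[0])
			return
		case "failed", "canceled":
			log.Fatalf("prediction ended with status %q", p.Status)
		default:
			// Still "starting" or "processing"; wait and try again.
			time.Sleep(2 * time.Second)
		}
	}
}
```

In production you would add a timeout or maximum retry count rather than looping forever, or use Replicate’s webhook option to be notified instead of polling.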