## TL;DR for Developers
- 3 lines of code: Change `baseURL` in the OpenAI SDK and your existing ChatGPT code works immediately
- 16 tools auto-called: Natal charts, transits, synastry, PDF reports, SVG rendering, all server-side
- BYOK: Use GPT-5, Claude Opus 4, Gemini 3 Flash, Grok 4, your key, your cost
- Vibe coding: Paste one prompt into Lovable, Cursor, or Claude Code, get a working widget
- Pricing: 2-25 credits/turn (~$0.001-$0.02) vs competitors' $0.25-$0.87/query
- Free tier: 50 requests/month, no credit card required
You want an astrology AI feature in your app. So you Google around. You find a few options.
That's what we built.
## "Chat" Is the Wrong Word. This Is an Agent.
When people say "astrology chatbot," they picture a text box connected to an LLM. That's not what you need to build a good astrology product. You need an agent, and the difference comes down to three layers.
All three layers running together is what makes a response feel like a consultation rather than a horoscope lookup. Our API implements all three server-side.
I call it a chat API because that's what developers search for. But what you're actually building is an agent.
## What Users Actually Want From an AI Astrologer
Before getting into the technical side, it's worth thinking about what makes an astrology chat good from the user's perspective. Because the API is just plumbing. The experience your users have depends on how you configure it.
Reviews of AI astrology products surface the same complaints over and over. Users don't hate that it's AI. They hate that it's generic.
"It feels like recycled horoscope copy." "It told me something that could apply to anyone born in March." "The tone is so clinical it doesn't feel like astrology at all."
The technical foundation matters enormously here. Most AI astrology products use an LLM with no real chart data. The model guesses based on training data. When a user asks about their Sun square Pluto, they want an interpretation of their actual aspect, not a probability distribution of what that aspect usually means. The difference is real data vs. plausible-sounding fiction.
Assuming you solve the data problem (which our API does, via Swiss Ephemeris calculations passed directly to the model), there are five things that determine whether users love or abandon your chat:
- Response length: `astrology.defaults.detail_level` accepts `brief` (under 100 words), `standard` (200-350 words), or `detailed` (500-800 words). The user's own instruction ("briefly" or "go deep") overrides the default.
- Memory: pass the same `session_id` across turns and the server restores full conversation history. Users can say "what about what you just said about my Venus?" and get a coherent answer.
- Progress feedback: tool events (`tool_start`, `tool_end`) let you show what the agent is doing with a few lines of UI code.

## The Integration
The API speaks the OpenAI `/chat/completions` protocol, so if you already use the OpenAI SDK, the migration is two changed lines: the `baseURL` and the API key. Your existing code for streaming, message history, and system prompts works without changes.

Everything astrology-specific lives in the `astrology` namespace in the request body. This is where you pass birth data, choose which tools the agent can call, and set defaults like language and tradition. In TypeScript you'll need a `// @ts-ignore` comment since it's an extension field; in Python you pass it via `extra_body`.

```typescript
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://api.astrology-api.io/api/v3/chat",
  apiKey: "your-astrology-api-key",
});

const response = await client.chat.completions.create({
  model: "astro-default",
  stream: true,
  messages: [
    { role: "system", content: "You are a warm, insightful astrologer." },
    { role: "user", content: "What does my natal chart say about my career?" }
  ],
  // @ts-ignore
  astrology: {
    subjects: [{
      id: "me",
      birth_data: { year: 1990, month: 3, day: 15, hour: 14, minute: 30, city: "New York", country_code: "US" }
    }],
    enabled_tools: ["analysis_natal_report"],
    defaults: { language: "en", tradition: "psychological", detail_level: "standard" }
  }
});
```

## BYOK: Which Model Fits Your Product?
BYOK (bring your own key) goes through the `/chat/completions/byok` endpoint. You choose the provider (OpenAI, Anthropic, or OpenRouter with 50+ models) and pay them directly. Our platform fee drops to 2 credits/turn.

The choice of model is actually a product design decision, not just a technical one. Each model has a distinct voice that shapes the user experience:
| Model | Speed | Style | Best fit |
|---|---|---|---|
| Gemini 3 Flash | ~4s | Structured, analytical | High-volume apps, quick daily guidance |
| GPT-5-nano | ~5s | Direct, factual | Batch content generation |
| GPT-5 | ~10s | Rich, balanced | General-purpose premium chat |
| Claude Haiku 4 | ~6s | Warm, conversational | Wellness and mindfulness apps |
| Claude Sonnet 4 | ~12s | Psychological depth | Therapy-adjacent products |
| Claude Opus 4 | ~20-25s | Richest narrative, Jungian | High-end subscriptions charging $50+/reading |
| Grok 4 | ~8s | Direct, bold, irreverent | Dating apps, Gen Z audiences |
Gemini 3 Flash at ~4 seconds is the fastest model available. Claude Opus 4 at 20-25 seconds is the slowest, but it writes the most narrative-rich interpretations we've seen from any model, with a Jungian depth that feels genuinely different from GPT-style output. With streaming on, users see the first tokens in under 2 seconds even at the slow end.
My take: Claude Sonnet 4 is the sweet spot for most astrology products. Fast enough (12s), warm tone, solid depth. Opus 4 is worth the wait if you're selling premium consultations. Gemini Flash if you're running daily horoscope generation at scale.
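For illustration, a BYOK request body can be assembled like this. Note that the `byok`, `provider`, and `provider_api_key` field names are assumptions for the sake of the sketch, not confirmed by the docs; check the API reference for the real request shape:

```typescript
type ByokProvider = "openai" | "anthropic" | "openrouter";

// Sketch only: the "byok" extension field and its keys are assumed names
// for POST /chat/completions/byok, used here purely for illustration.
function buildByokBody(
  provider: ByokProvider,
  providerKey: string,
  model: string,
  userMessage: string,
) {
  return {
    model,   // passed through to your chosen provider
    stream: true,
    messages: [{ role: "user", content: userMessage }],
    byok: { provider, provider_api_key: providerKey },
  };
}

const body = buildByokBody(
  "anthropic",
  "your-anthropic-key",
  "claude-sonnet-4",
  "What do this week's transits mean for me?",
);
console.log(body.byok.provider); // "anthropic"
```

The rest of the request (messages, streaming, the `astrology` namespace) is unchanged from hosted mode, which is the point: switching providers is a payload tweak, not a rewrite.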
## What the Agent Can Do
You control which tools are available via `enabled_tools`, and the model decides which to invoke.

The distinction between "analysis" and "chart" tools matters for your UX. If your users want to see the chart image, enable the render tools. If you're building a text-only chat, disable them to keep the agent focused.
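The text-only versus visual split can be sketched as two configurations. The tool names are the documented ones; the subject data is illustrative:

```typescript
// Text-only consultation chat: interpretive tools only, no chart images.
const textOnlyTools = ["analysis_natal_report", "analysis_transit"];

// Visual chart explorer: add raw chart data and SVG rendering on top.
const visualTools = [...textOnlyTools, "charts_natal", "render_natal"];

// Dropped into the request's astrology extension field:
const astrologyConfig = {
  subjects: [{
    id: "me",
    birth_data: { year: 1990, month: 3, day: 15, hour: 14, minute: 30, city: "New York", country_code: "US" }
  }],
  enabled_tools: visualTools,
  defaults: { language: "en", tradition: "psychological", detail_level: "standard" },
};
console.log(astrologyConfig.enabled_tools.length); // 4
```

Starting narrow and widening `enabled_tools` later is cheap, so err on the side of fewer tools until your UX needs them.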
## Multi-Subject Sessions
A session can hold multiple subjects (e.g., `"me"` and `"partner"`). The LLM references them by ID when calling tools, and the server resolves them to full chart data. No re-sending birth information on every turn.

Combined with session memory (pass the same `session_id` and the server restores full conversation history via Redis), this enables real multi-turn consultations where the AI remembers everything it said before.

## Works With Existing Chat UIs
Because the protocol is standard OpenAI, existing chat frontends plug in with minimal changes: Vercel AI SDK's `useChat` hook, Open WebUI (change one env var), chatbot-ui, LibreChat, and Chainlit for Python.

## Silver bullet prompt for vibe coders
If you're using Lovable, Cursor, Windsurf, or Claude Code, paste this directly into the chat. The AI generates correct code on the first try because it knows the OpenAI protocol.
```text
Build an astrology chat widget using the OpenAI SDK.
Base URL: https://api.astrology-api.io/api/v3/chat
API docs: https://api.astrology-api.io/md/chat-completions

Include: birth data form (date, time, city), streaming chat interface,
tool call events shown as progress ("Calculating natal chart...").
Use astrology.subjects and enabled_tools as documented.
```

No debugging custom formats. No adapter code. Just paste and ship.
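Under the hood, a widget like that only needs to map tool events to status text. Here is a minimal sketch: the event names `tool_start`/`tool_end` come from the docs, but the `{ type, tool }` chunk shape is an assumption for illustration:

```typescript
// Sketch: turn tool lifecycle events into progress labels for the UI.
// The { type, tool } shape is assumed; check the streaming docs.
type ToolEvent = { type: "tool_start" | "tool_end"; tool: string };

const LABELS: Record<string, string> = {
  charts_natal: "Calculating natal chart...",
  analysis_natal_report: "Interpreting your chart...",
  render_natal: "Drawing your chart...",
};

function progressLabel(event: ToolEvent): string | null {
  if (event.type === "tool_start") {
    return LABELS[event.tool] ?? `Running ${event.tool}...`;
  }
  return null; // tool_end: clear the progress indicator
}

console.log(progressLabel({ type: "tool_start", tool: "charts_natal" }));
// "Calculating natal chart..."
```

Because tool events ride the same SSE stream as the text tokens, this is a few lines in your existing stream handler rather than a second connection.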
## How We Compare
| Feature | Astrology API | AstrologyAPI.com | Vedika.io |
|---|---|---|---|
| Protocol | OpenAI /chat/completions | Custom JSON | Custom JSON |
| BYOK | 50+ models | locked model | locked model |
| Response time | 2-5s streaming | ~3s | 8-24s |
| Price per turn | $0.001-$0.02 | ~$0.05 | $0.25-$0.87 |
| Streaming | SSE + tool events | none | none |
| Multi-subject | yes | no | no |
| Binary artifacts | PDF, SVG via chat | no | no |
| Session memory | 24h Redis | no | no |
| Free tier | 50 req/month | no | no |
| AI coding tool compatible | yes (native) | custom adapter | custom adapter |
## Pricing
Every turn has two components: platform credits and endpoint credits for tools called.
Hosted mode (we choose the model): 25 credits/turn covering platform and LLM, plus endpoint credits per tool.
BYOK mode: 2 credits/turn platform fee, plus endpoint credits per tool, plus your LLM provider cost.
Endpoint credits per tool call: natal/transit/synastry chart = 1 credit, astrocartography = 5 credits, PDF report = 35 credits. A natal interpretation turn comes out to 26 credits hosted (about $0.02) or 3 credits BYOK (about $0.005 total including LLM).
On the Professional plan ($37/month, 55,000 credits): about 2,100 hosted turns or 18,000 BYOK turns per month.
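The math above can be sketched as a tiny estimator using the credit figures quoted in this section (treat them as illustrative, not a live price sheet):

```typescript
// Credit figures from the pricing section above.
const PLATFORM_CREDITS = { hosted: 25, byok: 2 };
const TOOL_CREDITS: Record<string, number> = {
  chart: 1,            // natal/transit/synastry chart
  astrocartography: 5,
  pdf_report: 35,
};

function creditsPerTurn(mode: "hosted" | "byok", tools: string[]): number {
  const toolSum = tools.reduce((sum, t) => sum + (TOOL_CREDITS[t] ?? 0), 0);
  return PLATFORM_CREDITS[mode] + toolSum;
}

// A natal interpretation turn: one chart tool call.
console.log(creditsPerTurn("hosted", ["chart"])); // 26
console.log(creditsPerTurn("byok", ["chart"]));   // 3

// Turns per month on the Professional plan's 55,000 credits:
console.log(Math.floor(55000 / creditsPerTurn("hosted", ["chart"]))); // 2115
```

The BYOK figure excludes whatever your LLM provider bills you directly; budget that separately per model.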
## FAQ
### For vibe coders
**Does it work with AI-generated frontends?** Yes, because the protocol is standard OpenAI: Vercel AI SDK's `useChat` hook, Lovable's built-in chat components, and Cursor-scaffolded projects all support it out of the box. Tool events (`tool_start`, `tool_end`) come through the same stream so you can show "Calculating natal chart..." progress without extra code.

**How do I migrate existing ChatGPT code?** Change two things: the `baseURL` and your API key.

**How do I pass birth data?** Send `astrology.subjects` alongside the messages. Each subject has an `id`, an optional `name`, and `birth_data` with `year`, `month`, `day`, `hour`, `minute`, `city`, and `country_code`:

```json
"astrology": {
  "subjects": [{
    "id": "me",
    "name": "Oleg",
    "birth_data": {
      "year": 1985, "month": 5, "day": 11,
      "hour": 18, "minute": 15,
      "city": "Kharkiv", "country_code": "UA"
    }
  }],
  "enabled_tools": ["analysis_natal_report"]
}
```

If you're vibe coding, tell your tool: "collect birth date, time, and city in a form and put them into `astrology.subjects[0].birth_data`." It wires the form directly to this shape.

**What is `enabled_tools` and which ones should I use?**
`enabled_tools` is an array that controls which tools the agent is allowed to call. You don't call them yourself; the LLM decides when it needs one. The full list:

- Analysis: `analysis_natal_report`, `analysis_synastry`, `analysis_transit`, `analysis_lunar` (interpretive text)
- Charts: `charts_natal`, `charts_transit`, `charts_synastry` (raw calculated data: positions, aspects, houses)
- Rendering: `render_natal`, `render_transit`, `render_synastry` (SVG chart images with shareable URLs)
- PDF: `pdf_natal_report` (multi-page professional report document)
For daily guidance, `analysis_transit` alone is enough. Start with `["analysis_natal_report"]` and add more as you need them.

**Can I white-label it with my own persona?** Yes. Send a `system` message with your persona: "You are Luna, a warm astrologer for the Starlight app." The model adopts it fully. Users never see our branding.

### General
**Does it work with LangChain?** Yes: use `ChatOpenAI(base_url="https://api.astrology-api.io/api/v3/chat")` in LangChain.

**Which languages are supported?** Set `astrology.defaults.language`: English, German, French, Spanish, Italian, Portuguese, Russian, Chinese, Hindi.

**Does it work in mobile apps?** React Native works with the standard `openai` npm package. Flutter uses `dart_openai`.


