Guides

Build an AI Astrology Chatbot in 10 Minutes

Add a fully agentic astrology AI chatbot to any app using the OpenAI SDK. BYOK, streaming, 16 tools — no custom integration code. Works with Lovable, Cursor, Claude Code.

Oleg Kopachovets

CTO & Co-Founder

April 26, 2026
12 min read
237 views
Build an AI astrology chatbot with OpenAI-compatible API

TL;DR for Developers

  • 3 lines of code: Change baseURL in the OpenAI SDK, your existing ChatGPT code works immediately
  • 16 tools auto-called: Natal charts, transits, synastry, PDF reports, SVG rendering, all server-side
  • BYOK: Use GPT-5, Claude Opus 4, Gemini 3 Flash, Grok 4, your key, your cost
  • Vibe coding: Paste one prompt into Lovable, Cursor, or Claude Code, get a working widget
  • Pricing: 2-25 credits/turn (~$0.001-$0.02) vs competitors' $0.25-$0.87/query
  • Free tier: 50 requests/month, no credit card required

You want an astrology AI feature in your app. So you Google around. You find a few options.

Option A: Build it yourself with raw astrology data APIs + LangChain. You write 400+ lines of agent orchestration, define 16 tool schemas, handle streaming, manage sessions, debug hallucinated birth dates. Takes 3-6 weeks.
Option B: Use a competing "astrology chat API." Custom JSON format. Custom SDK. Proprietary model you can't change. $0.87/query. 19-second response times.
Option C: Change one line of code in your existing OpenAI integration.
Option D: If you're vibe coding with Lovable, Cursor, or Claude Code, paste a single prompt and let the AI generate the widget for you. It works on the first try because the OpenAI protocol is in every model's training data.

That's what we built.

"Chat" Is the Wrong Word. This Is an Agent.

When people say "astrology chatbot," they picture a text box connected to an LLM. That's not what you need to build a good astrology product. You need an agent, and the difference comes down to three layers.

Layer 1: Tools. Planetary positions change every day. A natal chart depends on exact birth time and location. An LLM can't answer from training data alone without hallucinating. The agent needs to call real calculation tools against Swiss Ephemeris and get actual computed data back. Without this, every answer is a hallucination dressed up as astrology.
Layer 2: Knowledge (RAG). Some astrological knowledge is stable: what a trine means, the symbolism of Pluto, the difference between house systems. A well-designed agent retrieves relevant interpretive knowledge at query time and combines it with the live chart data from Layer 1. Computationally grounded and symbolically rich.
Layer 3: Memory. This is where most products fail. An astrologer who forgets the conversation after every sentence is a fortune cookie dispenser. Session memory means the agent knows your birth data, remembers what it said two turns ago, and can answer "what about that transit you mentioned?" coherently.

All three layers running together is what makes a response feel like a consultation rather than a horoscope lookup. Our API implements all three server-side.

I call it a chat API because that's what developers search for. But what you're actually building is an agent.

What Users Actually Want From an AI Astrologer

Before getting into the technical side, it's worth thinking about what makes an astrology chat good from the user's perspective. Because the API is just plumbing. The experience your users have depends on how you configure it.

Reviews of AI astrology products surface the same complaints over and over. Users don't hate that it's AI. They hate that it's generic.

"It feels like recycled horoscope copy." "It told me something that could apply to anyone born in March." "The tone is so clinical it doesn't feel like astrology at all."

The technical foundation matters enormously here. Most AI astrology products use an LLM with no real chart data. The model guesses based on training data. When a user asks about their Sun square Pluto, they want an interpretation of their actual aspect, not a probability distribution of what that aspect usually means. The difference is real data vs. plausible-sounding fiction.

Assuming you solve the data problem (which our API does, via Swiss Ephemeris calculations passed directly to the model), there are five things that determine whether users love or abandon your chat:

1. It needs to feel personal, not templated. Every response should read like it was written about this person, not about "Sun in Aries in general." This comes from how rich the chart data is that you pass to the model, and how specifically the system prompt instructs it to use that data rather than speak in generalities.
2. Warmth without sacrificing accuracy. Users want an astrologer's tone, not a data scientist's. The system prompt is your main lever here. "You are a warm, insightful astrologer" produces genuinely different output than no system prompt at all. Claude models tend toward empathetic and growth-oriented language naturally. GPT-5 is more encyclopedic. Pick the model voice that fits your product's personality.
3. Response length that matches the question. A quick "what's my moon sign?" doesn't need 600 words. A "tell me about my relationship patterns" does. You can control this with astrology.defaults.detail_level: brief (under 100 words), standard (200-350 words), or detailed (500-800 words). The user's own instruction ("briefly" or "go deep") overrides the default.
4. Multi-turn memory. The biggest gap between AI astrology and real consultations is continuity. A human astrologer remembers what you said three minutes ago. Most AI products don't. Each message is a fresh context. Session persistence via session_id fixes this. Pass the same ID across turns and the server restores full conversation history. Users can say "what about what you just said about my Venus?" and get a coherent answer.
5. Visible thinking, not a blank spinner. Users tolerate 5-10 second waits when they can see something happening. "Calculating your transit chart..." followed by the answer is a much better experience than a blank spinner for 8 seconds followed by a wall of text. SSE streaming events (tool_start, tool_end) let you build this with a few lines of UI code.
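To make the "visible thinking" point concrete, here is a minimal sketch of turning streamed tool events into status text. The event shape (`type` and `tool` fields) and the specific tool-to-label pairings are assumptions for illustration; check the SSE event documentation for the actual payload.

```typescript
// Hypothetical shape of a streamed tool event — the real payload may differ.
type ToolEvent = { type: "tool_start" | "tool_end"; tool: string };

// Human-readable progress labels for a few assumed tool names.
const LABELS: Record<string, string> = {
  charts_natal: "Calculating your natal chart...",
  charts_transit: "Calculating your transit chart...",
  analysis_synastry: "Comparing both charts...",
};

// Status line to show while a tool runs, or null once it finishes.
function statusFor(event: ToolEvent): string | null {
  if (event.type === "tool_end") return null;
  return LABELS[event.tool] ?? "Looking that up...";
}
```

In a chat UI you would call `statusFor` on each tool event from the stream and render the returned string above the incoming message bubble, clearing it when `null` comes back.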

The Integration

The API implements the standard OpenAI /chat/completions protocol, so if you already use the OpenAI SDK, the migration is two changed lines: the baseURL and the API key. Your existing code for streaming, message history, and system prompts works without changes.
The key addition is the astrology namespace in the request body. This is where you pass birth data, choose which tools the agent can call, and set defaults like language and tradition. In TypeScript you'll need a // @ts-ignore comment since it's an extension field; in Python you pass it via extra_body.
```typescript
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://api.astrology-api.io/api/v3/chat",
  apiKey: "your-astrology-api-key",
});

const response = await client.chat.completions.create({
  model: "astro-default",
  stream: true,
  messages: [
    { role: "system", content: "You are a warm, insightful astrologer." },
    { role: "user", content: "What does my natal chart say about my career?" }
  ],
  // @ts-ignore — `astrology` is a protocol extension field
  astrology: {
    subjects: [{ id: "me", birth_data: { year: 1990, month: 3, day: 15, hour: 14, minute: 30, city: "New York", country_code: "US" } }],
    enabled_tools: ["analysis_natal_report"],
    defaults: { language: "en", tradition: "psychological", detail_level: "standard" }
  }
});
```
Full schema documentation at astrology-api.io/demo, including all available fields, tool names, and BYOK configuration.

BYOK: Which Model Fits Your Product?

If you don't want to use the hosted model, pass your own LLM key via the /chat/completions/byok endpoint. You choose the provider (OpenAI, Anthropic, or OpenRouter with 50+ models) and pay them directly. Our platform fee drops to 2 credits/turn.
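As a rough sketch of what a BYOK request might look like: the `/chat/completions/byok` endpoint path comes from the text above, but the `byok` field name and its `provider`/`api_key` keys are assumptions, not confirmed schema — verify against the BYOK documentation before relying on them.

```typescript
// Build a BYOK request body. The `byok` block is a guess at the schema
// (provider + key) — real field names may differ; see the API docs.
function buildByokRequest(
  provider: "openai" | "anthropic" | "openrouter",
  providerKey: string,
  model: string
) {
  return {
    model,
    stream: true,
    messages: [{ role: "user", content: "What's my rising sign?" }],
    byok: { provider, api_key: providerKey }, // assumed field names
  };
}

const body = buildByokRequest("anthropic", "sk-ant-your-key", "claude-sonnet-4");
// POST this to https://api.astrology-api.io/api/v3/chat/completions/byok
```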

The choice of model is actually a product design decision, not just a technical one. Each model has a distinct voice that shapes the user experience:

| Model | Speed | Style | Best fit |
|---|---|---|---|
| Gemini 3 Flash | ~4s | Structured, analytical | High-volume apps, quick daily guidance |
| GPT-5-nano | ~5s | Direct, factual | Batch content generation |
| GPT-5 | ~10s | Rich, balanced | General-purpose premium chat |
| Claude Haiku 4 | ~6s | Warm, conversational | Wellness and mindfulness apps |
| Claude Sonnet 4 | ~12s | Psychological depth | Therapy-adjacent products |
| Claude Opus 4 | ~20-25s | Richest narrative, Jungian | High-end subscriptions charging $50+/reading |
| Grok 4 | ~8s | Direct, bold, irreverent | Dating apps, Gen Z audiences |

Gemini 3 Flash at ~4 seconds is the fastest model available. Claude Opus 4 at 20-25 seconds is the slowest, but it writes the most narrative-rich interpretations we've seen from any model, with a Jungian depth that feels genuinely different from GPT-style output. With streaming on, users see the first tokens in under 2 seconds even at the slow end.

My take: Claude Sonnet 4 is the sweet spot for most astrology products. Fast enough (12s), warm tone, solid depth. Opus 4 is worth the wait if you're selling premium consultations. Gemini Flash if you're running daily horoscope generation at scale.

What the Agent Can Do

The server exposes 16 tools the LLM picks from automatically based on the user's question. You don't call these directly. Declare which ones are available via enabled_tools, and the model decides which to invoke.
Analysis tools produce interpretations: natal reports, synastry compatibility scores, lunar analysis. Chart tools return raw calculated data: planetary positions, aspects, house cusps. Rendering tools generate image artifacts. Ask "show me my natal chart" and the agent calls the render tool, stores the SVG in Supabase, and returns a shareable URL. The PDF tool generates a professional multi-page natal report document.

The distinction between "analysis" and "chart" tools matters for your UX. If your users want to see the chart image, enable the render tools. If you're building a text-only chat, disable them to keep the agent focused.

Multi-Subject Sessions

This is the one feature no competitor offers: passing multiple birth charts in a single conversation. For a compatibility checker or synastry app, you pass both subjects upfront with semantic IDs ("me" and "partner"). The LLM references them by ID when calling tools, and the server resolves them to full chart data. No re-sending birth information on every turn.
Combined with session persistence (pass the same session_id and the server restores full conversation history via Redis), this enables real multi-turn consultations where the AI remembers everything it said before.
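Putting the two features together, here is a sketch of the `astrology` block for a synastry turn with two subjects and a persistent session. The `subjects` array follows the schema shown earlier; where exactly `session_id` sits (inside `astrology` vs. top-level) is an assumption here, and the birth data values are made up — check the schema docs.

```typescript
// Two subjects with semantic IDs, plus a session ID so the server restores
// conversation history on later turns. Birth data values are illustrative.
const astrologyBlock = {
  subjects: [
    { id: "me", birth_data: { year: 1990, month: 3, day: 15, hour: 14, minute: 30, city: "New York", country_code: "US" } },
    { id: "partner", birth_data: { year: 1992, month: 7, day: 2, hour: 9, minute: 5, city: "Boston", country_code: "US" } },
  ],
  enabled_tools: ["analysis_synastry"],
  session_id: "sess-abc123", // placement of session_id is an assumption
};
```

On every later turn you would send the same `session_id` and the model can reference "me" and "partner" by ID in its tool calls, with the server resolving them to full chart data.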

Works With Existing Chat UIs

Since we're OpenAI-compatible, any frontend built for OpenAI works out of the box: Vercel AI SDK's useChat hook, Open WebUI (change one env var), chatbot-ui, LibreChat, Chainlit for Python.

Silver bullet prompt for vibe coders

If you're using Lovable, Cursor, Windsurf, or Claude Code, paste this directly into the chat. The AI generates correct code on the first try because it knows the OpenAI protocol.

```text
Build an astrology chat widget using the OpenAI SDK.
Base URL: https://api.astrology-api.io/api/v3/chat
API docs: https://api.astrology-api.io/md/chat-completions

Include: birth data form (date, time, city), streaming chat interface,
tool call events shown as progress ("Calculating natal chart...").
Use astrology.subjects and enabled_tools as documented.
```
No debugging custom formats. No adapter code. Just paste and ship.

How We Compare

| Feature | Astrology API | AstrologyAPI.com | Vedika.io |
|---|---|---|---|
| Protocol | OpenAI /chat/completions | Custom JSON | Custom JSON |
| BYOK | 50+ models | locked model | locked model |
| Response time | 2-5s streaming | ~3s | 8-24s |
| Price per turn | $0.001-$0.02 | ~$0.05 | $0.25-$0.87 |
| Streaming | SSE + tool events | none | none |
| Multi-subject | yes | no | no |
| Binary artifacts | PDF, SVG via chat | no | no |
| Session memory | 24h Redis | no | no |
| Free tier | 50 req/month | no | no |
| AI coding tool compatible | yes (native) | custom adapter | custom adapter |

Pricing

Every turn has two components: platform credits and endpoint credits for tools called.

Hosted mode (we choose the model): 25 credits/turn covering platform and LLM, plus endpoint credits per tool.

BYOK mode: 2 credits/turn platform fee, plus endpoint credits per tool, plus your LLM provider cost.

Endpoint credits per tool call: natal/transit/synastry chart = 1 credit, astrocartography = 5 credits, PDF report = 35 credits. A natal interpretation turn comes out to 26 credits hosted (about $0.02) or 3 credits BYOK (about $0.005 total including LLM).

On the Professional plan ($37/month, 55,000 credits): about 2,100 hosted turns or 18,000 BYOK turns per month.
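The pricing above reduces to simple arithmetic. A small helper, using the platform fees and per-tool credit costs quoted in this section (the string keys are just labels for this sketch, not API tool names):

```typescript
// Endpoint credits per tool call, as listed above.
const TOOL_CREDITS: Record<string, number> = {
  chart: 1,            // natal / transit / synastry chart
  astrocartography: 5,
  pdf_report: 35,
};

// Credits for one turn: platform fee (25 hosted, 2 BYOK) + endpoint credits.
function creditsPerTurn(mode: "hosted" | "byok", tools: string[]): number {
  const platform = mode === "hosted" ? 25 : 2;
  return platform + tools.reduce((sum, t) => sum + (TOOL_CREDITS[t] ?? 0), 0);
}
```

`creditsPerTurn("hosted", ["chart"])` gives 26 and `creditsPerTurn("byok", ["chart"])` gives 3, matching the worked example in the text.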

FAQ

For vibe coders

Can I just paste a prompt into Lovable / Cursor / Claude Code? Yes. Paste the silver bullet prompt from the section above. Because we follow the OpenAI protocol exactly, AI coding tools already know the SDK — they generate correct code on the first try, with no custom adapter, no debugging custom formats.
Does the generated code work with streaming and tool call events? Yes. The same SSE streaming you'd use with OpenAI works here. Vercel AI SDK's useChat hook, Lovable's built-in chat components, and Cursor scaffolded projects all support it out of the box. Tool events (tool_start, tool_end) come through the same stream so you can show "Calculating natal chart..." progress without extra code.
What prompt should I give Lovable/Cursor to build a full astrology app? Start with the prompt in the "Silver bullet prompt for vibe coders" section above. Then iterate: "Add a birth data form", "Show the SVG natal chart", "Add synastry comparison for two people." Each iteration is one prompt because the AI already understands the API structure.
I'm building in Bolt / Replit Agent / Windsurf — will it work? Yes. Any AI coding tool that knows the OpenAI SDK works. They all do. The only thing you configure is baseURL and your API key.
How do I pass birth data from a form to the API? Put it in astrology.subjects alongside the messages. Each subject has an id, optional name, and birth_data with year, month, day, hour, minute, city, and country_code:
json
1"astrology": {
2 "subjects": [{
3 "id": "me",
4 "name": "Oleg",
5 "birth_data": {
6 "year": 1985, "month": 5, "day": 11,
7 "hour": 18, "minute": 15,
8 "city": "Kharkiv", "country_code": "UA"
9 }
10 }],
11 "enabled_tools": ["analysis_natal_report"]
12}
Tell the AI coding tool: "collect birth date, time, and city in a form, then pass to astrology.subjects[0].birth_data." It wires the form directly to this shape.
What are enabled_tools and which ones should I use? enabled_tools is an array that controls which tools the agent is allowed to call. You don't call them yourself; the LLM decides when it needs one. The full list:
  • Analysis: analysis_natal_report, analysis_synastry, analysis_transit, analysis_lunar — interpretive text
  • Charts: charts_natal, charts_transit, charts_synastry — raw calculated data (positions, aspects, houses)
  • Rendering: render_natal, render_transit, render_synastry — SVG chart images with shareable URLs
  • PDF: pdf_natal_report — multi-page professional report document
If you're building a text chat, use the analysis tools. If you want to show chart images, add the render tools. For a daily horoscope widget, analysis_transit alone is enough. Start with ["analysis_natal_report"] and add more as you need them.
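Following the guidance above, a few starting configurations you might hand to your coding tool. The tool names come from the list above; the product groupings are suggestions, not prescribed presets.

```typescript
// Suggested enabled_tools starting points per product type,
// using tool names from the list above.
const PRESETS: Record<string, string[]> = {
  textChat: ["analysis_natal_report", "analysis_synastry", "analysis_transit"],
  chartViewer: ["analysis_natal_report", "render_natal"],
  dailyHoroscope: ["analysis_transit"],
};
```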
Can I white-label the chatbot persona? Yes. Pass a system message with your persona: "You are Luna, a warm astrologer for the Starlight app." The model adopts it fully. Users never see our branding.

General

Does it hallucinate birth data? No. The LLM receives actual planetary positions and aspect orbs calculated from Swiss Ephemeris and synthesizes interpretations from that data.
Can I use LangChain or LlamaIndex? Yes. Any framework that speaks the OpenAI protocol works. ChatOpenAI(base_url="https://api.astrology-api.io/api/v3/chat") in LangChain.
What languages are supported? 9 languages via astrology.defaults.language: English, German, French, Spanish, Italian, Portuguese, Russian, Chinese, Hindi.
Does it work with React Native or Flutter? Yes. React Native uses the standard openai npm package. Flutter uses dart_openai.

Get Started

Free tier: 50 requests/month, no credit card. Full API reference at astrology-api.io/demo.

Oleg Kopachovets

CTO & Co-Founder

Technical founder at Astrology API, specializing in astronomical calculations and AI-powered astrology