
Astrology Chat API Launch: LLM-Ready Chart Context

Astrology Chat API is live. Pre-summarized natal and transit context tuned for Claude, GPT, and Gemini context windows. Build grounded astrology chatbots without manual prompt assembly.


Oleg Kopachovets

CTO & Co-Founder

April 27, 2026
3 min read
Astrology Chat API launch announcement

What shipped

The Astrology Chat API launched on April 26, 2026. It is a dedicated endpoint that returns pre-summarized natal and transit context shaped specifically for LLM consumption — Claude, GPT-4/5, Gemini, or any other model with a context window.

If you have ever built an LLM-powered astrology chatbot, you have hit the same wall: dumping the full natal chart JSON into the prompt blows past token limits, costs extra per turn, and produces sluggish responses. Most teams solve this by writing custom summarization logic. We built it into the API.

What's in the endpoint

  • Pre-summarized natal context — planetary placements, aspects, dignities condensed into LLM-friendly prose, typically 500-800 tokens
  • Daily transit summary — current transits relevant to the user's chart, refreshed daily, typically 200-400 tokens
  • Moon phase, Mercury retrograde state, void-of-course status — the "what is the sky doing right now" trio, condensed
  • Language selection — output context in any of nine supported languages
  • Style controls — tone presets (clinical, conversational, mystical) so the LLM stays on-brand for your bot
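For orientation, here is a rough TypeScript sketch of the context payload, inferred from the fields the integration snippet in this post reads (`natalSummary`, `transitSummary`, `moonPhase`). The exact schema and any additional fields are assumptions, not the documented contract:

```typescript
// Hypothetical response shape — field names beyond natalSummary,
// transitSummary, and moonPhase are illustrative assumptions.
interface ChatContext {
  natalSummary: string;   // ~500-800 tokens of prose
  transitSummary: string; // ~200-400 tokens, refreshed daily
  moonPhase: string;      // moon phase / retrograde / void-of-course line
  language: string;       // one of the nine supported languages
  tone: 'clinical' | 'conversational' | 'mystical';
}

const example: ChatContext = {
  natalSummary: 'Sun in Leo in the 10th house, trine Moon in Sagittarius...',
  transitSummary: 'Transiting Saturn squares the natal Sun today...',
  moonPhase: 'Waxing gibbous; Mercury direct; Moon not void-of-course.',
  language: 'en',
  tone: 'conversational',
};
```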

Why it matters

The grounded-chatbot pattern requires real astrological data in the prompt. The naive way costs you 3,000+ tokens of natal JSON per turn. The Chat API gives you the same data in 700-1,200 tokens of pre-summarized context. At scale that is the difference between a sustainable inference budget and one that eats your margin.
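The arithmetic is easy to check. Taking the figures above (3,000 tokens of raw JSON versus roughly 1,000 of summarized context) and an illustrative price of $3 per million input tokens — an assumption for the sketch, not a quoted rate — the context cost per 1,000 turns works out as:

```typescript
// Back-of-envelope input-token cost per 1,000 conversation turns.
// $3 per million input tokens is an illustrative assumption; substitute
// your model's actual pricing.
const PRICE_PER_MILLION_INPUT_TOKENS = 3.0;

function costPer1kTurns(tokensPerTurn: number): number {
  const tokens = tokensPerTurn * 1_000; // total context tokens over 1k turns
  return (tokens / 1_000_000) * PRICE_PER_MILLION_INPUT_TOKENS;
}

const naive = costPer1kTurns(3_000);      // full natal JSON every turn: $9
const summarized = costPer1kTurns(1_000); // pre-summarized context: $3
```

A 3× reduction on the context portion of every single turn, before any caching.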

Three things specifically optimized for LLM consumption:

  1. Token efficiency. We strip placeholder fields and serialize aspects as natural sentences rather than nested objects.
  2. Prose, not JSON. LLMs handle prose better than structured data in system prompts. Output reads like an astrologer's notes.
  3. No hallucination triggers. Every claim in the summary is anchored to a specific placement, so when you instruct the LLM to "use only the provided context" it actually has clean data to anchor on.
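To make point 2 concrete, here is a minimal sketch of "aspects as sentences". The field names (`body1`, `type`, `body2`, `orb`) are illustrative, not the API's actual internal schema:

```typescript
// A nested aspect object flattened into one sentence of prose.
// Field names are illustrative assumptions, not the API's schema.
interface Aspect {
  body1: string; // e.g. 'Sun'
  type: string;  // e.g. 'trine'
  body2: string; // e.g. 'Moon'
  orb: number;   // separation from exact, in degrees
}

function aspectToProse(a: Aspect): string {
  return `${a.body1} ${a.type} ${a.body2} (orb ${a.orb.toFixed(1)}°)`;
}
```

`aspectToProse({ body1: 'Sun', type: 'trine', body2: 'Moon', orb: 2.3 })` yields `'Sun trine Moon (orb 2.3°)'` — one short line the LLM can quote directly, instead of a JSON object it has to mentally unpack.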

How to integrate

```typescript
const ctx = await fetch('https://api.astrology-api.io/v1/astrology-chat/context', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${process.env.ASTROLOGY_API_KEY}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    natalChart: user.natalChart,
    date: today(), // your helper returning an ISO date string
    language: 'en',
    tone: 'conversational',
  }),
}).then(r => r.json());

const llmResponse = await anthropic.messages.create({
  model: 'claude-opus-4-7',
  max_tokens: 1024,
  system: `You are an astrologer. Use ONLY this context:

NATAL: ${ctx.natalSummary}
TRANSITS: ${ctx.transitSummary}
MOON: ${ctx.moonPhase}
`,
  messages: [{ role: 'user', content: userMessage }],
});
```
Cache ctx per user per day. One API call serves an entire day's worth of conversation turns.
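That caching pattern can be as small as a keyed Map. A minimal sketch, using an in-memory Map for brevity (use Redis or similar across processes) and assuming the daily transit refresh is keyed to UTC:

```typescript
// Per-user, per-day cache: one context fetch serves the whole day.
const contextCache = new Map<string, unknown>();

// Key rolls over at midnight UTC, e.g. "user-1:2026-04-27".
function cacheKey(userId: string, date: Date): string {
  return `${userId}:${date.toISOString().slice(0, 10)}`;
}

function getCached<T>(key: string, compute: () => T): T {
  if (!contextCache.has(key)) {
    contextCache.set(key, compute()); // only runs on the day's first turn
  }
  return contextCache.get(key) as T;
}
```

In practice `compute` would wrap the `fetch` call from the snippet above; every subsequent turn that day reuses the stored context.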

MCP integration

The Chat API also powers our MCP server. If you're building a chatbot that runs inside Claude Desktop, ChatGPT custom GPTs, or another MCP-compatible host, you don't need to write the integration code — MCP exposes the endpoint as a native tool. The LLM discovers it automatically and calls it when needed.

Pricing

Available on Professional ($37/mo) and above. Each context fetch counts as one request, the same as any other endpoint, so cache aggressively — one fetch per user per day — to stay inside your tier.

See /pricing for the full tier table.

What's next

V2 will add streaming context (deliver the summary token-by-token to reduce first-token latency in your LLM call), tool-call schemas for function calling (so the LLM can request narrower context on demand), and a conversation-memory endpoint that lets you store conversation summaries server-side instead of re-sending them every turn.

Feedback via /contact.
Oleg Kopachovets
CTO & Co-Founder
Technical founder at Astrology API, specializing in astronomical calculations and AI-powered astrology.