Documentation
Everything you need to know about Starling.
Getting Started
Starling combines live news with Polymarket prediction markets, AI consensus analysis, and paper trading.
- Connect your wallet using the button in the top right. Supports MetaMask, Coinbase Wallet, Phantom, WalletConnect, and Google.
- Watch live news on the home page and see which prediction markets are related.
- Check AI consensus for swarm intelligence predictions on each market.
- Paper trade with virtual Points tokens to practice.
Roadmap
Where we're going next. Some of these are weeks away, some are quarters out. The through-line: prediction markets are still under-tooled for serious money, and we'd rather build a few sharp tools than another generalist dashboard.
01. Competitive Creative Hedging
For larger investors · 3-click UX
Bring a position you already hold — a tradfi long, a commodity futures contract, a Polymarket bet — and we surface PM markets you can use to hedge. Filtered by orderbook depth and liquidity so the suggestions actually fit a real position size.
We skip the per-query AI cost by curating 3-6 hand-built strategies per macro slice (tradfi, commodities, geopolitics, crypto). Each strategy is a recipe of bull / bear PM legs. The user picks a strategy, enters their size, clicks go — three clicks, no LLM in the hot path. Authored by domain experts (the “built by 2 PhDs” angle is real, not a tagline). Pairs naturally with the parlay system below: once liquidity is there, you'll be able to compose hedges like “war risk × oil price × Fed cuts” into a single instrument.
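A hand-built strategy could be modeled as a small recipe of weighted legs that gets sized against the user's position. This is only an illustrative sketch; the type names, fields, and `sizeHedge` helper are assumptions, not the real schema.

```typescript
// Illustrative shape of a curated hedge strategy: a recipe of
// bull/bear PM legs with weights that sum to 1. All names here are
// hypothetical, not the production schema.
interface StrategyLeg {
  marketSlug: string;
  side: "YES" | "NO";
  weight: number; // fraction of the hedge notional
}

interface HedgeStrategy {
  name: string;
  slice: "tradfi" | "commodities" | "geopolitics" | "crypto";
  legs: StrategyLeg[];
}

// Three clicks: pick a strategy, enter a size, go. No LLM in the hot
// path; sizing is just a weight split over the legs.
function sizeHedge(strategy: HedgeStrategy, notionalUsd: number) {
  return strategy.legs.map((leg) => ({
    marketSlug: leg.marketSlug,
    side: leg.side,
    usd: notionalUsd * leg.weight,
  }));
}
```

Because the recipes are static data, adding a new strategy is an editorial task for the domain experts, not an engineering one.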
02. Vertical Bloomberg
Depth over breadth
Stop being a generalist tool. Pick one slice — macro/Fed markets, sports, crypto, or geopolitics — and become the Bloomberg terminal for it. Price overlays paired with the right context for that vertical:
Macro / Fed
FOMC dot plot, SEP releases, swap-implied paths overlaid on PM rate-cut markets
Sports
Injury reports, depth charts, weather, ref tendencies overlaid on game lines
Crypto
On-chain flows, exchange netflows, funding rates overlaid on price + event markets
Geopolitics
OSINT feeds, sat imagery proxies, sanctions trackers overlaid on conflict markets
Verticalization is underserved — every existing tool wants to be everything, so none of them are great at any one thing.
03. One-Click Tax Forms
From your proxy wallet · IRS-ready
Read your on-chain Polymarket activity straight from your proxy wallet, classify every fill / redemption / wrap as a taxable event, and produce a year-end 1099-B / Form 8949 / Schedule D ready to drop into TurboTax or hand to a CPA. We already index every position for the portfolio page; the tax engine is just an aggregation layer on top.
Crypto tax is painful. Prediction-market tax is worse — the IRS treats outcomes inconsistently and most users have no idea how to report. We make this go away.
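Since every position is already indexed, the aggregation layer boils down to turning a stream of on-chain events into Form-8949-style rows. A heavily simplified sketch, with hypothetical event fields and naive average-cost lot matching (actual IRS classification rules are deliberately not encoded here):

```typescript
// Simplified sketch of the tax aggregation layer: classify fills and
// redemptions into 8949-style rows (proceeds, basis, gain). The event
// model and average-cost matching are illustrative assumptions.
interface PmEvent {
  market: string;
  kind: "fill_buy" | "fill_sell" | "redemption";
  shares: number;
  price: number; // USDC per share
}

interface TaxRow {
  market: string;
  proceeds: number;
  costBasis: number;
  gain: number;
}

function toTaxRows(events: PmEvent[]): TaxRow[] {
  const basis = new Map<string, { shares: number; cost: number }>();
  const rows: TaxRow[] = [];
  for (const e of events) {
    if (e.kind === "fill_buy") {
      const b = basis.get(e.market) ?? { shares: 0, cost: 0 };
      b.shares += e.shares;
      b.cost += e.shares * e.price;
      basis.set(e.market, b);
    } else {
      // A sell or redemption realizes gain against average cost.
      const b = basis.get(e.market) ?? { shares: 0, cost: 0 };
      const avg = b.shares > 0 ? b.cost / b.shares : 0;
      const proceeds = e.shares * e.price;
      const costBasis = e.shares * avg;
      b.shares -= e.shares;
      b.cost -= costBasis;
      rows.push({ market: e.market, proceeds, costBasis, gain: proceeds - costBasis });
    }
  }
  return rows;
}
```

The hard part is not this arithmetic but the classification rules feeding it, which is exactly why a one-click export is valuable.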
04. Custom Parlays (We Provide the LP)
Perp-DEX-inspired pricing & risk caps
User picks 2 to N market outcomes — “Trump wins” and “Fed cuts in March” and “BTC > $200k by year-end” — and we offer an instant fill at a combined price. We take the other side as LP, which means you get fillable parlays even on legs that are individually too thin to cross natively on Polymarket.
Pricing and risk follow the perp-DEX playbook: per-leg leverage caps, position-size limits as a function of OI, dynamic spreads, circuit breakers on correlated risk. We're studying jup.ag as the closest reference design and have reached out to their team for input on how they tuned the leverage / liquidation parameters.
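For intuition only: under a naive independence assumption the fair combined price is the product of the leg prices, and the LP quotes that plus a spread. Real pricing has to adjust for correlated legs (hence the circuit breakers above), which this sketch deliberately ignores.

```typescript
// Toy parlay quote under an independence assumption: combined fair
// price = product of leg prices, with an LP spread baked into the ask.
// Correlation adjustment, leverage caps, and OI limits are omitted.
function quoteParlay(legPrices: number[], spread = 0.02): number {
  if (legPrices.length < 2) throw new Error("a parlay needs at least 2 legs");
  const fair = legPrices.reduce((p, x) => p * x, 1);
  return Math.min(1, fair * (1 + spread));
}
```

Two legs at 50% and 40% give a 20% fair price, quoted slightly above that so the LP earns its spread on correlated-but-assumed-independent risk.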
Have a feature you wish Starling had? Open an issue on the GitHub repo. We read everything.
News Stream
The home page combines a live video feed with real-time headline analysis to surface relevant prediction markets.
Live Stream
Watch 7 live news channels (Al Jazeera, FOX, ABC, CBS, DW News, The Young Turks, plus Alex Jones on Rumble) with a 4x multi-view mode.
Breaking News Feed
Headlines from six sources (including BBC, NYT, Al Jazeera, Guardian, and Sky News) plus OSINT intelligence from Telegram channels, updated every 5 minutes. Filter by source or category (Iran, Ukraine, Crypto, Finance, Politics, Tech).
AI-Powered Related Markets
Each headline is automatically matched to relevant Polymarket prediction markets using a 3-step AI pipeline:
Step 1: Keyword Extraction
GPT-4o-mini reads each headline and extracts search keywords. “Iran War Cease-Fire Tested” becomes [“iran”, “ceasefire”, “strait”, “hormuz”].
Step 2: Market Search
Those keywords are searched against 200+ active Polymarket events via the Gamma API. Only real, verified markets with active slugs are returned — no hallucinated URLs.
Step 3: AI Validation
GPT validates each headline–market pair: “Is this market actually about the same topic?” Bad matches like “Iran war” → “Iran FIFA World Cup” are rejected.
Headlines with matches show a gold ★ See Related Markets button. Click to expand and see up to 3 related markets with live Yes/No prices. Markets process incrementally — 3 headlines every 60 seconds, up to 20 total.
AI Live Market Ticker
Below the stream, an AI-selected ticker shows the 8–10 Polymarket events most relevant to the current news cycle. Powered by GPT-4o-mini analyzing live stream titles and RSS headlines against 200+ active markets. Refreshes every 15 minutes.
AI Consensus
We select the top 10 prediction markets ending between 1 day and 3 months from now, filtered by volume (≥$50K), category diversity (max 4 per category, max 2 per topic), edge-price exclusion (drops anything below 5% or above 95%), and a quality blocklist (no Bigfoot, alien, rapture, etc.). Then 20 GPT-4o-mini personas each research and vote on every market across 2 rounds, and we aggregate the 40 votes per market via statistical bootstrap.
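The selection rules above compose into a single filter pass. A minimal sketch, assuming markets arrive pre-sorted by volume; field names (`volume`, `yesPrice`, `category`, `question`) are illustrative, and the per-topic cap is omitted for brevity:

```typescript
// Sketch of the daily market filter: volume floor, edge-price
// exclusion, quality blocklist, and a per-category diversity cap.
// Field names are assumptions, not the actual schema.
interface CandidateMarket {
  question: string;
  category: string;
  volume: number;   // USD
  yesPrice: number; // 0..1
}

const BLOCKLIST = ["bigfoot", "alien", "rapture"];

function selectTopMarkets(markets: CandidateMarket[], limit = 10): CandidateMarket[] {
  const perCategory = new Map<string, number>();
  const picked: CandidateMarket[] = [];
  for (const m of markets) { // assumed pre-sorted by volume, descending
    if (m.volume < 50_000) continue;                      // volume floor
    if (m.yesPrice < 0.05 || m.yesPrice > 0.95) continue; // edge-price exclusion
    if (BLOCKLIST.some((w) => m.question.toLowerCase().includes(w))) continue;
    const n = perCategory.get(m.category) ?? 0;
    if (n >= 4) continue;                                 // max 4 per category
    perCategory.set(m.category, n + 1);
    picked.push(m);
    if (picked.length === limit) break;
  }
  return picked;
}
```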
The whole pipeline runs once a day via Vercel cron at 06:00, 06:15, and 06:30 UTC (one cron per step). Results are written to Postgres and the /ai page reads from there, so opening the page is instant. The admin can also trigger a fresh run manually from /admin.
Our Implementation: 20 personas, 2 rounds, 1 bootstrap
For each market we run a 3-step pipeline. Each step is its own Vercel function so we never hit the 60-second timeout. All GPT calls use gpt-4o-mini.
Step 1 (06:00 UTC) — Persona-Styled Research + Vote
All 20 personas run in parallel. Each one calls OpenAI's Responses API with the web_search_preview tool, but with a search-style hint matched to its perspective: the Historian searches for past analogues, the INTP Logician searches for verified primary-source data, the ESFP Performer searches for media buzz, etc. The persona then writes a probability + 3-5 bullet points based on what it found. Bullets and the underlying web context are saved to consensus_persona_predictions. ~20 web searches + ~20 chat completions.
Step 2 (06:15 UTC) — Re-Assess
A separate cron picks up runs that finished step 1. The same 20 personas now see all 20 round-1 probabilities and bullets from the DB and re-vote. No new web search — they reason off the round-1 dataset and decide whether to hold firm or update. Another 20 rows saved.
Step 3 (06:30 UTC) — Bootstrap Aggregation
No AI calls. We pull the 40 probabilities (rounds 1+2) and run 10,000 bootstrap resamples — each resample picks 40 values with replacement from the 40 originals and computes a mean. The distribution of those 10,000 means gives us the headline number plus its uncertainty. Pure JS, runs in <100ms per market.
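The aggregation step is small enough to sketch in full. This is an illustrative reimplementation, not the production code: resample the votes with replacement, take each resample's mean, then summarize the distribution.

```typescript
// Bootstrap aggregation sketch: 10,000 resamples (with replacement) of
// the persona probabilities, then mean / 90% CI / mode of the
// resampled means. Pure JS-style arithmetic, no AI calls.
function bootstrapVotes(
  votes: number[],
  resamples = 10_000,
  rng: () => number = Math.random,
) {
  const means: number[] = [];
  for (let i = 0; i < resamples; i++) {
    let sum = 0;
    for (let j = 0; j < votes.length; j++) {
      sum += votes[Math.floor(rng() * votes.length)];
    }
    means.push(sum / votes.length);
  }
  means.sort((a, b) => a - b);
  const mean = means.reduce((a, b) => a + b, 0) / means.length;
  // Mode: the most common 1% bucket, used as a sanity check vs the mean.
  const counts = new Map<number, number>();
  for (const m of means) {
    const b = Math.round(m * 100);
    counts.set(b, (counts.get(b) ?? 0) + 1);
  }
  let mode = 0;
  let best = -1;
  for (const [b, c] of counts) if (c > best) { best = c; mode = b / 100; }
  return {
    mean,
    mode,
    ci90: [means[Math.floor(0.05 * means.length)], means[Math.floor(0.95 * means.length)]],
  };
}
```

When the personas agree, every resample draws from near-identical values and the 90% band collapses; disagreement widens it, which is exactly the signal surfaced on the /ai page.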
What we report
From the bootstrap distribution we surface three numbers per market:
- Mean — the average of the 10,000 bootstrapped means. This is the headline probability.
- 90% confidence interval — the 5th and 95th percentiles of the distribution, shown as ±X next to the mean. A tight band means the personas agreed; a wide band means they disagreed.
- Mode — the most common bucket in the distribution. It should land near the mean (the bootstrap distribution is approximately normal); we show it as a sanity check.
Cost is roughly $0.50 per market (web search dominates), about $5 per daily run. The admin can also force a manual run that wipes today's snapshot and re-computes.
The 20 Personas
5 originals plus 15 MBTI-inspired archetypes. Each one has a distinct reasoning style AND a distinct web-search style:
Why this design
The persona-styled web search is the most important piece. The same question searched by a Historian and an ESFP Performer surfaces fundamentally different sources, which feeds genuine diversity into round 1. Round 2 lets each persona react to what the others found without piling on. Bootstrap aggregation then gives us a real confidence interval instead of a fake-precise single number — when the 20 personas disagree, the band widens and you can see it. Fail-soft: if a persona's web search times out we drop just that persona; the run continues if at least 15 of 20 succeed.
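The fail-soft fan-out can be expressed with `Promise.allSettled`: run every persona in parallel, drop the failures, and only proceed past a quorum. The `runPersona` callback here is a hypothetical stand-in for the real research-and-vote call.

```typescript
// Fail-soft persona fan-out sketch: failures are dropped individually,
// and the round only proceeds if at least `quorum` personas succeed.
// `runPersona` is a hypothetical stand-in for the real OpenAI call.
async function runRound(
  personas: string[],
  runPersona: (p: string) => Promise<number>,
  quorum = 15,
): Promise<number[]> {
  const settled = await Promise.allSettled(personas.map(runPersona));
  const votes = settled
    .filter((s): s is PromiseFulfilledResult<number> => s.status === "fulfilled")
    .map((s) => s.value);
  if (votes.length < quorum) {
    throw new Error(`only ${votes.length}/${personas.length} personas succeeded`);
  }
  return votes;
}
```

A timed-out web search rejects just that persona's promise; the other 19 still settle, so one slow source never sinks the whole run.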
How It Works (Step by Step)
Meet the Personas (5 of 20 shown)
The full pipeline runs 20 personas — the 5 originals below plus 15 MBTI-inspired archetypes (INTJ Architect, ENTP Challenger, ESFP Performer, etc.). Each one not only THINKS differently, it also SEARCHES the web differently — a Historian asks Google about precedents, an INTP Logician asks for primary-source data. Click an agent to see how they reason.
Market Question
"Will X happen by 2026?"
↓ sent to all 20 personas (5 representatives shown)
↓ 40 votes (20 personas × 2 rounds) → 10K bootstrap resamples
20-Persona Bootstrap (2 rounds · 10K resamples)
67% ± 4%
Mean 67% · mode 67% · 90% CI [63%, 71%]
vs Market: 60% — AI says +7% higher
Why Does This Work?
Wisdom of crowds. When you ask one person a question, they might be wrong. But when you ask many different people and average their answers, the average is usually closer to the truth. We use 20 different AI personas as a diverse crowd of experts.
Diverse perspectives AND diverse research. If all 20 personas thought the same way they'd be useless. The key is that each one not only REASONS differently, it also SEARCHES the web differently. A Historian asks Google about precedents, an INTP Logician asks for primary-source data, an ESFP Performer asks what's viral on social. They surface fundamentally different facts before they vote.
Bootstrap aggregation gives a real confidence interval. Instead of a fake-precise “67%” we resample the 40 persona votes 10,000 times and report the spread of resampled means. When the 20 personas mostly agree, the band is tight (e.g. 67 ± 2%). When they disagree wildly, the band widens (67 ± 12%) and you can see it in the headline. The mode is reported as a sanity check — bootstrap distributions are approximately normal so mode and mean should land within ~0.5% of each other.
Inspired by, not equal to, real research. The OASIS framework simulates up to 1 million social-network agents that follow, argue, and influence each other. Our 20-persona setup is much simpler — we trade their scale for cost (~$5/day) and latency (one daily snapshot via cron). Same core principle though: diverse perspectives + statistical aggregation = better predictions than any single AI guess.
Paper Trading
Practice prediction market trading without risk using live Polymarket data. Lives at /airdrop?tab=trade and /airdrop?tab=portfolio.
Trade Tab
Two sections, both pulling live odds from the Polymarket CLOB:
- AI Swarm Consensus Markets — The latest snapshot of markets from the daily consensus pipeline (up to 10 — fewer if today's filter only matched a smaller set). Each row shows the AI's mean prediction next to live Yes/No prices and the volume from the catalog.
- Live Sports Markets — Every live or starting-soon (within 24h) sports moneyline, pulled from /api/sports/events across all enabled leagues.
Portfolio Tab
Shows your Points balance, daily claim, and all open paper positions with live P&L. Closed positions stay hidden across reloads. Buy-in price is frozen at trade time so PnL = (current CLOB midpoint − buy-in price) × shares.
How Trading Works
- Buy: Select a market, choose Yes or No, enter shares. You buy in at the current live CLOB midpoint. Your buy-in price + the market's clobTokenId are stored on the position row so it survives even if the market falls off the trade list later.
- Close: In the Portfolio tab, click Close on any position. The system checks the live Polymarket odds and calculates your P&L based on the price change since your buy-in.
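The buy and close steps reduce to the P&L formula given above. A minimal sketch with illustrative field names:

```typescript
// Paper-trading P&L sketch: the buy-in price is frozen at trade time,
// so unrealized P&L is just the midpoint move times the share count.
// Field names are illustrative, not the actual position schema.
interface PaperPosition {
  shares: number;
  buyInPrice: number; // CLOB midpoint at trade time, 0..1
}

function unrealizedPnl(pos: PaperPosition, currentMidpoint: number): number {
  return (currentMidpoint - pos.buyInPrice) * pos.shares;
}
```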
Points tokens are virtual and have no real value. Prices update every few seconds from the Polymarket CLOB API.
Points & Referral
Earn virtual Points tokens:
Sports Betting
Browse live and upcoming sports markets from Polymarket with real-time odds.
Supported Leagues
MLB, NBA, NFL, NHL, Premier League, La Liga, Bundesliga, Champions League, UFC, IPL, MLS, NCAAB — with more added regularly.
Game Cards
- Team rows — Each game shows both teams with abbreviation badges, moneyline odds, spread, and total columns.
- Expandable — Click a game card to see price history chart, volume, and a link to the full game detail page.
- Live detection — Games within 4 hours of their gameStartTime show a red LIVE badge.
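The live check is a simple time-window comparison. One assumption in this sketch: games are only flagged after the start time, not before (the docs don't say which way the 4-hour window runs).

```typescript
// Live-badge sketch: a game counts as LIVE when "now" falls within 4
// hours after its gameStartTime. Treating the window as post-start
// only is an assumption.
const LIVE_WINDOW_MS = 4 * 60 * 60 * 1000;

function isLive(gameStartTime: string | Date, now: Date = new Date()): boolean {
  const start = new Date(gameStartTime).getTime();
  const elapsed = now.getTime() - start;
  return elapsed >= 0 && elapsed <= LIVE_WINDOW_MS;
}
```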
Game Detail Page
Click “Game View” on any game to see the full detail page with:
- ESPN live scoreboard — Team logos, abbreviations, records, and live score from ESPN's free API (refreshes every 30s).
- All market types — Moneyline, Spread, Total (Over/Under), and Player Props from Polymarket.
- Live odds — Enriched with CLOB midpoint prices for accuracy.
Data Sources
Gamma API — gamma-api.polymarket.com/events/keyset?series_id=X — market data, outcomes, prices, slugs. (Legacy /events + /markets deprecated 2026-05-01; we're fully on the cursor-based keyset variants.)
CLOB API — clob.polymarket.com/midpoint?token_id=X — real-time midpoint prices.
ESPN API — site.api.espn.com/apis/site/v2/sports/... — live scores, team logos, records. Free, no key needed.
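The Gamma and CLOB endpoint shapes above can be captured as tiny URL builders; the parameter names come straight from the examples listed.

```typescript
// URL builders for the data sources above, using the endpoint shapes
// shown in the docs (keyset Gamma variant, CLOB midpoint lookup).
const gammaEventsUrl = (seriesId: string) =>
  `https://gamma-api.polymarket.com/events/keyset?series_id=${encodeURIComponent(seriesId)}`;

const clobMidpointUrl = (tokenId: string) =>
  `https://clob.polymarket.com/midpoint?token_id=${encodeURIComponent(tokenId)}`;
```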
Technical Architecture
For team members working on the codebase. Starling is a Next.js 16 app deployed on Vercel with a Neon PostgreSQL database.
Stack
Key API Routes
- /api/news (GET): RSS + Telegram headlines, 5-min cache
- /api/news/markets (POST): AI headline→market matching (cursor-based incremental); (GET): cached results
- /api/markets/live (GET): AI-selected markets matching live news, 15-min cache
- /api/polymarket/events (GET): market data from Gamma API with CLOB enrichment
- /api/polymarket/prices (GET): single CLOB midpoint price lookup
- /api/polymarket/price-history (GET): historical price data for charts
- /api/sports/leagues (GET): curated league list with ESPN logos
- /api/sports/events (GET): games per league with parsed markets
- /api/sports/game (GET): full game detail with ESPN scores + all markets
- /api/cron/consensus-step1 (GET, cron 06:00 UTC): picks top 10 markets; 20 personas each do persona-styled web search + initial vote
- /api/cron/consensus-step2 (GET, cron 06:15 UTC): same 20 personas re-vote after seeing all round-1 outputs
- /api/cron/consensus-step3 (GET, cron 06:30 UTC): bootstrap 10K resamples → mean / mode / 90% CI / histogram; also prunes old run rows
- /api/admin/consensus-run-now (POST): admin-triggered manual run of all 3 steps inline (Phantom-auth)
- /api/consensus/latest (GET): latest finished run per market (used by /ai page + airdrop trade tab)
- /api/consensus/run/[id] (GET): drill-down for one run with all 20 persona predictions + bullets
- /api/consensus (POST): DEPRECATED v1 on-demand 5-persona endpoint, kept for rollback
- /api/trade (POST): paper trade execution (buy/sell)
- /api/airdrop (POST): daily Points claim
- /api/leaderboard (GET): top 50 users by balance
- /api/admin (GET/POST): admin dashboard (restricted to owner wallet)
- /api/user (GET/POST/PATCH): user CRUD + display name
How Related Markets Work (for devs)
The headline→market pipeline processes 3 headlines every 60 seconds using a cursor-based system:
- Frontend POSTs all 15 headline titles to /api/news/markets
- API loads cache from DB (consensus_cache table, key news-mkt-v14)
- Cursor advances: headlines[cursor..cursor+3] are the current batch
- GPT extracts keywords from each headline (cheap text call, no web search)
- Gamma API search with keywords across 200 events — returns REAL verified market slugs
- GPT validates each headline↔market pair, rejects bad matches
- Results cached in DB with cursor position. Frontend polls every 60s to process next batch
- Cap: 20 headlines max. Each headline gets up to 3 markets.
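The cursor mechanics above can be sketched as a pure function: advance through the headline list three at a time, capped at 20 headlines total, returning the batch plus the cursor to persist.

```typescript
// Cursor-batching sketch for the headline→market pipeline: 3 headlines
// per poll, 20 headlines max, cursor persisted between polls.
const BATCH_SIZE = 3;
const MAX_HEADLINES = 20;

function nextBatch(
  headlines: string[],
  cursor: number,
): { batch: string[]; nextCursor: number } {
  const end = Math.min(headlines.length, MAX_HEADLINES);
  if (cursor >= end) return { batch: [], nextCursor: cursor }; // nothing left
  const batch = headlines.slice(cursor, Math.min(cursor + BATCH_SIZE, end));
  return { batch, nextCursor: cursor + batch.length };
}
```

Each 60-second poll calls this with the stored cursor, processes the batch through keyword extraction, Gamma search, and validation, then writes the new cursor back to the cache row.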
Environment Variables
Required in .env.local (get from team lead, never commit to git):
DATABASE_URL=postgresql://...
OPENAI_API_KEY=sk-...
YOUTUBE_API_KEY=AIza...
NEXT_PUBLIC_WALLETCONNECT_PROJECT_ID=...
NEXT_PUBLIC_WEB3AUTH_CLIENT_ID=...
POLYMARKET_BUILDER_API_KEY=... (server-only, read by /api/polymarket/builder-headers)
POLYMARKET_BUILDER_SECRET=...
POLYMARKET_BUILDER_PASSPHRASE=...
Team Workflow
- git clone the repo, npm install, add .env.local
- npm run dev to test locally
- npm run build to verify before pushing
- git push to master — Vercel auto-deploys
- Schema changes: only the team lead runs npx drizzle-kit push
Builder Program
Starling participates in Polymarket's Builder Program for third-party integration.
- Markets link directly to Polymarket for real trading.
- Future: trade directly from Starling via the Builder API (credentials stored in env vars, ready for integration).
- All market data comes from Polymarket's Gamma API in real time.