- Architecture
- Main Dashboard with Perplexity-powered chat assistant
- Stock transactions
- Investment thesis using the Sonar Reasoning model
- Investment thesis job handling using the Sonar Reasoning model
- Earnings calendar
- Detailed earnings research for a stock using Sonar Deep Research
- Perplexity model selection
- Risk analysis page using the Sonar model
- Plot of buy and sell transactions with customized chat instructions
- Strategy backtesting using the Sonar Pro model
- Settings page
Inspiration – Tariffs, Trust & Information Overload
Over the last two years the market narrative has flipped almost weekly: chip-export controls on Monday, 100% EV tariffs on Tuesday, a dovish FOMC minutes leak on Wednesday. As an individual investor I found myself:
- Refreshing four different sites to see whether an overnight headline truly mattered to my NVDA long.
- Questioning the credibility of each source (“clickbait or real?”).
- Trying to map high-level policy moves (tariffs, supply-chain bans, new IRA credits) onto my actual P/L in Stock Investing Account – manually.
The thesis was simple: if I could validate the signal (is this news item real & material?) and quantify the exposure (how much of my portfolio is at risk?) inside a single dashboard, I would make faster, calmer decisions.
Large-context LLMs—especially Perplexity’s Sonar Deep Research model—turned out to be the missing piece. They ingest the headline, cross-reference primary sources, and respond with a grounded risk summary or thesis critique. This project was born from that “what if an LLM looked over my shoulder every time tariffs move?” moment.
How It Was Built
| Layer | Tech & Reasoning |
|---|---|
| Backend | Flask (Python) for rapid routing; SQLite for zero-setup persistence; yfinance for historical & live prices; threading for long-running AI jobs. |
| Data Ingestion | Raw CSV parsed into a normalised transactions table. Split-aware price adjustment logic keeps pre- and post-split fills consistent with Yahoo’s adjusted series. |
| Business Logic | Tariff risk classifier (analyze_tariff_risk) categorises each holding as high / medium / low based on hard-coded sector heuristics. Event engines pull FOMC, CPI, and earnings calendars to build a combined "macro risk" view. Job queues for thesis validation or earnings deep-dives store progress and keep the UI non-blocking. |
| AI Layer | Abstracted provider (get_settings) lets the user toggle OpenAI vs. Perplexity. Default is sonar-deep-research—chosen for its larger context window and source-citation JSON schema. Chat history, portfolio snapshot and tariff context are injected as system prompts. |
| Frontend | Jinja2 templates + Bootstrap 5 + ApexCharts. Pages: Dashboard, Stock Detail, Thesis Validator, Earnings Companion, Event-Risk Calendar. |
| Dev-Ops | .env driven config, venv, make-like shell scripts (run_server.sh, clean.sh, rebuild_db.sh) for one-command setup / reset. |
What I Learned
- Prompt-engineering for finance – Splitting context into “portfolio snapshot”, “macro backdrop” and “user query” dramatically improved Sonar’s answer relevance.
- Caching strategy matters – A 300 s mem-cache for yfinance calls reduced API hits by 92%. A separate 24-h cache for split history avoided subtle mis-pricings.
- Front-to-back traceability – Surfacing every server exception (try/except blocks with print) while preserving UX taught me the value of graceful degradation (e.g., show "N/A" when a delisted ticker 404s).
- User trust hinges on transparency – Embedding the exact sources returned by Perplexity (URL + quote) beside each thesis verdict turned skepticism into confidence.
- Threading pitfalls – Flask’s dev server + Python threads can dead-lock; switching long jobs to background threads but reading progress via polling JSON endpoints kept things responsive.
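The 300 s mem-cache for yfinance calls can be sketched as a small TTL decorator. This is a minimal version under assumed names (`ttl_cache`, `fetch_history`); the real app's helpers may look different.

```python
import time
from functools import wraps

def ttl_cache(ttl_seconds):
    """Memoise per-argument results for ttl_seconds -- a sketch of the
    300 s mem-cache described above."""
    def decorator(func):
        store = {}  # args -> (timestamp, value)
        @wraps(func)
        def wrapper(*args):
            now = time.time()
            hit = store.get(args)
            if hit is not None and now - hit[0] < ttl_seconds:
                return hit[1]  # still fresh: skip the network call
            value = func(*args)
            store[args] = (now, value)
            return value
        return wrapper
    return decorator

calls = {"count": 0}  # instrumentation for the demo below

@ttl_cache(300)
def fetch_history(ticker):
    # stand-in for a yfinance download; counts how many real fetches happen
    calls["count"] += 1
    return {"ticker": ticker, "price": 100.0}
```

A second decorator instance with `ttl_cache(86400)` would give the separate 24-h cache for split history.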
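The background-thread-plus-polling pattern from the last bullet can be sketched without any web framework. `start_job` and `poll_job` are illustrative names; in the app, `poll_job`'s payload would be served by a JSON status endpoint.

```python
import threading
import time
import uuid

JOBS = {}  # job_id -> {"status": ..., "progress": ..., "result": ...}

def start_job(target, *args):
    """Run a long AI job in a daemon thread and return an id the
    frontend can poll. A sketch; the real job registry may differ."""
    job_id = uuid.uuid4().hex
    JOBS[job_id] = {"status": "queued", "progress": 0, "result": None}
    def runner():
        JOBS[job_id]["status"] = "running"
        JOBS[job_id]["result"] = target(*args)  # e.g. a Sonar deep-research call
        JOBS[job_id].update(status="done", progress=100)
    threading.Thread(target=runner, daemon=True).start()
    return job_id

def poll_job(job_id):
    """The payload a /status/<job_id> JSON endpoint would return."""
    return JOBS.get(job_id, {"status": "unknown"})
```

Because the request handler only reads the shared dict, the UI stays responsive while the model call runs in the background.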
Challenges Faced
API Rate Limits & Cost
OpenAI's frequent 429s forced a dynamic fallback to Perplexity. Maintaining conversation state across providers without leaking proprietary headers was non-trivial.
Corporate Actions
Adjusting historic transactions for stock splits, while still displaying split-adjusted Yahoo prices, required reversing the usual split logic (multiply price, divide quantity for after-split trades).
Delisted & Unlisted Symbols
Many meme-era tickers return sparse metadata. A tri-state categoriser ("mag7 / other / unlisted") plus defensive None handling kept the UI from breaking.
Macro Event Correlation
Mapping CPI release impact to a single equity is more art than science. I settled on a simplistic sector-based impact score, leaving room for future ML refinements.
UI Clarity vs. Feature Creep
The temptation to add every cool dataset (Treasury yields, options chain) was real. A strict "does this help validate the thesis?" filter kept the scope in check.
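The reversed split logic from the Corporate Actions challenge can be sketched like this. The function name, trade field names, and the `(date, ratio)` splits format are assumptions for illustration.

```python
from datetime import date

def reverse_adjust(trade, splits):
    """Apply the reversed split logic described above: for fills executed
    after a split, multiply the price and divide the quantity by the
    split ratio, so all transactions sit on one consistent basis."""
    price, qty = trade["price"], trade["quantity"]
    for split_date, ratio in splits:
        if trade["date"] > split_date:
            price *= ratio
            qty /= ratio
    return {**trade, "price": price, "quantity": qty}
```

For example, a post-split fill of 10 shares at 120.0 across a 10-for-1 split becomes 1 share at 1200.0, while a pre-split fill is left untouched.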
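The dynamic fallback from the rate-limit challenge reduces to a small wrapper. `RateLimitError`, `primary`, and `fallback` are illustrative stand-ins for the provider SDKs' own exception and client calls.

```python
class RateLimitError(Exception):
    """Stand-in for a provider SDK's HTTP 429 exception."""

def chat_with_fallback(messages, primary, fallback):
    """Try the primary provider; on a rate limit, retry once with the
    fallback. A sketch -- `primary`/`fallback` are callables wrapping
    each provider's chat API."""
    try:
        return primary(messages)
    except RateLimitError:
        return fallback(messages)
```

Passing the providers in as callables keeps provider-specific headers and keys out of the shared conversation state.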
Next Steps
- Vector-store the entire chat history + source docs for RAG-style follow-ups.
- Add Monte-Carlo or historical VaR back-testing in the Strategy page.
- Migrate long-running jobs to a real task queue (Celery + Redis) for production stability.
- Build a plugin for automatic nightly import of new stock investing statements.
“In a world of noisy headlines and faster tariffs, context is alpha.
This project is our attempt to put that context—validated by an AI research assistant—one click away from every trade.”