A real-time risk assessment and event monitoring dashboard for maritime activities, built with Next.js.
- Global Risk Map – Interactive map with ship markers and alert indicators (Tanker Security Alert, Storm Alerts, Missile Test)
- Event Feed – Chronological list of recent events with severity tags (High/Med) and activity chart
- Risk & Oil Price Trend – Dual-line chart showing risk and oil price over time with event markers
- Shipping Status – Key metrics (tankers in region, avg speed, reroutes) with mini map
- Crisis Briefing – Event summary, risk score, predicted impact, top factors, and audio briefing button (positioned above the Event Feed)
- StraitWatch Assistant – In-app chatbot powered by Google Gemini for questions about risk, events, and shipping
Frontend (required): From the repo root:

```bash
cd Frontend
npm install
npm run dev
```

Open http://localhost:3000 in your browser.
Backend (optional, for auth, dashboard data, and StraitWatch Assistant chat + voice):

```bash
cd Backend
cp .env.example .env
# Edit .env: set GEMINI_API_KEY (chat), ELEVENLABS_API_KEY and ELEVENLABS_VOICE_ID (voice)
npm install
npm run dev
```

Set `NEXT_PUBLIC_API_URL=http://localhost:8000` in `Frontend/.env.local` to use the backend. The StraitWatch Assistant (Gemini chat + ElevenLabs voice) only works when the backend is running and these keys are configured.
- Next.js 16 (App Router)
- TypeScript
- Tailwind CSS
- Recharts (line charts)
- Lucide React (icons)
The StraitWatch Assistant is an AI-powered chatbot that helps users understand maritime risks, geopolitical events, and shipping disruptions in real time. It translates complex data from multiple sources—such as news feeds, ML predictions, and shipping activity—into clear explanations and actionable insights.
Backend

File: `index.js`

- Uses the `@google/generative-ai` SDK
- Model: `gemini-2.0-flash`
- Endpoint: `POST /api/chat`
The backend receives a conversation history from the frontend and sends it to Gemini for processing. The system prompt configures the assistant with knowledge about:
- maritime shipping routes
- geopolitical incidents
- oil supply chain risks
- StraitWatch risk scores
- ML model predictions
Gemini then generates a contextual response that explains the situation in natural language.
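The flow above can be sketched as a small helper that maps the frontend's message history into the role/parts shape the `@google/generative-ai` SDK expects. The names here (`ChatMessage`, `toGeminiContents`) are illustrative, not taken from `index.js`:

```typescript
// Hypothetical sketch: converting a simple UI message history into the
// role/parts format used by the @google/generative-ai SDK, which labels
// assistant turns with the role "model".
type ChatMessage = { sender: "user" | "assistant"; text: string };

function toGeminiContents(history: ChatMessage[]) {
  return history.map((m) => ({
    role: m.sender === "user" ? "user" : "model",
    parts: [{ text: m.text }],
  }));
}

const contents = toGeminiContents([
  { sender: "user", text: "Why did the risk score spike today?" },
  { sender: "assistant", text: "A tanker security alert was issued." },
]);
console.log(contents[1].role); // "model"
```

The resulting `contents` array can then be passed to the SDK together with the system prompt described above.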
Frontend

File: `ChatBot.tsx`

- Sends user messages to `/api/chat`
- Maintains conversation history
- Displays messages in a conversational UI
- Supports voice input using the Web Speech API
This allows users to interact with the system using natural language instead of navigating dashboards or raw datasets.
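A minimal sketch of how a component like `ChatBot.tsx` might assemble the request it sends to `/api/chat`; the payload shape and the names `buildChatRequest` and `ChatMessage` are assumptions for illustration, not the app's actual code:

```typescript
// Hypothetical request builder: append the new user message to the
// conversation history and serialize it for a POST to /api/chat.
type ChatMessage = { sender: "user" | "assistant"; text: string };

function buildChatRequest(history: ChatMessage[], userText: string) {
  const messages = [...history, { sender: "user" as const, text: userText }];
  return { body: JSON.stringify({ messages }), messages };
}

const { messages } = buildChatRequest([], "Is the strait safe today?");
console.log(messages.length); // 1
```

In the browser, the serialized body would be sent with `fetch` to `${NEXT_PUBLIC_API_URL}/api/chat` and the reply appended back into the same history.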
Gemini responses are augmented with summarized outputs from:
- machine learning risk prediction models
- shipping traffic signals
- global incident datasets
- energy market indicators
This ensures the assistant provides data-grounded explanations rather than generic AI responses.
To improve accessibility and safety, StraitWatch converts AI responses into natural spoken briefings using ElevenLabs.
This enables users—such as analysts, operators, or drivers—to receive risk updates without needing to look at their screens.
Backend

File: `index.js`

- Uses `@elevenlabs/elevenlabs-js`
- Endpoint: `POST /api/tts`
- Model: `eleven_multilingual_v2`
During development, Antigravity AI played a key role in accelerating engineering workflows and maintaining code quality.
Antigravity assisted with debugging, development troubleshooting, and repository management, allowing the team to focus on building the core prediction market platform.
- Helped diagnose and fix backend integration issues across the AI pipeline
- Assisted with debugging API calls and data flow between services
- Provided guidance for implementing model integrations and chatbot logic
Antigravity also helped resolve critical repository issues that were blocking development:
- Resolved complex Git divergence and rebase conflicts
- Performed history cleanup to remove extremely large files
- Removed cached GDELT data files exceeding 1GB
- Restored the repository to a state where it could be successfully pushed to GitHub
This allowed the team to maintain a clean, manageable codebase and continue development without repository size limitations.

Request body for `POST /api/tts`:

```json
{
  "text": "AI generated response",
  "voiceId": "optional"
}
```

Solana powers the decentralized prediction market layer of the platform, allowing users to place tokenized bets on future oil price movements.
- Users can submit predictions on future oil prices
- Each prediction requires a certain number of tokens to participate
- Smart contracts record the prediction and lock tokens into the market
When prediction outcomes are evaluated:
- Users with accurate predictions receive token rewards
- Users with incorrect predictions lose their stake
This creates an incentivized market-driven forecasting system, where collective intelligence contributes to more accurate predictions.
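The reward rule described above can be sketched as a pure settlement function, assuming winners split the total staked pool pro-rata to their stake and losers forfeit; this mirrors the described incentive design, not the actual on-chain program:

```typescript
// Hedged sketch of market settlement: the whole staked pool is divided
// among correct predictors in proportion to their stake.
type Prediction = { user: string; stake: number; correct: boolean };

function settleMarket(preds: Prediction[]): Map<string, number> {
  const pool = preds.reduce((s, p) => s + p.stake, 0);
  const winningStake = preds
    .filter((p) => p.correct)
    .reduce((s, p) => s + p.stake, 0);
  const payouts = new Map<string, number>();
  for (const p of preds) {
    // Winners share the pool pro-rata; incorrect predictors receive 0.
    payouts.set(
      p.user,
      p.correct && winningStake > 0 ? (p.stake / winningStake) * pool : 0,
    );
  }
  return payouts;
}

const payouts = settleMarket([
  { user: "alice", stake: 100, correct: true },
  { user: "bob", stake: 300, correct: true },
  { user: "carol", stake: 100, correct: false },
]);
console.log(payouts.get("alice")); // 125
```

On-chain, the lock and payout steps would be enforced by the smart contract rather than application code.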
MongoDB Atlas provides the primary database infrastructure for storing user activity, prediction history, and system data.
MongoDB stores:
- User accounts and authentication data
- Prediction submissions from users
- Token bet records
- Timestamps for all predictions and bets
- Historical snapshots of oil price predictions generated by the system
These stored prediction snapshots allow the system to:
- Track prediction performance over time
- Validate outcomes of the prediction market
- Maintain a transparent historical record of forecasts
Using MongoDB Atlas enables scalable, real-time data storage while supporting rapid queries across user predictions and market activity.
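An illustrative document shape for a stored prediction, with a small constructor that stamps the record; the field names are assumptions for this sketch, not the real schema:

```typescript
// Hypothetical shape of a prediction document as described above:
// who predicted, what price, how many tokens staked, and when.
interface PredictionDoc {
  userId: string;
  predictedPrice: number;
  tokensStaked: number;
  createdAt: string; // ISO timestamp
}

function makePredictionDoc(
  userId: string,
  predictedPrice: number,
  tokensStaked: number,
  now: Date = new Date(),
): PredictionDoc {
  // A prediction must lock a positive number of tokens.
  if (tokensStaked <= 0) throw new Error("a prediction must stake tokens");
  return { userId, predictedPrice, tokensStaked, createdAt: now.toISOString() };
}

const doc = makePredictionDoc("u1", 82.5, 10, new Date("2025-01-01T00:00:00Z"));
console.log(doc.createdAt); // "2025-01-01T00:00:00.000Z"
```

With the official MongoDB Node.js driver, such a record would be persisted with `collection.insertOne(doc)` and later queried by `userId` or `createdAt` for outcome validation.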
The platform’s predictive capabilities are supported by several custom machine learning models designed to forecast short-term oil price movements using both shipment traffic and news data.
For predictions based on news data, we performed structured feature engineering and leveraged the financial language model FinBERT to efficiently extract relevant signals from news articles. This approach enabled improved training efficiency and model accuracy. We also implemented a GARCH-X volatility model, which captures time-varying market volatility while incorporating exogenous news features. In addition, we trained a gradient boosting model using LightGBM to further enhance predictive performance. Model performance was evaluated using Root Mean Squared Error (RMSE) and the coefficient of determination (R²).
For shipment-traffic-based predictions, we trained multiple ensemble learning models including XGBoost, Random Forest, and Gradient Boosting. To better understand the relative importance of shipping-related features, we applied Principal Component Analysis (PCA) for dimensionality reduction and feature impact analysis. These models were also evaluated using RMSE and R² to assess predictive accuracy.
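Both pipelines report RMSE and R²; for reference, the two metrics computed directly:

```typescript
// Root Mean Squared Error: average squared error, then square root.
function rmse(yTrue: number[], yPred: number[]): number {
  const mse =
    yTrue.reduce((s, y, i) => s + (y - yPred[i]) ** 2, 0) / yTrue.length;
  return Math.sqrt(mse);
}

// Coefficient of determination: 1 - (residual SS / total SS).
function r2(yTrue: number[], yPred: number[]): number {
  const mean = yTrue.reduce((s, y) => s + y, 0) / yTrue.length;
  const ssRes = yTrue.reduce((s, y, i) => s + (y - yPred[i]) ** 2, 0);
  const ssTot = yTrue.reduce((s, y) => s + (y - mean) ** 2, 0);
  return 1 - ssRes / ssTot;
}

console.log(rmse([1, 2, 3], [1, 2, 3])); // 0
console.log(r2([1, 2, 3], [1, 2, 3])); // 1
```

Lower RMSE and R² closer to 1 both indicate better predictive accuracy.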
Mathematical formulations for FinBERT, GARCH-X, LightGBM, XGBoost, Gradient Boosting, and Random Forest.
FinBERT is a BERT-based transformer fine-tuned on financial text for sentiment classification.

| Symbol | Description |
|---|---|
| $d_k$ | Key/query dimension per head |
| $h$ | Number of attention heads |
| $\mathbf{h}_{[\mathrm{CLS}]}$ | Final hidden state of the `[CLS]` token |
| $\hat{p}_c$ | Predicted probability for class $c$ |
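For reference, the standard formulations these symbols come from (scaled dot-product attention and the softmax classification head over the `[CLS]` state):

$$
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^{\top}}{\sqrt{d_k}}\right) V,
\qquad
\hat{p}_c = \mathrm{softmax}\!\left(W \mathbf{h}_{[\mathrm{CLS}]} + b\right)_c
$$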
GARCH-X extends GARCH by adding an exogenous variable to the conditional variance equation.

| Symbol | Description |
|---|---|
| $\omega$ | Variance intercept |
| $\alpha_i$ | ARCH (shock) coefficients |
| $\beta_j$ | GARCH (persistence) coefficients |
| $\gamma$ | Exogenous variable coefficient |
| $x_{t-1}$ | Lagged exogenous input (e.g. VIX, realized variance) |
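With these symbols, the GARCH-X conditional variance takes the standard form:

$$
\sigma_t^2 = \omega + \sum_{i=1}^{q} \alpha_i\, \varepsilon_{t-i}^2 + \sum_{j=1}^{p} \beta_j\, \sigma_{t-j}^2 + \gamma\, x_{t-1}
$$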
Gradient boosting using leaf-wise tree growth with histogram-based splitting.

| Symbol | Description |
|---|---|
| $g_i$ | First-order gradient |
| $h_i$ | Second-order gradient (Hessian) |
| $T$ | Number of leaves in the tree |
| $w_j$ | Weight (output value) of leaf $j$ |
| $\lambda$, $\gamma$ | L2 and leaf-count regularization |
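In this notation, the standard second-order objective gives the optimal leaf weight and the tree score being minimized:

$$
w_j^{*} = -\frac{\sum_{i \in I_j} g_i}{\sum_{i \in I_j} h_i + \lambda},
\qquad
\tilde{\mathcal{L}} = -\frac{1}{2} \sum_{j=1}^{T} \frac{\left(\sum_{i \in I_j} g_i\right)^2}{\sum_{i \in I_j} h_i + \lambda} + \gamma T
$$

where $I_j$ is the set of samples falling in leaf $j$.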
Gradient boosting with level-wise tree growth and explicit second-order Taylor expansion.

| Symbol | Description |
|---|---|
| $\eta$ | Learning rate (shrinkage) |
| $\alpha$ | L1 regularization on leaf weights |
| $\lambda$ | L2 regularization on leaf weights |
| $\gamma$ | Minimum gain required to split |
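The standard XGBoost split gain in this notation, with $G = \sum g_i$ and $H = \sum h_i$ taken over the left (L) and right (R) child:

$$
\mathrm{Gain} = \frac{1}{2}\left[\frac{G_L^2}{H_L + \lambda} + \frac{G_R^2}{H_R + \lambda} - \frac{(G_L + G_R)^2}{H_L + H_R + \lambda}\right] - \gamma
$$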
General framework: sequentially fit weak learners to the negative gradient (pseudo-residuals) of the loss.

| Task | Loss | Pseudo-Residual |
|---|---|---|
| Regression | $\tfrac{1}{2}(y - F)^2$ | $y - F$ |
| Classification | $-\left[y \log p + (1 - y)\log(1 - p)\right]$ | $y - p$ |
| Quantile ($\tau$) | $\tau(y - F)_+ + (1 - \tau)(F - y)_+$ | $\tau - \mathbf{1}[y < F]$ |
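The generic boosting iteration behind the table: fit each weak learner $h_m$ to the pseudo-residuals and take a shrunken step:

$$
r_{im} = -\left[\frac{\partial L\left(y_i, F(x_i)\right)}{\partial F(x_i)}\right]_{F = F_{m-1}},
\qquad
F_m(x) = F_{m-1}(x) + \eta\, h_m(x)
$$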
Ensemble of $B$ decorrelated decision trees, each trained on a bootstrap sample with a random subset of features considered at every split.

Classification — Gini Impurity:

$$
G = \sum_{k=1}^{K} p_k (1 - p_k)
$$

Regression — Variance Reduction:

$$
\Delta = \mathrm{Var}(\text{parent}) - \sum_{c \in \text{children}} \frac{n_c}{n}\, \mathrm{Var}(c)
$$

Regression (average):

$$
\hat{y} = \frac{1}{B} \sum_{b=1}^{B} \hat{f}_b(x)
$$

where $\hat{f}_b$ is the prediction of the $b$-th tree.

| Symbol | Description |
|---|---|
| $B$ | Number of trees |
| $m$ | Features sampled per split (commonly $m \approx \sqrt{p}$) |
| $n_c / n$ | Fraction of samples reaching node $c$ |
| $K$ | Number of classes |