Inspiration

Manufacturing downtime is expensive, often measured in tens or hundreds of thousands of dollars per hour for critical lines. We built Predict A.I. to move organizations from reactive firefighting to proactive, scalable operations: connect heterogeneous telemetry, apply explainable predictive models, and automatically turn predictions into safe, auditable actions, all while keeping analytics and business vocabulary inside Tableau Cloud so operators, analysts, and engineers share the same view of truth.

Key inspirations:

  • The real cost of unplanned downtime and the operational friction that prevents fast remediation.
  • The power of semantic modeling (a single canonical domain model) to reduce ambiguity between analysts and operators.
  • The need for explainable predictions (so humans trust and act on AI).
  • The benefit of tightly coupling analytics (Tableau) with automation (Agentforce, Salesforce Field Service) and communication (Slack) in an auditable fashion (optional Solana anchoring).

What it does

Predict A.I. is an end-to-end predictive maintenance demo that shows how the loop from data → prediction → action → audit can be closed inside an enterprise workflow. At a glance:

User-facing flows

  • Landing page → Dashboard → Equipment detail → Analytics → Create Work Order → Alerts & monitoring.
  • Interactive Tableau dashboards (embedded or via Hyper pre-aggregates) that let operators explore predictions and launch actions.
  • Slack alerts with thread-aware updates and interactive buttons (View Dashboard / Create Work Order / Acknowledge); a posting sketch follows this list.
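
A minimal sketch of how such an alert might be posted, assuming slack_sdk and a bot token; the channel name, action IDs, and payload fields here are illustrative placeholders, not the exact ones in slack_notifier_service.py:

    import os
    from slack_sdk import WebClient

    client = WebClient(token=os.environ["SLACK_BOT_TOKEN"])

    def post_failure_alert(equipment_id: str, probability: float, dashboard_url: str) -> str:
        """Post an alert with the three interactive buttons; returns the thread ts."""
        blocks = [
            {"type": "section",
             "text": {"type": "mrkdwn",
                      "text": f":warning: *{equipment_id}* failure risk {probability:.0%}"}},
            {"type": "actions", "elements": [
                {"type": "button", "text": {"type": "plain_text", "text": "View Dashboard"},
                 "url": dashboard_url, "action_id": "view_dashboard"},
                {"type": "button", "text": {"type": "plain_text", "text": "Create Work Order"},
                 "style": "primary", "action_id": "create_work_order"},
                {"type": "button", "text": {"type": "plain_text", "text": "Acknowledge"},
                 "action_id": "acknowledge"},
            ]},
        ]
        resp = client.chat_postMessage(channel="#maintenance-alerts",
                                       text=f"{equipment_id} failure risk {probability:.0%}",
                                       blocks=blocks)
        return resp["ts"]  # later status updates reply in this thread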

Core capabilities

  • Semantic modeling: canonical entities (Equipment, SensorReadings, Predictions, Maintenance, Parts) defined so Tableau, models, and orchestration speak the same language (see the sketch after this list).
  • Real-time ingestion & streaming: MQTT/HTTP → Edge → Kafka/PubSub → stream processing → TSDB and feature store.
  • Feature store & Hyper pre-aggregates: online/offline feature parity and materialized Hyper extracts for fast Tableau queries.
  • Predictive Engine: failure probability plus remaining useful life (RUL) within a 7–30 day horizon; model versions are tracked.
  • Explainability: per-prediction feature importance (SHAP-like) surfaced alongside predictions.
  • Action orchestration: Agentforce playbooks auto-create Salesforce WorkOrders, reserve parts in ERP, and push tasks to technicians.
  • Alerting & dedupe: Slack service with Redis TTL-based dedupe/cooldown and thread management.
  • Immutable audit option: prediction hashes can optionally be written to the Solana ledger for tamper-evident auditability.
  • Admin controls: threshold tuning, routing (team → Slack channel), model version promotions, and refresh policies.
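
To make the semantic-modeling point concrete, here is an illustrative Python mirror of two of the canonical entities. The authoritative definitions live in the semantic layer (Tableau Next DMO); the field names below are assumptions made for the sketch:

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class SensorReading:
        equipment_id: str
        ts: datetime
        metric: str              # e.g. "vibration_rms", "bearing_temp_c"
        value: float

    @dataclass
    class Prediction:
        equipment_id: str
        ts: datetime
        failure_probability: float   # over the configured 7–30 day horizon
        rul_days: float              # remaining useful life estimate
        model_version: str
        top_features: list[str]      # SHAP-style attributions, highest first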

Operational outcomes

  • Example KPI impacts used in demos: ↓40% unplanned downtime, ↓25% maintenance cost, ↓60% MTTR, ↑30% parts availability.

How we built it

This section summarizes architecture, technology choices, and what lives where in the repo/demo.

Architecture (high level)

 IoT Sensors --> Edge Gateway --> Kafka/PubSub --> Stream Processor --> Feature Store / TSDB
                                               \--> Predictive Engine --> Semantic Layer --> Hyper API --> Tableau Cloud (Dashboards)
                                               \--> Agentforce/Orchestrator --> Salesforce / ERP / Slack
                                               \--> Audit Hash --> Solana (hash only)
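
The ingest edge of that diagram is small in code. A minimal bridge sketch, assuming paho-mqtt 2.x and kafka-python; broker addresses and topic names are placeholders:

    import json
    import paho.mqtt.client as mqtt
    from kafka import KafkaProducer

    # Forward every MQTT sensor reading to the Kafka topic the stream
    # processor consumes from.
    producer = KafkaProducer(
        bootstrap_servers="localhost:9092",
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    )

    def on_message(client, userdata, msg):
        reading = json.loads(msg.payload)       # e.g. {"equipment_id": ..., "value": ...}
        producer.send("sensor-readings", value=reading)

    client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
    client.on_message = on_message
    client.connect("edge-gateway.local", 1883)
    client.subscribe("factory/+/sensors/#")     # wildcard across machines and sensors
    client.loop_forever()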

Key components & stacks

  • Frontend / Dashboard

    • React + embedded Tableau extensions for in-app flows (viz extensions that call our webhooks).
    • Path: viz-extensions/, web-app/ in repo (demo pages embed Tableau views).
  • Ingest & Stream

    • MQTT / HTTP edge proxy → Kafka (or hosted PubSub). Lightweight examples use Python/Node producers and Kafka consumers for the streaming pipeline.
  • Feature Store & TSDB

    • TimescaleDB / InfluxDB for raw timeseries; Feast/Hopsworks-style patterns for feature parity; pipeline jobs materialize Hyper extracts for Tableau.
  • Predictive Engine

    • Model server (FastAPI) serving POST /predict and POST /predict/batch. Models: XGBoost/LightGBM (tabular), optional LSTM ensemble for sequences. Explainability via SHAP. A minimal endpoint sketch follows this list.
    • Path: predictive_engine/
  • Semantic Layer

    • Semantic model definitions (Tableau Next DMO) map source tables to domain entities. Pre-aggregates built and published to Tableau Cloud via Hyper API.
  • Agentforce / Orchestration

    • Playbooks defined in JSON/YAML. Agentforce triggers call REST endpoints, create Field Service WorkOrders (Salesforce), and reserve parts in ERP.
  • Alerting

    • Slack notifier service (Python) with signed request verification, backoff, Redis dedupe and thread support. See slack_notifier_service.py.
  • Solana audit

    • Only hashes are written on-chain; full payloads live encrypted off-chain (S3). Solana writes are optional and optimized to minimize cost.
  • Salesforce integration

    • SFDX metadata package for Apex + Named Credential + permission set + agent actions; JWT OAuth illustrated for server-to-server flows.
  • Infra & deployment

    • Dockerfiles for each service, docker-compose.yml for local dev, Helm charts / k8s manifests for production; CI/CD via GitHub Actions; secrets via KMS/Vault.
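
As referenced in the Predictive Engine bullet above, a minimal sketch of the /predict endpoint's shape; the request/response fields are assumptions, and the scoring is stubbed rather than the real versioned model:

    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI()

    class PredictRequest(BaseModel):
        equipment_id: str
        features: dict[str, float]     # latest feature-store snapshot

    class PredictResponse(BaseModel):
        equipment_id: str
        failure_probability: float
        rul_days: float
        model_version: str

    @app.post("/predict", response_model=PredictResponse)
    def predict(req: PredictRequest) -> PredictResponse:
        # Stub scoring; the real service loads a versioned XGBoost/LightGBM model.
        prob = min(0.99, req.features.get("vibration_rms", 0.0) / 10.0)
        return PredictResponse(
            equipment_id=req.equipment_id,
            failure_probability=prob,
            rul_days=max(0.0, 30.0 * (1.0 - prob)),
            model_version="stub-0.0.1",
        )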

What APIs and developer tools were used in your project?

Predict A.I. uses a combination of Tableau, Salesforce, cloud, and open-source APIs to connect data ingestion, predictive intelligence, automation, and visualization into a single operational workflow.

Tableau APIs & Developer Tools

  • Tableau Cloud REST API – programmatic publishing of data sources, extracts, and refresh control.
  • Hyper API – generation of high-performance Hyper extracts for pre-aggregated predictions and feature snapshots (see the sketch after this list).
  • Tableau Viz Extensions API – embedded extensions enabling user-initiated actions (e.g., create work orders, view explanations) directly from dashboards.
  • Tableau Next Semantic Model – canonical domain modeling to unify operational meaning across dashboards, predictions, and automation.
  • Tableau Embedding API – embedding dashboards inside the Predict A.I. web application.
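
For example, materializing a small predictions extract with the Hyper API looks roughly like the sketch below (table and column names are illustrative); the resulting .hyper file is then published to Tableau Cloud via the REST API:

    from tableauhyperapi import (Connection, CreateMode, HyperProcess, Inserter,
                                 SqlType, TableDefinition, TableName, Telemetry)

    table = TableDefinition(TableName("Extract", "predictions"), [
        TableDefinition.Column("equipment_id", SqlType.text()),
        TableDefinition.Column("failure_probability", SqlType.double()),
        TableDefinition.Column("rul_days", SqlType.double()),
    ])

    with HyperProcess(telemetry=Telemetry.DO_NOT_SEND_USAGE_DATA_TO_TABLEAU) as hyper:
        with Connection(endpoint=hyper.endpoint,
                        database="predictions.hyper",
                        create_mode=CreateMode.CREATE_AND_REPLACE) as conn:
            conn.catalog.create_schema(schema=table.table_name.schema_name)
            conn.catalog.create_table(table)
            with Inserter(conn, table) as inserter:
                inserter.add_row(["PUMP-017", 0.87, 12.5])
                inserter.execute()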

Machine Learning & Analytics

  • FastAPI – hosted model inference service (REST-based) for failure probability and remaining useful life predictions.
  • Scikit-learn / XGBoost / LSTM models – predictive modeling for anomaly detection and failure forecasting.
  • SHAP / feature attribution techniques – explainability surfaced to users alongside predictions (sketch below).
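
A minimal explainability sketch on synthetic data; the feature names and toy labels are illustrative stand-ins:

    import numpy as np
    import shap
    import xgboost as xgb

    rng = np.random.default_rng(0)
    X = rng.random((500, 3))                         # synthetic feature snapshots
    y = (2 * X[:, 0] + X[:, 1] > 1.5).astype(int)    # synthetic failure labels
    feature_names = ["vibration_rms", "bearing_temp_c", "load_pct"]

    model = xgb.XGBClassifier(n_estimators=50).fit(X, y)

    explainer = shap.TreeExplainer(model)
    attributions = explainer.shap_values(X[:1])[0]   # one reading's attributions

    # Rank features by absolute contribution, as shown next to each prediction.
    top = sorted(zip(feature_names, attributions), key=lambda p: -abs(p[1]))
    print(top)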

Automation & Enterprise APIs

  • Salesforce Agentforce APIs – orchestration of AI-driven actions triggered by predictions.
  • Salesforce Field Service APIs – automatic creation of Work Orders and maintenance tasks.
  • Salesforce Platform Events & Flows – event-driven, low-code automation and admin-editable thresholds.
  • Salesforce Named Credentials + JWT OAuth – secure, credential-free authentication for service integrations (token-exchange sketch below).
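
The server-to-server half of that flow is a standard JWT bearer token exchange. A sketch assuming PyJWT and the requests library; the consumer key, integration user, and key path are placeholders:

    import time
    import jwt        # PyJWT
    import requests

    claims = {
        "iss": "CONNECTED_APP_CONSUMER_KEY",
        "sub": "integration-user@example.com",
        "aud": "https://login.salesforce.com",
        "exp": int(time.time()) + 180,
    }
    assertion = jwt.encode(claims, open("server.key").read(), algorithm="RS256")

    resp = requests.post("https://login.salesforce.com/services/oauth2/token", data={
        "grant_type": "urn:ietf:params:oauth:grant-type:jwt-bearer",
        "assertion": assertion,
    })
    access_token = resp.json()["access_token"]   # used for Field Service REST calls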

Real-Time & Messaging

  • Slack Web API & Events API – proactive alerts, threaded incident updates, and dashboard drill-through links.
  • Redis – deduplication windows, alert cooldowns, and best-effort exactly-once event handling (see the dedupe sketch after this list).
  • Kafka / Streaming APIs – buffering and processing real-time sensor events (simulated in the demo).
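
The dedupe/cooldown pattern is a single atomic Redis operation: SET with NX and a TTL succeeds only for the first alert in the window. A sketch, with the key format and cooldown as assumptions:

    import redis

    r = redis.Redis(host="localhost", port=6379)

    def should_alert(equipment_id: str, alert_type: str, cooldown_s: int = 900) -> bool:
        key = f"alert:{equipment_id}:{alert_type}"
        # nx=True sets the key only if it is absent; returns None if it exists,
        # so repeat alerts inside the TTL window are suppressed.
        return bool(r.set(key, "1", nx=True, ex=cooldown_s))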

Infrastructure & DevOps

  • Docker – containerization of ingestion, model, and alert services.
  • Kubernetes + Helm – deployment, scaling, and configuration management.
  • SFDX (Salesforce DX) – source-driven Salesforce metadata deployment and testing.
  • GitHub Actions – CI/CD for model services and infrastructure artifacts.

Why this matters: These APIs and tools allow Predict A.I. to go beyond dashboards — transforming Tableau insights into trusted, explainable, and automated operational actions with enterprise-grade security and auditability.


Challenges we ran into

Building an integrated demo spanning streaming, ML, Tableau, and enterprise orchestration exposed many practical constraints:

  1. Heterogeneous source formats & SCADA
    • PLCs, legacy SCADA systems, CSV exports, and vendor APIs each require their own adapters, and locked vendor protocols limited our coverage; we focused on the common protocols: MQTT, OPC-UA, and REST.
  2. Semantic consistency across real-time & batch
    • Ensuring the same field names and semantics across streaming windows, the feature store, and Tableau required a strict semantic-first design and constant mapping checks.
  3. Explainability vs latency
    • Generating SHAP explanations for complex ensembles was compute-heavy; we implemented hybrid approaches (approximate explanations for real-time, full SHAP in batch).
  4. Operational safety for automation
    • Automating case creation and parts reservation required idempotency, backoff, and safe defaults to avoid spamming downstream systems.
  5. On-chain cost & practicality
    • Storing full payloads on Solana is expensive, so we store only SHA-256 hashes on-chain and keep enriched audit records off-chain (S3/GCS) indexed by hash; see the hashing sketch after this list.
  6. End-to-end testing & demo reliability
    • Reproducing realistic failure events and ensuring the entire flow (sensor → model → Tableau → Agentforce → Salesforce) worked deterministically for judges required a synthetic data harness and chaos testing scenarios.
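
The hash-only anchoring from challenge 5 reduces to canonicalizing the payload before hashing so the digest is reproducible. A sketch (the record's field values are illustrative):

    import hashlib
    import json

    def audit_hash(prediction: dict) -> str:
        # Canonical JSON (sorted keys, no whitespace) so re-hashing is deterministic.
        canonical = json.dumps(prediction, sort_keys=True, separators=(",", ":"))
        return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

    record = {"equipment_id": "PUMP-017", "failure_probability": 0.87,
              "model_version": "xgb-1.4.2", "ts": "2025-01-15T08:30:00Z"}
    digest = audit_hash(record)
    # digest is what gets anchored on Solana; the full record is encrypted and
    # stored in S3/GCS under a key derived from the same digest.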

Accomplishments that we're proud of

  • End-to-end demonstrable loop from telemetry to action: the demo actually creates (simulated) Field Service WorkOrders and posts Slack incidents with actionable buttons.
  • Tableau-first integration: semantic model + Hyper pre-aggregates give a smooth, low-latency analyst experience while preserving canonical business terms.
  • Explainability surfaced to operators: top contributing features appear directly in equipment views so human responders understand why a decision was made.
  • Robust alerting: Redis-backed dedupe and thread-aware Slack handling reduce noise and keep incident history tidy.
  • Admin controls & governance: model versioning, refresh cadence, RBAC, and a Solana-backed immutable trail for high-stakes auditability.
  • Production-aware artifacts: SFDX package for Salesforce integration, JWT-based OAuth examples for secure server-to-server auth, Docker + k8s manifests, and a CI/CD pipeline for model promotion.
  • Demo readiness: a deployable demo site and reproducible local stack (docker-compose), plus synthetic data scripts for judges to run scenarios.

What we learned

  • Semantic modeling pays off. When the model is the single source of truth, dashboards, ML, and automation stay in sync — reducing errors and time spent on ETL mappings.
  • Explainability is essential for adoption. Operators need a quick human-readable reason for a prediction before they will act — surfaced explainability removed friction.
  • Automation needs safety layers. Idempotency keys, cooldowns, and per-equipment routing are non-negotiable to prevent operational harm (see the idempotency sketch after this list).
  • On-chain auditability is useful but costly. Hash-only anchoring gives a tamper-evident trail without exploding costs.
  • Testing pipelines is hard but crucial. Synthetic replay, chaos scenarios, and deterministic mock data are required to prove resilience to judges and to test model rollback.
  • Operator UX matters. Shortcuts — one-click create work order, dashboard links in Slack, and mobile offline support — make the system adoptable in the field.
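
One of those safety layers, sketched: derive a deterministic idempotency key from the equipment and prediction window so retries and duplicate triggers resolve to the same work order. The in-memory dict stands in for a durable store:

    import hashlib

    def work_order_key(equipment_id: str, window_start: str) -> str:
        raw = f"{equipment_id}:{window_start}"
        return hashlib.sha256(raw.encode("utf-8")).hexdigest()[:16]

    created: dict[str, str] = {}   # stand-in for a durable idempotency store

    def create_work_order_once(equipment_id: str, window_start: str) -> str:
        key = work_order_key(equipment_id, window_start)
        if key not in created:
            created[key] = f"WO-{key}"   # real code calls the Field Service API here
        return created[key]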

What's next for Predict A.I.

Planned roadmap (prioritized):

  1. Continuous learning loop: automatic retrain on confirmed failures with safe human-in-the-loop gating and model promotion flow.
  2. Technician mobile app (offline support): native app for field confirmations and work logging in poor-connectivity scenarios.
  3. Cross-factory semantic federation: scale the semantic model across multiple plants, enabling cross-factory analysis and transfer learning.
  4. Differential privacy mode: for regulated environments where sharing raw telemetry is constrained.
  5. Model & audit cost optimization: refine on-chain strategy (batch commits, layer-2 techniques), and prune audit payloads while preserving compliance guarantees.
  6. Commercialization-ready hardening: formal SLOs, multi-tenant isolation, RBAC baked into the semantic layer, and enterprise-grade logging/monitoring.
