
Enrichment Pipeline (Corporate / Regulatory / Financial)

This lane complements the base geospatial building layer by adding non-geospatial enrichment signals and scoring outputs.

Scope

  • Ingest external city-rate, policy, and corporate signals into data/raw/
  • Clean and normalize each source into data/processed/
  • Build final building-level enrichment features in src/features/
  • Compute water-capture, ROI, and overall viability scores in src/scoring/
  • Expose starter API route handlers in src/api/routes/

Quick Start

Run modules in order from repo root using package/module execution:

python -m src.ingest.pull_city_rates
python -m src.ingest.pull_policy_signals
python -m src.ingest.pull_corporate_signals
python -m src.clean.clean_city_rates
python -m src.clean.clean_policy_signals
python -m src.clean.clean_corporate_signals
python -m src.features.build_final_building_features
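
If you prefer to drive the whole sequence from a single script, a minimal sketch (not part of the repository) that shells out to the same python -m commands in order, stopping on the first failure:

import subprocess
import sys

# Run the pipeline modules listed above from the repo root, in order.
# This is just a convenience wrapper around the python -m commands.
MODULES = [
    "src.ingest.pull_city_rates",
    "src.ingest.pull_policy_signals",
    "src.ingest.pull_corporate_signals",
    "src.clean.clean_city_rates",
    "src.clean.clean_policy_signals",
    "src.clean.clean_corporate_signals",
    "src.features.build_final_building_features",
]

for module in MODULES:
    subprocess.run([sys.executable, "-m", module], check=True)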

Scoring helpers are imported from src/scoring/ and route handler starters live in src/api/routes/.

Execution Note

  • Prefer python -m ... from repo root.
  • Running cleaner scripts by direct file path (for example python src/clean/clean_city_rates.py) can fail with import errors in some environments, because direct execution does not put the repo root on the module path the way python -m does.

Ollama Deployment (Investment Analysis)

NexDrop uses Ollama as an external service for generating concise investment_analysis text.
The FastAPI app does not embed Ollama; it calls Ollama by URL.

Environment Variables

Set these in your environment or .env:

OLLAMA_BASE_URL=http://localhost:11434
OLLAMA_MODEL=qwen2.5:7b
OLLAMA_TIMEOUT_SECONDS=60

Optional batch cap:

INVESTMENT_ANALYSIS_BATCH_LIMIT=500
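
A minimal sketch of loading these variables with os.getenv; the Settings class name is illustrative, not the app's actual configuration code:

import os

# Defaults mirror the example values above.
class Settings:
    ollama_base_url = os.getenv("OLLAMA_BASE_URL", "http://localhost:11434")
    ollama_model = os.getenv("OLLAMA_MODEL", "qwen2.5:7b")
    ollama_timeout_seconds = int(os.getenv("OLLAMA_TIMEOUT_SECONDS", "60"))
    investment_analysis_batch_limit = int(os.getenv("INVESTMENT_ANALYSIS_BATCH_LIMIT", "500"))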

Install and Run Ollama

  1. Install Ollama on the host that will serve model inference (installers are available from the official Ollama site).
  2. Start Ollama service:
    ollama serve
  3. Pre-pull the configured model before API startup:
    ollama pull qwen2.5:7b

If Ollama runs in another container or internal host, set OLLAMA_BASE_URL accordingly, for example:

  • http://ollama:11434 (same Docker network)
  • http://10.0.1.25:11434 (internal host)
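
Before running the batch generator or the API, you can confirm that the service is reachable and that the configured model has been pulled. A minimal sketch using Ollama's /api/tags listing endpoint (requests-based, error handling kept simple):

import os
import requests

# Verify the Ollama service is reachable and the configured model is present.
base_url = os.getenv("OLLAMA_BASE_URL", "http://localhost:11434")
model = os.getenv("OLLAMA_MODEL", "qwen2.5:7b")

resp = requests.get(f"{base_url}/api/tags", timeout=10)
resp.raise_for_status()
available = {m["name"] for m in resp.json().get("models", [])}
if model not in available:
    raise RuntimeError(f"Model {model!r} not found on {base_url}; run: ollama pull {model}")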

Run Batch Investment Analysis Generator

From repo root:

PYTHONPATH=backend python -m src.features.generate_investment_analyses

What it does:

  • verifies Ollama connectivity and configured model availability (/api/tags)
  • fetches rows missing investment_analysis
  • generates concise analyses
  • updates final_building_features.investment_analysis
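
A rough sketch of that flow. The row source, prompt text, and database update are placeholders; the real module's prompting and persistence code are not shown here:

import os
import requests

BASE_URL = os.getenv("OLLAMA_BASE_URL", "http://localhost:11434")
MODEL = os.getenv("OLLAMA_MODEL", "qwen2.5:7b")
TIMEOUT = int(os.getenv("OLLAMA_TIMEOUT_SECONDS", "60"))

def generate_analysis(row: dict) -> str:
    """Ask Ollama's /api/generate endpoint for a concise analysis of one building row."""
    prompt = f"Write a concise investment analysis for this building: {row}"  # placeholder prompt
    resp = requests.post(
        f"{BASE_URL}/api/generate",
        json={"model": MODEL, "prompt": prompt, "stream": False},
        timeout=TIMEOUT,
    )
    resp.raise_for_status()
    return resp.json()["response"]

rows_missing_analysis = []  # placeholder: rows fetched from final_building_features
for row in rows_missing_analysis:
    analysis = generate_analysis(row)
    # placeholder: UPDATE final_building_features SET investment_analysis = analysis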

Run API and Trigger Per-Building Generation

Start API:

PYTHONPATH=backend .venv/bin/python -m uvicorn src.api.main:app --host 0.0.0.0 --port 8001

Read current analysis:

curl -sS "http://127.0.0.1:8001/buildings/123/investment-analysis"

Generate/regenerate and persist:

curl -sS -X POST "http://127.0.0.1:8001/buildings/123/investment-analysis/generate"
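
For reference, route handlers matching those two curl calls could look roughly like this. This is a sketch only: the in-memory store and the generate_analysis_text() helper stand in for the app's real database and Ollama-client code.

from fastapi import APIRouter, HTTPException

router = APIRouter()
_analyses: dict[int, str] = {}  # placeholder store keyed by building id

def generate_analysis_text(building_id: int) -> str:
    """Placeholder for the real Ollama-backed generation logic."""
    return f"Concise investment analysis for building {building_id}."

@router.get("/buildings/{building_id}/investment-analysis")
def read_investment_analysis(building_id: int):
    analysis = _analyses.get(building_id)
    if analysis is None:
        raise HTTPException(status_code=404, detail="No analysis for this building")
    return {"building_id": building_id, "investment_analysis": analysis}

@router.post("/buildings/{building_id}/investment-analysis/generate")
def generate_investment_analysis(building_id: int):
    analysis = generate_analysis_text(building_id)
    _analyses[building_id] = analysis  # the real app persists to the database
    return {"building_id": building_id, "investment_analysis": analysis}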

Production Recommendation

  • Host Ollama separately from the FastAPI app when possible.
  • Use OLLAMA_BASE_URL to point FastAPI to the Ollama host.
  • Pre-pull the model (ollama pull <model>) before application startup.
  • Keep Ollama and API on private/internal networking and expose only the API edge.
