This lane complements the base geospatial building layer by adding non-geospatial enrichment signals and scoring outputs.
- Ingest external city-rate, policy, and corporate signals into `data/raw/`
- Clean and normalize each source into `data/processed/`
- Build final building-level enrichment features in `src/features/`
- Compute water-capture, ROI, and overall viability scores in `src/scoring/`
- Expose starter API route handlers in `src/api/routes/`
Run modules in order from the repo root using package/module execution:

```
python -m src.ingest.pull_city_rates
python -m src.ingest.pull_policy_signals
python -m src.ingest.pull_corporate_signals
python -m src.clean.clean_city_rates
python -m src.clean.clean_policy_signals
python -m src.clean.clean_corporate_signals
python -m src.features.build_final_building_features
```

Scoring helpers are imported from `src/scoring/` (a usage sketch follows), and route handler starters live in `src/api/routes/`.
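As a rough sketch of how a scoring helper in `src/scoring/` might look and be used, the snippet below combines water-capture and ROI signals into a viability score. The function name, feature keys, caps, and weights are assumptions for illustration, not the repo's actual implementation.

```python
# Hypothetical scoring helper: names, feature keys, caps, and weights are
# illustrative assumptions, not NexDrop's actual scoring code.
def compute_viability_score(features: dict) -> float:
    """Combine water-capture and ROI signals into a single 0-100 score."""
    water = min(features["annual_capture_m3"] / 1000.0, 1.0)     # assumed normalization cap
    roi = min(max(features["simple_roi_pct"], 0.0) / 20.0, 1.0)  # assumed normalization cap
    return round(100 * (0.6 * water + 0.4 * roi), 1)             # assumed weights


# Example feature row as it might come out of build_final_building_features.
row = {"annual_capture_m3": 850.0, "simple_roi_pct": 12.5}
print(compute_viability_score(row))  # 76.0
```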
- Prefer `python -m ...` from the repo root.
- Running cleaner scripts by direct file path (for example `python src/clean/clean_city_rates.py`) can lead to import resolution issues in some environments (see the sketch below).
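As an illustration of why the invocation style matters (a generic probe, not the repo's actual cleaner code):

```python
# If a cleaner script printed its import search root like this...
import sys

print("import search starts at:", sys.path[0])

# ...it would show the current working directory (the repo root) when run as
#   python -m src.clean.clean_city_rates
# but src/clean/ when run as
#   python src/clean/clean_city_rates.py
# In the second case the `src` package is not importable from that directory,
# so package-style imports such as `from src.x import y` (if the cleaners use
# them) fail with ModuleNotFoundError.
```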
NexDrop uses Ollama as an external service to generate the concise `investment_analysis` text.
The FastAPI app does not embed Ollama; it calls the Ollama HTTP API at a configured URL.
Set these in your environment or `.env` (a sketch of how the app might read them follows the setup steps below):

```
OLLAMA_BASE_URL=http://localhost:11434
OLLAMA_MODEL=qwen2.5:7b
OLLAMA_TIMEOUT_SECONDS=60
```

Optional batch cap:

```
INVESTMENT_ANALYSIS_BATCH_LIMIT=500
```

- Install Ollama on the host that will serve model inference.
- Start the Ollama service: `ollama serve`
- Pre-pull the configured model before API startup: `ollama pull qwen2.5:7b`
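A minimal sketch of how the application side might read these settings, assuming plain environment-variable lookups (the repo's actual config code may differ):

```python
import os

# Ollama connection settings documented above; defaults mirror the example values.
OLLAMA_BASE_URL = os.getenv("OLLAMA_BASE_URL", "http://localhost:11434")
OLLAMA_MODEL = os.getenv("OLLAMA_MODEL", "qwen2.5:7b")
OLLAMA_TIMEOUT_SECONDS = float(os.getenv("OLLAMA_TIMEOUT_SECONDS", "60"))

# Optional cap on how many rows a single analysis batch will process.
INVESTMENT_ANALYSIS_BATCH_LIMIT = int(os.getenv("INVESTMENT_ANALYSIS_BATCH_LIMIT", "500"))
```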
If Ollama runs in another container or on an internal host, set `OLLAMA_BASE_URL` accordingly, for example:

- `http://ollama:11434` (same Docker network)
- `http://10.0.1.25:11434` (internal host)
From the repo root:

```
PYTHONPATH=backend python -m src.features.generate_investment_analyses
```

What it does:

- verifies Ollama connectivity and configured model availability (`/api/tags`); see the sketch after this list
- fetches rows missing `investment_analysis`
- generates concise analyses
- updates `final_building_features.investment_analysis`
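The snippet below sketches the kind of Ollama calls involved, using Ollama's standard `/api/tags` and `/api/generate` HTTP endpoints. It is an illustration of the flow, not the script's actual code; it assumes the `requests` package, and the prompt text is made up.

```python
import os

import requests

base_url = os.getenv("OLLAMA_BASE_URL", "http://localhost:11434")
model = os.getenv("OLLAMA_MODEL", "qwen2.5:7b")
timeout = float(os.getenv("OLLAMA_TIMEOUT_SECONDS", "60"))

# 1) Connectivity and model check: /api/tags lists models available locally.
tags = requests.get(f"{base_url}/api/tags", timeout=timeout).json()
available = {m["name"] for m in tags.get("models", [])}
if model not in available:
    raise RuntimeError(f"{model} not available on {base_url}; run `ollama pull {model}`")

# 2) Generate one concise analysis (illustrative prompt).
resp = requests.post(
    f"{base_url}/api/generate",
    json={"model": model, "prompt": "Summarize the investment case for building 123.", "stream": False},
    timeout=timeout,
)
print(resp.json()["response"])
```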
Start the API:

```
PYTHONPATH=backend .venv/bin/python -m uvicorn src.api.main:app --host 0.0.0.0 --port 8001
```

Read the current analysis:

```
curl -sS "http://127.0.0.1:8001/buildings/123/investment-analysis"
```

Generate/regenerate and persist:

```
curl -sS -X POST "http://127.0.0.1:8001/buildings/123/investment-analysis/generate"
```
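As a rough sketch of what route-handler starters for these endpoints could look like: the router layout, in-memory store, and placeholder generation below are assumptions for illustration, not the actual handlers in `src/api/routes/`.

```python
from fastapi import APIRouter, HTTPException

router = APIRouter(prefix="/buildings", tags=["investment-analysis"])

# In-memory stand-in for the real final_building_features storage.
_ANALYSES: dict[int, str] = {}


@router.get("/{building_id}/investment-analysis")
def read_investment_analysis(building_id: int) -> dict:
    analysis = _ANALYSES.get(building_id)
    if analysis is None:
        raise HTTPException(status_code=404, detail="No analysis for this building")
    return {"building_id": building_id, "investment_analysis": analysis}


@router.post("/{building_id}/investment-analysis/generate")
def generate_investment_analysis(building_id: int) -> dict:
    # A real handler would call Ollama (see the generation sketch above) and
    # persist the result to final_building_features.investment_analysis.
    analysis = f"Placeholder analysis for building {building_id}."
    _ANALYSES[building_id] = analysis
    return {"building_id": building_id, "investment_analysis": analysis}
```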
- Host Ollama separately from the FastAPI app when possible.
- Use `OLLAMA_BASE_URL` to point FastAPI at the Ollama host.
- Pre-pull the model (`ollama pull <model>`) before application startup.
- Keep Ollama and the API on private/internal networking and expose only the API edge.