Modeling soccer recruitment with similarity scores
April 30, 2026 — ML, FastAPI, Player Analytics
A player-similarity and recruitment modeling platform built around the kind of comparisons scouts actually make.
This came from treating recruitment like a practical matching problem instead of a generic prediction demo. The useful layer is comparing players across measurable traits, surfacing similar profiles, and giving scouts a reason to inspect a recommendation instead of blindly trusting a score.
The stack direction was intentionally simple: Python for modeling, FastAPI for serving, and enough ML tracking to keep experiments reproducible.
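The core comparison can stay simple. A minimal sketch, assuming player profiles are already normalized trait vectors (the trait names and values here are hypothetical), is cosine similarity over shared traits:

```python
import math

def cosine_similarity(a: dict[str, float], b: dict[str, float]) -> float:
    """Cosine similarity over the union of trait keys; 1.0 means identical profiles."""
    keys = sorted(set(a) | set(b))
    va = [a.get(k, 0.0) for k in keys]
    vb = [b.get(k, 0.0) for k in keys]
    dot = sum(x * y for x, y in zip(va, vb))
    na = math.sqrt(sum(x * x for x in va))
    nb = math.sqrt(sum(y * y for y in vb))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical, already-normalized trait profiles (0-1 scale).
target = {"pressing": 0.8, "progressive_passes": 0.9, "aerial_duels": 0.3}
candidate = {"pressing": 0.7, "progressive_passes": 0.85, "aerial_duels": 0.4}

score = cosine_similarity(target, candidate)
```

A score like this is only the ranking layer; the point of the platform is that a scout can open the underlying traits and see why two players landed close together.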
Trying Vertex AI Search from a small TypeScript surface
April 30, 2026 — Vertex AI, Search, TypeScript
A focused search experiment to understand the practical shape of Google's managed retrieval stack.
The point was not to make a large product around search. It was to understand what Vertex AI Search feels like from a developer surface: setup, request shape, response handling, and where managed retrieval helps or gets in the way.
This belongs in the timeline because search and retrieval show up repeatedly in the portfolio, especially around RAG and agent workflows.
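For reference, the request shape I was probing looks roughly like this. A hedged sketch: the project and data-store IDs are placeholders, and the URL follows the Discovery Engine `servingConfigs.search` REST method as I understand it, so verify against the current API reference before relying on it:

```python
import json

# Placeholder identifiers -- swap in real values.
PROJECT = "my-project"
DATA_STORE = "my-data-store"

def build_search_request(query: str, page_size: int = 10) -> tuple[str, str]:
    """Return the (url, json_body) pair for a Vertex AI Search query."""
    url = (
        "https://discoveryengine.googleapis.com/v1/"
        f"projects/{PROJECT}/locations/global/collections/default_collection/"
        f"dataStores/{DATA_STORE}/servingConfigs/default_search:search"
    )
    body = json.dumps({"query": query, "pageSize": page_size})
    return url, body

url, body = build_search_request("refund policy")
```

Building the request separately from sending it made the developer surface easier to inspect: the interesting part of managed retrieval is mostly in the response handling, not the call itself.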
Testing image-generation workflows on Vertex AI
April 30, 2026 — Vertex AI, Images, TypeScript
A lightweight pass at image generation APIs, request shape, and how the output could fit into app flows.
This was an API-shape exploration: how image requests are formed, how generated assets come back, and what the minimum app wrapper needs to look like.
It pairs with the Vertex AI Search work because both are about learning the edges of managed AI services from a real code surface rather than from documentation alone.
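As a sketch of the request shape: the region, project, and model ID below are placeholder assumptions, and the `instances`/`parameters` envelope is the generic Vertex AI predict shape rather than a verified contract for any specific image model:

```python
# Placeholder values -- not verified against current model IDs.
REGION = "us-central1"
PROJECT = "my-project"
MODEL = "imagegeneration@006"  # assumed Imagen model ID

def build_image_request(prompt: str, count: int = 1) -> tuple[str, dict]:
    """Return the (url, payload) pair for a Vertex AI image predict call."""
    url = (
        f"https://{REGION}-aiplatform.googleapis.com/v1/projects/{PROJECT}/"
        f"locations/{REGION}/publishers/google/models/{MODEL}:predict"
    )
    payload = {
        "instances": [{"prompt": prompt}],
        "parameters": {"sampleCount": count},
    }
    return url, payload

url, payload = build_image_request("a lighthouse at dusk")
```

The minimum app wrapper then reduces to: build the request, send it with auth, and decode whatever encoded image bytes come back in the predictions.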
A tiny preflight script for Cloudflare deploys
April 28, 2026 — Cloudflare, Deploy Checks, GitHub
Not a big project, just a practical check for platform fit before wasting time on a deploy path.
This is deliberately framed small. It is a utility idea: inspect a repository and catch obvious Cloudflare deploy blockers early, before turning a simple deployment into a debugging session.
The lesson is operational, not glamorous: a boring preflight check can save time when moving static sites, workers, or small apps between environments.
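The whole idea fits in a few lines. A minimal sketch, where the specific checks are my own assumptions about common blockers rather than an official Cloudflare checklist:

```python
import json
from pathlib import Path

def preflight(repo: Path) -> list[str]:
    """Return a list of likely Cloudflare deploy blockers (hypothetical checks)."""
    problems = []
    # Workers/Pages projects usually carry a wrangler config at the repo root.
    if not (repo / "wrangler.toml").exists() and not (repo / "wrangler.jsonc").exists():
        problems.append("no wrangler config found")
    # A Pages build needs something to run; check for a build script.
    pkg = repo / "package.json"
    if pkg.exists():
        scripts = json.loads(pkg.read_text()).get("scripts", {})
        if "build" not in scripts:
            problems.append("package.json has no build script")
    return problems

problems = preflight(Path("."))
```

An empty list means "probably fine, try the deploy"; anything else is a blocker caught in seconds instead of mid-deploy.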
Turning a jobs dashboard into something I can rerun and inspect
April 27, 2026 — Flask, DigitalOcean, Runbooks
The useful part was making the dashboard operational: bootable, inspectable, and easy to verify later.
The jobs dashboard was less about a new UI and more about maintainability. A dashboard becomes more valuable when there is a clear way to start it, check it, inspect logs, and understand whether the data path is still healthy.
That is why this shows up as a runbook/operations note rather than a portfolio-feature announcement.
Trying Google Cloud Memory Bank from a plain Python SDK
April 24, 2026 — Google Cloud, Memory, Gemini Enterprise
A small SDK surface for testing how external memory should be written, recalled, and trusted by agents.
The question here was how agent memory should work when it moves outside a single chat session. Writing and reading memories is the easy part; the harder part is deciding what should be trusted, refreshed, or ignored.
This connects directly to the OpenClaw and Hermes work because durable memory changes how an agent behaves over time.
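The trust question can be made concrete without touching the real SDK. A hypothetical sketch, not the Memory Bank API: each memory carries a write time and a trust score, and recall filters out anything too stale or too weakly trusted:

```python
from dataclasses import dataclass

@dataclass
class Memory:
    text: str
    written_at: float   # unix seconds
    confidence: float   # 0-1 trust score; how it is set is the hard part

def recall(memories: list[Memory], now: float,
           max_age: float = 30 * 24 * 3600,
           min_confidence: float = 0.5) -> list[Memory]:
    """Return only memories fresh and trusted enough to hand to the agent."""
    return [
        m for m in memories
        if now - m.written_at <= max_age and m.confidence >= min_confidence
    ]
```

Writing and reading are the easy half; deciding `max_age` and `min_confidence` per memory type is where agent behavior actually changes.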
Adding citation spans so RAG answers are easier to audit
April 24, 2026 — RAG, Citations, Analytics
Moved from broad document references toward answer evidence that can be checked line by line.
This is about making RAG answers less hand-wavy. A citation that points to an entire document is better than nothing, but a span-level citation makes it possible to check the actual evidence behind a sentence.
For analytics copilots, that matters because a confident answer without inspectable backing is hard to trust.
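The difference between document-level and span-level citations is mostly a data-structure change. A minimal sketch, with hypothetical document content: a citation carries character offsets into its source, so the exact evidence can be pulled out and shown next to the sentence it backs:

```python
from dataclasses import dataclass

@dataclass
class CitationSpan:
    doc_id: str
    start: int   # character offset into the source document
    end: int

def evidence(span: CitationSpan, documents: dict[str, str]) -> str:
    """Return the exact text a citation points at, so it can be checked."""
    return documents[span.doc_id][span.start:span.end]

docs = {"report-1": "Revenue grew 12% year over year, driven by renewals."}
span = CitationSpan("report-1", 0, 31)
```

An auditor can now compare `evidence(span, docs)` against the generated sentence directly, instead of rereading a whole document to find the supporting line.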
Writing down the Cloudflare AI Gateway + MiniMax path
April 20, 2026 — Cloudflare, MiniMax, Observability
Captured the routing, caching, and logging path so MiniMax calls were easier to debug through Cloudflare.
This was documentation around a real integration path: using Cloudflare AI Gateway to route and observe MiniMax calls. The useful pieces were provider naming, gateway behavior, cache/log visibility, and how failures show up.
It belongs here because observability is often the difference between a demo model call and something you can actually debug.
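The routing change itself is small: instead of calling the provider directly, requests go to a gateway URL that namespaces the upstream. A hedged sketch: the account and gateway IDs are placeholders, and the provider path segment is how I'd expect the gateway to address MiniMax, not a verified slug:

```python
# Placeholder identifiers -- swap in real values from the Cloudflare dashboard.
ACCOUNT_ID = "my-account-id"
GATEWAY_ID = "my-gateway"

def gateway_url(provider_path: str) -> str:
    """Rewrite a direct provider call to route through the AI Gateway."""
    return (
        f"https://gateway.ai.cloudflare.com/v1/{ACCOUNT_ID}/{GATEWAY_ID}/"
        f"{provider_path}"
    )

url = gateway_url("minimax/v1/text/chatcompletion_v2")
```

Everything else about the call stays the same, which is the appeal: one URL swap buys caching, logs, and a place to look when a MiniMax call fails.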
What OpenClaw made obvious about always-on agents
April 16, 2026 — OpenClaw, Agent Workspaces, Tooling
OpenClaw pushed the question from "can an agent code?" to "how should an agent live, remember, and operate?"
OpenClaw is not interesting only because it can call tools. The more interesting part is the product shape around agents: local workflows, skills, long-running context, and integrations that make the agent feel present instead of disposable.
This entry is about learning from that shape and applying it to workspace, memory, and maintenance ideas elsewhere.
Pulling memory, sync, and execution ideas into an OpenClaw workspace
April 16, 2026 — OpenClaw, Memory, Execution
A workspace pass around the pieces that make local agents feel continuous instead of session-bound.
This was about organizing the building blocks around an OpenClaw-style workspace: what gets remembered, what gets synced, and what execution path should stay local and inspectable.
The important part is continuity. Agents become more useful when the surrounding workspace carries context forward safely.
Cleaning up tracker experiments into one shared pattern
April 16, 2026 — Trackers, Templates, Consolidation
Several small tracker ideas started sharing the same data, UI, and alerting patterns, so I cleaned the shape.
There were multiple real-time tracker experiments with overlapping needs: data refresh, status display, alerts, and simple UI shells. Instead of treating each one as a separate one-off, this work pulled the repeated shape into a shared pattern.
That makes future tracker ideas faster to start and less messy to maintain.
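The shared shape can be sketched as a small base class; the method names and the example tracker here are hypothetical, chosen to show the pattern rather than mirror any one experiment:

```python
from abc import ABC, abstractmethod

class Tracker(ABC):
    """Shared shape across the tracker experiments: fetch, evaluate, alert."""

    interval_seconds = 60  # how often the tracker refreshes

    @abstractmethod
    def fetch(self) -> dict:
        """Pull the latest raw data."""

    @abstractmethod
    def should_alert(self, data: dict) -> bool:
        """Decide whether this refresh warrants a notification."""

    def tick(self) -> tuple[dict, bool]:
        """One refresh cycle, shared by every tracker."""
        data = self.fetch()
        return data, self.should_alert(data)

class PriceTracker(Tracker):
    threshold = 100.0

    def fetch(self) -> dict:
        return {"price": 92.5}  # stand-in for a real API call

    def should_alert(self, data: dict) -> bool:
        return data["price"] < self.threshold
```

A new tracker idea now only has to answer two questions, fetch and alert; the refresh loop, status display, and UI shell come from the shared layer.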
Fixing the first model choice in a multi-provider CLI
April 4, 2026 — Coding Agents, Providers, Configuration
A provider-aware CLI still has to get the first model selection right before the rest of the routing matters.
This came from a small but important edge in multi-provider coding CLIs. If the first model choice ignores the active provider, the rest of the provider abstraction can look correct while the user still starts in the wrong place.
The fix was about respecting environment and provider selection at the earliest decision point.