
Coverseer – intelligent process observer using LLM

Coverseer is a Python CLI tool that intelligently monitors and automatically restarts processes. Unlike classic watchdog solutions, it analyzes the application’s text output with an LLM and makes decisions based on context, not just the exit code.

The project is open source and available on GitHub:
https://github.com/demensdeum/coverseer

What is Coverseer

Coverseer starts the specified process, continuously monitors its stdout and stderr, feeds the latest chunks of output to a local LLM (via Ollama), and determines whether the process is in a healthy running state.

If the model detects an error, freeze, or incorrect behavior, Coverseer automatically terminates the process and starts it again.

Key features

  • Contextual analysis of output – decisions are based on LLM analysis of the logs rather than on the exit code alone
  • Automatic restart – the process is restarted when problems or abnormal termination are detected
  • Working with local models – Ollama is used, without transferring data to external services
  • Detailed logging – all actions and decisions are recorded for subsequent diagnostics
  • Standalone execution – can be packaged into a single executable file (for example, .exe)

How it works

  1. Coverseer runs the command passed through the CLI
  2. Collects and buffers text output from the process
  3. Sends the most recent lines to the LLM
  4. Gets a semantic assessment of the process state
  5. If necessary, terminates and restarts the process

This approach allows you to identify problems that cannot be detected by standard monitoring tools.
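This loop can be sketched in a few lines of Python. The sketch below is a minimal illustration of the idea, not Coverseer’s actual code; in particular, `is_healthy` is a stand-in for the real LLM query that Coverseer sends through Ollama:

```python
import subprocess
import collections

def is_healthy(recent_lines):
    # Stand-in for the LLM verdict: Coverseer would send this chunk
    # of output to a local model via Ollama and parse its answer.
    return not any("error" in line.lower() for line in recent_lines)

def supervise(command, buffer_size=50):
    while True:
        proc = subprocess.Popen(
            command, shell=True,
            stdout=subprocess.PIPE, stderr=subprocess.STDOUT, text=True,
        )
        recent = collections.deque(maxlen=buffer_size)  # keep only the last N lines
        for line in proc.stdout:
            recent.append(line.rstrip())
            if not is_healthy(recent):
                proc.kill()  # bad verdict: terminate the process...
                break        # ...and restart it via the outer loop
        else:
            if proc.wait() == 0:
                return       # clean exit: stop supervising
```

The bounded deque keeps the prompt sent to the model small and focused on the most recent output, which is what makes frequent polling of a local LLM affordable.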

Requirements

  • Python 3.12 or later
  • Ollama installed and running
  • The gemma3:4b-it-qat model pulled
  • Python dependencies: requests, ollama-call

Use example


python coverseer.py "your command here"

For example, monitoring an Ollama model download:


python coverseer.py "ollama pull gemma3:4b-it-qat"

Coverseer will analyze the command output and automatically respond to failures or errors.

Practical application

Coverseer is especially useful in scenarios where standard supervisor mechanisms are insufficient:

  • CI/CD pipelines and automatic builds
  • Background services and agents
  • Experimental or unstable processes
  • Tools with large amounts of text logs
  • Dev environments where self-healing is important

Why the LLM approach is more effective

Classic monitoring systems respond to symptoms; Coverseer analyzes behavior. An LLM can recognize errors, warnings, repeated failures, and logical dead ends even when the process formally continues to run.

This makes monitoring more accurate and reduces the number of false alarms.

Conclusion

Coverseer is a clear example of the practical application of LLM in DevOps and automation tasks. It expands on the traditional understanding of process monitoring and offers a more intelligent, context-based approach.

The project will be of particular interest to developers who are experimenting with AI tools and looking for ways to improve the stability of their systems without complicating the infrastructure.

Flame Steel: Mars Miners

Flame Steel: Mars Miners is a tactical strategy game with unusual pacing and an emphasis on decision making rather than reflexes. The game takes place on Mars, where players compete for control of resources and territories in the face of limited information and constant pressure from rivals.

The gameplay is based on the construction of hub stations that form the infrastructure of your expedition. Nodes allow you to extract resources, expand your zone of influence, and build logistics. Every placement matters: one mistake can open the enemy’s path to key sectors or deprive you of a strategic advantage.

The pacing is deliberately measured and tense. The game sits somewhere between chess, Go, and naval combat: positioning, predicting the opponent’s actions, and the ability to work with uncertainty are what matter here. Part of the map and the enemy’s intentions remain hidden, so success depends not only on calculation but also on reading the situation.

Flame Steel: Mars Miners supports online play, which makes each game unique – strategies evolve, and the meta is being formed right now. The game is at an early stage of development, and this is its strength: players have the opportunity to be the first to dive into a new, non-standard project, influence its development and discover mechanics that do not copy the usual templates of the genre.

If you’re interested in tactical games with depth, experimental design, and an emphasis on thinking, Flame Steel: Mars Miners is worth checking out now.

GAME RULES

* The playing field consists of cells on which players take turns placing their objects. Each turn a player may perform one construction action.

* Only two types of objects may be built: hub stations and mines. Construction is possible only on a free cell that is vertically or horizontally adjacent to one of the player’s existing nodes. Diagonal placement is not allowed.

* Hub stations form the basis of territory control and serve as expansion points. Mines are placed according to the same rules, but count as resource objects and directly affect the final result of the match.

* If a player builds a continuous vertical or horizontal line of their hub stations, that line automatically turns into a weapon. A weapon makes it possible to attack the enemy and destroy their infrastructure.

* To fire a weapon, the player selects one cell belonging to their weapon line and targets any enemy hub station on the field. The targeted station is destroyed and removed from the playing field. Mines cannot be attacked directly – only by destroying the hub stations that provide access to them.

* The game continues until the configured end condition is reached. The winner is the player who at that moment has the largest number of resource mines on the playing field. In case of a tie, territory control or additional conditions determined by the game mode may decide the outcome.
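The orthogonal-adjacency placement rule is easy to express in code. Here is a minimal sketch (hypothetical function and parameter names, assuming cells are integer coordinate pairs and the board state is tracked as sets of occupied cells):

```python
def is_valid_placement(cell, player_nodes, occupied):
    """A cell is buildable only if it is free and orthogonally
    adjacent (no diagonals) to one of the player's existing nodes."""
    x, y = cell
    if cell in occupied:
        return False  # cell already taken
    neighbours = {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}
    return bool(neighbours & player_nodes)  # touches own node orthogonally?
```

For example, with a single node at (0, 0), building at (1, 0) is legal, while the diagonal cell (1, 1) is not.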

https://mediumdemens.vps.webdock.cloud/mars-miners

Antigravity

In a couple of days, with the help of Antigravity, I migrated the Masonry-AR backend from PHP + MySQL to Node.js + MongoDB + Redis, packaged in Docker. The capabilities of AI are truly amazing; I remember how in 2022 I wrote the simplest shaders on shadertoy.com via ChatGPT, and it seemed this toy couldn’t do anything more advanced.
https://www.shadertoy.com/view/cs2SWm

Four years later, I watch how, in ~10 prompts, I effortlessly migrated my project from one backend platform to another, adding containerization along the way.
https://mediumdemens.vps.webdock.cloud/masonry-ar/

Cool, really cool.

KabanBoard

KabanBoard is an open-source web application for managing tasks in Kanban format. The project focuses on simplicity, a clear architecture, and easy modification for the specific needs of a team or an individual developer.

The solution is suitable for small projects, internal team processes, or as the basis for your own product without being tied to third-party SaaS services.

The project repository is available on GitHub:
https://github.com/demensdeum/KabanBoard

Main features

KabanBoard implements a basic and practical set of functions for working with Kanban boards.

  • Creating multiple boards for different projects
  • Column structure with task statuses
  • Task cards with the ability to edit and delete
  • Moving tasks between columns (drag & drop)
  • Color coding of cards
  • Dark interface theme

The functionality is not overloaded and is focused on everyday work with tasks.

Technologies used

The project is built on a common and understandable stack.

  • Frontend: Vue 3, Vite
  • Backend: Node.js, Express
  • Data storage: MongoDB

The client and server parts are separated, which simplifies the support and further development of the project.

Project deployment

To run locally, you will need a standard environment.

  • Node.js
  • MongoDB (locally or via cloud)

The project can be launched either in normal mode via npm or using Docker, which is convenient for quick deployment in a test or internal environment.

Practical application

KabanBoard can be used in different scenarios.

  • Internal task management tool
  • Basis for a custom Kanban solution
  • Training project for studying SPA architecture
  • Starting point for a pet project or portfolio

Conclusion

KabanBoard is a neat and practical solution for working with Kanban boards. The project does not pretend to replace large corporate systems, but is well suited for small teams, individual use and further development for specific tasks.

Gofis

Gofis is a lightweight command line tool for quickly searching files in the file system.
It is written in Go and leans heavily on concurrency (goroutines), which makes it especially efficient
on large directories and projects.

The project is available on GitHub:
https://github.com/demensdeum/gofis

🧠 What is Gofis

Gofis is a CLI utility for searching files by name, extension or regular expression.
Unlike classic tools like find, gofis was originally designed
with an emphasis on speed, readable output, and parallel directory processing.

The project is distributed under the MIT license and can be freely used
for personal and commercial purposes.

⚙️ Key features

  • Parallel directory traversal using goroutines
  • Search by file name and regular expressions
  • Filtering by extensions
  • Ignoring heavy directories (.git, node_modules, vendor)
  • Human-readable output of file sizes
  • Minimal dependencies and fast build

🚀 Installation

Building requires an installed Go toolchain.

git clone https://github.com/demensdeum/gofis
cd gofis
go build -o gofis main.go

Once built, the binary can be used directly.

There is also a standalone version for modern versions of Windows on the releases page:
https://github.com/demensdeum/gofis/releases/

🔍 Examples of use

Search files by name:

./gofis -n "config" -e ".yaml" -p ./src

Quick positional search:

./gofis "main" "./projects" 50

Search using regular expression:

./gofis "^.*\.ini$" "/"

🧩 How it works

Gofis is built on Go’s concurrency model:

  • Each directory is processed in a separate goroutine
  • A semaphore limits the number of active tasks
  • Channels carry search results back to the collector

This approach allows efficient use of CPU resources
and significantly speeds up searching on large file trees.
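The same pattern can be illustrated outside Go. Below is a rough Python analogue of the idea, with threads in place of goroutines, a `BoundedSemaphore` limiting concurrent directory reads, and a `Queue` standing in for a result channel. This is an illustration of the pattern only, not Gofis’s actual implementation:

```python
import os
import threading
import queue

def search(root, pattern, max_workers=8):
    """Walk directories concurrently, collecting file paths whose
    names contain `pattern`."""
    sem = threading.BoundedSemaphore(max_workers)  # cap concurrent directory reads
    results = queue.Queue()                        # "channel" for matches
    threads = []

    def scan(directory):
        with sem:
            try:
                entries = list(os.scandir(directory))
            except OSError:
                return  # unreadable directory: skip it
        for entry in entries:
            if entry.is_dir(follow_symlinks=False):
                if entry.name in {".git", "node_modules", "vendor"}:
                    continue                       # ignore heavy directories
                t = threading.Thread(target=scan, args=(entry.path,))
                t.start()
                threads.append(t)                  # one "goroutine" per directory
            elif pattern in entry.name:
                results.put(entry.path)

    scan(root)
    for t in threads:       # join all workers, including ones spawned mid-loop
        t.join()
    out = []
    while not results.empty():
        out.append(results.get())
    return sorted(out)
```

In Go the same shape is idiomatic with a buffered channel as the semaphore and a `sync.WaitGroup` in place of the explicit join loop.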

👨‍💻 Who is Gofis suitable for?

  • Developers working with large repositories
  • DevOps and system administrators
  • Users who need a quick search from the terminal
  • For those learning the practical uses of concurrency in Go

📌 Conclusion

Gofis is a simple but effective tool that does one thing and does it well.
If you often search for files in large projects and value speed,
this CLI tool is definitely worth a look.

ollama-call

If you use Ollama and don’t want to write your own API wrapper every time,
the ollama_call project significantly simplifies the work.

This is a small Python library that lets you send a request to a local LLM with a single function call
and immediately receive a response, optionally in JSON format.

Installation

pip install ollama-call

Why is it needed

  • minimal code for working with the model;
  • structured JSON response for further processing;
  • convenient for rapid prototypes and MVPs;
  • supports streaming output if necessary.

Use example

from ollama_call import ollama_call

response = ollama_call(
    user_prompt="Hello, how are you?",
    format="json",
    model="gemma3:12b"
)

print(response)

When it is especially useful

  • you write scripts or services on top of Ollama;
  • need a predictable response format;
  • there is no desire to connect heavy frameworks.

Conclusion

ollama_call is a lightweight and clear wrapper for working with Ollama from Python.
A good choice if simplicity and quick results are important.

GitHub
https://github.com/demensdeum/ollama_call

SFAP: a modular framework for modern data acquisition and processing

With automation and artificial intelligence developing rapidly, the task of effectively collecting,
cleaning, and transforming data becomes critical. Most solutions cover only
individual stages of this process, requiring complex integration and maintenance.

SFAP (Seek · Filter · Adapt · Publish) is an open-source Python project
that offers a holistic, extensible approach to processing data at every stage of its lifecycle:
from discovering sources to publishing the finished result.

What is SFAP

SFAP is an asynchronous framework built around a clear concept of a data processing pipeline.
Each stage is logically separate and can be independently expanded or replaced.

The project is based on the Chain of Responsibility architectural pattern, which provides:

  • pipeline configuration flexibility;
  • simple testing of individual stages;
  • scalability for high loads;
  • clean separation of responsibilities between components.

Main stages of the pipeline

Seek – data search

At this stage, data sources are discovered: web pages, APIs, file storage,
or other information flows. SFAP makes it easy to connect new sources without changing
the rest of the system.

Filter – filtering

Filtering is designed to remove noise: irrelevant content, duplicates, technical elements
and low quality data. This is critical for subsequent processing steps.

Adapt – adaptation and processing

The adaptation stage is responsible for data transformation: normalization, structuring,
semantic processing and integration with AI models (including generative ones).

Publish – publication

At the final stage, the data is published in the target format: databases, APIs, files, external services
or content platforms. SFAP does not limit how the result is delivered.
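The four stages can be wired together as a Chain of Responsibility. Here is a minimal asynchronous sketch of the concept (hypothetical class names and stand-in stage logic, not SFAP’s actual API):

```python
import asyncio

class Handler:
    """Chain of Responsibility link: process items, then pass them on."""
    def __init__(self, successor=None):
        self.successor = successor

    async def process(self, items):
        items = await self.handle(items)
        if self.successor:
            return await self.successor.process(items)
        return items

    async def handle(self, items):
        return items  # default: pass through unchanged

class Seek(Handler):
    async def handle(self, items):
        # A fixed list stands in for real source discovery (web, API, files)
        return ["doc one", "", "doc two", "doc one"]

class Filter(Handler):
    async def handle(self, items):
        # Remove noise: empty entries and duplicates, preserving order
        return list(dict.fromkeys(i for i in items if i))

class Adapt(Handler):
    async def handle(self, items):
        return [i.upper() for i in items]  # stand-in transformation step

class Publish(Handler):
    async def handle(self, items):
        return {"published": items}        # deliver in the target format

pipeline = Seek(Filter(Adapt(Publish())))
result = asyncio.run(pipeline.process([]))
```

Because each stage only knows its successor, any link can be replaced or tested in isolation, which is exactly the flexibility the pattern buys.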

Key features of the project

  • Asynchronous architecture based on asyncio
  • Modularity and extensibility
  • Support for complex processing pipelines
  • Ready for integration with AI/LLM solutions
  • Suitable for highly loaded systems

Practical use cases

  • Aggregation and analysis of news sources
  • Preparing datasets for machine learning
  • Automated content pipeline
  • Cleansing and normalizing large data streams
  • Integration of data from heterogeneous sources

Getting started with SFAP

All you need to get started is:

  1. Clone the project repository;
  2. Install Python dependencies;
  3. Define your own pipeline steps;
  4. Run the asynchronous processing pipeline.

The project is easily adapted to specific business tasks and can grow with the system,
without turning into a monolith.

Conclusion

SFAP is not just a parser or data collector, but a full-fledged framework for building
modern data-pipeline systems. It suits developers and teams who value
scalable, architecturally clean data processing.
The project source code is available on GitHub:
https://github.com/demensdeum/SFAP

FlutDataStream

A Flutter app that converts any file into a sequence of machine-readable codes (QR and DataMatrix) for high-speed data streaming between devices.

Features
* Dual encoding: represents each data block as both a QR code and a DataMatrix code.
* High-speed streaming: supports an automatic switching interval down to 330 ms.
* Smart chunking: automatically splits files into configurable chunks (default: 512 bytes).
* Detailed scanner: reads ASCII data in real time for debugging and instant feedback.
* Automatic recovery: instantly reassembles and saves files to your downloads directory.
* System integration: automatically opens the saved file with the default system application after completion.
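The chunking and recovery steps are straightforward to sketch. Below is a minimal Python illustration of fixed-size splitting with an index header so the receiver can reassemble chunks in order (the app itself is written in Dart/Flutter; the function names here are hypothetical):

```python
def chunk_file(data: bytes, chunk_size: int = 512):
    """Split a payload into fixed-size chunks, each tagged with its
    index and the total count so order can be restored on receipt."""
    total = (len(data) + chunk_size - 1) // chunk_size  # ceiling division
    return [
        (i, total, data[i * chunk_size:(i + 1) * chunk_size])
        for i in range(total)
    ]

def reassemble(chunks):
    # Sort by index and concatenate the payloads
    return b"".join(payload for _, _, payload in sorted(chunks))
```

Since codes may be scanned out of order during streaming, the index header is what makes lossless reassembly possible on the receiving device.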

https://github.com/demensdeum/FlutDataStream