AI

Running An LLM With Llama.cpp Using Docker On A Raspberry Pi

I've been curious about integrating AI agents into my workflow recently, so I started looking at how this could be done with my existing equipment. Data sovereignty is important to me, so sending all my data off to a remote AI service doesn't appeal. I was expecting to need to buy a new gaming rig with a couple of high-end graphics cards in it, but after some research I found that this wasn't the case.

I found a system called llama.cpp, which is an efficient LLM engine written in C++. The idea behind llama.cpp is that you can host small, efficient models without having to spend thousands on hardware to get them running. As I have a Raspberry Pi 5 with 16GB of RAM in my office, I thought this was a good candidate to get running.
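As a sketch of what this looks like in practice, the llama.cpp project publishes a server image that can be run under Docker. The Compose fragment below is an illustration only: the image name and tag, the model path, and the port are assumptions you would adapt to your own setup, and the model file itself has to be downloaded separately in GGUF format.

```yaml
# docker-compose.yml - illustrative sketch, not a tested configuration.
# Assumes a GGUF model file has been placed in ./models on the host.
services:
  llama:
    # Image name/tag is an assumption; check the llama.cpp docs for the
    # current multi-arch server image that supports the Pi's ARM64 CPU.
    image: ghcr.io/ggml-org/llama.cpp:server
    ports:
      - "8080:8080"
    volumes:
      - ./models:/models
    # Serve the model on all interfaces with a modest context size,
    # which keeps memory use sensible on a 16GB Pi.
    command: -m /models/model.gguf --host 0.0.0.0 --port 8080 -c 2048
```

Once running, the server exposes an HTTP API on port 8080 that other tools on the local network can talk to, which keeps everything on your own hardware.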

Programming Using AI

I've been thinking about this article for a while, but only recently have I been able to sit down, think about it properly, and collate all of my thoughts into a single article.

Over the last couple of years the term "AI" has become a sort of marketing term that is bandied about (and abused) by all sorts of companies claiming to make life easier.

In this article we will define the term "AI" in the context of programming, look at some services that you can use to produce code, and go through some pros and cons of using AI systems to write code.