<feed xmlns="http://www.w3.org/2005/Atom"><generator uri="https://github.com/mliezun/mliezun.github.io" version="20240110">mliezun.com</generator><link href="https://mliezun.com/feed.xml" rel="self" type="application/atom+xml"/><link href="https://mliezun.com/" rel="alternate" type="text/html"/><id>https://mliezun.com/</id><title>mliezun.com</title><updated>2026-02-11T00:00:00Z</updated><subtitle>I'm Miguel. Here I write mainly about programming and side projects.</subtitle><entry><title type="html">Vercel-like development setup for Django using Caddy</title><id>https://mliezun.com/2026/02/11/vercel-like-domain-django.html</id><updated>2026-02-11T00:00:00Z</updated><link href="https://mliezun.com/2026/02/11/vercel-like-domain-django.html" rel="alternate" type="text/html"/><content type="html" xml:base="https://mliezun.com/2026/02/11/vercel-like-domain-django.html">&lt;article&gt;&lt;p&gt;&lt;/p&gt;&lt;h1&gt;Vercel-like development setup for Django using Caddy&lt;/h1&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;img alt='' src='/assets/images/vercel-deployment-caddy.png'/&gt;&lt;/p&gt;&lt;p&gt;Vercel provides sites like &lt;span class='single-quote'&gt;{branch}.vercel.app&lt;/span&gt; that show a preview of your changes and it usually builds really fast, which makes it a seamless experience.&lt;/p&gt;&lt;p&gt;This post gets you the same for Django (or any ASGI/WSGI app) on a single VM with Caddy, Python and rsync.&lt;/p&gt;&lt;p&gt;On each push, package the commit as a &lt;span class='single-quote'&gt;.tar.gz&lt;/span&gt; and rsync it to &lt;span class='single-quote'&gt;/workspace/branches/{branchName}&lt;/span&gt;.&lt;/p&gt;&lt;p&gt;The VM runs &lt;a target='_blank' href='https://caddyserver.com/'&gt;Caddy&lt;/a&gt; with &lt;a target='_blank' href='https://github.com/mliezun/caddy-snake'&gt;caddy-snake&lt;/a&gt;. 
That means automatic HTTPS, WebSocket support, and fast static file serving.&lt;/p&gt;&lt;p&gt;Your production app runs on &lt;span class='single-quote'&gt;yourdomain.com&lt;/span&gt;.&lt;/p&gt;&lt;p&gt;Branches get a subdomain like &lt;span class='single-quote'&gt;feature-x.dev.yourdomain.com&lt;/span&gt;.&lt;/p&gt;&lt;h2&gt;Overview&lt;/h2&gt;&lt;p&gt;&lt;/p&gt;&lt;div class=&quot;mermaid&quot;&gt;graph LR
    subgraph repo[&quot;Code Repository&quot;]
        main[main]
        featureA[featureA]
        featureB[featureB]
    end

    subgraph cicd[&quot;CI/CD Pipeline&quot;]
        pipeline[Build &amp; Deploy]
    end

    subgraph vm[&quot;Single VM Server&quot;]
        direction TB
        maindir[&quot;📁 /workspace/branches/main/&quot;]
        fadir[&quot;📁 /workspace/branches/featureA/&quot;]
        fbdir[&quot;📁 /workspace/branches/featureB/&quot;]
        caddy[&quot;Caddy Server&quot;]
        
        maindir --&gt; caddy
        fadir --&gt; caddy
        fbdir --&gt; caddy
    end

    subgraph dns[&quot;DNS Records&quot;]
        direction TB
        dns1[&quot;yourdomain.com&lt;br/&gt;A → serverIP&quot;]
        dns2[&quot;*.dev.yourdomain.com&lt;br/&gt;A → serverIP&quot;]
    end

    subgraph domains[&quot;Public Access&quot;]
        d1[&quot;yourdomain.com&quot;]
        d2[&quot;featureA.dev.yourdomain.com&quot;]
        d3[&quot;featureB.dev.yourdomain.com&quot;]
    end

    main --&gt; pipeline
    featureA --&gt; pipeline
    featureB --&gt; pipeline
    
    pipeline --&gt; maindir
    pipeline --&gt; fadir
    pipeline --&gt; fbdir
    
    dns1 -.-&gt; vm
    dns2 -.-&gt; vm
    
    caddy --&gt; d1
    caddy --&gt; d2
    caddy --&gt; d3

    style repo fill:#e1f5ff,stroke:#0066cc
    style cicd fill:#fff4e1,stroke:#ff9800
    style vm fill:#f3e5f5,stroke:#9c27b0
    style dns fill:#ffe0e0,stroke:#d32f2f
    style domains fill:#e8f5e9,stroke:#4caf50
    style caddy fill:#c5cae9,stroke:#3f51b5&lt;/div&gt;&lt;p&gt;&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;strong&gt;CI/CD&lt;/strong&gt;: On each push, build a &lt;span class='single-quote'&gt;.tar.gz&lt;/span&gt; and rsync to &lt;span class='single-quote'&gt;/workspace/branches/$BRANCH/&lt;/span&gt;.&lt;/li&gt;&lt;li&gt;&lt;strong&gt;VM&lt;/strong&gt;: Caddy + caddy-snake. Production at &lt;span class='single-quote'&gt;yourdomain.com&lt;/span&gt;, previews at &lt;span class='single-quote'&gt;*.dev.yourdomain.com&lt;/span&gt;.&lt;/li&gt;&lt;li&gt;&lt;strong&gt;DNS&lt;/strong&gt;: A records for &lt;span class='single-quote'&gt;yourdomain.com&lt;/span&gt; and &lt;span class='single-quote'&gt;*.dev.yourdomain.com&lt;/span&gt; → VM IP.&lt;/li&gt;&lt;li&gt;&lt;strong&gt;Caddy&lt;/strong&gt;: Single configuration file and automatic HTTPS.&lt;/li&gt;&lt;/ul&gt;&lt;h2&gt;1. Deploy pattern: tar.gz + rsync&lt;/h2&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;On every push (e.g. in GitHub Actions or GitLab CI), do something like:&lt;/p&gt;&lt;pre class=&quot;triple-quote bash&quot;&gt;# Example: in CI, after checkout
BRANCH_NAME=&quot;${GITHUB_REF#refs/heads/}&quot;
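# Optional: normalize the branch name so the filesystem path and the
# subdomain match (replace '/' with '-', lowercase; adjust to your rules)
BRANCH_NAME=&quot;$(printf '%s' &quot;$BRANCH_NAME&quot; | tr '/' '-' | tr '[:upper:]' '[:lower:]')&quot;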
tar -czf app.tar.gz \
  --exclude='.git' \
  --exclude='__pycache__' \
  --exclude='.venv' \
  .
# ensure the branch directory exists before the first deploy
ssh deploy@YOUR_VM_IP &quot;mkdir -p /workspace/branches/$BRANCH_NAME&quot;
rsync -avz app.tar.gz \
  deploy@YOUR_VM_IP:/workspace/branches/&quot;$BRANCH_NAME&quot;/
ssh deploy@YOUR_VM_IP &quot;cd /workspace/branches/$BRANCH_NAME \
  &amp;&amp; tar -xzf app.tar.gz \
  &amp;&amp; rm app.tar.gz \
  &amp;&amp; uv sync \
  &amp;&amp; uv run manage.py migrate \
  &amp;&amp; uv run manage.py collectstatic --noinput&quot;&lt;/pre&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Sanitize branch names in CI if needed (e.g. replace &lt;span class='single-quote'&gt;/&lt;/span&gt; with &lt;span class='single-quote'&gt;-&lt;/span&gt;) so the path and subdomain match.&lt;/p&gt;&lt;p&gt;The server can create the venv per branch with &lt;span class='single-quote'&gt;uv sync&lt;/span&gt; (as in the example) or you can ship a pre-built &lt;span class='single-quote'&gt;.venv&lt;/span&gt;.&lt;/p&gt;&lt;h2&gt;2. Caddy + caddy-snake&lt;/h2&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Install Caddy with the &lt;a target='_blank' href='https://github.com/mliezun/caddy-snake'&gt;caddy-snake&lt;/a&gt; plugin (see repo for build steps; &lt;a target='_blank' href='https://github.com/mliezun/caddy-snake/releases'&gt;releases&lt;/a&gt; often include a binary).&lt;/p&gt;&lt;h3&gt;Example Caddyfile&lt;/h3&gt;&lt;p&gt;&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;strong&gt;Production&lt;/strong&gt;: &lt;span class='single-quote'&gt;yourdomain.com&lt;/span&gt; → fixed &lt;span class='single-quote'&gt;working_dir&lt;/span&gt; for &lt;span class='single-quote'&gt;main&lt;/span&gt;.&lt;/li&gt;&lt;li&gt;&lt;strong&gt;Previews&lt;/strong&gt;: &lt;span class='single-quote'&gt;*.dev.yourdomain.com&lt;/span&gt; → &lt;span class='single-quote'&gt;working_dir&lt;/span&gt; uses the placeholder; &lt;span class='single-quote'&gt;autoreload&lt;/span&gt; so the next request after a push serves the new code.&lt;/li&gt;&lt;/ul&gt;&lt;pre class=&quot;triple-quote Caddyfile&quot;&gt;yourdomain.com {
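  # Production: working_dir is pinned to the main branch deployment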
  handle_path /static/* {
    root * /workspace/branches/main/staticfiles
    file_server
  }
  route {
    python {
      module_wsgi app:wsgi.application
      working_dir /workspace/branches/main/
      venv .venv
    }
  }
}

*.dev.yourdomain.com {
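  # Previews: host labels are indexed right to left, so labels.3 is the branch subdomain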
  handle_path /static/* {
    root * /workspace/branches/{http.request.host.labels.3}/staticfiles
    file_server
  }
  route {
    python {
      module_wsgi app:wsgi.application
      working_dir /workspace/branches/{http.request.host.labels.3}/
      venv .venv
      workers_runtime thread
      autoreload
    }
  }
}&lt;/pre&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Notes:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;span class='single-quote'&gt;workers_runtime thread&lt;/span&gt; is required with placeholders and &lt;span class='single-quote'&gt;autoreload&lt;/span&gt;.&lt;/li&gt;&lt;li&gt;Use your Django WSGI path (e.g. &lt;span class='single-quote'&gt;mysite.wsgi:application&lt;/span&gt;) instead of &lt;span class='single-quote'&gt;app:wsgi.application&lt;/span&gt;.&lt;/li&gt;&lt;li&gt;The branch directory must exist before the first request (CI creates it). Caddy-snake loads and caches the app on first request; &lt;span class='single-quote'&gt;autoreload&lt;/span&gt; watches for &lt;span class='single-quote'&gt;.py&lt;/span&gt; changes after each deploy.&lt;/li&gt;&lt;li&gt;Set Django’s &lt;span class='single-quote'&gt;STATIC_ROOT&lt;/span&gt; to &lt;span class='single-quote'&gt;staticfiles&lt;/span&gt; so &lt;span class='single-quote'&gt;collectstatic&lt;/span&gt; matches the &lt;span class='single-quote'&gt;staticfiles/&lt;/span&gt; paths in the Caddyfile.&lt;/li&gt;&lt;/ul&gt;&lt;h3&gt;Subdomain → branch name&lt;/h3&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Caddy splits the host into labels, &lt;strong&gt;right to left&lt;/strong&gt;: for &lt;span class='single-quote'&gt;featureA.dev.yourdomain.com&lt;/span&gt;, &lt;span class='single-quote'&gt;labels.0&lt;/span&gt; = &lt;span class='single-quote'&gt;com&lt;/span&gt;, &lt;span class='single-quote'&gt;labels.1&lt;/span&gt; = &lt;span class='single-quote'&gt;yourdomain&lt;/span&gt;, &lt;span class='single-quote'&gt;labels.2&lt;/span&gt; = &lt;span class='single-quote'&gt;dev&lt;/span&gt;, &lt;span class='single-quote'&gt;labels.3&lt;/span&gt; = &lt;span class='single-quote'&gt;featureA&lt;/span&gt;. 
So the branch is &lt;span class='single-quote'&gt;{http.request.host.labels.3}&lt;/span&gt; for &lt;span class='single-quote'&gt;*.dev.yourdomain.com&lt;/span&gt;.&lt;/p&gt;&lt;p&gt;Adjust the index if your dev domain has a different structure.&lt;/p&gt;&lt;h2&gt;3. DNS configuration&lt;/h2&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Add a wildcard &quot;A record&quot; for &lt;span class='single-quote'&gt;*.dev.yourdomain.com&lt;/span&gt; (the exact syntax depends on your DNS provider) pointing to the VM’s IP.&lt;/p&gt;&lt;p&gt;After propagation, Caddy can obtain a cert for &lt;span class='single-quote'&gt;*.dev.yourdomain.com&lt;/span&gt; via ACME (wildcard certificates require the DNS challenge, so configure a DNS provider module in Caddy) and route each subdomain to the matching branch directory.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;h2&gt;4. What happens at runtime&lt;/h2&gt;&lt;p&gt;&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;strong&gt;First request&lt;/strong&gt; to &lt;span class='single-quote'&gt;feature-x.dev.yourdomain.com&lt;/span&gt;: Caddy resolves the placeholder to &lt;span class='single-quote'&gt;/workspace/branches/feature-x/&lt;/span&gt;; caddy-snake loads and caches the app. If that directory doesn’t exist yet (branch not deployed), the request fails until CI has run.&lt;/li&gt;&lt;li&gt;&lt;strong&gt;After a push&lt;/strong&gt;: CI overwrites the branch directory; the next request triggers &lt;span class='single-quote'&gt;autoreload&lt;/span&gt; and serves the new code.&lt;/li&gt;&lt;li&gt;&lt;strong&gt;New branch&lt;/strong&gt;: Push → CI creates the directory → first request to &lt;span class='single-quote'&gt;new-branch.dev.yourdomain.com&lt;/span&gt; loads it.&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;Push a branch, wait for CI, then open &lt;span class='single-quote'&gt;{branch}.dev.yourdomain.com&lt;/span&gt;.&lt;/p&gt;&lt;h2&gt;Bonus: clean up on merge&lt;/h2&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;When a branch is merged and deleted, trigger a pipeline step that removes its directory on the VM (e.g. 
&lt;span class='single-quote'&gt;ssh deploy@VM &quot;rm -rf /workspace/branches/$BRANCH_NAME&quot;&lt;/span&gt;). Caddy-snake sees the directory is gone and stops serving that app; the next request to &lt;span class='single-quote'&gt;{branch}.dev.yourdomain.com&lt;/span&gt; fails until a new branch with that name is deployed. That keeps preview URLs only for active branches and frees disk space.&lt;/p&gt;&lt;/article&gt;</content><author><name>Miguel Liezun</name></author><category term="posts"/><summary type="html">Get {branch}.dev.yourdomain.com previews for Django (or any WSGI app) with a single VM, Caddy, and rsync—no Kubernetes required.</summary></entry><entry><title type="html">Work Hard</title><id>https://mliezun.com/2026/01/26/work-hard.html</id><updated>2026-01-26T00:00:00Z</updated><link href="https://mliezun.com/2026/01/26/work-hard.html" rel="alternate" type="text/html"/><content type="html" xml:base="https://mliezun.com/2026/01/26/work-hard.html">&lt;article&gt;&lt;p&gt;&lt;/p&gt;&lt;h1&gt;Work Hard&lt;/h1&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;If working hard is not working long hours nor performing strenuous tasks, then what is it?&lt;/p&gt;&lt;p&gt;Working hard is working with purpose and care.&lt;/p&gt;&lt;p&gt;We often relate the following conditions to hard work, but they're not part of its essence:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;Working long hours.&lt;/li&gt;&lt;li&gt;Not taking breaks/days off.&lt;/li&gt;&lt;li&gt;Performing physically strenuous activities.&lt;/li&gt;&lt;li&gt;Toxic work environment.&lt;/li&gt;&lt;li&gt;Being away from friends and family.&lt;/li&gt;&lt;li&gt;Repetitive and boring tasks.&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;Those properties are part of many jobs, but none of them are actually necessary for hard work.&lt;/p&gt;&lt;p&gt;An example from my own work as a Software Engineer: when I try to solve a problem I never really stop working; my mind stays connected to the topic, trying to draw inspiration from the 
environment.&lt;/p&gt;&lt;p&gt;I've had plenty of epiphany moments on how to solve a problem or improve a system while doing something completely different: jogging, watching a movie, walking around the city, talking with friends.&lt;/p&gt;&lt;p&gt;This happens when you're tasked with something challenging that sparks your creativity, and you care about the result.&lt;/p&gt;&lt;p&gt;The more you care, the more you engage that creative side that is working in the background all the time. But then it has to be followed by the discipline to deliver what you have envisioned.&lt;/p&gt;&lt;p&gt;Being creative is what makes us human, and in the age of AI it's more important than ever.&lt;/p&gt;&lt;p&gt;Try to put yourself in a line of work where you care about what you do, one that lets you work towards a purpose you find valuable and express your creativity. That will make you work hard, and then you won't mind working longer hours or during weekends.&lt;/p&gt;&lt;p&gt;What is caring?&lt;/p&gt;&lt;p&gt;In a workplace it can mean different things:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;Solving a problem.&lt;/li&gt;&lt;li&gt;Improving some part of the system.&lt;/li&gt;&lt;li&gt;Being brave enough to speak up and challenge the status quo.&lt;/li&gt;&lt;li&gt;Respecting others' opinions even if you think they're wrong.&lt;/li&gt;&lt;li&gt;Engaging in dialogue.&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;&lt;/p&gt;&lt;h2&gt;To summarize&lt;/h2&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;For me, working hard can be summarized in 3 words:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;Care&lt;/li&gt;&lt;li&gt;Purpose&lt;/li&gt;&lt;li&gt;Responsibility&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;If those things are part of your work, then you're working hard.&lt;/p&gt;&lt;/article&gt;</content><author><name>Miguel Liezun</name></author><category term="posts"/><summary type="html"></summary></entry><entry><title type="html">What's in the 
future?</title><id>https://mliezun.com/2025/12/22/whats-in-the-future.html</id><updated>2025-12-22T00:00:00Z</updated><link href="https://mliezun.com/2025/12/22/whats-in-the-future.html" rel="alternate" type="text/html"/><content type="html" xml:base="https://mliezun.com/2025/12/22/whats-in-the-future.html">&lt;article&gt;&lt;p&gt;&lt;/p&gt;&lt;h1&gt;What's in the future?&lt;/h1&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;img alt='' src='/assets/images/length-of-tasks-log.png'/&gt;&lt;/p&gt;&lt;p&gt;Capabilities of AI tools are still advancing rapidly, doubling the length of tasks AI can do every 7 months.&lt;/p&gt;&lt;p&gt;At the end of 2025, we're sitting at around 5 hours. This means that AI tools can now complete tasks that would take a human around 5 hours, and they succeed 50% of the time.&lt;/p&gt;&lt;p&gt;By the end of 2026 this will be close to 20 hours, roughly half of a standard human work-week.&lt;/p&gt;&lt;p&gt;But what's more striking is that for Software Engineering this is accelerating at a higher pace.&lt;/p&gt;&lt;p&gt;&lt;img alt='' src='/assets/images/time-horizon-swe.png'/&gt;&lt;/p&gt;&lt;p&gt;For coding, we're seeing capabilities double in just 70 days!&lt;/p&gt;&lt;p&gt;Extrapolating this trend, discussing it with friends, and seeing what people are saying online, it's clear that the burden of software is less and less about coding and increasingly about proper design and verification.&lt;/p&gt;&lt;p&gt;Even Karpathy is saying: &lt;a target='_blank' href='https://x.com/karpathy/status/1990116666194456651'&gt;Software 2.0 easily automates what you can verify.&lt;/a&gt;&lt;/p&gt;&lt;p&gt;He's talking about writing software to solve problems that can be verified to be correct or not with a certain degree of accuracy.&lt;/p&gt;&lt;p&gt;For coding in particular this means we need tools to verify the quality of the code written. 
The better our tools are at formally verifying code, the more we can trust our automation to work.&lt;/p&gt;&lt;p&gt;Seeing the landscape of what's out there, I think we're missing a tool that has all of the following:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;Ownership of the software process: Design + Implement + Test (like Cursor)&lt;/li&gt;&lt;li&gt;Hosts the code itself (like Github/Gitlab)&lt;/li&gt;&lt;li&gt;Runs the code and makes it available (like Vercel)&lt;/li&gt;&lt;li&gt;Checks that the code is correct and provides guardrails to users (static analysis)&lt;/li&gt;&lt;li&gt;Is opinionated in what technology should be used (e.g. Python-only hosting)&lt;/li&gt;&lt;li&gt;Provides integration with other services (e.g. easy Stripe integration)&lt;/li&gt;&lt;li&gt;Open source&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;The closest we have to this are Lovable, Replit and v0 from Vercel.&lt;/p&gt;&lt;p&gt;But those platforms still rely on external services and are not open source. I think LLMs will work best when we're able to let them handle the full environment where our code lives, and making these platforms open source will enable people to trust them and run them on their own.&lt;/p&gt;&lt;/article&gt;</content><author><name>Miguel Liezun</name></author><category term="posts"/><summary type="html">Capabilities of AI tools are still advancing rapidly, doubling the length of tasks AI can do every 7 months. 
But what's more striking is that for Software Engineering this is accelerating at a higher pace.</summary></entry><entry><title type="html">AI generated content in this blog</title><id>https://mliezun.com/2025/11/19/ai-blog.html</id><updated>2025-11-19T00:00:00Z</updated><link href="https://mliezun.com/2025/11/19/ai-blog.html" rel="alternate" type="text/html"/><content type="html" xml:base="https://mliezun.com/2025/11/19/ai-blog.html">&lt;article&gt;&lt;p&gt;&lt;/p&gt;&lt;h1&gt;AI generated content in this blog&lt;/h1&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;img alt='' src='/assets/images/ai-generated-content/not-ai.png'/&gt;&lt;/p&gt;&lt;p&gt;In the last year I increased my usage of LLM-based tools for code generation.&lt;/p&gt;&lt;p&gt;They have been of great help in improving the CSS styles and adding support for Dark and Light mode in this blog.&lt;/p&gt;&lt;p&gt;The previous post, &lt;a target='_blank' href='https://mliezun.com/2025/10/16/benchmarking-wagtail'&gt;Benchmarking Wagtail&lt;/a&gt;, was almost completely generated by Cursor. 
I did review it and decided to publish it, but something felt off.&lt;/p&gt;&lt;p&gt;That led me to think about the purpose of this blog itself.&lt;/p&gt;&lt;p&gt;I started this blog with the intention of it being a place for tinkering, expressing myself and being creative.&lt;/p&gt;&lt;p&gt;I don't want to use LLM tools to generate the content of my posts anymore, since I wouldn't really be expressing myself.&lt;/p&gt;&lt;p&gt;I will still use them to modify some parts of the blog's engine and probably still use them for CSS styles.&lt;/p&gt;&lt;p&gt;For the sake of the readers, I introduced some new tags that let you know if the content of a post contains AI-generated text or is completely human written.&lt;/p&gt;&lt;/article&gt;</content><author><name>Miguel Liezun</name></author><category term="posts"/><summary type="html"></summary></entry><entry><title type="html">Benchmarking Wagtail CMS Across Python Versions</title><id>https://mliezun.com/2025/10/16/benchmarking-wagtail.html</id><updated>2025-10-16T00:00:00Z</updated><link href="https://mliezun.com/2025/10/16/benchmarking-wagtail.html" rel="alternate" type="text/html"/><content type="html" xml:base="https://mliezun.com/2025/10/16/benchmarking-wagtail.html">&lt;article&gt;&lt;p&gt;&lt;/p&gt;&lt;h1&gt;Benchmarking Wagtail CMS Across Python Versions&lt;/h1&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Testing real-world performance across 8 different Python versions&lt;/strong&gt;&lt;/p&gt;&lt;p&gt;&lt;em&gt;Published: October 16, 2025&lt;/em&gt;&lt;/p&gt;&lt;p&gt;&lt;img alt='' src='/assets/images/wagtail-benchmark/python_versions.png'/&gt;&lt;/p&gt;&lt;h2&gt;Introduction&lt;/h2&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;I've been curious about how different Python versions perform in real-world web applications. 
Python keeps getting faster with each release, but I wanted to see how much of a difference it actually makes when running a real application under load.&lt;/p&gt;&lt;p&gt;So I decided to benchmark &lt;strong&gt;Wagtail CMS&lt;/strong&gt;, a Django-based content management system, across 8 different Python versions. This is the first in what I hope will be a series of benchmarks testing different Python web applications.&lt;/p&gt;&lt;p&gt;Here's what I tested:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;strong&gt;Python 3.10&lt;/strong&gt; (our baseline)&lt;/li&gt;&lt;li&gt;&lt;strong&gt;Python 3.11&lt;/strong&gt;&lt;/li&gt;&lt;li&gt;&lt;strong&gt;Python 3.12&lt;/strong&gt;&lt;/li&gt;&lt;li&gt;&lt;strong&gt;Python 3.13&lt;/strong&gt;&lt;/li&gt;&lt;li&gt;&lt;strong&gt;Python 3.13t&lt;/strong&gt; (free-threaded, GIL disabled)&lt;/li&gt;&lt;li&gt;&lt;strong&gt;Python 3.14&lt;/strong&gt;&lt;/li&gt;&lt;li&gt;&lt;strong&gt;Python 3.14-tailcall&lt;/strong&gt; (experimental tailcall optimization)&lt;/li&gt;&lt;li&gt;&lt;strong&gt;Python 3.14t&lt;/strong&gt; (free-threaded, GIL disabled)&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;The results were pretty interesting! 
You can check out the full benchmark setup and results at &lt;a target='_blank' href='https://github.com/mliezun/python-web-benchmarks'&gt;https://github.com/mliezun/python-web-benchmarks&lt;/a&gt;.&lt;/p&gt;&lt;h2&gt;How I Set Up the Tests&lt;/h2&gt;&lt;p&gt;&lt;/p&gt;&lt;h3&gt;The Setup&lt;/h3&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;I wanted to keep things realistic, so I used:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;strong&gt;Wagtail CMS 6.3&lt;/strong&gt; with Django 5.1.4 (a real production setup)&lt;/li&gt;&lt;li&gt;&lt;strong&gt;Gunicorn 21.2.0&lt;/strong&gt; as the WSGI server&lt;/li&gt;&lt;li&gt;&lt;strong&gt;SQLite&lt;/strong&gt; with actual content (15+ blog posts, multiple pages)&lt;/li&gt;&lt;li&gt;&lt;strong&gt;Apache Bench (ab)&lt;/strong&gt; for load testing&lt;/li&gt;&lt;li&gt;&lt;strong&gt;Docker&lt;/strong&gt; to keep everything isolated and reproducible&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;For the experimental Python versions, I used custom Docker images:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;span class='single-quote'&gt;ghcr.io/mliezun/python-web-benchmarks/python-freethreaded:3.14t-slim&lt;/span&gt;&lt;/li&gt;&lt;li&gt;&lt;span class='single-quote'&gt;ghcr.io/mliezun/python-web-benchmarks/python-freethreaded:3.13t-slim&lt;/span&gt;  &lt;/li&gt;&lt;li&gt;&lt;span class='single-quote'&gt;ghcr.io/mliezun/python-web-benchmarks/python-freethreaded:3.14-tailcall-slim&lt;/span&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h3&gt;Test Scenarios&lt;/h3&gt;&lt;p&gt;I ran three scenarios that you'd actually encounter in real usage:&lt;/p&gt;&lt;p&gt;1. &lt;strong&gt;Homepage Load&lt;/strong&gt;: Full page with navigation, sections, and dynamic content&lt;/p&gt;&lt;p&gt;2. &lt;strong&gt;Admin Login&lt;/strong&gt;: Authentication and admin interface access&lt;/p&gt;&lt;p&gt;3. 
&lt;strong&gt;Blog Browsing&lt;/strong&gt;: Listing and viewing blog posts&lt;/p&gt;&lt;p&gt;Each scenario was tested with &lt;strong&gt;1, 5, 10, and 20 concurrent users&lt;/strong&gt; to see how things scale.&lt;/p&gt;&lt;h3&gt;What I Measured&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;&lt;strong&gt;Requests per Second (RPS)&lt;/strong&gt;: How many requests we can handle&lt;/li&gt;&lt;li&gt;&lt;strong&gt;Response Time&lt;/strong&gt;: Average, p50, p95, and p99 percentiles&lt;/li&gt;&lt;li&gt;&lt;strong&gt;CPU Usage&lt;/strong&gt;: How hard the server was working&lt;/li&gt;&lt;li&gt;&lt;strong&gt;Memory Usage&lt;/strong&gt;: RAM consumption during tests&lt;/li&gt;&lt;/ul&gt;&lt;h2&gt;The Results&lt;/h2&gt;&lt;p&gt;&lt;/p&gt;&lt;h3&gt;Overall Throughput Performance&lt;/h3&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;img alt='' src='/assets/images/wagtail-benchmark/rps_comparison.png'/&gt;&lt;/p&gt;&lt;p&gt;Here's how many requests per second each Python version can handle. Higher is better, obviously.&lt;/p&gt;&lt;h3&gt;Response Time Analysis&lt;/h3&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;img alt='' src='/assets/images/wagtail-benchmark/response_time_comparison.png'/&gt;&lt;/p&gt;&lt;p&gt;This shows how response times change as we add more concurrent users. Lower is better here.&lt;/p&gt;&lt;h3&gt;Relative Performance Improvements&lt;/h3&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;img alt='' src='/assets/images/wagtail-benchmark/performance_gains.png'/&gt;&lt;/p&gt;&lt;p&gt;This is probably the most interesting chart - it shows how much faster each version is compared to Python 3.10 (our baseline). Positive numbers mean it's faster.&lt;/p&gt;&lt;h3&gt;Resource Utilization&lt;/h3&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;img alt='' src='/assets/images/wagtail-benchmark/resource_usage.png'/&gt;&lt;/p&gt;&lt;p&gt;CPU and memory usage is important for capacity planning. 
You want good performance without burning through resources.&lt;/p&gt;&lt;h2&gt;What I Found&lt;/h2&gt;&lt;p&gt;&lt;/p&gt;&lt;h3&gt;🏆 Python 3.11 is the Winner&lt;/h3&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Python 3.11 came out on top&lt;/strong&gt; with about 3.2% better performance than Python 3.10 on average. That might not sound like much, but when you're handling thousands of requests per day, it adds up.&lt;/p&gt;&lt;h3&gt;The Big Picture&lt;/h3&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;1. &lt;strong&gt;Experimental versions are interesting&lt;/strong&gt;: Python 3.14t (free-threaded) shows some promising results for concurrent workloads, but I'd be careful about using it in production just yet.&lt;/p&gt;&lt;p&gt;2. &lt;strong&gt;All versions scale well&lt;/strong&gt;: Every version handled 20 concurrent users without breaking a sweat. The newer versions just do it slightly better.&lt;/p&gt;&lt;p&gt;3. &lt;strong&gt;Memory usage is consistent&lt;/strong&gt;: All versions used roughly the same amount of RAM (~300-360 MB), so upgrading won't cost you more in memory.&lt;/p&gt;&lt;p&gt;4. &lt;strong&gt;Everything was stable&lt;/strong&gt;: Zero errors across all tests. All versions are production-ready.&lt;/p&gt;&lt;h2&gt;My Recommendations&lt;/h2&gt;&lt;p&gt;&lt;/p&gt;&lt;h3&gt;For Production&lt;/h3&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Go with Python 3.11&lt;/strong&gt; if you want the best performance. 
It's stable, fast, and has been around long enough that most packages support it well.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Python 3.12&lt;/strong&gt; is also a solid choice if you want something newer but still conservative.&lt;/p&gt;&lt;p&gt;I'd hold off on the experimental versions (3.14t, 3.14-tailcall) for production use until they're more mature.&lt;/p&gt;&lt;h3&gt;If You're Upgrading&lt;/h3&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;If you're currently on Python 3.10 or earlier, here's what I'd suggest:&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Important note&lt;/strong&gt;: Python 3.10 reaches end-of-life on &lt;strong&gt;October 31, 2026&lt;/strong&gt; according to the &lt;a target='_blank' href='https://devguide.python.org/versions/'&gt;Python Developer's Guide&lt;/a&gt;. After that date it will no longer receive security fixes, so you'll want to plan your upgrade well before then.&lt;/p&gt;&lt;p&gt;1. &lt;strong&gt;Test everything&lt;/strong&gt;: Run your full test suite on the target version first&lt;/p&gt;&lt;p&gt;2. &lt;strong&gt;Check your dependencies&lt;/strong&gt;: Make sure all your packages support the new version&lt;/p&gt;&lt;p&gt;3. &lt;strong&gt;Benchmark your specific app&lt;/strong&gt;: My results are for Wagtail - your mileage may vary&lt;/p&gt;&lt;p&gt;4. &lt;strong&gt;Deploy carefully&lt;/strong&gt;: Use canary deployments or blue-green strategies&lt;/p&gt;&lt;p&gt;The good news is that memory usage is pretty consistent across versions, so you won't need to resize your servers.&lt;/p&gt;&lt;h2&gt;Wrapping Up&lt;/h2&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;So what did I learn from all this testing?&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Python version upgrades are worth it.&lt;/strong&gt; You get measurable performance improvements without needing more resources. Python 3.11 is the sweet spot right now - fast, stable, and well-supported.&lt;/p&gt;&lt;p&gt;The experimental versions are definitely interesting to watch. 
The free-threaded versions (3.13t, 3.14t) could be a game-changer for certain workloads, but I'd wait until they're more mature before using them in production.&lt;/p&gt;&lt;p&gt;I'm planning to benchmark more Python web applications in the future - maybe Django REST Framework, FastAPI, or even some data processing workloads. If you have suggestions for what to test next, let me know!&lt;/p&gt;&lt;p&gt;You can find all the benchmark code, results, and setup instructions at &lt;a target='_blank' href='https://github.com/mliezun/python-web-benchmarks'&gt;https://github.com/mliezun/python-web-benchmarks&lt;/a&gt;. Feel free to run your own tests or contribute to the project.&lt;/p&gt;&lt;/article&gt;</content><author><name>Miguel Liezun</name></author><category term="posts"/><summary type="html">I spent some time benchmarking Wagtail CMS across different Python versions to see how performance varies. The results are pretty interesting - Python 3.11 comes out on top with some solid improvements over 3.10.</summary></entry><entry><title type="html">Serving Python apps using the Caddy web server</title><id>https://mliezun.com/2025/10/03/caddy-snake-v2.html</id><updated>2025-10-03T00:00:00Z</updated><link href="https://mliezun.com/2025/10/03/caddy-snake-v2.html" rel="alternate" type="text/html"/><content type="html" xml:base="https://mliezun.com/2025/10/03/caddy-snake-v2.html">&lt;article&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;h1&gt;Serving Python apps using Caddy web server&lt;/h1&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Caddy is an enterprise grade web server and reverse proxy. 
I recently released a new version of &lt;span class='single-quote'&gt;caddysnake&lt;/span&gt;, a plugin to serve python apps more easily using Caddy without needing to install another server like gunicorn, uvicorn, hypercorn or others.&lt;/p&gt;&lt;p&gt;&lt;img alt='' src='/assets/images/OIP-3356949409.jpg'/&gt;&lt;/p&gt;&lt;p&gt;In this latest release there are some new features to explore:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;Distribution of a single caddy binary with bundled python.&lt;/li&gt;&lt;li&gt;A new &lt;span class='single-quote'&gt;python-server&lt;/span&gt; subcommand.&lt;/li&gt;&lt;li&gt;Multiple worker processes.&lt;/li&gt;&lt;li&gt;A package available on PyPI for a quick install: &lt;span class='single-quote'&gt;caddysnake&lt;/span&gt;.&lt;/li&gt;&lt;/ul&gt;&lt;h2&gt;Distribution of a single binary&lt;/h2&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;The latest release is &lt;a target='_blank' href='https://github.com/mliezun/caddy-snake/releases'&gt;available on github&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;For example, you can now see &lt;span class='single-quote'&gt;caddy-standalone-3.10-x86_64_v2-unknown-linux-gnu.zip&lt;/span&gt; among the downloadable assets.&lt;/p&gt;&lt;p&gt;After downloading and unzipping you get a single binary named &lt;span class='single-quote'&gt;caddy&lt;/span&gt;. 
This packages an entire Python distribution inside, so you don't need to install Python on the target system.&lt;/p&gt;&lt;p&gt;This is useful for Docker images: you no longer have to install the correct Python version, just include the caddy binary, which carries all of its dependencies along.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;h2&gt;Python server subcommand&lt;/h2&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;With the new release you get a shorthand: &lt;span class='single-quote'&gt;caddy python-server&lt;/span&gt;.&lt;/p&gt;&lt;p&gt;This configures a virtual &lt;span class='single-quote'&gt;Caddyfile&lt;/span&gt; and runs the server for you.&lt;/p&gt;&lt;p&gt;For example, if you have a &lt;span class='single-quote'&gt;main.py&lt;/span&gt; file with a Flask app inside, you simply run:&lt;/p&gt;&lt;pre class=&quot;triple-quote &quot;&gt;caddy python-server -t wsgi -a main:app&lt;/pre&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;This starts serving requests on http://localhost:9080/ by default.&lt;/p&gt;&lt;p&gt;See more settings with &lt;span class='single-quote'&gt;caddy python-server --help&lt;/span&gt;.&lt;/p&gt;&lt;h2&gt;Multiple worker processes&lt;/h2&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Up until now caddysnake served requests from a single Python process; given that the GIL hasn't been removed yet, that meant we were not fully utilizing all the available cores.&lt;/p&gt;&lt;p&gt;Now, by default, caddysnake starts as many processes as your CPU has cores, configurable with &lt;span class='single-quote'&gt;--workers &lt;n&gt;&lt;/span&gt;.&lt;/p&gt;&lt;p&gt;This increases throughput, but also the communication overhead between the Go and Python processes.&lt;/p&gt;&lt;h2&gt;Package available on PyPI&lt;/h2&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Now it's possible to install &lt;span class='single-quote'&gt;caddysnake&lt;/span&gt; as a package using pip:&lt;/p&gt;&lt;pre class=&quot;triple-quote &quot;&gt;pip install caddysnake
caddysnake python-server --help&lt;/pre&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;This is a quick way to get started without the need to compile caddy yourself.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;/article&gt;</content><author><name>Miguel Liezun</name></author><category term="posts"/><summary type="html">Caddy is an enterprise grade web server and reverse proxy. I recently released a new version of `caddysnake`, a plugin to serve Python apps more easily with Caddy, without the need to install another server like gunicorn, uvicorn or hypercorn.</summary></entry><entry><title type="html">The Enduring Simplicity of C</title><id>https://mliezun.com/2025/09/22/enduring-simplicity-c.html</id><updated>2025-09-22T00:00:00Z</updated><link href="https://mliezun.com/2025/09/22/enduring-simplicity-c.html" rel="alternate" type="text/html"/><content type="html" xml:base="https://mliezun.com/2025/09/22/enduring-simplicity-c.html">&lt;article&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;h1&gt;The Enduring Simplicity of C&lt;/h1&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;I recently came across a video by Salvatore Sanfilippo, the creator of Redis, titled &quot;Why I use C and not Rust&quot;. It made me stop and think about the constant push for newer, &quot;safer&quot; programming languages.&lt;/p&gt;&lt;p&gt;Watch the video &lt;a target='_blank' href='https://www.youtube.com/watch?v=5SLsH755XAA'&gt;here&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;The gist of his argument is that while C has well-known memory safety issues, the modern alternatives, particularly Rust, come with their own significant costs. This isn't just about nostalgia for an older language; it's a pragmatic choice based on a different set of values.&lt;/p&gt;&lt;p&gt;His points really resonated with me:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;strong&gt;Safety has a complexity cost.&lt;/strong&gt; Rust's main selling point is memory safety, but this safety net makes certain things, like a doubly linked list, much more complex to implement. 
Sanfilippo argues that he'd rather have the directness and simplicity of C and manage the risks with good practices and tools like Valgrind.&lt;/li&gt;&lt;li&gt;&lt;strong&gt;The curse of hyper-dependencies.&lt;/strong&gt; He makes a point that C's lack of a built-in package manager is a feature, not a bug. In ecosystems like Rust or Node.js, it's incredibly easy to accumulate massive dependency trees for simple projects. C forces you to be more deliberate.&lt;/li&gt;&lt;li&gt;&lt;strong&gt;The need for speed.&lt;/strong&gt; The feedback loop in programming is critical. Sanfilippo notes that C compiles in seconds (or fractions of a second), while Rust is known for &quot;biblical&quot; compilation times. He argues that waiting for the compiler kills productivity and flow.&lt;/li&gt;&lt;li&gt;&lt;strong&gt;AI understands C better.&lt;/strong&gt; This was the most surprising insight. Because of C's simple semantics and the sheer volume of C code in the world, LLMs are currently much better at reasoning about and generating C code than Rust. In an age where AI is becoming a co-pilot, this is a huge practical advantage.&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;It's not about being against progress. It's a realization that the &quot;new and safe&quot; way isn't always the &quot;better&quot; way. In engineering we always face compromises, and deciding which language to use is just one of those cases.&lt;/p&gt;&lt;p&gt;Personally, I like both languages. I'm more fond of C because of its simplicity, but the Rust toolchain and the community pushing it forward make it really easy to start a project and get everything you need to start building.&lt;/p&gt;&lt;p&gt;A systems-level programmer like Salvatore might pick C because he needs to build an object system perfectly tailored for his application. 
But I mostly spend time writing APIs and glue code in Python; I would pick up Rust when I need a little extra speed, because it's easy to integrate with Python code thanks to PyO3.&lt;/p&gt;&lt;/article&gt;</content><author><name>Miguel Liezun</name></author><category term="posts"/><summary type="html">I recently came across a video by Salvatore Sanfilippo, the creator of Redis, titled "Why I use C and not Rust". It made me stop and think about the constant push for newer, "safer" programming languages.</summary></entry><entry><title type="html">Removing read time</title><id>https://mliezun.com/2025/07/18/removing-read-time.html</id><updated>2025-07-18T00:00:00Z</updated><link href="https://mliezun.com/2025/07/18/removing-read-time.html" rel="alternate" type="text/html"/><content type="html" xml:base="https://mliezun.com/2025/07/18/removing-read-time.html">&lt;article&gt;&lt;p&gt;&lt;/p&gt;&lt;h1&gt;Removing read time&lt;/h1&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;For a long time I've had &quot;Reading time: x minutes&quot; at the top of all articles in my blog. I cannot recall why I chose to do this; I suppose I saw it somewhere else and decided to copy it without thinking about the reason. 
Maybe there was a reason and I just forgot.&lt;/p&gt;&lt;p&gt;Now I'm making a statement and removing it.&lt;/p&gt;&lt;p&gt;Other sites put this &quot;time advertisement&quot; so users can know whether it's gonna be a long or short read and they can decide to stay and read the full thing or to bookmark the article for later.&lt;/p&gt;&lt;p&gt;I don't like that, I want to start reading and keep on going if I feel captivated without thinking about the time.&lt;/p&gt;&lt;p&gt;Sometimes I also feel bad if the estimation says 5 minutes and it takes me 20 to read it and understand it.&lt;/p&gt;&lt;p&gt;For me, when writing, there's also this insecurity of an article not being good enough if it doesn't reach a certain number of minutes.&lt;/p&gt;&lt;p&gt;I'm getting rid of all that and doing one of my favorite activities in programming: deleting code.&lt;/p&gt;&lt;/article&gt;</content><author><name>Miguel Liezun</name></author><category term="posts"/><summary type="html">For a long time I've had 'Read time</summary></entry><entry><title type="html">AWS Spot instance advisor</title><id>https://mliezun.com/2025/07/16/aws-spot-instance-advisor.html</id><updated>2025-07-16T00:00:00Z</updated><link href="https://mliezun.com/2025/07/16/aws-spot-instance-advisor.html" rel="alternate" type="text/html"/><content type="html" xml:base="https://mliezun.com/2025/07/16/aws-spot-instance-advisor.html">&lt;article&gt;&lt;p&gt;&lt;/p&gt;&lt;h1&gt;AWS Spot instance advisor&lt;/h1&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Spot instances are very useful for background jobs where you can afford to have them interrupted and retried; in return you usually pay a fraction of the cost, up to a 90% discount on the on-demand price. 
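&lt;/p&gt;&lt;p&gt;If you want to check actual spot prices from the command line, a quick sketch with the AWS CLI looks like this (the instance type, region and product description are just examples, adjust them to your case):&lt;/p&gt;&lt;pre class=&quot;triple-quote shell&quot;&gt;# recent spot prices for an example instance type (here m7gd.large)
aws ec2 describe-spot-price-history \
  --region us-east-1 \
  --instance-types m7gd.large \
  --product-descriptions &quot;Linux/UNIX&quot; \
  --max-items 5&lt;/pre&gt;&lt;p&gt;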
AWS provides a tool called Instance Advisor where you can see which instance types are in higher demand, so you can try to avoid them.&lt;/p&gt;&lt;p&gt;Let's see a comparison between Graviton 3 and Graviton 4 ARM instances.&lt;/p&gt;&lt;p&gt;m7gd instances are Graviton 3 with local SSD and they get interrupted a lot: &gt;20% of the time.&lt;/p&gt;&lt;p&gt;&lt;img alt='' src='/assets/images/aws-spot-instances/m7gd.png'/&gt;&lt;/p&gt;&lt;p&gt;m8gd instances are the latest Graviton 4 with local SSD and they get interrupted between 5-20% of the time depending on which instance type you choose. This could be due to slower adoption of newly released instances.&lt;/p&gt;&lt;p&gt;&lt;img alt='' src='/assets/images/aws-spot-instances/m8gd.png'/&gt;&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;In general it's a good idea to check the frequency of interruption here to optimize your spot instance usage:&lt;/p&gt;&lt;p&gt;&lt;a target='_blank' href='https://aws.amazon.com/ec2/spot/instance-advisor/'&gt;https://aws.amazon.com/ec2/spot/instance-advisor/&lt;/a&gt;&lt;/p&gt;&lt;/article&gt;</content><author><name>Miguel Liezun</name></author><category term="posts"/><summary type="html">Spot instances are very useful for background jobs where you can afford to have them interrupted and retried. 
AWS provides a useful tool called Instance Advisor where you can see which instance types are in higher demand, so you can try to avoid them.</summary></entry><entry><title type="html">Rick and Morty and the end of history</title><id>https://mliezun.com/2025/07/04/rick-and-morty-and-the-end-of-history.html</id><updated>2025-07-04T00:00:00Z</updated><link href="https://mliezun.com/2025/07/04/rick-and-morty-and-the-end-of-history.html" rel="alternate" type="text/html"/><content type="html" xml:base="https://mliezun.com/2025/07/04/rick-and-morty-and-the-end-of-history.html">&lt;article&gt;&lt;p&gt;&lt;/p&gt;&lt;h1&gt;Rick and Morty and the end of history&lt;/h1&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;In Season 2 Episode 6, named &quot;The Ricks Must Be Crazy&quot;, we see how Rick and Morty go inside Rick's microverse car battery.&lt;/p&gt;&lt;p&gt;&lt;img alt='' src='/assets/images/rick-and-morty-end-of-history/S2e6_Microverse_Battery.webp'/&gt;&lt;/p&gt;&lt;p&gt;The gist of it is that there's a universe inside a battery and the people have to pull cranks to make power. Some of that power is used on their own planet, the rest is used for Rick's car.&lt;/p&gt;&lt;p&gt;But then inside each universe there's a new scientist that develops the same technology. In total we get 3 levels of nestedness inside batteries throughout the episode.&lt;/p&gt;&lt;p&gt;There's a particular joke in this episode that is repeated each time a new character learns about the people inside the battery being used to generate the power.&lt;/p&gt;&lt;p&gt;The joke goes something like this: &lt;/p&gt;&lt;p&gt;&gt; Morty: &quot;You have a whole planet sitting around making power for you? That's slavery!&quot;&lt;/p&gt;&lt;p&gt;&gt; Rick: &quot;It's society, they work for each other, Morty. They pay each other. They buy houses. 
They get married and make children that replace them when they get too old to make power.&quot;&lt;/p&gt;&lt;p&gt;&gt; Morty: &quot;That just sounds like slavery with extra steps.&quot;&lt;/p&gt;&lt;p&gt;&gt; Rick: &quot;Oh-la-la! Someone's gonna get laid in college.&quot;&lt;/p&gt;&lt;p&gt;Until recently I just took this joke at face value; I interpreted it as Morty being progressive and caring about people, and that's why he would get laid in college.&lt;/p&gt;&lt;p&gt;But I just recently got recommended this video on YouTube:&lt;/p&gt;&lt;p&gt;&lt;img alt='' src='/assets/images/rick-and-morty-end-of-history/youtube-video.png'/&gt;&lt;/p&gt;&lt;p&gt;&lt;a target='_blank' href='https://www.youtube.com/watch?v=4pG-8XLLaE0'&gt;Watch video&lt;/a&gt;&lt;/p&gt;&lt;h2&gt;Francis Fukuyama and the end of history&lt;/h2&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;The video explains how all other political ideologies have been defeated by liberalism, in modern times called neoliberalism.&lt;/p&gt;&lt;p&gt;It basically goes on to explain how, starting from the 80s, the western world has shifted from thinking of people as workers to calling them consumers.&lt;/p&gt;&lt;p&gt;And this is because we now see everything through an economic lens: we think about how much a person is worth in terms of money, we want to go to school to get better jobs to make more money instead of to just learn.&lt;/p&gt;&lt;p&gt;Consumerism is the perfection of slavery, because people are enslaved by money, but they have a sense of choice (car, school, house, etc.); they perceive themselves as free and have no reason to rebel.&lt;/p&gt;&lt;p&gt;There's an &lt;a target='_blank' href='https://pages.ucsd.edu/~bslantchev/courses/pdf/Fukuyama%20-%20End%20of%20History.pdf'&gt;article published by Francis Fukuyama&lt;/a&gt; where this is explained in more detail.&lt;/p&gt;&lt;p&gt;It basically says that we have reached the end of history in terms of political ideology because we cannot escape liberalism. 
It's a system that allows for contradiction within itself and even then remains stable, where individuals feel free and in control and have no reason to rebel.&lt;/p&gt;&lt;p&gt;What is more surprising is that he cites Hegel and Kojève as people that had already realized this was in motion in 1806 and 1930 respectively.&lt;/p&gt;&lt;p&gt;For me this was eye-opening; I had never been exposed to these ideas, at least in such clear and distilled form. Something inside me tells me this is true, we're at the end of history. I don't think this is good or bad, but a powerful realization that allows me to see the world through a different lens.&lt;/p&gt;&lt;h3&gt;EDIT&lt;/h3&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;After posting this originally I found another video from Noam Chomsky which reinforces the idea of liberal capitalism as a form of slavery.&lt;/p&gt;&lt;p&gt;&lt;a target='_blank' href='https://www.youtube.com/watch?v=iR1jzExZ9T0'&gt;Watch video&lt;/a&gt;&lt;/p&gt;&lt;p&gt;He basically says that having a job is like surrendering yourself to a master that has control over your life even greater than the state's. As a &quot;free&quot; person in a liberal society you're free to have a job or starve to death. That job is basically a tyranny.&lt;/p&gt;&lt;/article&gt;</content><author><name>Miguel Liezun</name></author><category term="posts"/><summary type="html">AI agents are like dishwashers. Yes, you can probably be more precise and clean the dishes better by hand, some of them might not even be dishwasher-friendly and require that you clean them by hand. 
But for most cases you just load the dishwasher and turn it on and come back in a few hours.</summary></entry><entry><title type="html">AI is a dishwasher</title><id>https://mliezun.com/2025/06/26/ai-is-a-dishwasher.html</id><updated>2025-06-26T00:00:00Z</updated><link href="https://mliezun.com/2025/06/26/ai-is-a-dishwasher.html" rel="alternate" type="text/html"/><content type="html" xml:base="https://mliezun.com/2025/06/26/ai-is-a-dishwasher.html">&lt;article&gt;&lt;p&gt;&lt;/p&gt;&lt;h1&gt;AI is a dishwasher&lt;/h1&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;AI agents are like dishwashers. Yes, you can probably be more precise and clean the dishes better by hand, and some of them might not even be dishwasher-friendly and require that you clean them by hand. But for most cases you just load the dishwasher and turn it on and come back in a few hours.&lt;/p&gt;&lt;p&gt;By &quot;AI agents&quot; I mean particularly LLMs used for code generation, like Cursor, Claude Code, Codex, Windsurf and others.&lt;/p&gt;&lt;p&gt;I've seen many people claim that AI is not good at software development and that you shouldn't use it. I think that's like claiming you should not use a dishwasher, a washing machine, an oven or any other appliance that you have in your house.&lt;/p&gt;&lt;h2&gt;&quot;But I can do it faster without AI&quot;&lt;/h2&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;I have said this many times: I can do it faster and I can do it better. Many times this is the case in software development. 
Because we're all highly skilled individuals, we know what we're doing and we can do it well.&lt;/p&gt;&lt;p&gt;I'm not an AI fanatic, but refusing to use a convenient tool just doesn't make sense.&lt;/p&gt;&lt;h2&gt;Where objections make sense&lt;/h2&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;I've seen Open Source projects reject AI-generated code: &lt;a target='_blank' href='https://www.linkedin.com/posts/danielstenberg_hackerone-curl-activity-7324820893862363136-glb1'&gt;cURL&lt;/a&gt; and &lt;a target='_blank' href='https://github.com/qemu/qemu/commit/3d40db0efc22520fa6c399cf73960dced423b048'&gt;Qemu&lt;/a&gt; come to mind.&lt;/p&gt;&lt;p&gt;And to be honest, I think they have pretty good reasons.&lt;/p&gt;&lt;ul&gt;&lt;li&gt;cURL is rejecting it because there are too many poor-quality reports.&lt;/li&gt;&lt;li&gt;Qemu is rejecting it for licensing reasons.&lt;/li&gt;&lt;/ul&gt;&lt;h2&gt;Learn to use your tools&lt;/h2&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;You can generate a lot of code with AI, but that doesn't mean you should do it. You shouldn't mindlessly give LLMs control of your projects.&lt;/p&gt;&lt;p&gt;When you do laundry you need to correctly use the machine and know how to take care of your clothes based on their fabrics, colors and composition. When you fail to separate your whites from your blacks you screw up your clothes. The same happens to your codebase if you do not keep control of AI Agent outputs.&lt;/p&gt;&lt;h2&gt;AI is not really a dishwasher&lt;/h2&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;This post is a simplification; there are many more nuances to the topic, including the moral, ethical, legal and political implications of current AI models.&lt;/p&gt;&lt;p&gt;You should use your best judgement about when it is appropriate to use it.&lt;/p&gt;&lt;/article&gt;</content><author><name>Miguel Liezun</name></author><category term="posts"/><summary type="html">AI agents are like dishwashers. 
Yes, you can probably be more precise and clean the dishes better by hand, some of them might not even be dishwasher-friendly and require that you clean them by hand. But for most cases you just load the dishwasher and turn it on and come back in a few hours.</summary></entry><entry><title type="html">DjangoCon Europe 2025</title><id>https://mliezun.com/2025/05/14/djangocon-europe-2025.html</id><updated>2025-05-14T00:00:00Z</updated><link href="https://mliezun.com/2025/05/14/djangocon-europe-2025.html" rel="alternate" type="text/html"/><content type="html" xml:base="https://mliezun.com/2025/05/14/djangocon-europe-2025.html">&lt;article&gt;&lt;p&gt;&lt;/p&gt;&lt;h1&gt;DjangoCon Europe 2025&lt;/h1&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;I attended DjangoCon Europe this year; it was my first software conference. I had always wanted to go to one, but never took the time until this year. A great experience overall, and the most rewarding thing was meeting so many wonderful people.&lt;/p&gt;&lt;p&gt;I had sparking conversations with many individuals, and I have taken notes from those before the memories fade.&lt;/p&gt;&lt;p&gt;But I worry that I may have already forgotten some of those, because there were so many.&lt;/p&gt;&lt;p&gt;You can see more about the event here: &lt;a target='_blank' href='https://2025.djangocon.eu/'&gt;https://2025.djangocon.eu/&lt;/a&gt;&lt;/p&gt;&lt;p&gt;I gave a lightning talk about caddy-snake. A lot of people were confused about the name, so I might change it.&lt;/p&gt;&lt;p&gt;Here's the talk: &lt;a target='_blank' href='https://mliezun.com/caddy-snake-djangocon-2025.pdf'&gt;caddy-snake-djangocon-2025.pdf&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;If anyone is thinking about joining the Django community, please don't hesitate and do it. The people are great.&lt;/p&gt;&lt;p&gt;There's a program called &lt;a target='_blank' href='https://djangonaut.space/'&gt;Djangonaut Space&lt;/a&gt; which welcomes newcomers. 
You can find your way there.&lt;/p&gt;&lt;p&gt;There are also many &lt;a target='_blank' href='https://django.social/'&gt;Django Social&lt;/a&gt; events that you can join.&lt;/p&gt;&lt;p&gt;Cheers!&lt;/p&gt;&lt;/article&gt;</content><author><name>Miguel Liezun</name></author><category term="posts"/><summary type="html">I attended DjangoCon Europe this year, it was my first software conference, I always wanted to go to one, but never took the time until this year. A great experience overall, the most rewarding thing was meeting so many wonderful people.</summary></entry><entry><title type="html">Advent of Code 2024</title><id>https://mliezun.com/2025/03/20/advent-of-code-2024.html</id><updated>2025-03-20T00:00:00Z</updated><link href="https://mliezun.com/2025/03/20/advent-of-code-2024.html" rel="alternate" type="text/html"/><content type="html" xml:base="https://mliezun.com/2025/03/20/advent-of-code-2024.html">&lt;article&gt;&lt;p&gt;&lt;/p&gt;&lt;h1&gt;Advent of Code 2024&lt;/h1&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;I discovered AOC quite late in my career. I feel it's more like something one does when one is in college; my first participation was in 2023, and 2024 was my second. The first time I was very into it and had a lot of fun; the second time I just found the challenges repetitive, very similar to the year before but with a slightly different story.&lt;/p&gt;&lt;h2&gt;What happened in 2024?&lt;/h2&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;I did a lot of things in 2024, like moving to Paris, France. And I have also moved apartments within Paris. During December 2024 I was still adapting to life and responsibilities in Paris, so I felt like I didn't have much time to dedicate to AOC.&lt;/p&gt;&lt;p&gt;The first time I did it I was in a much more comfortable position and I could dedicate as many hours as I wanted to solve a problem. 
Even do brute forcing and let it run for hours.&lt;/p&gt;&lt;p&gt;In December 2024 I didn't have much time, and every minute I spent on AOC felt like it was being wasted; I could have been out hanging out, doing something fun, visiting some place new or just meeting with people.&lt;/p&gt;&lt;h2&gt;I feel like it's a game for kids&lt;/h2&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;It's ok, I discovered it really late and I wanted to play it at least once.&lt;/p&gt;&lt;p&gt;I have a feeling that if I was in college and found out about it I would have played it every year.&lt;/p&gt;&lt;p&gt;But now, juggling work and life, trying to participate in AOC is just not worth it.&lt;/p&gt;&lt;p&gt;There are a lot more things that I'd rather be doing/building than spending time in front of the computer solving Christmas-themed puzzles.&lt;/p&gt;&lt;p&gt;December is also a month in which you have many festivities, and have to plan for them, like visiting family and friends on Christmas, going on a trip for New Year's or just decorating your house. Those things are a lot more work, and in my opinion way more fun.&lt;/p&gt;&lt;p&gt;Also, if you're fresh out of college and you mention participating in AOC I feel like it would sound cool and could get you hired. If you're an experienced software engineer it just doesn't matter.&lt;/p&gt;&lt;h2&gt;Bye bye&lt;/h2&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Advent of Code is really cool. But I think it's just not for me anymore.&lt;/p&gt;&lt;p&gt;It's ok to let things go when you recognize that they don't bring much value to your life anymore.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;/article&gt;</content><author><name>Miguel Liezun</name></author><category term="posts"/><summary type="html">I discovered AOC quite late in my career. I feel it's more like something one does when one is in college; my first participation was in 2023, and 2024 was my second. 
The first time I was very into it and had a lot of fun; the second time I just found the challenges repetitive, very similar to the year before but with a slightly different story.</summary></entry><entry><title type="html">Caddy Snake improvements</title><id>https://mliezun.com/2024/11/07/todo-caddy-snake.html</id><updated>2024-11-07T00:00:00Z</updated><link href="https://mliezun.com/2024/11/07/todo-caddy-snake.html" rel="alternate" type="text/html"/><content type="html" xml:base="https://mliezun.com/2024/11/07/todo-caddy-snake.html">&lt;article&gt;&lt;p&gt;&lt;/p&gt;&lt;h1&gt;Caddy Snake improvements&lt;/h1&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;I've been planning on improving Caddy Snake, making it more stable and easier to use. Right now, I want to add automatic tests for Django, and build binaries and Docker images for arm64 and riscv64. I'd also like to turn it into a Python package so you can plug it straight into your code.&lt;/p&gt;&lt;h2&gt;Fix/add/test support for Django&lt;/h2&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;I've tried to use it a couple of times just to test that it works, and it seems to have been broken. I'd like to test it a little bit more to make sure it works and solve all the bugs.&lt;/p&gt;&lt;p&gt;I'd also like to add an automatic test so that I'm sure it doesn't break after I make some changes.&lt;/p&gt;&lt;h2&gt;Make sure we don't get segfaults on tests anymore&lt;/h2&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Sometimes I get random segfaults when tests run on CI. I'd like to take time to investigate this and fix the issues.&lt;/p&gt;&lt;h2&gt;Add support for arm64 and riscv64&lt;/h2&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;For now I'm only building caddy-snake for Linux x86_64. 
It would be cool to distribute more binaries for different OSes and CPU architectures.&lt;/p&gt;&lt;p&gt;For example:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;linux / x86_64 / arm64 / riscv64&lt;/li&gt;&lt;/ul&gt;&lt;ul&gt;&lt;li&gt;macOS / x86_64 / arm64&lt;/li&gt;&lt;/ul&gt;&lt;ul&gt;&lt;li&gt;windows / x86_64 / arm64&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;Also deliver docker images for all of those.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;h2&gt;Expose as python package that can be used directly&lt;/h2&gt;&lt;p&gt;&lt;/p&gt;&lt;pre class=&quot;triple-quote python&quot;&gt;import caddysnake

from flask import Flask

app = Flask(__name__)

@app.route(&quot;/hello&quot;)
def hello():
    return &quot;Hello world&quot;

if __name__ == &quot;__main__&quot;:
    caddysnake.run(app=app, host=&quot;localhost:8082&quot;)&lt;/pre&gt;&lt;p&gt;&lt;/p&gt;&lt;h2&gt;Better performance and isolation&lt;/h2&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Complete the work of integrating subinterpreters in caddy-snake.&lt;/p&gt;&lt;p&gt;&lt;a target='_blank' href='https://github.com/mliezun/caddy-snake/pull/9'&gt;https://github.com/mliezun/caddy-snake/pull/9&lt;/a&gt;&lt;/p&gt;&lt;p&gt;Track performance compared to other web servers and improve it.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;/article&gt;</content><author><name>Miguel Liezun</name></author><category term="posts"/><summary type="html">I've been planning on improving Caddy Snake, making it more stable and easier to use. Right now, I want to add automatic test for Django, and build binaries and Docker images for arm64 and riscv64. I'd also like to turn it into a Python package so you can plug it straight into your code.</summary></entry><entry><title type="html">Finding and fixing a bug in Python subinterpreters</title><id>https://mliezun.com/2024/08/19/cpython-subinterpreters.html</id><updated>2024-08-19T00:00:00Z</updated><link href="https://mliezun.com/2024/08/19/cpython-subinterpreters.html" rel="alternate" type="text/html"/><content type="html" xml:base="https://mliezun.com/2024/08/19/cpython-subinterpreters.html">&lt;article&gt;&lt;p&gt;&lt;/p&gt;&lt;h1&gt;Finding and fixing a bug in Python subinterpreters&lt;/h1&gt;&lt;p&gt;&lt;/p&gt;&lt;h3&gt;tldr&lt;/h3&gt;&lt;p&gt;&lt;/p&gt;&lt;ul&gt;&lt;li&gt;Filed an &lt;a target='_blank' href='https://github.com/python/cpython/issues/117482'&gt;issue&lt;/a&gt; in CPython.&lt;/li&gt;&lt;li&gt;Sent a &lt;a target='_blank' href='https://github.com/python/cpython/pull/117660'&gt;PR&lt;/a&gt;. &lt;/li&gt;&lt;li&gt;My code was garbage and was not merged, but helped to get the issue fixed.&lt;/li&gt;&lt;/ul&gt;&lt;h2&gt;Backstory&lt;/h2&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Lately, I've been working with Python C-API. 
I wanted to use subinterpreters with their own GIL to unlock the performance gains promised by being able to execute many threads in parallel, which was not possible before Python 3.12.&lt;/p&gt;&lt;p&gt;The reason is that I've been building a Caddy web server plugin called &lt;a target='_blank' href='https://github.com/mliezun/caddy-snake'&gt;Caddy Snake&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;The plugin lets users embed a Python interpreter and serve requests directly from &lt;a target='_blank' href='https://caddyserver.com'&gt;Caddy&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;Caddy is written in Go, and to interact with Python I had to use CGO, a compatibility layer that makes it easy to call C functions from Go.&lt;/p&gt;&lt;p&gt;To improve performance I wanted to use the new feature of having a separate GIL per subinterpreter, so requests could be served by many threads at the same time.&lt;/p&gt;&lt;p&gt;Today I saw a &lt;a target='_blank' href='https://izzys.casa/2024/08/463-python-interpreters/'&gt;great blogpost&lt;/a&gt; about finding a bug in subinterpreters and that inspired me to write about my experience.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;h2&gt;The issue&lt;/h2&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;I started &lt;a target='_blank' href='https://github.com/mliezun/caddy-snake/pull/9/files'&gt;coding up&lt;/a&gt; a basic implementation to see if I could serve simple requests from subinterpreters.&lt;/p&gt;&lt;p&gt;From the get-go I found that requests were failing in Python version 3.12 but working for previous versions, thanks to CI tests.&lt;/p&gt;&lt;p&gt;The failure was constrained to a C function where the HTTP status code was being converted from string to int: &lt;span class='single-quote'&gt;strtol(statusCode)&lt;/span&gt;.&lt;/p&gt;&lt;p&gt;By printing the content of the &lt;span class='single-quote'&gt;statusCode&lt;/span&gt; variable I found the following difference.&lt;/p&gt;&lt;p&gt;In main interpreter:&lt;/p&gt;&lt;pre class=&quot;triple-quote 
&quot;&gt;200&lt;/pre&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;In a subinterpreter:&lt;/p&gt;&lt;pre class=&quot;triple-quote text&quot;&gt;&lt;HttpStatusCode.OK: 200&gt;&lt;/pre&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;I managed to track down the enum that was causing this. Then I crafted some code to try to reproduce the error in a standard Python interpreter.&lt;/p&gt;&lt;p&gt;I came up with this:&lt;/p&gt;&lt;pre class=&quot;triple-quote python&quot;&gt;import _xxsubinterpreters as interpreters


script = &quot;&quot;&quot;from enum import _simple_enum, IntEnum

@_simple_enum(IntEnum)
class MyEnum:
    DATA = 1
    
print(str(MyEnum.DATA))
&quot;&quot;&quot;

exec(script)
# Output: 1

interp_id = interpreters.create(isolated=False)
interpreters.run_string(interp_id, script)
# Output: &lt;MyEnum.DATA: 1&gt;, Expected: 1&lt;/pre&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;That piece of code executes the same python code in both the main interpreter and a freshly created subinterpreter. The output should be the same, but it's not. The problem was independent of running with a separate GIL or with the same GIL.&lt;/p&gt;&lt;p&gt;Checking the &lt;span class='single-quote'&gt;__str__&lt;/span&gt; methods I could see that they were clearly different.&lt;/p&gt;&lt;p&gt;In main interpreter:&lt;/p&gt;&lt;pre class=&quot;triple-quote python&quot;&gt;...
print(MyEnum.DATA.__str__)
# Output: &lt;method-wrapper '__repr__' of MyEnum object at 0x7f9a09a2e910&gt;&lt;/pre&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;In subinterpreter:&lt;/p&gt;&lt;pre class=&quot;triple-quote python&quot;&gt;...
print(MyEnum.DATA.__str__)
# Output: &lt;method-wrapper '__str__' of MyEnum object at 0x7f9a099a5e90&gt;&lt;/pre&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;In the main interpreter the method wraps &lt;span class='single-quote'&gt;__repr__&lt;/span&gt;.
In the subinterpreter it wraps &lt;span class='single-quote'&gt;__str__&lt;/span&gt;.&lt;/p&gt;&lt;p&gt;At this point I couldn't believe what I was looking at. When your program has a bug you always assume it's your fault; it's rare to see the case where the problem is in the INTERPRETER.&lt;/p&gt;&lt;p&gt;But to my own disbelief I had found a problem with the CPython implementation.&lt;/p&gt;&lt;h2&gt;Doing my homework&lt;/h2&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;After that I decided that I was gonna do the right thing: file an issue on GitHub describing what I had just witnessed.&lt;/p&gt;&lt;p&gt;First things first: read the guide on &lt;a target='_blank' href='https://devguide.python.org/#contributing'&gt;how to contribute to Python&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;This is what you should always do when contributing to a project, because you might find that your problem is actually not worth reporting or that it was already solved in the &lt;span class='single-quote'&gt;main&lt;/span&gt; development branch.&lt;/p&gt;&lt;p&gt;I made sure to test with all supported Python versions &lt;span class='single-quote'&gt;3.8, 3.9, 3.10, 3.11, 3.12, 3.13&lt;/span&gt; and on both Linux and macOS.&lt;/p&gt;&lt;p&gt;The problem was only present in 3.12 and 3.13, after subinterpreters with a separate GIL were introduced.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;h2&gt;The community is awesome&lt;/h2&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Within an hour and a half of posting the issue I got a response from a member of the Python Triage Team: &quot;Bisected to &lt;a target='_blank' href='https://github.com/python/cpython/commit/de64e7561680fdc5358001e9488091e75d4174a3'&gt;de64e75&lt;/a&gt;&quot;.&lt;/p&gt;&lt;p&gt;I thought to myself: &quot;that is awesome, this is happening so fast!&quot;.&lt;/p&gt;&lt;p&gt;Then others pointed out that you could reproduce it with a shorter script:&lt;/p&gt;&lt;pre class=&quot;triple-quote python&quot;&gt;import _xxsubinterpreters as interpreters

script = &quot;&quot;&quot;print(int.__str__)&quot;&quot;&quot;


exec(script)
# Output: &lt;slot wrapper '__str__' of 'object' objects&gt;
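# Sanity check: in an unaffected interpreter, int does not define
# __str__ itself, it inherits the slot wrapper from object, which is
# why the main interpreter prints "of 'object' objects":
print('__str__' in vars(int))
# Output: False
print(int.__str__ is object.__str__)
# Output: True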

interp_id = interpreters.create()
interpreters.run_string(interp_id, script)
# Output: &lt;slot wrapper '__str__' of 'int' objects&gt;&lt;/pre&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Having the exact commit where this problem was introduced and a concise way of seeing what the underlying problem is, I decided to try to fix the issue myself and dive into the CPython codebase.&lt;/p&gt;&lt;h2&gt;Diving into CPython&lt;/h2&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;The commit where the problem was introduced, &lt;a target='_blank' href='https://github.com/python/cpython/commit/de64e7561680fdc5358001e9488091e75d4174a3'&gt;de64e75&lt;/a&gt;, has 185 additions and 86 deletions, and changes only 5 files.&lt;/p&gt;&lt;p&gt;If you take a look at it, the most suspicious file is &lt;span class='single-quote'&gt;typeobject.c&lt;/span&gt;: you can see that it changes some behavior in the MRO (Method Resolution Order), which is the way methods are &quot;inherited&quot; from one class to another in Python.&lt;/p&gt;&lt;p&gt;I thought that was related because in the subinterpreter the &lt;span class='single-quote'&gt;int&lt;/span&gt; class inherits the method
 &lt;span class='single-quote'&gt;__str__&lt;/span&gt; instead of 
 &lt;span class='single-quote'&gt;__repr__&lt;/span&gt;. That was not the real issue, but I was correct about the file where the problem was.&lt;/p&gt;&lt;p&gt;Debugging was done the good old-fashioned way: adding print statements all over the place.&lt;/p&gt;&lt;p&gt;With my printing mechanism I managed to find out that the function that creates those &lt;span class='single-quote'&gt;&lt;slot wrapper ...&gt;&lt;/span&gt; objects is called
 &lt;span class='single-quote'&gt;type_ready()&lt;/span&gt;.&lt;/p&gt;&lt;p&gt;That slot wrapper is a function that calls code from a type-slot. For example the &lt;span class='single-quote'&gt;int&lt;/span&gt; type defines a slot called 
&lt;span class='single-quote'&gt;tp_str&lt;/span&gt;. That slot is a C function that knows how to convert the object into a string. That C function is wrapped into a Python function which is executed when you do 
&lt;span class='single-quote'&gt;str(5)&lt;/span&gt;.&lt;/p&gt;&lt;p&gt;I could see that &lt;span class='single-quote'&gt;type_ready()&lt;/span&gt; was called the first time from the main interpreter, and the second time from a subinterpreter, but the second time it added more stuff.&lt;/p&gt;&lt;p&gt;One would expect that &lt;span class='single-quote'&gt;type_ready()&lt;/span&gt; gives the same result whether executed from a subinterpreter or from the main interpreter. The builtin types (
    &lt;span class='single-quote'&gt;int&lt;/span&gt;, 
    &lt;span class='single-quote'&gt;str&lt;/span&gt;, 
    &lt;span class='single-quote'&gt;float&lt;/span&gt;, 
    &lt;span class='single-quote'&gt;bool&lt;/span&gt;, ...) should be the same in all cases.&lt;/p&gt;&lt;p&gt;Another group of changes in the &lt;span class='single-quote'&gt;typeobject.c&lt;/span&gt; file consisted of a lot of functions that obtain attributes from a builtin type, for example 
&lt;span class='single-quote'&gt;lookup_tp_dict()&lt;/span&gt;:&lt;/p&gt;&lt;pre class=&quot;triple-quote C&quot;&gt;static inline PyObject *
lookup_tp_dict(PyTypeObject *self)
{
+   if (self-&gt;tp_flags &amp; _Py_TPFLAGS_STATIC_BUILTIN) {
+       PyInterpreterState *interp = _PyInterpreterState_GET();
+       static_builtin_state *state = _PyStaticType_GetState(interp, self);
+       assert(state != NULL);
+       return state-&gt;tp_dict;
+   }
    return self-&gt;tp_dict;
}&lt;/pre&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;This function is obtaining the &lt;span class='single-quote'&gt;tp_dict&lt;/span&gt; property from a type. Which is a dictionary that stores attributes/methods of the class.&lt;/p&gt;&lt;p&gt;In this case the entire if-statement was added. If you take a closer look you can infer what's happening. Before we just obtained the value by doing &lt;span class='single-quote'&gt;self-&gt;tp_dict&lt;/span&gt;. 
Now, if it's a builtin type, we read the value from the interpreter state &lt;span class='single-quote'&gt;state-&gt;tp_dict&lt;/span&gt;.&lt;/p&gt;&lt;p&gt;With this information I started to dig deeper to see if I could find the exact place where the extra attributes were being added.&lt;/p&gt;&lt;p&gt;Taking &lt;span class='single-quote'&gt;int.__str__&lt;/span&gt; as an example to see what was going on:&lt;/p&gt;&lt;p&gt;&lt;ol&gt;
&lt;li&gt; The &lt;span class='single-quote'&gt;type_ready()&lt;/span&gt; function gets executed as part of the main interpreter initialization process.&lt;/li&gt;
&lt;li&gt; The function starts &quot;readying&quot; the &lt;span class='single-quote'&gt;int&lt;/span&gt; type: 
&lt;span class='single-quote'&gt;tp_dict&lt;/span&gt; gets filled by 
&lt;span class='single-quote'&gt;type_ready_fill_dict()&lt;/span&gt;, one of the things this function does is look up which slots are &lt;em&gt;not empty&lt;/em&gt; and add them to the dict; for 
&lt;span class='single-quote'&gt;int&lt;/span&gt; the 
&lt;span class='single-quote'&gt;tp_str&lt;/span&gt; slot is empty.&lt;/li&gt;
&lt;li&gt; After that, &lt;span class='single-quote'&gt;type_ready_inherit()&lt;/span&gt; gets called, and copies the 
&lt;span class='single-quote'&gt;tp_str&lt;/span&gt; slot from object to int.&lt;/li&gt;
&lt;li&gt; The init process for the main interpreter finishes and we're ready to start with the second interpreter.&lt;/li&gt;
&lt;li&gt; In this case, the &lt;span class='single-quote'&gt;tp_str&lt;/span&gt; slot is not empty for 
&lt;span class='single-quote'&gt;int&lt;/span&gt;, it was filled in step 3. So it gets added to the dict by 
&lt;span class='single-quote'&gt;type_ready_fill_dict()&lt;/span&gt;.&lt;/li&gt;
&lt;li&gt; The program continues and we see different behavior depending on which interpreter we run.&lt;/li&gt;
&lt;/ol&gt;&lt;/p&gt;&lt;p&gt;The problem here is that inside &lt;span class='single-quote'&gt;type_ready()&lt;/span&gt; we expect 
&lt;span class='single-quote'&gt;type_ready_fill_dict()&lt;/span&gt; to be called before 
&lt;span class='single-quote'&gt;type_ready_inherit()&lt;/span&gt;. That is true for the main interpreter but not for the subinterpreter. This happens because all the slots in 
&lt;span class='single-quote'&gt;int&lt;/span&gt; are shared except for 
&lt;span class='single-quote'&gt;tp_dict&lt;/span&gt; which is stored in each interpreter.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;h3&gt;Summary&lt;/h3&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;To summarize the problem: &lt;span class='single-quote'&gt;type_ready()&lt;/span&gt; didn't receive a &quot;clean&quot; type. It was receiving a type with slots partially filled.&lt;/p&gt;&lt;p&gt;Some slots were filled because they were shared among interpreters, others like &lt;span class='single-quote'&gt;tp_dict&lt;/span&gt; were not shared. That caused inconsistencies between executions of 
&lt;span class='single-quote'&gt;type_ready()&lt;/span&gt;.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;h2&gt;The solution?&lt;/h2&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Given my lack of experience working in the CPython codebase, I decided that the simplest path was a quick-and-dirty fix.&lt;/p&gt;&lt;p&gt;My solution was to check if &lt;span class='single-quote'&gt;type_ready()&lt;/span&gt; was being called from a subinterpreter and in that case clean up the base type so it looked the same as in the main interpreter.&lt;/p&gt;&lt;p&gt;It boils down to adding this function:&lt;/p&gt;&lt;pre class=&quot;triple-quote C&quot;&gt;static int
fix_builtin_slot_wrappers(PyTypeObject *self, PyInterpreterState *interp)
{
    assert(self-&gt;tp_flags &amp; _Py_TPFLAGS_STATIC_BUILTIN);
    assert(!_Py_IsMainInterpreter(interp));

    // Getting subinterpreter state
    managed_static_type_state *state = _PyStaticType_GetState(interp, self);
    assert(state != NULL);

    // Getting main interpreter state
    PyInterpreterState *main_interp = _PyInterpreterState_Main();
    managed_static_type_state *main_state = _PyStaticType_GetState(main_interp, self);
    assert(main_state != NULL);

    // Check which attributes the type has in the subinterpreter but doesn't have
    // in the main interpreter. Store them in keys_to_remove.
    int res = -1;
    PyObject* keys_to_remove = PyList_New(0);
    if (keys_to_remove == NULL) {
        goto finally;
    }
    Py_ssize_t i = 0;
    PyObject *key, *value;
    while (PyDict_Next(state-&gt;tp_dict, &amp;i, &amp;key, &amp;value)) {
        if (!PyDict_Contains(main_state-&gt;tp_dict, key)) {
            if (PyList_Append(keys_to_remove, key) &lt; 0) {
                goto finally;
            }
        }
    }

    // Go through keys_to_remove and remove those attributes from
    // the base type in the subinterpreter.
    Py_ssize_t list_size = PyList_Size(keys_to_remove);
    for (Py_ssize_t i = 0; i &lt; list_size; i++) {
        PyObject* key = PyList_GetItem(keys_to_remove, i);
        if (PyDict_DelItem(state-&gt;tp_dict, key) &lt; 0) {
            goto finally;
        }
    }

    res = 0;

finally:
    Py_XDECREF(keys_to_remove);
    return res;
}&lt;/pre&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Then at the end of &lt;span class='single-quote'&gt;type_ready()&lt;/span&gt;, call that function for builtin types and only if we're in a subinterpreter:&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;pre class=&quot;triple-quote C&quot;&gt;    if (type-&gt;tp_flags &amp; _Py_TPFLAGS_STATIC_BUILTIN) {
        PyInterpreterState *interp = _PyInterpreterState_GET();
        if (!_Py_IsMainInterpreter(interp)) {
            if (fix_builtin_slot_wrappers(type, interp) &lt; 0) {
                return -1;
            }
        }
    }&lt;/pre&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;And that's it. That was my solution. And it worked like a charm.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;h2&gt;The wait&lt;/h2&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;After sending a &lt;a target='_blank' href='https://github.com/python/cpython/pull/117660'&gt;PR&lt;/a&gt; with my proposed changes I waited for a couple of months until I saw some real activity.&lt;/p&gt;&lt;p&gt;&lt;a target='_blank' href='https://github.com/ericsnowcurrently'&gt;Eric Snow&lt;/a&gt; is in charge of the subinterpreters implementation. He found a better, more focused solution that was less fragile and took into account what would happen if the interpreter was reinitialized.&lt;/p&gt;&lt;p&gt;In the end I decided to close my PR because Eric pushed the real solution in a separate PR.&lt;/p&gt;&lt;p&gt;I think it would have been cool to ship code to be included in all Python interpreters around the world. But I'm happy my analysis helped solve the issue.&lt;/p&gt;&lt;p&gt;In Python 3.12.5 the fix was released: &lt;a target='_blank' href='https://docs.python.org/release/3.12.5/whatsnew/changelog.html#core-and-builtins'&gt;https://docs.python.org/release/3.12.5/whatsnew/changelog.html#core-and-builtins&lt;/a&gt;&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;h2&gt;Thanks!&lt;/h2&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;I wanted to say thanks again to all the Python contributors that helped get this solved. From my experience I felt welcomed by the community and felt that they cared about the time and effort I put in.&lt;/p&gt;&lt;p&gt;I would say it was a great experience that I hope to repeat. And my cheers go to the Python community for setting up such an easy-to-follow process.&lt;/p&gt;&lt;/article&gt;</content><author><name>Miguel Liezun</name></author><category term="posts"/><summary type="html">Lately, I've been working with Python C-API. 
I wanted to use subinterpreters with their own GIL to unlock the performance gains promised by being able to execute many threads in parallel, which was not possible before Python 3.12.</summary></entry><entry><title type="html">From Script to Binary, Creating single executables with Grotsky</title><id>https://mliezun.com/2024/04/11/embedding-scripts.html</id><updated>2024-04-11T00:00:00Z</updated><link href="https://mliezun.com/2024/04/11/embedding-scripts.html" rel="alternate" type="text/html"/><content type="html" xml:base="https://mliezun.com/2024/04/11/embedding-scripts.html">&lt;article&gt;&lt;p&gt;&lt;/p&gt;&lt;h1&gt;From Script to Binary, Creating single executables with Grotsky&lt;/h1&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Recently I added the possibility to embed compiled scripts to Grotsky, which makes it super easy to generate single executables that can be easily distributed.&lt;/p&gt;&lt;p&gt;Using the release &lt;a target='_blank' href='https://github.com/mliezun/grotsky/releases/tag/v0.0.13'&gt;v0.0.13&lt;/a&gt; of Grotsky, a toy programming language that I've been developing for a while, you can compile scripts into bytecode and embed them into a single binary that can be easily distributed.&lt;/p&gt;&lt;p&gt;For now, it's only possible to embed a single script, so if your script needs to import something it won't work.&lt;/p&gt;&lt;h2&gt;How embedding works&lt;/h2&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Grotsky by default generates a magic pattern at compile time. It's 512 bytes and is stored as a static variable.&lt;/p&gt;&lt;p&gt;To generate that pattern we use the &lt;a target='_blank' href='https://crates.io/crates/const-random'&gt;const-random&lt;/a&gt; crate.&lt;/p&gt;&lt;p&gt;We use that to define a marker and identify if the Grotsky binary is running in embedded mode or not.&lt;/p&gt;&lt;pre class=&quot;triple-quote rust&quot;&gt;#[repr(C)]
struct Marker {
    magic_pattern: [u8; 512],
    is_embedded: u8,
}

const fn new_marker() -&gt; Marker {
    Marker{
        magic_pattern: const_random!([u8; 512]),
        is_embedded: 0,
    }
}

static EMBEDDED_MARKER: Marker = new_marker();&lt;/pre&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Then we can use a very hacky trick to take a compiled script and generate a single executable with the embedded bytecode.&lt;/p&gt;&lt;pre class=&quot;triple-quote rust&quot;&gt;pub fn embed_file(compiled_script: String, output_binary: String) {
    // Get the path of the current executable (Grotsky interpreter)
    let exe_path = env::current_exe().unwrap();
    let mut exe_contents = read(exe_path).unwrap();
    let pattern = &amp;EMBEDDED_MARKER.magic_pattern;

    // Find the magic pattern inside the executable. Given that it is a static
    // variable with a value defined at compile time, it has to be stored in
    // the binary, we can find it and switch the `is_embedded` flag.
    if let Some(pos) = find_position(&amp;exe_contents, pattern) {
        // We defined the Marker struct with a C representation
        // which means that right after the magic pattern we have a byte
        // that indicates if the interpreter is running in embedded mode or not.
        exe_contents[pos+512] = 1;

        // We add the magic pattern at the end of the executable again,
        // as a stop mark: right after it comes the bytecode.
        for i in 0..512 {
            exe_contents.push(pattern[i]);
        }

        // Now we read the compiled code and add it to the end of the new executable.
        let mut compiled_content = read(compiled_script).unwrap();
        exe_contents.append(&amp;mut compiled_content);

        // We write a single file with the bytecode concatenated at the end.
        write(output_binary, exe_contents).unwrap();
    }
}

// Function to find the position of magic pattern in a stream of bytes
fn find_position(haystack: &amp;Vec&lt;u8&gt;, needle: &amp;[u8; 512]) -&gt; Option&lt;usize&gt; {
    if haystack.len() &lt; needle.len() {
        return None;
    }
    for i in 0..=haystack.len() - needle.len() {
        if &amp;haystack[i..i + needle.len()] == needle.as_ref() {
            return Some(i);
        }
    }
    None
}&lt;/pre&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;We're using the magic pattern as a stop mark. Our resulting binary will have the same magic pattern twice.
The first is the original that gets loaded as a global static variable. The second one is almost at the end of
the file and indicates the beginning of the embedded bytecode.&lt;/p&gt;&lt;p&gt;We also need a function to detect if we're running under &quot;embedded&quot; mode. In that case the interpreter should only
read the embedded bytecode and execute it.&lt;/p&gt;&lt;pre class=&quot;triple-quote rust&quot;&gt;pub fn is_embedded() -&gt; bool {
    let embedded_indicator = &amp;EMBEDDED_MARKER.is_embedded as *const u8;
    unsafe {
        // Need to perform this trick to read the actual memory location.
        // Otherwise during compilation Rust does static analysis and assumes
        // this function always returns the same value.
        return ptr::read_volatile(embedded_indicator) != 0;
    }
}&lt;/pre&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;We change the value without the Rust compiler ever knowing, so we do a volatile read of the pointer to make sure
we actually load the value from memory.&lt;/p&gt;&lt;p&gt;Otherwise the Rust compiler assumes that this always returns 0, because it is hardcoded in the &lt;span class='single-quote'&gt;new_marker&lt;/span&gt; function
and is never changed in the codebase.&lt;/p&gt;&lt;p&gt;Now we can proceed to run in &quot;embedded&quot; mode.&lt;/p&gt;&lt;pre class=&quot;triple-quote rust&quot;&gt;pub fn execute_embedded() {
    // Get path of current executable
    let exe_path = env::current_exe().unwrap();
    interpreter::set_absolute_path(exe_path.clone().to_str().unwrap().to_string());

    let exe_contents = read(exe_path).unwrap();
    let pattern = &amp;EMBEDDED_MARKER.magic_pattern;

    // The offset is 512 because that's the size of the magic pattern
    let offset: usize = 512;

    // Find first match (original)
    let first_match = find_position(&amp;exe_contents, pattern).unwrap();

    // We try to find the second mark by reading what comes after the first one
    let remaining = &amp;exe_contents[first_match+offset..].to_vec();
    let pos = find_position(remaining, pattern).unwrap();

    // The bytecode is located right after the second mark
    let compiled_content = &amp;remaining[pos+offset..];
    
    // Run interpreter from bytecode
    if !interpreter::run_interpreter_from_bytecode(&amp;compiled_content) {
        println!(&quot;Could not read embedded script&quot;);
        exit(1);
    }
}&lt;/pre&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;With all those functions, the only thing I need to do is add an if-statement to the &lt;span class='single-quote'&gt;main&lt;/span&gt; function in the Rust project
to check if we're in embedded mode and proceed accordingly.&lt;/p&gt;&lt;pre class=&quot;triple-quote rust&quot;&gt;fn main() {
    if embed::is_embedded() {
        embed::execute_embedded();
        return;
    }
    // Continue executing normally
    // ...
}&lt;/pre&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;That's it. That's all it takes to implement single binaries with Grotsky.
Continue reading to see an example of how to actually use this feature.&lt;/p&gt;&lt;h2&gt;Embedding example: Make your own Grep&lt;/h2&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Let's try to reproduce a simple version of the well-known Unix tool &lt;span class='single-quote'&gt;grep&lt;/span&gt;.&lt;/p&gt;&lt;p&gt;Store the following script in a file called &lt;span class='single-quote'&gt;grep.gr&lt;/span&gt;:&lt;/p&gt;&lt;pre class=&quot;triple-quote js&quot;&gt;# Join a list of strings separated by space &quot; &quot;
fn join(list) {
	let out = &quot;&quot;
	for let i = 0; i &lt; list.length; i = i + 1 {
		out = out + list[i]
		if i &lt; list.length - 1 {
			out = out + &quot; &quot;
		}
	}
	return out
}

# Check that a pattern was provided
if process.argv.length == 1 {
	io.println(&quot;Usage:\n\tgrep [pattern ...]&quot;)
	return 1
}

# Join argv[1:] into a pattern
let pattern = join(process.argv[1:])

# Read first line
let line = io.readln()

# While we are not in EOF
#   Check that line matches pattern and print it
#   Consume next line
while line != nil {
	if re.match(pattern, line) {
		io.println(line)
	}
	line = io.readln()
}&lt;/pre&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Then it can be used like this:&lt;/p&gt;&lt;pre class=&quot;triple-quote &quot;&gt;$ cat file.txt | ./grotsky grep.gr pattern&lt;/pre&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;And it will print all lines that match the &quot;pattern&quot;.&lt;/p&gt;&lt;p&gt;We can also package it as a single binary by doing the following commands.&lt;/p&gt;&lt;pre class=&quot;triple-quote &quot;&gt;$ ./grotsky compile grep.gr
$ ./grotsky embed grep.grc&lt;/pre&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Now we should have a &lt;span class='single-quote'&gt;grep.exe&lt;/span&gt; in our directory. And we can use it:&lt;/p&gt;&lt;pre class=&quot;triple-quote &quot;&gt;$ chmod +x grep.exe
$ cat file.txt | ./grep.exe pattern&lt;/pre&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;It should work the same as the previous example.&lt;/p&gt;&lt;/article&gt;</content><author><name>Miguel Liezun</name></author><category term="posts"/><summary type="html">Recently I added the possibility to embed compiled scripts to Grotsky, which makes it super easy to generate single executables that can be easily distributed.</summary></entry><entry><title type="html">Using Go Generics to create a Single file JSON DB</title><id>https://mliezun.com/2024/01/26/go-generics-sfj-db.html</id><updated>2024-01-26T00:00:00Z</updated><link href="https://mliezun.com/2024/01/26/go-generics-sfj-db.html" rel="alternate" type="text/html"/><content type="html" xml:base="https://mliezun.com/2024/01/26/go-generics-sfj-db.html">&lt;article&gt;&lt;p&gt;&lt;/p&gt;&lt;h1&gt;Using Go Generics to create a Single file JSON DB&lt;/h1&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Let's start with an example use case: you have a multitenant web server that serves pages for various customers, and you know which page to serve based on the &lt;span class='single-quote'&gt;Host&lt;/span&gt; header of the request.&lt;/p&gt;&lt;p&gt;You can track each site in a struct like this:&lt;/p&gt;&lt;pre class=&quot;triple-quote go&quot;&gt;type Site struct {
    Name    string  `json:&quot;name&quot;`
    Host    string  `json:&quot;host&quot;`
}&lt;/pre&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Your &lt;span class='single-quote'&gt;sites.json&lt;/span&gt; file could be something like this:&lt;/p&gt;&lt;pre class=&quot;triple-quote json&quot;&gt;{
    &quot;www.example.com&quot;: {
        &quot;name&quot;: &quot;Example&quot;,
        &quot;host&quot;: &quot;www.example.com&quot;
    }
}&lt;/pre&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Which can be opened as a db:&lt;/p&gt;&lt;pre class=&quot;triple-quote go&quot;&gt;sitesdb, err := sfjdb.Open[map[string]Site](&quot;./sites.json&quot;)&lt;/pre&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Then you can simply get the content of the json file and validate requests:&lt;/p&gt;&lt;pre class=&quot;triple-quote go&quot;&gt;func handle(w http.ResponseWriter, req *http.Request) {
    sites := sitesdb.View()
    // Go promotes the Host header to req.Host and removes it from req.Header
    host := req.Host
    if site, ok := sites[host]; ok {
        // Render sites that are stored in DB
        renderSite(w, req, site)
    } else {
        w.WriteHeader(404)
    }
}&lt;/pre&gt;&lt;p&gt;&lt;/p&gt;&lt;h2&gt;Implementation&lt;/h2&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Thanks to Go generics and built-in json Marshaling/Unmarshaling this is quite easy to build:&lt;/p&gt;&lt;pre class=&quot;triple-quote go&quot;&gt;type DB[T any] struct {
	rw       sync.RWMutex
	data     T
	filepath string
}&lt;/pre&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;We have a &lt;span class='single-quote'&gt;DB&lt;/span&gt; struct to keep track of the file, data and a &lt;span class='single-quote'&gt;RW&lt;/span&gt; mutex.&lt;/p&gt;&lt;p&gt;Then we can easily read data from a local file:&lt;/p&gt;&lt;pre class=&quot;triple-quote go&quot;&gt;// Load loads data from file.
func (db *DB[T]) Load() error {
	db.rw.Lock()
	defer db.rw.Unlock()
	content, err := os.ReadFile(db.filepath)
	if err != nil {
		return err
	}
	return json.Unmarshal(content, &amp;db.data)
}&lt;/pre&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Each time we use the data in the DB we make a copy, to avoid modifying some attributes via pointers by mistake.&lt;/p&gt;&lt;pre class=&quot;triple-quote go&quot;&gt;func objcopy[T any](obj T) *T {
	data, err := json.Marshal(obj)
	if err != nil {
		panic(err)
	}
	newobj := new(T)
	if err := json.Unmarshal(data, newobj); err != nil {
		panic(err)
	}
	return newobj
}


// View returns a copy of the data.
func (db *DB[T]) View() T {
	db.rw.RLock()
	defer db.rw.RUnlock()
	return *objcopy(db.data)
}&lt;/pre&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;We can also save a new state in case we modified something (e.g. added a new site to our list).&lt;/p&gt;&lt;pre class=&quot;triple-quote go&quot;&gt;// Save saves a copy of the data as plain json in the file.
func (db *DB[T]) Save(data T) error {
	db.rw.Lock()
	defer db.rw.Unlock()
	db.data = *objcopy[T](data)
	content, err := json.Marshal(db.data)
	if err != nil {
		return err
	}
	return WriteFile(db.filepath, content, 0644)
}&lt;/pre&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;You can get the &lt;span class='single-quote'&gt;WriteFile&lt;/span&gt; function &lt;a target='_blank' href='https://github.com/tailscale/tailscale/blob/main/atomicfile/atomicfile.go'&gt;here&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;This function writes the file atomically, which means it is either successfully written in full or not written at all.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Finally we have to implement &lt;span class='single-quote'&gt;Open&lt;/span&gt; to instantiate our DB:&lt;/p&gt;&lt;pre class=&quot;triple-quote go&quot;&gt;// Open opens a json file as a database.
func Open[T any](filepath string) (db *DB[T], err error) {
	db = &amp;DB[T]{filepath: filepath}
	if err := db.Load(); err != nil {
		return nil, err
	}
	return db, nil
}&lt;/pre&gt;&lt;p&gt;&lt;/p&gt;&lt;h2&gt;In praise of Go generics&lt;/h2&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;At the beginning I was a little sad about generics: Go, the simple language, was in the end becoming a huge complex beast. But overall, if you don't overuse generics it feels natural and easy to use/understand.&lt;/p&gt;&lt;p&gt;This still feels like the good old Go, but with just a bit more power, in the right way.&lt;/p&gt;&lt;p&gt;If you wish to use this in your project you can just import it from &lt;a target='_blank' href='https://github.com/mliezun/sfj-db'&gt;https://github.com/mliezun/sfj-db&lt;/a&gt;.&lt;/p&gt;&lt;/article&gt;</content><author><name>Miguel Liezun</name></author><category term="posts"/><summary type="html">Storing your data in a single json file can be useful when there isn't much state that needs to be tracked. In this post we leverage Go's generics to implement a simple JSON DB.</summary></entry><entry><title type="html">Generating posts using markdown</title><id>https://mliezun.com/2024/01/04/new-markdown-generator.html</id><updated>2024-01-04T00:00:00Z</updated><link href="https://mliezun.com/2024/01/04/new-markdown-generator.html" rel="alternate" type="text/html"/><content type="html" xml:base="https://mliezun.com/2024/01/04/new-markdown-generator.html">&lt;article&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;h1&gt;Generating posts using markdown&lt;/h1&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;This is pretty standard for GitHub Pages. But in this case, the parser has been written by me. It takes some subset of markdown and compiles it to HTML.&lt;/p&gt;&lt;p&gt;Only what you see in this post is what's supported.&lt;/p&gt;&lt;h2&gt;Code blocks&lt;/h2&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Standard code block:&lt;/p&gt;&lt;pre class=&quot;triple-quote &quot;&gt;Hello, this is a code block!&lt;/pre&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Syntax highlighting:&lt;/p&gt;&lt;pre class=&quot;triple-quote python&quot;&gt;from functools import reduce, partial
import operator

mul = partial(reduce, operator.mul)
print(&quot;Factorial of 5:&quot;, mul(range(1, 6)))&lt;/pre&gt;&lt;p&gt;&lt;/p&gt;&lt;h2&gt;Unordered list&lt;/h2&gt;&lt;p&gt;&lt;/p&gt;&lt;ul&gt;&lt;li&gt;This&lt;/li&gt;&lt;li&gt;is an&lt;/li&gt;&lt;li&gt;unordered list&lt;/li&gt;&lt;/ul&gt;&lt;h2&gt;Bolds, Italics and Inline code&lt;/h2&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Some text can be &lt;strong&gt;bolded&lt;/strong&gt;, while some other can be in &lt;em&gt;Italics&lt;/em&gt;.&lt;/p&gt;&lt;p&gt;But the best is to have &lt;span class='single-quote'&gt;print(&quot;inline code&quot;)&lt;/span&gt;.&lt;/p&gt;&lt;h2&gt;Links and images&lt;/h2&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;a target='_blank' href='https://github.com/mliezun/mliezun.github.io/blob/master/src/parse_md.gr'&gt;Link to the blog source code where you can see how the parser works (tldr: is awful).&lt;/a&gt;&lt;/p&gt;&lt;p&gt;Picture of NYC:&lt;/p&gt;&lt;p&gt;&lt;img alt='&quot;Picture of NYC&quot;' src='/assets/images/nyc.jpg'/&gt;&lt;/p&gt;&lt;h1&gt;HEADINGS&lt;/h1&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;There's a thing you haven't noticed so far. There's support for different kinds of headings. You can see them in increasing order here:&lt;/p&gt;&lt;h6&gt;Microscopic&lt;/h6&gt;&lt;p&gt;&lt;/p&gt;&lt;h5&gt;Small&lt;/h5&gt;&lt;p&gt;&lt;/p&gt;&lt;h4&gt;Good&lt;/h4&gt;&lt;p&gt;&lt;/p&gt;&lt;h3&gt;Pretty good&lt;/h3&gt;&lt;p&gt;&lt;/p&gt;&lt;h2&gt;Big&lt;/h2&gt;&lt;p&gt;&lt;/p&gt;&lt;h1&gt;Good bye!&lt;/h1&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Thanks for checking out the blog. I've done this to reduce the complexity of creating new blogposts. It was a headache before.&lt;/p&gt;&lt;p&gt;Hopefully now I'll write more. Stay tuned.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;/article&gt;</content><author><name>Miguel Liezun</name></author><category term="posts"/><summary type="html">Custom Markdown parser and HTML generator using Grotsky, my toy programming language that powers this blog. Up until now I've used a hacky HTML generator that relies on lists. 
Now I'm integrating a simple MD parser that makes it easier to write new articles.</summary></entry><entry><title type="html">Day 20. My favourite problem from Advent of Code 2023</title><id>https://mliezun.com/2023/12/25/favourite-advent-of-code-2023.html</id><updated>2023-12-25T00:00:00Z</updated><link href="https://mliezun.com/2023/12/25/favourite-advent-of-code-2023.html" rel="alternate" type="text/html"/><content type="html" xml:base="https://mliezun.com/2023/12/25/favourite-advent-of-code-2023.html">&lt;article&gt;&lt;div&gt;&lt;/div&gt;&lt;h2&gt;Day 20. My favourite problem from Advent of Code 2023&lt;/h2&gt;&lt;div&gt;&lt;p&gt;I'm going to briefly describe the problem here, but if you want to see the real thing go check it out &lt;a target=&quot;_blank&quot; href=&quot;https://adventofcode.com/2023/day/20&quot;&gt;https://adventofcode.com/2023/day/20&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;I like it because it involves some simple electronic devices that are wired together and send pulses/signals to each other. In this problem you have to make sure to correctly propagate the signals and simulate the behaviour of the devices.&lt;/p&gt;&lt;p&gt;There are two devices that have a very distinct behaviour:&lt;/p&gt;&lt;li&gt;&lt;strong&gt;Flip flops&lt;/strong&gt;: similar to a &lt;a target=&quot;_blank&quot; href=&quot;https://en.wikipedia.org/wiki/Flip-flop_(electronics)#T_flip-flop&quot;&gt;T flip-flop&lt;/a&gt; electronic device.&lt;/li&gt;&lt;li&gt;&lt;strong&gt;Conjunctions&lt;/strong&gt;: similar to a &lt;a target=&quot;_blank&quot; href=&quot;https://en.wikipedia.org/wiki/NAND_gate&quot;&gt;NAND gate&lt;/a&gt; with memory on its inputs.&lt;/li&gt;&lt;p&gt;In this problem, &lt;strong&gt;Flip flops&lt;/strong&gt; are initially off and whenever they receive a &lt;em&gt;low&lt;/em&gt; pulse they toggle between on/off. Each time it toggles state it sends a pulse as an output. 
When turned off it sends a &lt;em&gt;low&lt;/em&gt; pulse; when turned on, a &lt;em&gt;high&lt;/em&gt; pulse.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Conjunction&lt;/strong&gt; modules remember the most recent pulse on each input. By default a conjunction remembers a low pulse for every input. When a pulse is received it updates the memory for that input. Then, if it remembers &lt;em&gt;high&lt;/em&gt; pulses for all inputs, it sends a &lt;em&gt;low&lt;/em&gt; pulse; otherwise, it sends a &lt;em&gt;high&lt;/em&gt; pulse.&lt;/p&gt;&lt;p&gt;There are also some &quot;dummy&quot; modules:&lt;/p&gt;&lt;li&gt;&lt;strong&gt;Broadcaster&lt;/strong&gt;: has 1 input and N outputs. It replicates the input in all its outputs.&lt;/li&gt;&lt;li&gt;&lt;strong&gt;Button&lt;/strong&gt;: when pressed sends a &lt;em&gt;low&lt;/em&gt; pulse. The button is always connected as the broadcaster input. This is similar to a &lt;a target=&quot;_blank&quot; href=&quot;https://instrumentationtools.com/basics-of-switches/&quot;&gt;normally closed switch&lt;/a&gt;.&lt;/li&gt;&lt;li&gt;&lt;strong&gt;Test module&lt;/strong&gt;: a module that receives and processes inputs but has no output.&lt;/li&gt;&lt;p&gt;One important thing to keep in mind is that modules only send output pulses when they receive a pulse as input.&lt;/p&gt;&lt;/div&gt;&lt;h3&gt;Problem input&lt;/h3&gt;&lt;div&gt;The example input looks something like this:&lt;pre class=&quot;triple-quote &quot;&gt;broadcaster -&gt; a, b, c
%a -&gt; b
%b -&gt; c
%c -&gt; inv
&amp;inv -&gt; a&lt;/pre&gt;&lt;p&gt;There will always be just one Broadcaster module called &quot;broadcaster&quot; that has the Button connected as input. 
In this case it has modules &quot;a&quot;, &quot;b&quot; and &quot;c&quot; connected to its output.&lt;/p&gt;&lt;p&gt;The arrow &lt;span class=&quot;single-quote&quot;&gt;-&gt;&lt;/span&gt; indicates what modules are connected to the output of the module to the left.&lt;/p&gt;&lt;p&gt;Lines that start with &lt;span class=&quot;single-quote&quot;&gt;%&lt;/span&gt; mean the module is a &lt;strong&gt;Flip flop&lt;/strong&gt;, for example: &lt;span class=&quot;single-quote&quot;&gt;%a -&gt; b&lt;/span&gt; indicates that there's a flip flop called &quot;a&quot; whose output is connected to module &quot;b&quot;'s input.&lt;/p&gt;&lt;p&gt;Lines that start with &lt;span class=&quot;single-quote&quot;&gt;&amp;&lt;/span&gt; mean the module is a &lt;strong&gt;Conjunction&lt;/strong&gt;, for example: &lt;span class=&quot;single-quote&quot;&gt;&amp;inv -&gt; a&lt;/span&gt; indicates that there's a conjunction called &quot;inv&quot; whose output is connected to module &quot;a&quot;'s input.&lt;/p&gt;Let's analyze how this circuit behaves once the button is pushed:&lt;pre class=&quot;triple-quote &quot;&gt;button -0-&gt; broadcaster
broadcaster -0-&gt; a
broadcaster -0-&gt; b
broadcaster -0-&gt; c
a -1-&gt; b
b -1-&gt; c
c -1-&gt; inv
inv -0-&gt; a
a -0-&gt; b
b -0-&gt; c
c -0-&gt; inv
inv -1-&gt; a&lt;/pre&gt;In this example 8 &lt;em&gt;low&lt;/em&gt; (&lt;span class=&quot;single-quote&quot;&gt;0&lt;/span&gt;) pulses and 4 &lt;em&gt;high&lt;/em&gt; (&lt;span class=&quot;single-quote&quot;&gt;1&lt;/span&gt;) pulses are sent.&lt;/div&gt;&lt;h3&gt;Part 1&lt;/h3&gt;&lt;div&gt;&lt;p&gt;To solve the first part we need to calculate the product of the &lt;em&gt;high&lt;/em&gt; and &lt;em&gt;low&lt;/em&gt; pulses sent between devices.&lt;/p&gt;&lt;p&gt;In the previous example that would be 8*4=32.&lt;/p&gt;&lt;p&gt;But this time we don't push the button only once, but we push it &lt;span class=&quot;single-quote&quot;&gt;1000&lt;/span&gt; times. 
Each time we push the button we wait until all signals propagate and the circuit settles into a state before pushing the button again.&lt;/p&gt;&lt;/div&gt;&lt;h3&gt;Solution&lt;/h3&gt;&lt;div&gt;First I started by modelling the devices as objects, starting with a single base class that holds most of the common behaviour.&lt;pre class=&quot;triple-quote python&quot;&gt;from abc import ABC

measure_pulses = {0: 0, 1: 0}

class Module(ABC):
    def __init__(self, name: str):
        self.name = name
        self.outputs = []

    def receive_pulse(self, mod: &quot;Module&quot;, pulse: int) -&gt; list[tuple[&quot;Module&quot;, int]]:
        measure_pulses[pulse] += 1
        print(f&quot;{mod and mod.name} -{pulse}-&gt; {self.name}&quot;)
        return self.process_pulse(mod, pulse)

    def connect_output(self, mod: &quot;Module&quot;):
        self.outputs.append(mod)

    def propagate_pulse(self, pulse: int):
        mods = []
        for m in self.outputs:
            mods.append((m, pulse))
        return mods

    def process_pulse(self, mod: &quot;Module&quot;, pulse: int):
        raise NotImplementedError()

    def __str__(self) -&gt; str:
        return f&quot;{self.__class__.__name__}(name={self.name})&quot;

    def __repr__(self) -&gt; str:
        return str(self)&lt;/pre&gt;&lt;p&gt;What we see here is that we expect all modules to have a &lt;span class=&quot;single-quote&quot;&gt;name&lt;/span&gt; and &lt;span class=&quot;single-quote&quot;&gt;outputs&lt;/span&gt;. See &lt;span class=&quot;single-quote&quot;&gt;__init__()&lt;/span&gt;, &lt;span class=&quot;single-quote&quot;&gt;__str__()&lt;/span&gt;, &lt;span class=&quot;single-quote&quot;&gt;__repr__()&lt;/span&gt; and &lt;span class=&quot;single-quote&quot;&gt;connect_output()&lt;/span&gt;.&lt;/p&gt;&lt;p&gt;Each module can receive a pulse &lt;span class=&quot;single-quote&quot;&gt;0&lt;/span&gt; or &lt;span class=&quot;single-quote&quot;&gt;1&lt;/span&gt; from another module. 
See &lt;span class=&quot;single-quote&quot;&gt;receive_pulse()&lt;/span&gt;. Each time we process a pulse we record it in a global dict called &lt;span class=&quot;single-quote&quot;&gt;measure_pulses&lt;/span&gt;.&lt;/p&gt;&lt;p&gt;We also leave &lt;span class=&quot;single-quote&quot;&gt;process_pulse()&lt;/span&gt; to be defined by each particular module type.&lt;/p&gt;&lt;p&gt;We have a method that returns a list of all modules to which signals should be propagated. See &lt;span class=&quot;single-quote&quot;&gt;propagate_pulse()&lt;/span&gt;.&lt;/p&gt;Let's start with the easiest module type:&lt;pre class=&quot;triple-quote python&quot;&gt;class TestModule(Module):
    def process_pulse(self, mod: &quot;Module&quot;, pulse: int):
        return []&lt;/pre&gt;Given that it's a dummy module, it doesn't do anything when it receives an input.&lt;pre class=&quot;triple-quote python&quot;&gt;class Broadcaster(Module):
    def process_pulse(self, mod: &quot;Module&quot;, pulse: int):
        return super().propagate_pulse(pulse)&lt;/pre&gt;As expected, the Broadcaster always propagates the received input to all its outputs.&lt;pre class=&quot;triple-quote python&quot;&gt;class FlipFlop(Module):
    def __init__(self, name: str):
        super().__init__(name)
        self.state = 0

    def process_pulse(self, mod: &quot;Module&quot;, pulse: int):
        if pulse == 0:
            self.state = (self.state + 1) % 2
            return super().propagate_pulse(self.state)
        return []&lt;/pre&gt;&lt;p&gt;The flip flop starts out turned off. 
See &lt;span class=&quot;single-quote&quot;&gt;self.state = 0&lt;/span&gt; in &lt;span class=&quot;single-quote&quot;&gt;__init__()&lt;/span&gt;.&lt;/p&gt;&lt;p&gt;In &lt;span class=&quot;single-quote&quot;&gt;process_pulse()&lt;/span&gt; we implement the behaviour:&lt;/p&gt;&lt;li&gt;If it receives a &lt;em&gt;low&lt;/em&gt; pulse, it toggles the state and sends a pulse equal to the state to all its outputs.&lt;/li&gt;&lt;li&gt;Otherwise it doesn't do anything.&lt;/li&gt;&lt;pre class=&quot;triple-quote python&quot;&gt;class Conjunction(Module):
    def __init__(self, name: str):
        super().__init__(name)
        self.memory = {}

    def remember_input(self, mod: Module):
        self.memory[mod.name] = 0

    def process_pulse(self, mod: Module, pulse: int):
        self.memory[mod.name] = pulse
        if all(self.memory.values()):
            return self.propagate_pulse(0)
        return self.propagate_pulse(1)&lt;/pre&gt;&lt;p&gt;The conjunction initializes its memory as empty. See &lt;span class=&quot;single-quote&quot;&gt;__init__()&lt;/span&gt;.&lt;/p&gt;&lt;p&gt;Each time a module is plugged in as an input it remembers it as OFF (&lt;span class=&quot;single-quote&quot;&gt;0&lt;/span&gt;). See &lt;span class=&quot;single-quote&quot;&gt;remember_input()&lt;/span&gt;.&lt;/p&gt;&lt;p&gt;The way it processes pulses is by first recording the pulse for the input in its memory. Then, if all inputs are &lt;span class=&quot;single-quote&quot;&gt;1&lt;/span&gt;s, it sends a &lt;span class=&quot;single-quote&quot;&gt;0&lt;/span&gt; pulse to all its outputs.&lt;/p&gt;&lt;p&gt;Otherwise it sends a &lt;span class=&quot;single-quote&quot;&gt;1&lt;/span&gt; pulse to all its outputs.&lt;/p&gt;&lt;p&gt;At this point we have all our building blocks for solving this problem. 
We only need to parse the input and write something that pushes the button and makes sure signals are propagated to the end.&lt;/p&gt;Parsing modules is straightforward:&lt;pre class=&quot;triple-quote python&quot;&gt;def parse_modules(modules: list) -&gt; dict[str, Module]:
    modules_by_name = {}
    outputs_by_name = {}
    # Parse all modules into their corresponding class and store
    # them in a dict.
    for m in modules:
        module_type = m[0]
        module_outputs = [o.strip() for o in m[1].split(&quot;,&quot;) if o.strip()]
        if module_type.startswith(&quot;broadcaster&quot;):
            modules_by_name[module_type] = Broadcaster(module_type)
            outputs_by_name[module_type] = module_outputs
        elif module_type.startswith(&quot;%&quot;):
            modules_by_name[module_type[1:]] = FlipFlop(module_type[1:])
            outputs_by_name[module_type[1:]] = module_outputs
        elif module_type.startswith(&quot;&amp;&quot;):
            modules_by_name[module_type[1:]] = Conjunction(module_type[1:])
            outputs_by_name[module_type[1:]] = module_outputs
    # Once all the modules are parsed, connect their outputs.
    # If the module doesn't exist at this point, it is a TestModule.
    # If the module is a Conjunction, call remember_input().
    for name, outputs in outputs_by_name.items():
        for mod_name in outputs:
            mod = modules_by_name.get(mod_name, TestModule(mod_name))
            modules_by_name[name].connect_output(mod)
            if isinstance(mod, Conjunction):
                mod.remember_input(modules_by_name[name])
    return modules_by_name&lt;/pre&gt;&lt;p&gt;If we parse our example using that function we will receive a dictionary as its output. 
Keys are module names and values are the objects representing the module.&lt;/p&gt;&lt;p&gt;If we parse the example we get something like this:&lt;/p&gt;&lt;pre class=&quot;triple-quote python&quot;&gt;example = &quot;&quot;&quot;broadcaster -&gt; a, b, c
%a -&gt; b
%b -&gt; c
%c -&gt; inv
&amp;inv -&gt; a&quot;&quot;&quot;

example_modules = [m.split(&quot; -&gt; &quot;) for m in example.splitlines() if m.strip()]
print(parse_modules(example_modules))

# Output
{
    'broadcaster': Broadcaster(name=broadcaster),
    'a': FlipFlop(name=a),
    'b': FlipFlop(name=b),
    'c': FlipFlop(name=c),
    'inv': Conjunction(name=inv)
}&lt;/pre&gt;Then we need a function that pushes the button and makes sure all signals are propagated:&lt;pre class=&quot;triple-quote python&quot;&gt;def push_button(modules_by_name: dict[str, Module]):
    broad = modules_by_name[&quot;broadcaster&quot;]
    queue = [(broad, broad.receive_pulse(None, 0))]
    while queue:
        current, signals = queue.pop(0)
        for mod, pulse in signals:
            queue.append((mod, mod.receive_pulse(current, pulse)))&lt;/pre&gt;&lt;p&gt;Here, we look up the broadcaster module by name. 
Then we send a pulse to the broadcaster (note that we pass &lt;span class=&quot;single-quote&quot;&gt;None&lt;/span&gt; as the sending module because we didn't implement a button class).&lt;/p&gt;&lt;p&gt;We store the current module (broadcaster) along with all the propagated signals (return value from &lt;span class=&quot;single-quote&quot;&gt;receive_pulse()&lt;/span&gt;) in a queue to be processed.&lt;/p&gt;&lt;p&gt;While the signal queue to be processed is not empty we do the following:&lt;/p&gt;&lt;li&gt;Extract the first element of the queue.&lt;/li&gt;&lt;li&gt;Go through all the signals that this element is sending.&lt;/li&gt;&lt;li&gt;Send the pulses to each corresponding module and store them in the queue to be processed.&lt;/li&gt;&lt;p&gt;This process will stop when all responses from &lt;span class=&quot;single-quote&quot;&gt;receive_pulse()&lt;/span&gt; are empty and there are no more signals added to the queue.&lt;/p&gt;&lt;p&gt;If we run this for our example:&lt;/p&gt;&lt;pre class=&quot;triple-quote python&quot;&gt;example_modules = parse_modules(example_modules)
push_button(example_modules)

# Output
None -0-&gt; broadcaster
broadcaster -0-&gt; a
broadcaster -0-&gt; b
broadcaster -0-&gt; c
a -1-&gt; b
b -1-&gt; c
c -1-&gt; inv
inv -0-&gt; a
a -0-&gt; b
b -0-&gt; c
c -0-&gt; inv
inv -1-&gt; a&lt;/pre&gt;&lt;p&gt;It looks the same as when we analyzed the example above!&lt;/p&gt;&lt;p&gt;We're ready to process our problem's input. 
(Remember to comment out the print statement inside &lt;span class=&quot;single-quote&quot;&gt;receive_pulse()&lt;/span&gt;).&lt;/p&gt;&lt;pre class=&quot;triple-quote python&quot;&gt;modules = open(&quot;input.txt&quot;, &quot;r&quot;).read().strip()
modules = [m.split(&quot; -&gt; &quot;) for m in modules.splitlines() if m.strip()]
modules = parse_modules(modules)

for _ in range(1000):
    push_button(modules)

print(&quot;result:&quot;, measure_pulses[0] * measure_pulses[1])

# Output
result: x&lt;/pre&gt;Based on your problem input, &lt;span class=&quot;single-quote&quot;&gt;x&lt;/span&gt; will be the solution.&lt;/div&gt;&lt;h3&gt;Part 2&lt;/h3&gt;&lt;div&gt;&lt;p&gt;This part, as always, is much trickier than the first. It doesn't involve many code changes, just figuring out a way of avoiding large computations.&lt;/p&gt;&lt;p&gt;For this part, the problem tells us that there's a module called &lt;span class=&quot;single-quote&quot;&gt;rx&lt;/span&gt;. And we need to find out the lowest number of button presses that will make the &lt;span class=&quot;single-quote&quot;&gt;rx&lt;/span&gt; module receive a &lt;em&gt;low&lt;/em&gt; pulse.&lt;/p&gt;&lt;p&gt;As I have learned throughout this entire challenge, just naively letting it run and watching for when the &lt;span class=&quot;single-quote&quot;&gt;rx&lt;/span&gt; module gets a &lt;em&gt;low&lt;/em&gt; signal will get me nowhere. It will run forever.&lt;/p&gt;&lt;p&gt;So, taking a look at the input and seeing what the &lt;span class=&quot;single-quote&quot;&gt;rx&lt;/span&gt; module is actually connected to might provide some guidance.&lt;/p&gt;The following is for my input (I don't know if all problem inputs are the same). 
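That inspection of the input can also be scripted. Here's a small sketch (my addition, not part of the original solution; `find_feeders` is a hypothetical helper) that lists which modules are wired into a given module, assuming the same `name -> outputs` line format as the puzzle input:

```python
# Hypothetical helper (not from the original post): given the puzzle's
# "name -> out1, out2" lines, return the modules that feed `target`.
def find_feeders(lines, target):
    feeders = []
    for line in lines:
        name, outputs = line.split(" -> ")
        if target in [o.strip() for o in outputs.split(",")]:
            # Strip the %/& type prefix to get the bare module name.
            feeders.append(name.lstrip("%&"))
    return feeders

example = ["broadcaster -> a, b, c", "%a -> b", "%b -> c", "%c -> inv", "&inv -> a"]
print(find_feeders(example, "a"))  # ['broadcaster', 'inv']
```
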
Looking up &quot;rx&quot; in the input I find a single line:&lt;pre class=&quot;triple-quote &quot;&gt;...
&amp;lv -&gt; rx
...&lt;/pre&gt;&lt;p&gt;That means &lt;span class=&quot;single-quote&quot;&gt;rx&lt;/span&gt; is a &lt;span class=&quot;single-quote&quot;&gt;TestModule&lt;/span&gt; (a dummy module that has nothing connected to its output). And it has only one input: a Conjunction called &lt;span class=&quot;single-quote&quot;&gt;lv&lt;/span&gt;.&lt;/p&gt;Ok, that feels like progress. Let's see what lv is connected to:&lt;pre class=&quot;triple-quote &quot;&gt;...
&amp;st -&gt; lv
&amp;tn -&gt; lv
&amp;hh -&gt; lv
&amp;dt -&gt; lv
...&lt;/pre&gt;&lt;p&gt;Four other Conjunctions are connected as inputs of lv. That's interesting. Because &lt;span class=&quot;single-quote&quot;&gt;lv&lt;/span&gt; is a Conjunction, it means that to send the &lt;em&gt;low&lt;/em&gt; pulse required by &lt;span class=&quot;single-quote&quot;&gt;rx&lt;/span&gt; it must receive &lt;em&gt;high&lt;/em&gt; pulses from all its inputs.&lt;/p&gt;&lt;p&gt;The solution from here is kind of intuitive. If we figure out how many button pushes it takes for each of the input devices to send a 1 signal, we can multiply them together and get the result.&lt;/p&gt;&lt;p&gt;Let me explain. 
Let's say &lt;span class=&quot;single-quote&quot;&gt;st&lt;/span&gt; sends a 1 on every button push, &lt;span class=&quot;single-quote&quot;&gt;tn&lt;/span&gt; sends a 1 every second button push (this means you have to press the button twice to get &lt;span class=&quot;single-quote&quot;&gt;tn&lt;/span&gt; to send a 1 as an output), &lt;span class=&quot;single-quote&quot;&gt;hh&lt;/span&gt; sends a 1 every fourth button push and &lt;span class=&quot;single-quote&quot;&gt;dt&lt;/span&gt; sends a 1 every eighth button push.&lt;/p&gt;So it looks like this:&lt;pre class=&quot;triple-quote &quot;&gt;module | pushes
---------------
  st   |   1
  tn   |   2
  hh   |   4
  dt   |   8&lt;/pre&gt;&lt;p&gt;In this example, if we push the button 8 times, they are all going to send a &lt;em&gt;high&lt;/em&gt; pulse, because 8 is divisible by 1, 2, 4 and 8.&lt;/p&gt;If the table were different:&lt;pre class=&quot;triple-quote &quot;&gt;module | pushes
---------------
  st   |   1
  tn   |   3
  hh   |   5
  dt   |   7&lt;/pre&gt;&lt;p&gt;In this case there's no obvious number of times we should push the button. But if we multiply the numbers together we get a number that is divisible by every number in the table. 
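This multiply-everything reasoning can be sanity-checked in a couple of lines. A small sketch (my addition; as the edit at the end of this post notes, the robust operation is the least common multiple, which equals the plain product only when the periods are pairwise coprime):

```python
from math import lcm  # variadic lcm is available since Python 3.9

print(lcm(1, 3, 5, 7))  # 105, same as 1 * 3 * 5 * 7: the periods are pairwise coprime
print(lcm(1, 2, 4, 8))  # 8, matching the first table; the plain product (64) overshoots
```
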
Pushing the button 1 * 3 * 5 * 7 = 105 times will make all the outputs send a 1, and consequently &lt;span class=&quot;single-quote&quot;&gt;rx&lt;/span&gt; will receive a 0.&lt;/p&gt;&lt;/div&gt;&lt;h3&gt;Solution&lt;/h3&gt;&lt;div&gt;What we need to do then is to figure out after how many button presses we get a 1 on each of those modules.&lt;pre class=&quot;triple-quote python&quot;&gt;from collections import defaultdict

# Store number of button presses in a global variable
ITERATIONS = 0
# Store high pulses for target modules
OUT_PULSES = defaultdict(list)

class Conjunction(Module):
    def __init__(self, name: str):
        super().__init__(name)
        self.memory = {}

    def remember_input(self, mod: Module):
        self.memory[mod.name] = 0

    def start_recording(self):
        self.recording = True

    def record(self):
        if hasattr(self, &quot;recording&quot;):
            OUT_PULSES[self.name].append(ITERATIONS)

    def process_pulse(self, mod: Module, pulse: int):
        self.memory[mod.name] = pulse
        if all(self.memory.values()):
            return self.propagate_pulse(0)
        self.record()
        return self.propagate_pulse(1)&lt;/pre&gt;&lt;p&gt;We introduced 2 new methods to the conjunction module: &lt;span class=&quot;single-quote&quot;&gt;start_recording()&lt;/span&gt; and &lt;span class=&quot;single-quote&quot;&gt;record()&lt;/span&gt;. The first just initializes a bool attribute. 
And the second makes sure to only record &lt;em&gt;high&lt;/em&gt; pulses for objects that have been initialized (method &lt;span class=&quot;single-quote&quot;&gt;start_recording()&lt;/span&gt; called).&lt;/p&gt;&lt;p&gt;We also introduced 2 global variables: &lt;span class=&quot;single-quote&quot;&gt;ITERATIONS&lt;/span&gt; to keep track of button pushes and &lt;span class=&quot;single-quote&quot;&gt;OUT_PULSES&lt;/span&gt; to track each time one of the modules outputs a &lt;em&gt;high&lt;/em&gt; pulse.&lt;/p&gt;Now we need to make those specific modules record their outputs:&lt;pre class=&quot;triple-quote python&quot;&gt;# Get the &quot;lv&quot; module by name
lv = modules[&quot;lv&quot;]
lv_inputs = [modules[k] for k in lv.memory.keys()]
for m in lv_inputs:
    m.start_recording()&lt;/pre&gt;I wasn't sure if the cycle was always going to be the same, so just to be sure I did &lt;span class=&quot;single-quote&quot;&gt;100_000&lt;/span&gt; button pushes and recorded all the &quot;1&quot; outputs for those modules.&lt;pre class=&quot;triple-quote python&quot;&gt;for i in range(100_000):
    ITERATIONS += 1
    push_button(modules)

print(OUT_PULSES)

# Output
{'hh': [3769, 7538, 11307, 15076, 18845, 22614, 26383, 30152, 33921, 37690, 41459, 45228, 48997, 52766, 56535, 60304, 64073, 67842, 71611, 75380, 79149, 82918, 86687, 90456, 94225, 97994], 'tn': [3863, 7726, 11589, 15452, 19315, 23178, 27041, 30904, 34767, 38630, 42493, 46356, 50219, 54082, 57945, 61808, 65671, 69534, 73397, 77260, 81123, 84986, 88849, 92712, 96575], 'st': [3929, 7858, 11787, 15716, 19645, 23574, 27503, 31432, 35361, 39290, 43219, 47148, 51077, 55006, 58935, 62864, 66793, 70722, 74651, 78580, 82509, 86438, 90367, 94296, 98225], 'dt': [4079, 8158, 12237, 16316, 20395, 24474, 28553, 32632, 36711, 40790, 44869, 48948, 53027, 57106, 61185, 65264, 69343, 73422, 77501, 81580, 85659, 89738, 93817, 97896]}&lt;/pre&gt;We can observe that for each module we have a periodicity given by:&lt;pre 
class=&quot;triple-quote python&quot;&gt;hh = n*3769
tn = n*3863
st = n*3929
dt = n*4079&lt;/pre&gt;&lt;p&gt;This means we can just multiply together the first element of each module's list and we'll get our result.&lt;/p&gt;In my case it was:&lt;pre class=&quot;triple-quote python&quot;&gt;accum = 1
for name, pulses in OUT_PULSES.items():
    accum *= pulses[0]

print(&quot;result:&quot;, accum)

# Output
result: 233338595643977&lt;/pre&gt;&lt;/div&gt;&lt;div&gt;&lt;p&gt;Edit: As some people have pointed out in the &lt;a href='https://news.ycombinator.com/item?id=38807477' target='_blank'&gt;HN discussion&lt;/a&gt;, just multiplying the numbers together only works because the numbers are &lt;a href='https://en.wikipedia.org/wiki/Coprime_integers' target='_blank'&gt;coprime&lt;/a&gt; and the correct solution is to use &lt;a href='https://en.wikipedia.org/wiki/Least_common_multiple' target='_blank'&gt;LCM&lt;/a&gt;.&lt;/p&gt;&lt;/div&gt;&lt;h2&gt;Closing words&lt;/h2&gt;&lt;div&gt;&lt;p&gt;This problem is my favourite because it has a few characteristics that I personally enjoy:&lt;/p&gt;&lt;li&gt;It's based on real-world stuff, in this case electronic devices (which is also a plus because they're fun).&lt;/li&gt;&lt;li&gt;It can be easily translated to an OOP approach, which makes it easy to implement and understand.&lt;/li&gt;&lt;li&gt;To solve the second part you need to look at the data and make a solution for your particular input.&lt;/li&gt;&lt;li&gt;It doesn't involve any Graph traversal or specific Math, Calculus or Algebra knowledge. 
Or any obscure CS algorithm.&lt;/li&gt;&lt;p&gt;In the end, this is one of my favourites because to solve it you just have to understand the problem and understand the data.&lt;/p&gt;&lt;p&gt;Link to my GitHub project with the solutions: &lt;a target=&quot;_blank&quot; href=&quot;https://github.com/mliezun/aoc&quot;&gt;https://github.com/mliezun/aoc&lt;/a&gt;.&lt;/p&gt;&lt;/div&gt;&lt;/article&gt;</content><author><name>Miguel Liezun</name></author><category term="posts"/><summary type="html">Advent of Code 2023 has gone by; this is my first year participating. It's been fun and I want to share the problem that I enjoyed the most. It's based on simple electronic devices sending signals or pulses to each other.</summary></entry><entry><title type="html">I rewrote my toy language interpreter in Rust</title><id>https://mliezun.com/2023/11/23/grotsky-rust-part3.html</id><updated>2023-11-23T00:00:00Z</updated><link href="https://mliezun.com/2023/11/23/grotsky-rust-part3.html" rel="alternate" type="text/html"/><content type="html" xml:base="https://mliezun.com/2023/11/23/grotsky-rust-part3.html">&lt;article&gt;&lt;h2&gt;I rewrote my toy language interpreter in Rust&lt;/h2&gt;&lt;p&gt;I'm rewriting Grotsky (my toy programming language) in Rust, the previous implementation
was done in Go. The goal of the rewrite is to improve my Rust skills and to improve the performance of Grotsky by at least 10x. This has been a series of posts, and this one is the latest. Hopefully the best and most insightful of them all.&lt;/p&gt;&lt;p&gt;In previous posts:&lt;ul&gt;&lt;li&gt;&lt;a href=&quot;https://mliezun.com/2023/06/02/rewrite-grotsky-rust.html&quot;&gt;Rewrite my toy language interpreter in Rust&lt;/a&gt;&lt;/li&gt;&lt;/ul&gt;&lt;ul&gt;&lt;li&gt;&lt;a href=&quot;https://mliezun.com/2023/09/23/grotsky-rust-part2.html&quot;&gt;Rewrite my toy language interpreter in Rust, an update&lt;/a&gt;&lt;/li&gt;&lt;/ul&gt; I've outlined a plan to migrate Grotsky to a Rust-based platform.&lt;/p&gt;&lt;p&gt;
			Originally, my plan was very ambitious and I thought I would be able to finish the transition
			in like two months.
			In reality it took five months :-)
			&lt;/p&gt;&lt;h3&gt;Performance improvement&lt;/h3&gt;&lt;p&gt;I was aiming at a 10x improvement. In reality it's not that much. I ran various benchmarks and got at most a 4x improvement. Which is not great, but also not that bad given that I know very little about how to write high-performance Rust. The interpreter is written in the dumbest and easiest way I could manage.&lt;/p&gt;&lt;p&gt;Let's look at some numbers for different programs.&lt;/p&gt;&lt;h4&gt;Loop program&lt;/h4&gt;&lt;p&gt;&lt;pre class=&quot;triple-quote js language-css&quot;&gt;let start = io.clock()
fn test() {
    let a = 1
    while a &lt; 10000000 {
        a = a + 1
    }
}
test()
io.println(io.clock() - start)&lt;/pre&gt;&lt;/p&gt;&lt;p&gt;The result is:&lt;pre class=&quot;triple-quote js language-css&quot;&gt;trial #1
build/grotsky   best 5.135s  3.877x time of best
build/grotsky-rs   best 1.325s  287.6800% faster
trial #2
build/grotsky   best 5.052s  3.814x time of best
build/grotsky-rs   best 1.325s  281.4182% faster
trial #3
build/grotsky   best 5.035s  3.802x time of best
build/grotsky-rs   best 1.325s  280.1663% faster
trial #4
build/grotsky   best 5.003s  3.777x time of best
build/grotsky-rs   best 1.325s  277.6831% faster
trial #5
build/grotsky   best 5.003s  3.777x time of best
build/grotsky-rs   best 1.325s  277.6831% faster&lt;/pre&gt;&lt;/p&gt;&lt;h4&gt;Recursive fibonacci&lt;/h4&gt;&lt;p&gt;&lt;pre class=&quot;triple-quote js language-css&quot;&gt;let start = io.clock()
fn fib(n) {
	if n &lt;= 2 {
		return n
	}
	return fib(n-2) + fib(n-1)
}
fib(28)
io.println(io.clock() - start)&lt;/pre&gt;&lt;/p&gt;&lt;p&gt;The result is:&lt;pre class=&quot;triple-quote js language-css&quot;&gt;trial #1
build/grotsky   best 0.8409s  294.5155% faster
build/grotsky-rs   best 3.317s  3.945x time of best
trial #2
build/grotsky   best 0.8168s  271.3829% faster
build/grotsky-rs   best 3.033s  3.714x time of best
trial #3
build/grotsky   best 0.797s  245.6835% faster
build/grotsky-rs   best 2.755s  3.457x time of best
trial #4
build/grotsky   best 0.7784s  249.9964% faster
build/grotsky-rs   best 2.724s  3.5x time of best
trial #5
build/grotsky   best 0.7784s  249.9964% faster
build/grotsky-rs   best 2.724s  3.5x time of best&lt;/pre&gt;&lt;/p&gt;&lt;p&gt;In this case it's like 3.5x slower. This is due to function calls. I'm not very well versed in Rust, so on each call I'm copying a lot of data over and over. In the Go implementation everything is just pointers, so there's less copying.&lt;/p&gt;&lt;h3&gt;Compile to bytecode&lt;/h3&gt;&lt;p&gt;With the Rust implementation, compiling to bytecode was added. Now it's possible to generate a bytecode file and run it later. This is a way of distributing files without giving away source code, and it's also a little bit more performant because you skip the parsing and compilation phases.&lt;/p&gt;&lt;p&gt;How it works:&lt;pre class=&quot;triple-quote bash language-css&quot;&gt;grotsky compile example.gr  # Compile file
grotsky example.grc  # Run compiled file&lt;/pre&gt;&lt;/p&gt;&lt;h3&gt;Memory model&lt;/h3&gt;&lt;p&gt;Grotsky is a reference-counted language. We're using Rust's Rc and RefCell to keep track of values.&lt;/p&gt;&lt;p&gt;&lt;pre class=&quot;triple-quote rust language-rust&quot;&gt;pub struct MutValue&lt;T&gt;(pub Rc&lt;RefCell&lt;T&gt;&gt;);

impl&lt;T&gt; MutValue&lt;T&gt; {
    pub fn new(obj: T) -&gt; Self {
        MutValue::&lt;T&gt;(Rc::new(RefCell::new(obj)))
    }
}

pub enum Value {
    Class(MutValue&lt;ClassValue&gt;),
    Object(MutValue&lt;ObjectValue&gt;),
    Dict(MutValue&lt;DictValue&gt;),
    List(MutValue&lt;ListValue&gt;),
    Fn(MutValue&lt;FnValue&gt;),
    Native(NativeValue),
    Number(NumberValue),
    String(StringValue),
    Bytes(BytesValue),
    Bool(BoolValue),
    Slice(SliceValue),
    Nil,
}

pub enum Record {
    Val(Value),
    Ref(MutValue&lt;Value&gt;),
}

pub struct VM {
    pub activation_records: Vec&lt;Record&gt;,
}&lt;/pre&gt;&lt;/p&gt;&lt;p&gt;Most of the simple values are just stored as-is: Native (builtin functions), Number, String, Bytes, Bool, Slice and Nil.&lt;/p&gt;&lt;p&gt;For the other, complex values we need to use 'pointers', which in this case are MutValues.&lt;/p&gt;&lt;p&gt;Then the Grotsky VM uses Records, which can be a plain Value or a reference to a Value. The records are registers; each function has up to 255 registers. The references to values are used to store upvalues. A register is turned into an upvalue when a variable is closed over by another function.&lt;/p&gt;&lt;p&gt;This implementation ends up being very slow, but easy to manage, because the Rust stdlib does all the work.&lt;/p&gt;&lt;h3&gt;Using Rust in this blogpost&lt;/h3&gt;&lt;p&gt;As you may know, this &lt;a href='https://mliezun.com/2021/10/04/new-blog-engine.html' target='_blank'&gt;blog is powered by grotsky&lt;/a&gt;. I'm happy to say that I successfully migrated from grotsky to grotsky-rs as the backend for the blog. And what you're reading now is generated by the latest implementation of the language using Rust.&lt;/p&gt;&lt;p&gt;Even for local development the Rust version is used, which means I'm using a TCP server and an HTTP implementation written in Grotsky.&lt;/p&gt;&lt;h3&gt;Closing remarks&lt;/h3&gt;&lt;p&gt;This has been a great learning experience. I'm happy to have finished, because it required a lot of effort. I'm not going to announce any new work on this interpreter, but I would like to keep adding stuff, improving it further to make it more performant and more usable.&lt;/p&gt;&lt;p&gt;In the end I encourage everyone to try it and also start their own project. It's always cool to see what everyone else is doing.&lt;/p&gt;&lt;p&gt;Thanks for reading.&lt;/p&gt;&lt;/article&gt;</content><author><name>Miguel Liezun</name></author><category term="posts"/><summary type="html">I'm rewriting Grotsky (my toy programming language) in Rust, the previous implementation
was done in Go. The goal of the rewrite is to improve my Rust skills, and to improve the performance of Grotsky,
by at least 10x. This has been a serious of posts, this one is the latest one. Hopefully the best and most insightful
of them all.</summary></entry><entry><title type="html">Rewrite my toy language interpreter in Rust, an update</title><id>https://mliezun.com/2023/09/23/grotsky-rust-part2.html</id><updated>2023-09-23T00:00:00Z</updated><link href="https://mliezun.com/2023/09/23/grotsky-rust-part2.html" rel="alternate" type="text/html"/><content type="html" xml:base="https://mliezun.com/2023/09/23/grotsky-rust-part2.html">&lt;article&gt;&lt;h2&gt;Rewrite my toy language interpreter in Rust, an update&lt;/h2&gt;&lt;p&gt;Im rewriting Grotsky (my toy programming language) in Rust, the previous implementation
was done in Go. The goal of the rewrite is to improve my Rust skills and to improve the performance of Grotsky
by at least 10x.&lt;/p&gt;&lt;p&gt;In a &lt;a href=&quot;https://mliezun.com/2023/06/02/rewrite-grotsky-rust.html&quot;&gt;previous post&lt;/a&gt; I outlined a plan to migrate Grotsky to a Rust-based platform.&lt;/p&gt;&lt;p&gt;I was supposed to finish it more than a month ago :').&lt;/p&gt;&lt;p&gt;Of course, I wasn't able to do it.&lt;/p&gt;&lt;p&gt;I think my original estimation was very flawed, but I also didn't put in enough work.&lt;/p&gt;&lt;p&gt;I missed my estimate because I decided to skip the step of first writing a Tree-based interpreter in Rust and go directly to a Register-based virtual machine.&lt;/p&gt;&lt;p&gt;The reason it is taking me so long is that I travel a lot. I've been a digital nomad for more than a year now, and when I finish working I prefer to go outside and explore the place where I'm staying. I go to a lot of interesting places: right now I'm in Turkey and heading to Hong Kong, and after that I'm going to South Korea. So it makes more sense to actually experience the places where I'm staying than to stay inside writing code.&lt;/p&gt;&lt;p&gt;I'm here to propose a new roadmap that is more achievable with the time I have to work on this. The idea is to finish it before the end of the year.&lt;/p&gt;&lt;p&gt;Plus, I recently heard some advice that I think is worth trying: work on something for 15 minutes every day. I do have 15 minutes every day where I'm probably scrolling through social media or just consuming content. I can make better use of that time by putting it into this project.&lt;/p&gt;&lt;h3&gt;Updated Roadmap&lt;/h3&gt;&lt;p&gt;&lt;ul&gt;&lt;li&gt;&lt;strong&gt;Sep 30:&lt;/strong&gt; Publish a blogpost about the memory model of the Rust implementation.&lt;/li&gt;&lt;li&gt;&lt;strong&gt;Oct 15:&lt;/strong&gt; Migrate automated tests to the new backend and make sure they pass.&lt;/li&gt;&lt;li&gt;&lt;strong&gt;Oct 30:&lt;/strong&gt; Implement stdlib in bytecode interpreter. 
Share results.&lt;/li&gt;&lt;li&gt;&lt;strong&gt;Nov 15:&lt;/strong&gt; Add the ability to compile to bytecode and run from bytecode.&lt;/li&gt;&lt;li&gt;&lt;strong&gt;Dec 30:&lt;/strong&gt; Finish up everything and publish a blogpost update before the end of the year.&lt;/li&gt;&lt;/ul&gt;&lt;/p&gt;&lt;p&gt;This is gonna be rough, because of the traveling and being jet-lagged all the time. But I think the roadmap is easy enough that I can do it.&lt;/p&gt;&lt;p&gt;Thanks for reading.&lt;/p&gt;&lt;/article&gt;</content><author><name>Miguel Liezun</name></author><category term="posts"/><summary type="html">I'm rewriting Grotsky (my toy programming language) in Rust; the previous implementation
was done in Go. The goal of the rewrite is to improve my Rust skills and to improve the performance of Grotsky
by at least 10x.</summary></entry><entry><title type="html">The end of a side project</title><id>https://mliezun.com/2023/07/15/the-end-of-a-side-project.html</id><updated>2023-07-15T00:00:00Z</updated><link href="https://mliezun.com/2023/07/15/the-end-of-a-side-project.html" rel="alternate" type="text/html"/><content type="html" xml:base="https://mliezun.com/2023/07/15/the-end-of-a-side-project.html">&lt;article&gt;&lt;h2&gt;The end of a side project&lt;/h2&gt;&lt;p&gt;Cloud Outdated was a personalized digest of updates for cloud services. It's sad to see it go,
			but it was a fun project to work on, learn some new stuff, and collaborate with a friend.
			There are some takeaways from this that I'd like to share.&lt;/p&gt;&lt;h3&gt;Building something simple is hard&lt;/h3&gt;&lt;p&gt;From the beginning we were trying to build a simple product to get notified when new versions
			of Python were supported in AWS Lambda. But we figured we could support other things too, like
			all Lambda runtimes, then also GCP and Azure, then more services of each one. Features started piling up pretty quickly.&lt;/p&gt;&lt;p&gt;When building stuff I try to think of every edge case, even the most improbable ones, and make
			the software infallible. Of course, it's impossible to succeed at that; software will always be fallible. And this premature optimization ends up making the project more complex than it should be.&lt;/p&gt;&lt;p&gt;We planned to work on this for 1 or 2 months, and it ended up taking 6+ months :-).&lt;/p&gt;&lt;p&gt;My takeaway here is: start by building the dumbest thing possible that gets the job done, then adjust as needed.&lt;/p&gt;&lt;h3&gt;Getting users is hard&lt;/h3&gt;&lt;p&gt;We're killing this project because nobody uses it. Nobody except us has used it since it was launched more than a year ago. Some people subscribed but never even opened an email.&lt;/p&gt;&lt;p&gt;We tried to advertise on our social media and post it in different builder communities. But that will only get you so far if you're not an influencer with the right audience.&lt;/p&gt;&lt;p&gt;We thought that we'd get more traffic from organic search or people telling their friends about it. But in the end I think nobody really needed something like this that much.&lt;/p&gt;&lt;p&gt;My takeaway here is: building something that people really want and getting the product into the hands
			of the people that want it is very complicated. You should think very deeply about what problem your
			product is solving and how your users will find you.&lt;/p&gt;&lt;h3&gt;The money&lt;/h3&gt;&lt;p&gt;This has cost about $200 (US dollars) since inception. For two people that's about $100 each. It's not a lot, but for something that has no users it's quite expensive.&lt;/p&gt;&lt;p&gt;We used Lambdas to serve this project. I feel like we were promised that the cloud and serverless are a cheap and easy solution, but in my opinion that doesn't seem to be the case. It's definitely not easier nor cheaper than having a PHP website, at least in this case. I'm sure there are other cases in which it makes more sense.&lt;/p&gt;&lt;p&gt;This is also a reason why we're killing the project. Our service started failing because, after a deploy, dependencies were updated and the code
			was too big to fit on a Lambda. It would have been a lot of work to fix it, so we decided to kill it and save some bucks every month.&lt;/p&gt;&lt;p&gt;For personal projects where you're not sure how they're going to scale,
			I think serverless is probably not the right choice. Serverless makes sense if you're going to have huge bursts of traffic and don't want
			to handle a lot of infra.&lt;/p&gt;&lt;p&gt;My takeaway here is: beware of the cloud. If you're just building a small side project
			or don't have huge infra needs, stick with the cheapest providers (Hostinger, PythonAnywhere, Hetzner)
			and avoid the big cloud providers (GCP, Azure, AWS).&lt;/p&gt;&lt;h3&gt;Final thoughts&lt;/h3&gt;&lt;p&gt;If I haven't made this clear enough, building a successful product is *hard*. There are many things to think about when starting, and the technical stuff hopefully
			is the easy part. I think these are the 3 most important lessons that I've learned working on this:&lt;ul&gt;&lt;li&gt;Build the dumbest thing that does the job, improve as needed.&lt;/li&gt;&lt;li&gt;Think deeply about what problem you are solving and how you're going to deliver the solution to the people that need it.&lt;/li&gt;&lt;li&gt;Beware of the cloud; if possible use a cheaper provider. It will save you money and headaches.&lt;/li&gt;&lt;/ul&gt;&lt;/p&gt;&lt;/article&gt;</content><author><name>Miguel Liezun</name></author><category term="posts"/><summary type="html">Cloud Outdated was a personalized digest of updates for cloud services. It's sad to see it go,
	but it was a fun project to work on, learn some new stuff, and collaborate with a friend.
	There are some takeaways from this that I'd like to share.</summary></entry><entry><title type="html">Rewrite my toy language interpreter in Rust</title><id>https://mliezun.com/2023/06/02/rewrite-grotsky-rust.html</id><updated>2023-06-02T00:00:00Z</updated><link href="https://mliezun.com/2023/06/02/rewrite-grotsky-rust.html" rel="alternate" type="text/html"/><content type="html" xml:base="https://mliezun.com/2023/06/02/rewrite-grotsky-rust.html">&lt;article&gt;&lt;h2&gt;Rewrite my toy language interpreter in Rust&lt;/h2&gt;&lt;p&gt;I'm rewriting Grotsky (my toy programming language) in Rust; the previous implementation
was done in Go. The goal of the rewrite is to improve my Rust skills and to improve the performance of Grotsky
by at least 10x.&lt;/p&gt;&lt;p&gt;As of this writing the Rust implementation is 4x faster than the Golang implementation.&lt;/p&gt;&lt;p&gt;I've rewritten the Lexer, Parser and part of the Runtime. Enough to run this code and measure
		how long it takes for each implementation to finish:&lt;/p&gt;&lt;p&gt;&lt;pre class=&quot;triple-quote js language-css&quot;&gt;let a = 1
while a &lt; 100000000 {
    a = a + 1
}&lt;/pre&gt;&lt;/p&gt;&lt;p&gt;I was inspired by Mitchell Hashimoto's post &lt;a href=&quot;https://mitchellh.com/writing/building-large-technical-projects&quot; target=&quot;_blank&quot;&gt;My Approach to Building Large Technical Projects&lt;/a&gt;, and I want to create a roadmap of little projects to reach my goal of having a full-fledged interpreter in Rust.&lt;/p&gt;&lt;h3&gt;Goal&lt;/h3&gt;&lt;p&gt;Until now Grotsky has been running on a Tree-based interpreter. My goal is that at the end of the rewrite I
will have a Bytecode interpreter.&lt;/p&gt;&lt;p&gt;First I want to rewrite the Tree-based interpreter in Rust and achieve at least a 10x performance improvement.&lt;/p&gt;&lt;p&gt;Then figure out if I want to use a Register-based or Stack-based bytecode interpreter.&lt;/p&gt;&lt;p&gt;Also, I would like to have a stable bytecode representation to be able to compile programs to a binary format that can be shipped as-is or packaged into a binary.&lt;/p&gt;&lt;p&gt;Finally, it's time Grotsky gets a REPL.&lt;/p&gt;&lt;h3&gt;Roadmap&lt;/h3&gt;&lt;p&gt;Believe it or not, Grotsky is pretty big. It has support for reading and writing files, sockets and more in the stdlib. Which means I have a huge task ahead of me.&lt;/p&gt;&lt;p&gt;The first thing I want to do is set up some sort of automated testing that can compare the Rust and Go implementations. Right now all tests are written as Go unit tests; I need to make them agnostic of the language backend.&lt;/p&gt;&lt;p&gt;&lt;ul&gt;&lt;li&gt;&lt;strong&gt;Jun 9:&lt;/strong&gt; Have a complete setup of automated tests for correctness and performance.&lt;/li&gt;&lt;li&gt;&lt;strong&gt;Jun 16:&lt;/strong&gt; Language runtime (without stdlib) rewritten in Rust.&lt;/li&gt;&lt;li&gt;&lt;strong&gt;Jun 23:&lt;/strong&gt; Finish migrating stdlib and publish results (new blog post). Use new implementation for blog engine.&lt;/li&gt;&lt;li&gt;&lt;strong&gt;Jun 30:&lt;/strong&gt; Decide which kind of bytecode interpreter to use, then make a design and plan for the implementation.&lt;/li&gt;&lt;li&gt;&lt;strong&gt;Jul 7:&lt;/strong&gt; Have a working bytecode interpreter that is able to run the program shown in this post ^. Just a simple while loop. 
Compare performance and share progress.&lt;/li&gt;&lt;li&gt;&lt;strong&gt;Jul 14:&lt;/strong&gt; Add support for functions and closures.&lt;/li&gt;&lt;li&gt;&lt;strong&gt;Jul 21:&lt;/strong&gt; Finish runtime without stdlib in bytecode interpreter.&lt;/li&gt;&lt;li&gt;&lt;strong&gt;Jul 28:&lt;/strong&gt; Implement stdlib in bytecode interpreter. Share results.&lt;/li&gt;&lt;li&gt;&lt;strong&gt;Aug 5:&lt;/strong&gt; Add ability to compile to bytecode and run from bytecode.&lt;/li&gt;&lt;li&gt;&lt;strong&gt;Aug 12:&lt;/strong&gt; Add REPL and finish up the project.&lt;/li&gt;&lt;/ul&gt;&lt;/p&gt;&lt;p&gt;I'm gonna have lots of fun with this project, and I'm sure the next time I post here I'll be a changed man. I'm surely gonna learn a lot; I'm excited about what lies ahead.&lt;/p&gt;&lt;/article&gt;</content><author><name>Miguel Liezun</name></author><category term="posts"/><summary type="html">I'm rewriting Grotsky (my toy programming language) in Rust; the previous implementation
was done in Go. The goal of the rewrite is to improve my Rust skills and to improve the performance of Grotsky
by at least 10x.</summary></entry><entry><title type="html">Writing a Redis clone in Go from scratch</title><id>https://mliezun.com/2023/04/08/redis-clone.html</id><updated>2023-04-08T00:00:00Z</updated><link href="https://mliezun.com/2023/04/08/redis-clone.html" rel="alternate" type="text/html"/><content type="html" xml:base="https://mliezun.com/2023/04/08/redis-clone.html">&lt;article&gt;&lt;h2&gt;Writing a Redis clone in Go from scratch&lt;/h2&gt;&lt;p&gt;In this post we're going to write a basic Redis clone in Go that implements the simplest commands: GET, 
SET, DEL and QUIT. At the end you'll know how to parse a byte stream from a live TCP connection, and hopefully have a working
implementation of Redis.&lt;/p&gt;&lt;p&gt;What's interesting about this project is that it's &lt;em&gt;production ready&lt;/em&gt; (not really).
		It's being used in production in an old Web app that I made for a client in 2017. 
		&lt;strong&gt;It has been running for a few months now without issues&lt;/strong&gt;.&lt;/p&gt;&lt;p&gt;I maintain that app to this day and I charge like 50 bucks a month for it. I do it because
		I'm friends with the person that uses the app.&lt;/p&gt;&lt;p&gt;Long story short, the app's backend is written in PHP and uses Redis for caching, only GET, SET and DEL commands. 
		I asked my friend if I could replace it with my custom version and they said yes, so I decided to give it a go.&lt;/p&gt;&lt;p&gt;If you're looking for a C/C++ implementation, go check out this &lt;a href='https://build-your-own.org/redis/' target='_blank'&gt;book&lt;/a&gt;.&lt;/p&gt;&lt;h3&gt;What we'll be building&lt;/h3&gt;&lt;p&gt;If you go to the &lt;a href=&quot;https://redis.io/commands/&quot; target=&quot;_blank&quot;&gt;command list&lt;/a&gt; on the Redis webpage you'll see that there are 463 commands to date (maybe more if you're in the future).&lt;/p&gt;&lt;p&gt;That's a crazy number. Here, we're only implementing 4 commands: &lt;span class=&quot;single-quote language-undefined&quot;&gt;GET, SET, DEL, QUIT&lt;/span&gt;, &lt;em&gt;the other 459 commands are left as an exercise to the reader&lt;/em&gt;.&lt;/p&gt;&lt;h4&gt;GET&lt;/h4&gt;&lt;p&gt;&lt;pre class=&quot;triple-quote redis language-css&quot;&gt;GET key&lt;/pre&gt;&lt;/p&gt;&lt;p&gt;Returns the value referenced by &lt;span class=&quot;single-quote language-undefined&quot;&gt;key&lt;/span&gt;. If the key does not exist then &lt;span class=&quot;single-quote language-undefined&quot;&gt;nil&lt;/span&gt; is returned.&lt;/p&gt;&lt;h4&gt;SET&lt;/h4&gt;&lt;p&gt;The SET command gains more features in newer versions of Redis. We're going to implement one that has all the features released up until version 6.0.0.&lt;/p&gt;&lt;p&gt;&lt;pre class=&quot;triple-quote redis&quot;&gt;SET key value [NX | XX] [EX seconds | PX milliseconds]&lt;/pre&gt;&lt;/p&gt;&lt;p&gt;Stores &lt;span class=&quot;single-quote language-undefined&quot;&gt;value&lt;/span&gt; as a string that is referenced by &lt;span class=&quot;single-quote language-undefined&quot;&gt;key&lt;/span&gt;. 
Overwrites any data that was previously referenced by the key.&lt;/p&gt;&lt;h5&gt;Options&lt;/h5&gt;&lt;p&gt;&lt;ul&gt;&lt;li&gt;&lt;span class=&quot;single-quote&quot;&gt;EX&lt;/span&gt; &lt;em&gt;seconds&lt;/em&gt; -- Set the specified expire time, in seconds.&lt;/li&gt;&lt;li&gt;&lt;span class=&quot;single-quote&quot;&gt;PX&lt;/span&gt; &lt;em&gt;milliseconds&lt;/em&gt; -- Set the specified expire time, in milliseconds.&lt;/li&gt;&lt;li&gt;&lt;span class=&quot;single-quote&quot;&gt;NX&lt;/span&gt; &lt;em&gt;&lt;/em&gt; -- Only set the key if it does not already exist.&lt;/li&gt;&lt;li&gt;&lt;span class=&quot;single-quote&quot;&gt;XX&lt;/span&gt; &lt;em&gt;&lt;/em&gt; -- Only set the key if it already exists.&lt;/li&gt;&lt;/ul&gt;&lt;/p&gt;&lt;h4&gt;DEL&lt;/h4&gt;&lt;p&gt;&lt;pre class=&quot;triple-quote redis&quot;&gt;DEL key [key ...]&lt;/pre&gt;&lt;/p&gt;&lt;p&gt;Takes any number of keys as input and removes all of them from storage. If a key doesn't exist it is ignored. Returns the number of keys that were deleted.&lt;/p&gt;&lt;h4&gt;QUIT&lt;/h4&gt;&lt;p&gt;&lt;pre class=&quot;triple-quote redis&quot;&gt;QUIT&lt;/pre&gt;&lt;/p&gt;&lt;p&gt;When receiving this command the server closes the connection. It's useful for interactive sessions. In production environments the client should close the connection without sending any commands.&lt;/p&gt;&lt;h4&gt;Examples&lt;/h4&gt;&lt;p&gt;Let's start an interactive session of Redis to test some commands. We can install redis-server with Docker and run it locally. Then we can use telnet to connect directly via TCP. Open a terminal and execute the following instructions:&lt;/p&gt;&lt;p&gt;&lt;pre class=&quot;triple-quote bash&quot;&gt;$ docker run -d --name redis-server -p 6379:6379 redis:alpine

$ telnet 127.0.0.1 6379
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.&lt;/pre&gt;&lt;/p&gt;&lt;p&gt;At this point the prompt should be waiting for you to write something. We're gonna test a couple of commands. In the code boxes below the first line is the command, following lines are the response.&lt;/p&gt;&lt;p&gt;&lt;pre class=&quot;triple-quote redis language-css&quot;&gt;GET a
$-1&lt;/pre&gt;^ That weird $-1 is the special &lt;em&gt;nil&lt;/em&gt; value. Which means there's nothing stored here.&lt;/p&gt;&lt;p&gt;&lt;pre class=&quot;triple-quote redis language-css&quot;&gt;set a 1
+OK&lt;/pre&gt;^ The first thing to notice here is that we can use a lowercase version of SET. Also, when the command is successful the server returns +OK.&lt;/p&gt;&lt;p&gt;&lt;pre class=&quot;triple-quote redis language-css&quot;&gt;set b 2
+OK&lt;/pre&gt;&lt;/p&gt;&lt;p&gt;&lt;pre class=&quot;triple-quote redis language-css&quot;&gt;SET c 3
+OK&lt;/pre&gt;^ Just storing a couple more values.&lt;/p&gt;&lt;p&gt;&lt;pre class=&quot;triple-quote redis language-css&quot;&gt;GET a
$1
1&lt;/pre&gt;^ Here the response is returned in two lines. First line is the length of the string. Second line is the actual string.&lt;/p&gt;&lt;p&gt;&lt;pre class=&quot;triple-quote redis language-css&quot;&gt;get b
$1
2&lt;/pre&gt;^ We can also use a lowercase version of GET; I bet commands are case-insensitive.&lt;/p&gt;&lt;p&gt;&lt;pre class=&quot;triple-quote redis language-css&quot;&gt;get C
$-1&lt;/pre&gt;^ Testing with uppercase C gives a &lt;em&gt;nil&lt;/em&gt;. Keys seem to be case-sensitive, probably values too. That makes sense.&lt;/p&gt;&lt;p&gt;&lt;pre class=&quot;triple-quote redis language-css&quot;&gt;del a b c
:3&lt;/pre&gt;^ Deleting everything returns the amount of keys deleted. Integers are indicated by ':'.&lt;/p&gt;&lt;p&gt;&lt;pre class=&quot;triple-quote redis language-css&quot;&gt;quit
+OK
Connection closed by foreign host.&lt;/pre&gt;^ When we send QUIT, the server closes the connection and we're back to our terminal session.&lt;/p&gt;&lt;p&gt;With those tests we have enough information to start building. We learned a little bit about the Redis protocol and what the responses look like.&lt;/p&gt;&lt;h4&gt;Sending commands&lt;/h4&gt;&lt;p&gt;Until now we've been using the &lt;em&gt;inline&lt;/em&gt; version of Redis commands. There's another kind that follows &lt;a href=&quot;https://redis.io/docs/reference/protocol-spec/&quot; target=&quot;_blank&quot;&gt;RESP (the Redis serialization protocol)&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;The RESP protocol is quite similar to what we've seen in the examples above. The most important addition is &lt;em&gt;arrays&lt;/em&gt;. Let's see a Client&lt;&gt;Server interaction using arrays.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Client&lt;/strong&gt;&lt;pre class=&quot;triple-quote redis language-css&quot;&gt;*2
$3
GET
$1
a&lt;/pre&gt;&lt;strong&gt;Server&lt;/strong&gt;&lt;pre class=&quot;triple-quote redis language-css&quot;&gt;$-1&lt;/pre&gt;The server response looks the same as in the inline version. But what the client sends looks very different:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;In this case, the first thing the client sends is '*' followed by the number of elements in the array, so '*2' indicates that there are 2 elements in the array and they will be found in the following lines.&lt;/li&gt;&lt;li&gt;After that we have '$3', which means we're expecting the first element to be a string of length 3. The next line is the actual string, in our case the command 'GET'.&lt;/li&gt;&lt;li&gt;The next value is also a string and is the key passed to the command.&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;That's almost everything we need to start building. There's one last thing: &lt;strong&gt;error responses&lt;/strong&gt;.&lt;pre class=&quot;triple-quote redis language-css&quot;&gt;-Example error message
-ERR unknown command 'foo'
-WRONGTYPE Operation against a key holding the wrong kind of value&lt;/pre&gt;A response that starts with a '-' is considered an error. The first word is the error type. We're only gonna be using 'ERR' as a generic response.&lt;/p&gt;&lt;p&gt;The RESP protocol is what client libraries use to communicate with Redis. With all that in our toolbox we're ready to start building.&lt;/p&gt;&lt;h3&gt;Receiving connections&lt;/h3&gt;&lt;p&gt;A crucial part of our server is the ability to receive the client's information. This is done by having the server listen on a TCP port and wait for client connections. Let's start building the basic structure.&lt;/p&gt;&lt;p&gt;Create a new Go module, open main.go and create a main function as follows.&lt;/p&gt;&lt;pre class=&quot;triple-quote go language-go&quot;&gt;package main

import (
	&quot;bufio&quot;
	&quot;errors&quot;
	&quot;fmt&quot;
	&quot;log&quot;
	&quot;net&quot;
	&quot;strconv&quot;
	&quot;strings&quot;
	&quot;sync&quot;
	&quot;time&quot;
)

var cache sync.Map
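// Aside (not part of the original post): sync.Map is safe for concurrent
// use by multiple goroutines without extra locking, which is why every
// session goroutine can touch it directly. The operations we need map
// 1:1 to the commands we're implementing:
//
//	cache.Store(&quot;a&quot;, &quot;1&quot;)    // SET a 1
//	v, ok := cache.Load(&quot;a&quot;) // GET a (v is &quot;1&quot;, ok is true)
//	cache.Delete(&quot;a&quot;)        // DEL a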

func main() {
	listener, err := net.Listen(&quot;tcp&quot;, &quot;:6380&quot;)
	if err != nil {
		log.Fatal(err)
	}
	log.Println(&quot;Listening on tcp://0.0.0.0:6380&quot;)

	for {
		conn, err := listener.Accept()
		if err != nil {
			log.Fatal(err)
		}
		log.Println(&quot;New connection&quot;, conn)

		go startSession(conn)
	}
}&lt;/pre&gt;&lt;p&gt;After declaring the package and imports, we create a global sync.Map that will be our cache. That's where keys are gonna be stored and retrieved.&lt;/p&gt;&lt;p&gt;In the main function we start listening on port 6380. After that we have an infinite loop that accepts new connections and spawns a goroutine to handle the session.&lt;/p&gt;&lt;h4&gt;Session handling&lt;/h4&gt;&lt;pre class=&quot;triple-quote go language-go&quot;&gt;// startSession handles the client's session. Parses and executes commands and writes
// responses back to the client.
func startSession(conn net.Conn) {
	defer func() {
		log.Println(&quot;Closing connection&quot;, conn)
		conn.Close()
	}()
	defer func() {
		if err := recover(); err != nil {
			log.Println(&quot;Recovering from error&quot;, err)
		}
	}()
	p := NewParser(conn)
	for {
		cmd, err := p.command()
		if err != nil {
			log.Println(&quot;Error&quot;, err)
			conn.Write([]uint8(&quot;-ERR &quot; + err.Error() + &quot;\r\n&quot;))
			break
		}
		if !cmd.handle() {
			break
		}
	}
}&lt;/pre&gt;&lt;p&gt;It's super important that we close the connection when things are done. That's why we set a deferred function, to close the connection when the session finishes.&lt;/p&gt;&lt;p&gt;After that we handle any panics using recover. We do this mainly because at some point we might be reading from a connection that was closed by the client. And we don't want the entire server to die in case of an error.&lt;/p&gt;&lt;p&gt;Then we create a new parser and start trying to parse commands from the live connection. If we encounter an error we write the error message back to the client and we finish the session.&lt;/p&gt;&lt;p&gt;When cmd.handle() returns false (signaling end of session) we break the loop and the session finishes.&lt;/p&gt;&lt;h3&gt;Parsing commands&lt;/h3&gt;&lt;p&gt;Basic parser structure:&lt;/p&gt;&lt;pre class=&quot;triple-quote go language-go&quot;&gt;// Parser contains the logic to read from a raw tcp connection and parse commands.
type Parser struct {
	conn net.Conn
	r    *bufio.Reader
	// Used for inline parsing
	line []byte
	pos  int
}

// NewParser returns a new Parser that reads from the given connection.
func NewParser(conn net.Conn) *Parser {
	return &amp;Parser{
		conn: conn,
		r:    bufio.NewReader(conn),
		line: make([]byte, 0),
		pos:  0,
	}
}&lt;/pre&gt;&lt;p&gt;This is pretty straightforward. We store a reference to the connection, a reader, and then some attributes that will help us with parsing.&lt;/p&gt;&lt;p&gt;The NewParser() function should be used as a constructor for Parser objects.&lt;/p&gt;&lt;p&gt;We need some helper functions that will make parsing easier:&lt;/p&gt;&lt;pre class=&quot;triple-quote go language-go&quot;&gt;func (p *Parser) current() byte {
	if p.atEnd() {
		return '\r'
	}
	return p.line[p.pos]
}

func (p *Parser) advance() {
	p.pos++
}

func (p *Parser) atEnd() bool {
	return p.pos &gt;= len(p.line)
}

func (p *Parser) readLine() ([]byte, error) {
	line, err := p.r.ReadBytes('\r')
	if err != nil {
		return nil, err
	}
	if _, err := p.r.ReadByte(); err != nil {
		return nil, err
	}
	return line[:len(line)-1], nil
}&lt;/pre&gt;&lt;p&gt;Also quite simple.&lt;ul&gt;&lt;li&gt;current(): Returns the character being pointed at by pos inside the line.&lt;/li&gt;&lt;li&gt;advance(): Point to the next character in the line.&lt;/li&gt;&lt;li&gt;atEnd(): Indicates if we're at the end of the line.&lt;/li&gt;&lt;li&gt;readLine(): Reads the input from the connection up to the carriage return char. Skips the '\n' char.&lt;/li&gt;&lt;/ul&gt;&lt;/p&gt;&lt;h4&gt;Parsing strings&lt;/h4&gt;&lt;p&gt;In Redis we can send commands like so:&lt;/p&gt;&lt;pre class=&quot;triple-quote redis language-css&quot;&gt;SET text &quot;quoted \&quot;text\&quot; here&quot;&lt;/pre&gt;&lt;p&gt;This means we need a way to handle \, &quot; chars inside a string.&lt;/p&gt;&lt;p&gt;For that we need a special parsing function that will handle strings:&lt;/p&gt;&lt;pre class=&quot;triple-quote go language-go&quot;&gt;// consumeString reads a string argument from the current line.
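//
// For example (illustrative): if the rest of the line holds
//	quoted \&quot;text\&quot; here&quot; and more
// (the opening &quot; was already consumed by the caller), it returns
//	quoted &quot;text&quot; here
// and leaves the position just past the closing quote.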
func (p *Parser) consumeString() (s []byte, err error) {
	for p.current() != '&quot;' &amp;&amp; !p.atEnd() {
		cur := p.current()
		p.advance()
		next := p.current()
		if cur == '\\' &amp;&amp; next == '&quot;' {
			s = append(s, '&quot;')
			p.advance()
		} else {
			s = append(s, cur)
		}
	}
	if p.current() != '&quot;' {
		return nil, errors.New(&quot;unbalanced quotes in request&quot;)
	}
	p.advance()
	return
}&lt;/pre&gt;&lt;p&gt;From the functions that we've declared up to this point it's pretty clear that our parser will be reading the input line by line, and then consuming each line one char at a time.&lt;/p&gt;&lt;p&gt;The way consumeString() works is quite tricky. It assumes that the initial &quot; has been consumed before entering the function. And it consumes all characters in the current line up until it reaches the closing quote character or the end of the line.&lt;/p&gt;&lt;p&gt;Inside the loop we can see that we're reading the current character and advancing the pointer, then the next character. When the user is sending an escaped quote inside the string we detect that by checking the current and the next characters. In this special case we end up advancing the pointer twice, because we consumed two chars: the backslash and the quote. But we added only one char to the output: &quot;.&lt;/p&gt;&lt;p&gt;We append all other characters to the output buffer.&lt;/p&gt;&lt;p&gt;When the loop finishes, if we're not pointing to the end quote char, that means that the user sent an invalid command and we return an error.&lt;/p&gt;&lt;p&gt;Otherwise we advance the pointer and return normally.&lt;/p&gt;&lt;h4&gt;Parsing commands&lt;/h4&gt;&lt;pre class=&quot;triple-quote go language-go&quot;&gt;// command parses and returns a Command.
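//
// For reference, the two wire formats seen earlier in this post are:
//
//	inline: GET a\r\n
//	RESP:   *2\r\n$3\r\nGET\r\n$1\r\na\r\n
//
// Both represent the same GET command; the first byte decides which
// parsing path is taken below.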
func (p *Parser) command() (Command, error) {
	b, err := p.r.ReadByte()
	if err != nil {
		return Command{}, err
	}
	if b == '*' {
		log.Println(&quot;resp array&quot;)
		return p.respArray()
	} else {
		line, err := p.readLine()
		if err != nil {
			return Command{}, err
		}
		p.pos = 0
		p.line = append([]byte{}, b)
		p.line = append(p.line, line...)
		return p.inline()
	}
}&lt;/pre&gt;&lt;p&gt;We read the first character sent by the client. If it's an asterisk we handle it using the RESP protocol. Otherwise we assume that it's an inline command.&lt;/p&gt;&lt;p&gt;Let's start with parsing inline commands.&lt;/p&gt;&lt;pre class=&quot;triple-quote go language-go&quot;&gt;// Command implements the behavior of the commands.
type Command struct {
	args []string
	conn net.Conn
}

// inline parses an inline message and returns a Command. Returns an error when there's
// a problem reading from the connection or parsing the command.
func (p *Parser) inline() (Command, error) {
	// skip initial whitespace if any
	for p.current() == ' ' {
		p.advance()
	}
	cmd := Command{conn: p.conn}
	for !p.atEnd() {
		arg, err := p.consumeArg()
		if err != nil {
			return cmd, err
		}
		if arg != &quot;&quot; {
			cmd.args = append(cmd.args, arg)
		}
	}
	return cmd, nil
}&lt;/pre&gt;&lt;p&gt;This is also quite easy to skim through. We skip any leading whitespace in case the user sent something like '     GET a'.&lt;/p&gt;&lt;p&gt;We create a new Command object with a reference to the session connection.&lt;/p&gt;&lt;p&gt;While we're not at the end of the line we consume args and append them to the arg list of the command object if they are not empty.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Consuming arguments&lt;/strong&gt;&lt;/p&gt;&lt;pre class=&quot;triple-quote go language-go&quot;&gt;// consumeArg reads an argument from the current line.
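//
// For example (illustrative), given the inline line
//	SET text &quot;quoted \&quot;text\&quot; here&quot;
// successive calls return the args: SET, text, quoted &quot;text&quot; here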
func (p *Parser) consumeArg() (s string, err error) {
	for p.current() == ' ' {
		p.advance()
	}
	if p.current() == '&quot;' {
		p.advance()
		buf, err := p.consumeString()
		return string(buf), err
	}
	for !p.atEnd() &amp;&amp; p.current() != ' ' &amp;&amp; p.current() != '\r' {
		s += string(p.current())
		p.advance()
	}
	return
}&lt;/pre&gt;&lt;p&gt;Same as before we consume any leading whitespace.&lt;/p&gt;&lt;p&gt;If we find a quoted string we call our function from before: consumeString().&lt;/p&gt;&lt;p&gt;We append all characters to the output until we reach a carriage return \r, a whitespace or the end of the line.&lt;/p&gt;&lt;h4&gt;Parsing RESP protocol&lt;/h4&gt;&lt;pre class=&quot;triple-quote go language-go&quot;&gt;// respArray parses a RESP array and returns a Command. Returns an error when there's
// a problem reading from the connection.
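//
// For example, a client sending the SET from the examples above as RESP
// would transmit, one token per \r\n-terminated line:
//	*3  $3  SET  $1  a  $1  1
// ':' elements are read as integer args, '$' introduces a bulk string,
// and a nested '*' array is flattened into the current args.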
func (p *Parser) respArray() (Command, error) {
	cmd := Command{}
	elementsStr, err := p.readLine()
	if err != nil {
		return cmd, err
	}
	elements, _ := strconv.Atoi(string(elementsStr))
	log.Println(&quot;Elements&quot;, elements)
	for i := 0; i &lt; elements; i++ {
		tp, err := p.r.ReadByte()
		if err != nil {
			return cmd, err
		}
		switch tp {
		case ':':
			arg, err := p.readLine()
			if err != nil {
				return cmd, err
			}
			cmd.args = append(cmd.args, string(arg))
		case '$':
			arg, err := p.readLine()
			if err != nil {
				return cmd, err
			}
			length, _ := strconv.Atoi(string(arg))
			text := make([]byte, 0)
			for len(text) &lt; length {
				line, err := p.readLine()
				if err != nil {
					return cmd, err
				}
				// readLine strips the CRLF terminator; re-append it so the byte
				// count stays correct when a bulk string spans multiple lines.
				// Any trailing CRLF is trimmed by the text[:length] slice below.
				text = append(text, line...)
				text = append(text, '\r', '\n')
			}
			cmd.args = append(cmd.args, string(text[:length]))
		case '*':
			next, err := p.respArray()
			if err != nil {
				return cmd, err
			}
			cmd.args = append(cmd.args, next.args...)
		}
	}
	return cmd, nil
}&lt;/pre&gt;&lt;p&gt;As we know, the leading asterisk has already been consumed from the connection input. So, at this point, the first line contains the number of elements to be consumed. We read that into an integer.&lt;/p&gt;&lt;p&gt;We then loop over all the elements in the array, consuming the type byte of each one to detect which kind of element we need to parse: int, string or array.&lt;/p&gt;&lt;p&gt;The int case is quite simple: we just read the rest of the line.&lt;/p&gt;&lt;p&gt;The array case is also quite simple: we call respArray() recursively and append the args of the result to the current command object.&lt;/p&gt;&lt;p&gt;For strings we read the first line to get the size of the string, then keep reading lines until we have read the indicated number of characters.&lt;/p&gt;&lt;h3&gt;Handling commands&lt;/h3&gt;&lt;p&gt;This is the 'fun' part of the implementation, where our server comes alive. In this section we'll implement the actual functionality of the commands.&lt;/p&gt;&lt;p&gt;Let's start with the cmd.handle() function that we saw in handleSession().&lt;/p&gt;&lt;pre class=&quot;triple-quote go language-go&quot;&gt;// handle Executes the command and writes the response. Returns false when the connection should be closed.
func (cmd Command) handle() bool {
	switch strings.ToUpper(cmd.args[0]) {
	case &quot;GET&quot;:
		return cmd.get()
	case &quot;SET&quot;:
		return cmd.set()
	case &quot;DEL&quot;:
		return cmd.del()
	case &quot;QUIT&quot;:
		return cmd.quit()
	default:
		log.Println(&quot;Command not supported&quot;, cmd.args[0])
		cmd.conn.Write([]uint8(&quot;-ERR unknown command '&quot; + cmd.args[0] + &quot;'\r\n&quot;))
	}
	return true
}&lt;/pre&gt;&lt;p&gt;Needs no further explanation. Let's implement the easiest command: QUIT.&lt;/p&gt;&lt;pre class=&quot;triple-quote go language-go&quot;&gt;// quit Used in interactive/inline mode, instructs the server to terminate the connection.
func (cmd *Command) quit() bool {
	if len(cmd.args) != 1 {
		cmd.conn.Write([]uint8(&quot;-ERR wrong number of arguments for '&quot; + cmd.args[0] + &quot;' command\r\n&quot;))
		return true
	}
	log.Println(&quot;Handle QUIT&quot;)
	cmd.conn.Write([]uint8(&quot;+OK\r\n&quot;))
	return false
}&lt;/pre&gt;&lt;p&gt;If any extra arguments were passed to QUIT, it returns an error.&lt;/p&gt;&lt;p&gt;Otherwise we write +OK to the client and return false, which, if you remember handleSession(), is the value that indicates the session has finished. After that the connection is automatically closed.&lt;/p&gt;&lt;p&gt;The next easiest command is DEL.&lt;/p&gt;&lt;pre class=&quot;triple-quote go language-go&quot;&gt;// del Deletes a key from the cache.
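// For example, DEL a b c replies :2 when only a and b existed in the cache.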
func (cmd *Command) del() bool {
	count := 0
	for _, k := range cmd.args[1:] {
		if _, ok := cache.LoadAndDelete(k); ok {
			count++
		}
	}
	cmd.conn.Write([]uint8(fmt.Sprintf(&quot;:%d\r\n&quot;, count)))
	return true
}&lt;/pre&gt;&lt;p&gt;It iterates through all the keys passed, deletes the ones that exist and writes back to the client the number of keys deleted.&lt;/p&gt;&lt;p&gt;It returns true, which means the connection is kept alive.&lt;/p&gt;&lt;h4&gt;Handling GET&lt;/h4&gt;&lt;pre class=&quot;triple-quote go language-go&quot;&gt;// get Fetches a key from the cache if it exists.
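// Replies with a bulk string ($&lt;length&gt;\r\n&lt;value&gt;\r\n), or the
// special $-1\r\n when the key is missing.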
func (cmd Command) get() bool {
	if len(cmd.args) != 2 {
		cmd.conn.Write([]uint8(&quot;-ERR wrong number of arguments for '&quot; + cmd.args[0] + &quot;' command\r\n&quot;))
		return true
	}
	log.Println(&quot;Handle GET&quot;)
	val, _ := cache.Load(cmd.args[1])
	if val != nil {
		res, _ := val.(string)
		if strings.HasPrefix(res, &quot;\&quot;&quot;) {
			res, _ = strconv.Unquote(res)
		}
		log.Println(&quot;Response length&quot;, len(res))
		cmd.conn.Write([]uint8(fmt.Sprintf(&quot;$%d\r\n&quot;, len(res))))
		cmd.conn.Write(append([]uint8(res), []uint8(&quot;\r\n&quot;)...))
	} else {
		cmd.conn.Write([]uint8(&quot;$-1\r\n&quot;))
	}
	return true
}&lt;/pre&gt;&lt;p&gt;As before, we validate that the correct number of arguments was passed to the command.&lt;/p&gt;&lt;p&gt;We load the value from the global variable cache.&lt;/p&gt;&lt;p&gt;If the value is nil we write back to the client the special $-1 reply.&lt;/p&gt;&lt;p&gt;When we have a value we cast it to a string and unquote it in case it's quoted. Then we write the length as the first line of the response and the string as the second line of the response.&lt;/p&gt;&lt;h4&gt;Handling SET&lt;/h4&gt;&lt;p&gt;This is the most complicated command that we'll implement.&lt;/p&gt;&lt;pre class=&quot;triple-quote go language-go&quot;&gt;// set Stores a key and value on the cache. Optionally sets expiration on the key.
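// As implemented below, the supported forms are:
//   SET key value [NX|XX] [EX seconds | PX milliseconds]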
func (cmd Command) set() bool {
	if len(cmd.args) &lt; 3 || len(cmd.args) &gt; 6 {
		cmd.conn.Write([]uint8(&quot;-ERR wrong number of arguments for '&quot; + cmd.args[0] + &quot;' command\r\n&quot;))
		return true
	}
	log.Println(&quot;Handle SET&quot;)
	log.Println(&quot;Value length&quot;, len(cmd.args[2]))
	if len(cmd.args) &gt; 3 {
		pos := 3
		option := strings.ToUpper(cmd.args[pos])
		switch option {
		case &quot;NX&quot;:
			log.Println(&quot;Handle NX&quot;)
			if _, ok := cache.Load(cmd.args[1]); ok {
				cmd.conn.Write([]uint8(&quot;$-1\r\n&quot;))
				return true
			}
			pos++
		case &quot;XX&quot;:
			log.Println(&quot;Handle XX&quot;)
			if _, ok := cache.Load(cmd.args[1]); !ok {
				cmd.conn.Write([]uint8(&quot;$-1\r\n&quot;))
				return true
			}
			pos++
		}
		if len(cmd.args) &gt; pos {
			if err := cmd.setExpiration(pos); err != nil {
				cmd.conn.Write([]uint8(&quot;-ERR &quot; + err.Error() + &quot;\r\n&quot;))
				return true
			}
		}
	}
	cache.Store(cmd.args[1], cmd.args[2])
	cmd.conn.Write([]uint8(&quot;+OK\r\n&quot;))
	return true
}&lt;/pre&gt;&lt;p&gt;As always, the first thing we do is validate the number of arguments. But in this case, SET is trickier than the others.&lt;/p&gt;&lt;p&gt;When more than 3 arguments are passed we check for the NX or XX flags and handle them accordingly.&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;span class=&quot;single-quote&quot;&gt;NX&lt;/span&gt; -- Only set the key if it does not already exist.&lt;/li&gt;&lt;li&gt;&lt;span class=&quot;single-quote&quot;&gt;XX&lt;/span&gt; -- Only set the key if it already exists.&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;Then we parse the expiration flags, if any. We'll see how that's done in a second.&lt;/p&gt;&lt;p&gt;After handling all those special cases we finally store the key and value in the cache, write the +OK response and return true to keep the connection alive.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Expiration&lt;/strong&gt;&lt;/p&gt;&lt;pre class=&quot;triple-quote go language-go&quot;&gt;// setExpiration Handles expiration when passed as part of the 'set' command.
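// For example, SET key val EX 10 reaches this function with pos == 3, so
// cmd.args[pos] == &quot;EX&quot; and cmd.args[pos+1] == &quot;10&quot;: a 10-second expiration.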
func (cmd Command) setExpiration(pos int) error {
	option := strings.ToUpper(cmd.args[pos])
	value, _ := strconv.Atoi(cmd.args[pos+1])
	var duration time.Duration
	switch option {
	case &quot;EX&quot;:
		duration = time.Second * time.Duration(value)
	case &quot;PX&quot;:
		duration = time.Millisecond * time.Duration(value)
	default:
		return fmt.Errorf(&quot;expiration option is not valid&quot;)
	}
	go func() {
		log.Printf(&quot;Handling '%s', sleeping for %v\n&quot;, option, duration)
		time.Sleep(duration)
		cache.Delete(cmd.args[1])
	}()
	return nil
}&lt;/pre&gt;&lt;p&gt;We read the option and the expiration value, then we compute the duration for each case and spawn a new goroutine that sleeps for that amount of time and then deletes the key from the cache.&lt;/p&gt;&lt;p&gt;This is not the most efficient way to do it, but it's simple and it works for us.&lt;/p&gt;&lt;h2&gt;Working server&lt;/h2&gt;&lt;p&gt;At this point we have a usable implementation of Redis.&lt;/p&gt;&lt;p&gt;Let's start the server and test it.&lt;/p&gt;&lt;pre class=&quot;triple-quote redis language-css&quot;&gt;$ go run main.go
2023/04/08 21:09:40 Listening on tcp://0.0.0.0:6380&lt;/pre&gt;&lt;p&gt;On a different terminal connect to the server.&lt;/p&gt;&lt;pre class=&quot;triple-quote redis language-css&quot;&gt;$ telnet 127.0.0.1 6380
GET a
$-1
set a &quot;test \&quot;quotes\&quot; are working&quot;
+OK
get a
$25
test &quot;quotes&quot; are working&lt;/pre&gt;&lt;p&gt;It's alive!! Go have fun.&lt;/p&gt;&lt;p&gt;If you'd like to access the source code of this project, there's a public gist containing all of the code displayed here.&lt;/p&gt;&lt;p&gt;&lt;a href=&quot;https://gist.github.com/mliezun/f55baa4cd024c1cdf3030e49c5f87875&quot; target=&quot;_blank&quot;&gt;Link to source code&lt;/a&gt;&lt;/p&gt;&lt;/article&gt;</content><author><name>Miguel Liezun</name></author><category term="posts"/><summary type="html">In this post we're going to write a basic Redis clone in Go that implements the simplest commands: GET, 
SET, DEL and QUIT. At the end you'll know how to parse a byte stream from a live TCP connection, and hopefully have a working
implementation of Redis.</summary></entry><entry><title type="html">How to write a program that can replicate itself</title><id>https://mliezun.com/2022/11/26/grotsky-quine.html</id><updated>2022-11-26T00:00:00Z</updated><link href="https://mliezun.com/2022/11/26/grotsky-quine.html" rel="alternate" type="text/html"/><content type="html" xml:base="https://mliezun.com/2022/11/26/grotsky-quine.html">&lt;article&gt;&lt;h2&gt;How to write a program that can replicate itself&lt;/h2&gt;&lt;p&gt;Grotsky is a toy programming language that I made for fun. Today we're visiting the concept of Quines, 
a.k.a. self-replicating programs. It's said that any Turing-complete language should be able to write a program that replicates
 itself. And grotsky is no exception.&lt;/p&gt;&lt;p&gt;Read more about grotsky in previous blogposts:&lt;ul&gt;&lt;li&gt;&lt;a target=&quot;_blank&quot; href=&quot;https://mliezun.com/2020/02/21/grotsky-part1.html&quot;&gt;Grotsky Part 1: Syntax&lt;/a&gt;&lt;/li&gt;&lt;li&gt;&lt;a target=&quot;_blank&quot; href=&quot;https://mliezun.com/2020/03/15/grotsky-part2.html&quot;&gt;Grotsky Part 2: Parsing expressions&lt;/a&gt;&lt;/li&gt;&lt;li&gt;&lt;a target=&quot;_blank&quot; href=&quot;https://mliezun.com/2020/04/01/grotsky-part3.html&quot;&gt;Grotsky Part 3: Interpreting&lt;/a&gt;&lt;/li&gt;&lt;li&gt;&lt;a target=&quot;_blank&quot; href=&quot;https://mliezun.com/2020/12/17/grotsky-getmyip.html&quot;&gt;Grotsky Part 4: Writing a service to get your public IP&lt;/a&gt;&lt;/li&gt;&lt;li&gt;&lt;a target=&quot;_blank&quot; href=&quot;https://mliezun.com/2021/10/04/new-blog-engine.html&quot;&gt;I created a programming language and this blog is powered by it&lt;/a&gt;&lt;/li&gt;&lt;/ul&gt;&lt;/p&gt;&lt;p&gt;Quines are very easy to write. The language that you're using needs to be able to do a couple things:&lt;ul&gt;&lt;li&gt;Write to a file or stdout (print)&lt;/li&gt;&lt;li&gt;Support for string arrays&lt;/li&gt;&lt;li&gt;Translate numbers/integers to character ascii representation&lt;/li&gt;&lt;li&gt;Concatenate strings&lt;/li&gt;&lt;li&gt;Loop through arrays from arbitrary indexes&lt;/li&gt;&lt;/ul&gt;&lt;/p&gt;&lt;h3&gt;Super simple quine: less than 30 lines of code&lt;/h3&gt;&lt;p&gt;&lt;pre class=&quot;triple-quote go&quot;&gt;let tabChar = 9
let quoteChar = 34
let commaChar = 44
let code = [
	&quot;let tabChar = 9&quot;,
	&quot;let quoteChar = 34&quot;,
	&quot;let commaChar = 44&quot;,
	&quot;let code = [&quot;,
	&quot;]&quot;,
	&quot;for let i = 0; i &lt; 4; i = i+1 {&quot;,
	&quot;	io.println(code[i])&quot;,
	&quot;}&quot;,
	&quot;for let i = 0; i &lt; code.length; i = i+1 {&quot;,
	&quot;	io.println(strings.chr(tabChar) + strings.chr(quoteChar) + code[i] + strings.chr(quoteChar) + strings.chr(commaChar))&quot;,
	&quot;}&quot;,
	&quot;for let i = 4; i &lt; code.length; i = i+1 {&quot;,
	&quot;	io.println(code[i])&quot;,
	&quot;}&quot;,
]
for let i = 0; i &lt; 4; i = i+1 {
	io.println(code[i])
}
for let i = 0; i &lt; code.length; i = i+1 {
	io.println(strings.chr(tabChar) + strings.chr(quoteChar) + code[i] + strings.chr(quoteChar) + strings.chr(commaChar))
}
for let i = 4; i &lt; code.length; i = i+1 {
	io.println(code[i])
}
&lt;/pre&gt;&lt;/p&gt;&lt;p&gt;Now we can use the grotsky CLI to run the program and compare the output to the original source.&lt;/p&gt;&lt;p&gt;Save the original source to a file called &lt;span class=&quot;single-quote&quot;&gt;quine.gr&lt;/span&gt;, then run the following commands:&lt;pre class=&quot;triple-quote bash&quot;&gt;
$ grotsky quine.gr &gt; quine_copy.gr
$ cmp quine.gr quine_copy.gr
$ echo $?
0
&lt;/pre&gt;&lt;/p&gt;&lt;p&gt;If you see a 0 as the final output, that means the files are the same. Otherwise, if you saw an error message or a different output, something has gone wrong.&lt;/p&gt;&lt;p&gt;How exciting is this?! We've just written a program that gives itself as output. That sounds impossible when you hear it for the first time. But it was actually pretty easy!&lt;/p&gt;&lt;p&gt;Source code available here: &lt;a target=&quot;_blank&quot; href=&quot;https://gist.github.com/mliezun/c750ba701608850bd86d646a3ebf700f&quot;&gt;https://gist.github.com/mliezun/c750ba701608850bd86d646a3ebf700f&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;Grotsky CLI binary available here: &lt;a target=&quot;_blank&quot; href=&quot;https://github.com/mliezun/grotsky/releases/tag/v0.0.6&quot;&gt;https://github.com/mliezun/grotsky/releases/tag/v0.0.6&lt;/a&gt;&lt;/p&gt;&lt;/article&gt;</content><author><name>Miguel Liezun</name></author><category term="posts"/><summary type="html">Grotsky is a toy programming language that I made for fun. Today we're visiting the concept of Quines, 
a.k.a. self-replicating programs. It's said that any Turing-complete language should be able to write a program that replicates
 itself. And grotsky is no exception.</summary></entry><entry><title type="html">Migrate from Heroku to Fly.io</title><id>https://mliezun.com/2022/09/22/heroku-to-fly.html</id><updated>2022-09-22T00:00:00Z</updated><link href="https://mliezun.com/2022/09/22/heroku-to-fly.html" rel="alternate" type="text/html"/><content type="html" xml:base="https://mliezun.com/2022/09/22/heroku-to-fly.html">&lt;article&gt;&lt;h2&gt;How to migrate from Heroku to Fly.io&lt;/h2&gt;&lt;p&gt;A couple of weeks ago Heroku announced the removal of its free plans. I have plenty of projects running on free dynos.
 I have taken some time to move my code to Fly.io, and I've also written a little tutorial on how to perform the migration.&lt;/p&gt;&lt;p&gt;I'll use one of my public repos as an example: &lt;a href=&quot;https://github.com/mliezun/getmyip&quot; target=&quot;_blank&quot;&gt;https://github.com/mliezun/getmyip&lt;/a&gt;. It's a simple service that returns the IP from which you're making the request. It's useful when you want to know your public IP.&lt;/p&gt;&lt;p&gt;That project ^ was covered in a previous &lt;a href=&quot;https://mliezun.com/2020/12/17/grotsky-getmyip.html&quot; target=&quot;_blank&quot;&gt;blogpost&lt;/a&gt;.&lt;/p&gt;&lt;h3&gt;Migration instructions&lt;/h3&gt;&lt;p&gt;The first thing we need to do is remove Heroku from the git remotes. Inside your project run:&lt;/p&gt;&lt;pre class=&quot;triple-quote bash&quot;&gt;git remote remove heroku&lt;/pre&gt;&lt;p&gt;If you have a heroku.yml file, delete it.&lt;/p&gt;&lt;pre class=&quot;triple-quote bash&quot;&gt;rm -rf heroku.yml&lt;/pre&gt;&lt;p&gt;Then, we're ready to start using Fly. There are tutorials on the &lt;a href=&quot;https://fly.io/docs/languages-and-frameworks/dockerfile/&quot; target=&quot;_blank&quot;&gt;official fly.io docs&lt;/a&gt; for many frameworks and languages. We're going to be following the one for a Docker app, since it's the most general case.&lt;/p&gt;&lt;p&gt;The first thing you need to do is create an account on &lt;a href=&quot;https://fly.io/&quot; target=&quot;_blank&quot;&gt;Fly.io&lt;/a&gt; if you don't have one yet.&lt;/p&gt;&lt;p&gt;Once you've created your account, install the flyctl command line tool. After that, log in by running the following command:&lt;/p&gt;&lt;pre class=&quot;triple-quote bash&quot;&gt;flyctl auth login&lt;/pre&gt;&lt;p&gt;After you've logged in to your account, you're ready to launch your application. Execute the next command and follow the interactive setup.&lt;/p&gt;&lt;pre class=&quot;triple-quote text&quot;&gt;$ flyctl launch
Scanning source code
Detected a Dockerfile app
? App Name (leave blank to use an auto-generated name): 
Automatically selected personal organization: Miguel Liezun
? Select region: mia (Miami, Florida (US))
Created app morning-breeze-4255 in organization personal
Wrote config file fly.toml
? Would you like to set up a Postgresql database now? No
? Would you like to deploy now? Yes
Deploying morning-breeze-4255
==&gt; Validating app configuration
--&gt; Validating app configuration done
Services
TCP 80/443 -&gt; 8080
Remote builder fly-builder-green-pond-8004 ready
==&gt; Creating build context
--&gt; Creating build context done
==&gt; Building image with Docker
--&gt; docker host: 20.10.12 linux x86_64
...
&lt;/pre&gt;&lt;p&gt;Make sure your app listens to port 8080, that's the default for fly apps. You can change the port inside the file fly.toml if you want, just search for the internal port and change it. Remember to run launch again if you change the port.&lt;/p&gt;&lt;pre class=&quot;triple-quote toml&quot;&gt;# fly.toml file generated for morning-breeze-4255 on 2022-09-21T21:50:20-03:00

app = &quot;morning-breeze-4255&quot;
kill_signal = &quot;SIGINT&quot;
kill_timeout = 5
processes = []

[env]

[experimental]
  allowed_public_ports = []
  auto_rollback = true

[[services]]
  http_checks = []
  internal_port = 8080 # &lt;- Put your desired port here
# ...
&lt;/pre&gt;&lt;p&gt;Finally, you only need to open the app and enjoy! You migrated your first app from Heroku to Fly.io :-)&lt;/p&gt;&lt;pre class=&quot;triple-quote text&quot;&gt;$ flyctl open
opening http://morning-breeze-4255.fly.dev ...
&lt;/pre&gt;&lt;p&gt;Access the newly deployed 'getmyip' service using the link &lt;a href=&quot;http://morning-breeze-4255.fly.dev&quot; target=&quot;_blank&quot;&gt;http://morning-breeze-4255.fly.dev&lt;/a&gt;.&lt;/p&gt;&lt;/article&gt;</content><author><name>Miguel Liezun</name></author><category term="posts"/><summary type="html">A couple of weeks ago Heroku announced the removal of its free plans. I have plenty of projects running on free dynos.
 I have taken some time to move my code to Fly.io, and I've also written a little tutorial on how to perform the migration.</summary></entry><entry><title type="html">Branchable MySQL: Managing multiple dev environments</title><id>https://mliezun.com/2022/09/20/branchable-mysql.html</id><updated>2022-09-20T00:00:00Z</updated><link href="https://mliezun.com/2022/09/20/branchable-mysql.html" rel="alternate" type="text/html"/><content type="html" xml:base="https://mliezun.com/2022/09/20/branchable-mysql.html">&lt;article&gt;&lt;h2&gt;Branchable MySQL: Managing multiple dev environments&lt;/h2&gt;&lt;p&gt;When teams start to grow, having a single dev environment becomes an issue. People start stepping on each other's toes.
A common problem is that two people want to apply incompatible migrations on the database. That problem is impossible 
to fix if folks are working on parallel branches.
If we can have a database for each branch of a project, that will remove much of the pain of having multiple devs applying
changes to the db.&lt;/p&gt;&lt;p&gt;There are already projects that solve this problem: &lt;a href=&quot;https://planetscale.com/docs/concepts/branching&quot; target=&quot;_blank&quot;&gt;PlanetScale&lt;/a&gt; and &lt;a href=&quot;https://neon.tech/&quot; target=&quot;_blank&quot;&gt;Neon&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;A common case where this problem arises is when two devs want to add a column to the same table. &lt;/p&gt;&lt;img alt=&quot;Two devs applying changes to the same table in the database.&quot; src=&quot;/assets/images/branchable-mysql/diagram1.png&quot;/&gt;&lt;p&gt;We have a &lt;span class=&quot;single-quote sql&quot;&gt;people&lt;/span&gt; table in the database. One of the devs wants to add the &lt;span class=&quot;single-quote sql&quot;&gt;last_name&lt;/span&gt; column and the other one wants to add the &lt;span class=&quot;single-quote sql&quot;&gt;address&lt;/span&gt;.&lt;/p&gt;&lt;p&gt;Dev1's code thinks the table will have 3 columns after he applies his operation: &lt;span class=&quot;single-quote sql&quot;&gt;id, name, last_name&lt;/span&gt;.&lt;/p&gt;&lt;p&gt;Dev2's code also thinks the table will have 3 columns: &lt;span class=&quot;single-quote sql&quot;&gt;id, name, address&lt;/span&gt;.&lt;/p&gt;&lt;p&gt;In reality the table will have 4 columns. So neither of them will be able to run their code unless they talk to each other and figure out how to make this work.&lt;/p&gt;&lt;p&gt;This is far from ideal.&lt;/p&gt;&lt;p&gt;What we want instead is for each of them to be able to develop their features independently.&lt;/p&gt;&lt;img alt=&quot;Two devs applying changes to the same table in different database branches.&quot; src=&quot;/assets/images/branchable-mysql/diagram2.png&quot;/&gt;&lt;p&gt;They both apply changes to the same table, but each table lives on an instance that was 'replicated' from the original.&lt;/p&gt;&lt;h2&gt;How can we implement the ideal case?&lt;/h2&gt;&lt;p&gt;MySQL writes data (by default) to the directory &lt;span 
class=&quot;single-quote bash&quot;&gt;/var/lib/mysql/data&lt;/span&gt;.&lt;/p&gt;&lt;p&gt;We can use an &lt;a href=&quot;https://en.wikipedia.org/wiki/UnionFS&quot; target=&quot;_blank&quot;&gt;Union filesystem&lt;/a&gt;. And configure MySQL to use a different directory to read and write data.&lt;/p&gt;&lt;p&gt;That way we can have a feature/user-last-name 'branch' read and write data from a directory like &lt;span class=&quot;single-quote bash&quot;&gt;/app/user-last-name/mysql/data&lt;/span&gt;.&lt;/p&gt;&lt;p&gt;And a feature/user-address 'branch' read and write data from a directory like &lt;span class=&quot;single-quote bash&quot;&gt;/app/user-address/mysql/data&lt;/span&gt;.&lt;/p&gt;&lt;p&gt;Those branches can be mounted using fuse-overlayfs by executing the following commands: &lt;/p&gt;&lt;pre class=&quot;triple-quote bash&quot;&gt;
# Directory /app/base contains data from the original branch

fuse-overlayfs -o lowerdir=/app/base,upperdir=/app/user-last-name,workdir=/tmp/user-last-name overlayfs /app/user-last-name

fuse-overlayfs -o lowerdir=/app/base,upperdir=/app/user-address,workdir=/tmp/user-address overlayfs /app/user-address
&lt;/pre&gt;&lt;p&gt;This means both 'branches' of the database are able to coexist and have different schemas during their lifetime.&lt;/p&gt;&lt;h2&gt;Experimenting with a use case&lt;/h2&gt;&lt;p&gt;I had this idea in my head for months. I finally convinced myself that it was worth a shot.&lt;/p&gt;&lt;p&gt;I decided to do a little implementation using Docker and python FastAPI.Exposing a simple interface so that it's easy to create and delete branches.&lt;/p&gt;&lt;p&gt;The project is live on github &lt;a href=&quot;https://github.com/mliezun/branchable-mysql&quot; target=&quot;_blank&quot;&gt;branchable-mysql&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;The container image is published on Docker Hub &lt;a href=&quot;https://hub.docker.com/repository/docker/mliezun/branchable-mysql&quot; target=&quot;_blank&quot;&gt;branchable-mysql&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;To start using the image let's create a docker-compose.yml file.&lt;/p&gt;&lt;pre class=&quot;triple-quote yaml&quot;&gt;version: &quot;3&quot;

services:
  mysql:
    image: mliezun/branchable-mysql
    platform: linux/amd64
    privileged: true
    restart: always
    volumes:
      - appdata:/app/

volumes:
  appdata:
&lt;/pre&gt;&lt;p&gt;Then you can execute &lt;span class=&quot;single-quote bash&quot;&gt;docker-compose up&lt;/span&gt; and the MySQL server should start running.&lt;/p&gt;&lt;p&gt;After that, connect to the db with &lt;span class=&quot;single-quote bash&quot;&gt;docker compose exec mysql mysql -uroot -h127.0.0.1 --skip-password -P33061&lt;/span&gt;. You should land in an interactive mysql console.&lt;/p&gt;&lt;p&gt;Let's create an initial schema and a table, and insert some data so that we can see how branching works. On the console that we just opened, execute:&lt;/p&gt;&lt;pre class=&quot;triple-quote sql&quot;&gt;
mysql&gt; create schema s1;
Query OK, 1 row affected (0.01 sec)

mysql&gt; use s1;
Database changed
mysql&gt; create table people (id int primary key auto_increment, name varchar(255) not null);
Query OK, 0 rows affected (0.07 sec)

mysql&gt; insert into people select 0, 'Miguel';
Query OK, 1 row affected (0.02 sec)
Records: 1  Duplicates: 0  Warnings: 0

mysql&gt; select * from people;
+----+--------+
| id | name   |
+----+--------+
|  1 | Miguel |
+----+--------+
1 row in set (0.00 sec)
&lt;/pre&gt;&lt;p&gt;That's enough for now, we're ready to start creating branches.&lt;/p&gt;&lt;p&gt;On a separate terminal, without closing the previous mysql interactive console, execute: &lt;/p&gt;&lt;pre class=&quot;triple-quote bash&quot;&gt;
docker compose exec mysql /app/scripts/create_branch.sh base feature/user-last-name

{&quot;branch_name&quot;:&quot;feature/user-last-name&quot;,&quot;base_branch&quot;:&quot;base&quot;,&quot;port&quot;:33062}
&lt;/pre&gt;&lt;p&gt;Now you can log in to the new database branch using port 33062 &lt;span class=&quot;single-quote bash&quot;&gt;docker compose exec mysql mysql -uroot -h127.0.0.1 --skip-password -P33062&lt;/span&gt;&lt;/p&gt;&lt;pre class=&quot;triple-quote sql&quot;&gt;
mysql&gt; use s1;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql&gt; alter table people add column last_name varchar(255) not null;
Query OK, 0 rows affected (0.03 sec)
Records: 0  Duplicates: 0  Warnings: 0

mysql&gt; select * from people;
+----+--------+-----------+
| id | name   | last_name |
+----+--------+-----------+
|  1 | Miguel |           |
+----+--------+-----------+
1 row in set (0.00 sec)
&lt;/pre&gt;&lt;p&gt;In a new terminal we can create another branch: &lt;/p&gt;&lt;pre class=&quot;triple-quote bash&quot;&gt;
docker compose exec mysql /app/scripts/create_branch.sh base feature/user-address

{&quot;branch_name&quot;:&quot;feature/user-address&quot;,&quot;base_branch&quot;:&quot;base&quot;,&quot;port&quot;:33063}
&lt;/pre&gt;&lt;p&gt;Then connect using port 33063 &lt;span class=&quot;single-quote bash&quot;&gt;docker compose exec mysql mysql -uroot -h127.0.0.1 --skip-password -P33063&lt;/span&gt;&lt;/p&gt;&lt;pre class=&quot;triple-quote sql&quot;&gt;
mysql&gt; use s1;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql&gt; select * from people;
+----+--------+
| id | name   |
+----+--------+
|  1 | Miguel |
+----+--------+
1 row in set (0.00 sec)

mysql&gt; alter table people add column address varchar(255) not null;
Query OK, 0 rows affected (0.02 sec)
Records: 0  Duplicates: 0  Warnings: 0

mysql&gt; select * from people;
+----+--------+---------+
| id | name   | address |
+----+--------+---------+
|  1 | Miguel |         |
+----+--------+---------+
1 row in set (0.00 sec)
&lt;/pre&gt;&lt;p&gt;As you can see, we have 3 servers running at the same time, each with a different schema.&lt;/p&gt;&lt;p&gt;This is great for local development and for having branch-aware dev environments.&lt;/p&gt;&lt;h2&gt;Final thoughts&lt;/h2&gt;&lt;p&gt;I hope you find this blogpost useful. If you want to start using branchable-mysql, go ahead. If you encounter any issues, please report them in the GitHub repo or create a pull request.&lt;/p&gt;&lt;/article&gt;</content><author><name>Miguel Liezun</name></author><category term="posts"/><summary type="html">When teams start to grow, having a single dev environment becomes an issue. People start stepping on each other's toes.
A common problem is that two people want to apply incompatible migrations on the database. That problem is impossible 
to fix if folks are working on parallel branches.
If we can have a database for each branch of a project, that will remove much of the pain of having multiple devs applying
changes to the db.</summary></entry><entry><title type="html">Webscraping as a side project</title><id>https://mliezun.com/2022/07/20/cloud-outdated-release.html</id><updated>2022-07-20T00:00:00Z</updated><link href="https://mliezun.com/2022/07/20/cloud-outdated-release.html" rel="alternate" type="text/html"/><content type="html" xml:base="https://mliezun.com/2022/07/20/cloud-outdated-release.html">&lt;article&gt;&lt;h2&gt;Webscraping as a side project&lt;/h2&gt;&lt;p&gt;A friend and I were looking for a side project to work on together. We realized we both faced a similar problem.&lt;/p&gt;&lt;p&gt;Let's use the AWS Lambda Python runtime as an example. AWS will send out emails when a version is at the end of life, making it difficult to stay on the latest if desired. Plus, reacting to them usually means you are many, many versions behind already.&lt;/p&gt;&lt;p&gt;Our journey started. We made a list of providers for the MVP: AWS, GCP and Azure. Then a list of the services that have versions (for example S3 doesn't have versions). After that we realized that we could get some versions using APIs. Other services exclusively require webscraping.&lt;/p&gt;&lt;p&gt;We support 46 services and counting. Take a look at &lt;a href=&quot;https://cloud-outdated.com/services/&quot; target=&quot;_blank&quot;&gt;Cloud Outdated&lt;/a&gt; and subscribe to get notified. If you are looking for a service that's not there, &lt;a href=&quot;https://gs2azalhg3t.typeform.com/to/Q6oHtttI&quot; target=&quot;_blank&quot;&gt;contact us&lt;/a&gt;.&lt;/p&gt;&lt;h2&gt;Picking a language, framework and platform&lt;/h2&gt;&lt;p&gt;We're both Python programmers. 
The choice was obvious. &quot;Let's use Python and the Django framework for the job,&quot; we said. We didn't want to spend our innovation tokens on a new language/framework. So we chose Boring Technology.&lt;/p&gt;&lt;p&gt;For the db we spent our first innovation token. We decided to go with the flashy new serverless Postgres-compatible &lt;a href=&quot;https://www.cockroachlabs.com/lp/serverless/&quot; target=&quot;_blank&quot;&gt;CockroachDB&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;On the hosting side we're using AWS Lambda, taking advantage of the free compute time. It helps keep costs down.&lt;/p&gt;&lt;h2&gt;Make webscraping reliable&lt;/h2&gt;&lt;p&gt;A webpage that's being scraped can change at any time. The first thing we did was account for those edge cases. We created a custom exception that is raised when something changed, so that we can react to it downstream.&lt;pre class=&quot;triple-quote python&quot;&gt;
class ScrapingError(Exception):
    pass
&lt;/pre&gt;We wanted to keep the implementation simple. Each service is scraped by a single function. The signature of the function is something like &lt;span class=&quot;single-quote python&quot;&gt;aws_lambda_python() -&gt; List[Version]&lt;/span&gt;. All the implementations follow a similar pattern:&lt;pre class=&quot;triple-quote python&quot;&gt;
def aws_lambda_python():
    # Read versions from aws docs website:
    # https://docs.aws.amazon.com/lambda/latest/dg/lambda-runtimes.html

    if not found_versions:
        raise ScrapingError

    # Process and return versions
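
# A hedged sketch of the "process and return" step (the regex, the helper name
# and the ScrapingError copy below are illustrative, not the real implementation):
import re

class ScrapingError(Exception):  # repeated here so the sketch runs standalone
    pass

def extract_python_versions(html):
    # e.g. matches 'python3.9', 'python3.12' occurrences in the runtimes page
    found_versions = re.findall(r'python(\d+\.\d+)', html)
    if not found_versions:
        raise ScrapingError
    return sorted(set(found_versions))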
&lt;/pre&gt;That ^ is what we call a poll function.&lt;/p&gt;&lt;p&gt;We pass poll functions through a polling class that handles all the errors and results. When we detect a scraping error we have a special case: we send an email with the details of what happened, because the problem is something that requires manual action. We receive that email in our personal inboxes and fix the problem ASAP.&lt;/p&gt;&lt;p&gt;The poll class that handles all the magic behind Cloud Outdated is actually very simple:&lt;pre class=&quot;triple-quote python&quot;&gt;
class PollService:
    def __init__(self, service: Service, poll_fn: Callable):
        self.poll_fn = poll_fn
        # Some other attributes...

    def poll(self):
        try:
            results = self.poll_fn()
            self.process_results(results)
        except ScrapingError as e:
            notify_operator(
                f&quot;{type(e).__name__} at line {e.__traceback__.tb_lineno} of {__file__}: {e.__str__()}&quot;
            )

    def process_results(self, results):
        # if results contains new versions:
        #     save new versions to db
        # if results contains deprecated versions:
        #     set versions in db as deprecated
&lt;/pre&gt;&lt;/p&gt;&lt;p&gt;That's the heart of Cloud Outdated. After that we have to send notifications to subscribed users. That part is trivial: we send an email that contains the difference between what was last sent to a user and what we have stored in the db at the moment.&lt;/p&gt;&lt;h2&gt;Last thoughts&lt;/h2&gt;&lt;p&gt;Having a side project is usually a good idea. For us it has been a journey where we got to know some new stuff (CockroachDB). We also learned about how to build a product and keep an MVP mentality. The most difficult challenge we face is bringing more users to the platform.&lt;/p&gt;&lt;p&gt;We'd love to see more people subscribed. If this blogpost sparked your interest, go to &lt;a href=&quot;https://cloud-outdated.com/services/&quot; target=&quot;_blank&quot;&gt;Cloud Outdated&lt;/a&gt; and subscribe to start getting emails.&lt;/p&gt;&lt;p&gt;See you next time!&lt;/p&gt;&lt;/article&gt;</content><author><name>Miguel Liezun</name></author><category term="posts"/><summary type="html">Cloud Outdated is a personalized digest of updates for cloud services. Works like a newsletter where you can choose
	which services you want to get notified about. For example: Subscribe to AWS Lambda with Python runtime, and you'll get an email
	when 3.10 is supported.</summary></entry><entry><title type="html">Blake3 hash plugin for MySQL written in Rust</title><id>https://mliezun.com/2022/05/05/rust-blake-mysql.html</id><updated>2022-05-05T00:00:00Z</updated><link href="https://mliezun.com/2022/05/05/rust-blake-mysql.html" rel="alternate" type="text/html"/><content type="html" xml:base="https://mliezun.com/2022/05/05/rust-blake-mysql.html">&lt;article&gt;&lt;h2&gt;Implementing a blake3 hash plugin for MySQL in Rust&lt;/h2&gt;&lt;p&gt;It's been a while since I've written something, and I wanted to bring you some new stuff, so here you have a short blog post. I encourage you to try this new plugin that uses the &lt;a target=&quot;_blank&quot; href=&quot;https://docs.rs/blake3/latest/blake3/&quot;&gt;blake3&lt;/a&gt; hash implementation. Blake3 is secure, unlike MD5 and SHA-1, and secure against length extension, unlike SHA-2. Start using it and create an issue in the github repo if you would like a feature implemented!&lt;/p&gt;&lt;div&gt;Check out the &lt;a target=&quot;_blank&quot; href=&quot;https://github.com/mliezun/blake-udf&quot;&gt;blake-udf source code&lt;/a&gt;.&lt;/div&gt;&lt;h3&gt;How to use&lt;/h3&gt;&lt;h4&gt;Download and install MySQL plugin&lt;/h4&gt;&lt;div&gt;&lt;pre class=&quot;triple-quote bash&quot;&gt;$ wget 'https://github.com/mliezun/blake-udf/releases/download/v0.1.0/libblake_udf.so'
$ mv libblake_udf.so /usr/lib/mysql/plugin/&lt;/pre&gt;&lt;/div&gt;&lt;h4&gt;Load UDF in MySQL&lt;/h4&gt;&lt;div&gt;&lt;pre class=&quot;triple-quote bash&quot;&gt;$ mysql -uroot -p -e 'create function blake3_hash returns string soname &quot;libblake_udf.so&quot;;'&lt;/pre&gt;&lt;/div&gt;&lt;h4&gt;Execute function&lt;/h4&gt;&lt;div&gt;&lt;pre class=&quot;triple-quote bash&quot;&gt;$ mysql --binary-as-hex=0 -uroot -p -e 'select blake3_hash(&quot;a&quot;);'&lt;/pre&gt;Output: &lt;span 
class=&quot;single-quote&quot;&gt;17762fddd969a453925d65717ac3eea21320b66b54342fde15128d6caf21215f&lt;/span&gt;&lt;/div&gt;&lt;/article&gt;</content><author><name>Miguel Liezun</name></author><category term="posts"/><summary type="html">Using Rust to create a MySQL plugin that implements blake3 hash.</summary></entry><entry><title type="html">Playing with Javascript Proxies (getters/setters)</title><id>https://mliezun.com/2021/12/31/playing-with-js.html</id><updated>2021-12-31T00:00:00Z</updated><link href="https://mliezun.com/2021/12/31/playing-with-js.html" rel="alternate" type="text/html"/><content type="html" xml:base="https://mliezun.com/2021/12/31/playing-with-js.html">&lt;article&gt;&lt;h2&gt;Playing with Javascript Proxies (getters/setters)&lt;/h2&gt;&lt;h3&gt;Overview&lt;/h3&gt;&lt;div&gt;&lt;p&gt;Happy New Year!&lt;/p&gt;&lt;p&gt;This is my final post of 2021. This year I didn't post that much, but a lot of work was put into
				the blog to rewrite it using &lt;a target=&quot;_blank&quot; href=&quot;https://github.com/mliezun/grotsky&quot;&gt;Grotsky&lt;/a&gt;. I hope everyone has a great 2022 and that next year is much better than the last one.&lt;/p&gt;&lt;p&gt;The inspiration for this blog post comes from the idea of building a tiny db that feels more natural to Javascript. All the databases that I've seen make heavy use of methods like: &lt;span class=&quot;single-quote&quot;&gt;db.get()&lt;/span&gt;, &lt;span class=&quot;single-quote&quot;&gt;db.put()&lt;/span&gt;, &lt;span class=&quot;single-quote&quot;&gt;db.scan()&lt;/span&gt;, &lt;span class=&quot;single-quote&quot;&gt;db.query()&lt;/span&gt;. And many others that I'm sure you have seen. I think it would be great to see something like:&lt;pre class=&quot;triple-quote go&quot;&gt;const db = getDb(&quot;...&quot;)
// Create new user
const u = {username: &quot;jdoe&quot;, email: &quot;jdoe@example.com&quot;, id: 100}
// Store new user in the database
db.objects.users[u.username] = u
// Commit the changes to the database
db.actions.save()&lt;/pre&gt;&lt;/p&gt;&lt;p&gt;In this blog post we will be building a much simpler version that stores everything in memory. Each change
				made to the objects will be stored in a log (called layers) and the final object will be composed of all
				the small layers present in the log.&lt;/p&gt;&lt;/div&gt;&lt;h3&gt;Defining a proxy&lt;/h3&gt;&lt;div&gt;&lt;p&gt;We need to implement some generic getters/setters.&lt;pre class=&quot;triple-quote go&quot;&gt;const objects = new Proxy({}, {
    get: function(obj, prop) {
        validate(prop, null)
        // Implementation
    },
    set: function(obj, prop, value) {
        validate(prop, value)
        // Implementation
    }
})&lt;/pre&gt;&lt;/p&gt;&lt;p&gt;Let's define the validation function. In this case we want the objects to be serializable to JSON.&lt;pre class=&quot;triple-quote go&quot;&gt;
const validate = (prop, value) =&gt; {
    // Make sure that the property and value are serializable
    // JSON.stringify throws if the value can't be serialized (e.g. circular references)
    const l = {}
    l[prop] = value
    JSON.stringify(l)
    return l
}
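
// Quick illustrative check (checkSerializable repeats validate's body so this
// snippet runs on its own; the values are made up for the example):
function checkSerializable(prop, value) { const l = {}; l[prop] = value; JSON.stringify(l); return l }
const ok = checkSerializable('user', {id: 1})  // returns {user: {id: 1}}
const cycle = {}
cycle.self = cycle
let threw = false
try { checkSerializable('self', cycle) } catch (e) { threw = true }  // a circular value throws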
&lt;/pre&gt;&lt;/p&gt;&lt;p&gt;This empty proxy will validate that the prop and value are serializable and do nothing else. Now we can start building on top of it.&lt;/p&gt;&lt;/div&gt;&lt;h3&gt;Building a tree to hold everything together&lt;/h3&gt;&lt;div&gt;&lt;p&gt;We need a root object where we will store all the changes that are applied to an object. We will have a sort of tree structure to hold everything together. It will look something like this:&lt;pre class=&quot;triple-quote go&quot;&gt;
              rootObject({})  -&gt; layers([{users: {jdoe: ...}}, {tokens: {tk1: ...}}])
                    |
        --------------------------
        |                        |
 child(.users{})          child(.tokens{})
        |                        |
       ...                      ...
&lt;/pre&gt;&lt;/p&gt;&lt;p&gt;The root object contains the layers with all the changes made from the beginning of the existence of the object. Each time a property of the root object is accessed, a child is returned that internally holds a reference to the root. This way we can go through the entire chain of access and be able to reach the stored layers. By chain of access I mean the following: &lt;span class=&quot;single-quote&quot;&gt;objects.users.jdoe.metadata.login.ip&lt;/span&gt;. As you can see, we need to traverse through many objects to be able to reach the ip field. But the layer that contains the information is only stored in the root, so each child needs to maintain a reference to its parent to be able to reach the root node.&lt;/p&gt;&lt;p&gt;Let's define a simple function to create a new rootObject.&lt;pre class=&quot;triple-quote go&quot;&gt;
const wrapObject = (parent, key, current) =&gt; {
    const rootObj = {
        parent: Object.freeze(parent),
        layers: [Object.freeze({'value': current, 'previous': null})],
        pushLayer (l) {}, // Push new layer
        getLayer (ks) {}, // Get layer where information is stored based on given keys
        getValue (k) {} // Get value that matches given key
    }

    const rootProxy = {
        get: function(obj, prop) {
            validate(prop, null)
            const val = rootObj.getValue(prop)
            if (typeof val == 'object') {
                // If the value is an object we need to have a child instance
                // with a reference to the parent
                return wrapObject(rootObj, prop, val).objects
            }
            // If the value is other kind like a number or string we can safely return that
            return val
        },
        set: function(obj, prop, value) {
            const l = validate(prop, value)
            // Add new layer to the rootObj
            rootObj.pushLayer({'value': l})
        }
    }

    return {
        actions: {
            revert () {
                // Deleting the last layer will revert the changes
                const pop = rootObj.layers[rootObj.layers.length-1]
                rootObj.layers.splice(rootObj.layers.length-1, rootObj.layers.length)
                return pop
            }
        },
        objects: new Proxy({}, rootProxy)
    }
}
&lt;/pre&gt;&lt;/p&gt;&lt;/div&gt;&lt;h3&gt;Handling layers&lt;/h3&gt;&lt;div&gt;&lt;p&gt;The layer format:&lt;pre class=&quot;triple-quote go&quot;&gt;
const layer = {
    value: {status: 'active'},
    previous: null // Reference to a previous layer that has the key 'status' in it
}
&lt;/pre&gt;The layers are stored in an array; each layer holds the value and a reference to the previous layer that set a value for the same key (in this case the key was 'status'). The layers also form a simple linked list through the 'previous' reference. That way we have the entire history of a given key.&lt;/p&gt;&lt;p&gt;We'll need a function to tell if an object has a list of nested keys. Trust me for now, you'll see.&lt;pre class=&quot;triple-quote go&quot;&gt;
const nestedExists = (obj, ks) =&gt; {
    for (let j = 0; j &lt; ks.length; j++) {
        let k = ks[j];
        if (!(k in obj)) {
            return false
        }
        obj = obj[k]
    }
    return true
}
&lt;/pre&gt;In this function we receive an object and a list of keys. We access the first internal object with the first key and keep doing the same until we've made sure that all the keys are present.&lt;/p&gt;&lt;p&gt;Now we're almost done. Let's define the functions for handling the store and retrieval of layers.&lt;pre class=&quot;triple-quote go&quot;&gt;
const rootObj = {
    parent: Object.freeze(parent),
    layers: [Object.freeze({'value': current, 'previous': null})],
    pushLayer (l) {
        // If this is a child object we need to build the entire chain of access
        // from the bottom up
        if (parent) {
            const ll = {}
            ll[key] = l['value']
            // Search for a previous layer modifying the same key
            const previous = parent.getLayer([key])
            // Tell the parent object to push the new layer
            parent.pushLayer(Object.freeze({'value': ll, previous}))
        } else {
            // We are in the root object, add the layer to the array
            this.layers.push(Object.freeze(l))
        }
    },
    getLayer (ks) {
        // Search through the entire list of layers to see if one contains all the keys
        // that we are looking for. Start from the end of the array (top of the stack)
        for (let i = this.layers.length - 1; i &gt;= 0; i--) {
            let v = nestedExists(this.layers[i]['value'], ks)
            if (v) {
                return this.layers[i]
            }
        }
        if (parent) {
            // If we are in a child object, look through all the previous layers
            // and see if the key we're looking for is contained in one of them.
            let ll = parent.getLayer([key].concat(ks))
            while (ll) {
                let a = nestedExists(ll['value'][key], ks)
                if (a) {
                    return Object.freeze({'value': ll['value'][key]})
                }
                ll = ll.previous
            }
        }
    },
    getValue (k) {
        // Straightforward, get layer and return value
        const l = this.getLayer([k])
        if (l) {
            return Object.freeze(l['value'][k])
        }
    }
}
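
// The 'previous' references form a per-key linked list. A standalone sketch of
// walking one such chain (illustrative data, not part of the blog's code):
let cursor = {
    value: {status: 'deleted'},
    previous: {value: {status: 'active'}, previous: null}
}
const history = []
while (cursor) {
    history.push(cursor.value.status)
    cursor = cursor.previous
}
// history now holds the full story of the 'status' key, newest first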
&lt;/pre&gt;That's all we need. We can create a new object and start adding and modifying properties. Each change will be added to the end of the log and worked out when a property is accessed.&lt;/p&gt;&lt;/div&gt;&lt;h3&gt;Wrapping Up&lt;/h3&gt;&lt;div&gt;&lt;p&gt;Let's try the final result. The source code is loaded in this page, so you can open a dev console in the browser and try for yourself.&lt;pre class=&quot;triple-quote go&quot;&gt;
const store = wrapObject(null, null, {})

// Create new user
const user = {username: 'jdoe', email: 'jdoe@example.com', name: 'John Doe', id: 100}

// Add new user
store.objects.users = {}
store.objects.users[user.username] = user

// Print user email
console.log(store.objects.users.jdoe.email)  // jdoe@example.com

// Change user email and print
store.objects.users.jdoe.email = 'jdoe2@example.com'
console.log(store.objects.users.jdoe.email)  // jdoe2@example.com

// Revert last change and print email again
store.actions.revert()
console.log(store.objects.users.jdoe.email)  // jdoe@example.com again
&lt;/pre&gt;&lt;/p&gt;&lt;p&gt;That's it for now. We defined a Javascript object that contains the entire history of changes that were made to itself. And at any point we can revert the changes and go back to a previous state. Everything is stored in an array and is easily serializable. If we wanted to take this to the next level, each change could be written to persistent storage (s3, sqlite, mysql, ...)&lt;/p&gt;&lt;p&gt;The full source code is available in a &lt;a target=&quot;_blank&quot; href=&quot;https://gist.github.com/mliezun/5946c8af80e3747519175027579414fb&quot;&gt;public gist&lt;/a&gt;.&lt;/p&gt;&lt;/div&gt;&lt;script&gt;
const validate = (prop, value) =&gt; {
    // Make sure that the property and value are serializable
    // JSON.stringify throws an error if not serializable
    const l = {}
    l[prop] = value
    JSON.stringify(l)
    return l
}

const nestedExists = (obj, ks) =&gt; {
    for (let j = 0; j &lt; ks.length; j++) {
        let k = ks[j];
        if (!(k in obj)) {
            return false
        }
        obj = obj[k]
    }
    return true
}

const wrapObject = (parent, key, current) =&gt; {
    const rootObj = {
        parent: Object.freeze(parent),
        layers: [Object.freeze({'value': current, 'previous': null})],
        pushLayer (l) {
            if (parent) {
                const ll = {}
                ll[key] = l['value']
                const previous = parent.getLayer([key])
                parent.pushLayer(Object.freeze({'value': ll, previous}))
            } else {
                this.layers.push(Object.freeze(l))
            }
        },
        getLayer (ks) {
            for (let i = this.layers.length - 1; i &gt;= 0; i--) {
                let v = nestedExists(this.layers[i]['value'], ks)
                if (v) {
                    return this.layers[i]
                }
            }
            if (parent) {
                let ll = parent.getLayer([key].concat(ks))
                while (ll) {
                    let a = nestedExists(ll['value'][key], ks)
                    if (a) {
                        return Object.freeze({'value': ll['value'][key]})
                    }
                    ll = ll.previous
                }
            }
        },
        getValue (k) {
            const l = this.getLayer([k])
            if (l) {
                return Object.freeze(l['value'][k])
            }
        }
    }

    const rootProxy = {
        get: function(obj, prop) {
            validate(prop, null)
            const val = rootObj.getValue(prop)
            if (typeof val == 'object') {
                return wrapObject(rootObj, prop, val).objects
            }
            return val
        },
        set: function(obj, prop, value) {
            const l = validate(prop, value)
            rootObj.pushLayer({'value': l})
        }
    }

    return {
        actions: {
            revert () {
                const pop = rootObj.layers[rootObj.layers.length-1]
                rootObj.layers.splice(rootObj.layers.length-1, rootObj.layers.length)
                return pop
            }
        },
        objects: new Proxy({}, rootProxy)
    }
}
&lt;/script&gt;&lt;/article&gt;</content><author><name>Miguel Liezun</name></author><category term="posts"/><summary type="html">In this last post of the year I play with proxies in an attempt to create a Javascript object where changes are appended
	to a log and can be reverted by deleting the last element of the log using getters and setters.</summary></entry><entry><title type="html">I created a programming language and this blog is powered by it</title><id>https://mliezun.com/2021/10/04/new-blog-engine.html</id><updated>2021-10-04T00:00:00Z</updated><link href="https://mliezun.com/2021/10/04/new-blog-engine.html" rel="alternate" type="text/html"/><content type="html" xml:base="https://mliezun.com/2021/10/04/new-blog-engine.html">&lt;article&gt;&lt;h2&gt;I created a programming language and this blog is powered by it&lt;/h2&gt;&lt;h3&gt;Why did I do it?&lt;/h3&gt;&lt;div&gt;&lt;p&gt;Mostly for fun.&lt;/p&gt;&lt;p&gt;If you follow my blog or take a look at some of the posts that I made, you will see that I was building a programming language called &lt;a target=&quot;_blank&quot; href=&quot;https://github.com/mliezun/grotsky&quot;&gt;Grotsky&lt;/a&gt;. It's just a toy programming language that I made based on the book &lt;a target=&quot;_blank&quot; href=&quot;http://www.craftinginterpreters.com/&quot;&gt;Crafting Interpreters&lt;/a&gt;, which I totally recommend buying and reading if you haven't yet.&lt;/p&gt;&lt;p&gt;I wanted to build something interesting but simple enough that it could be made with Grotsky. I thought that replacing Jekyll with my own engine was worth a try. There is nothing groundbreaking or innovative being made here, just a little experimentation.&lt;/p&gt;&lt;p&gt;I have to give credit to the project &lt;a target=&quot;_blank&quot; href=&quot;https://github.com/fpereiro/lith&quot;&gt;lith&lt;/a&gt;, because the 'templating' engine for the blog is inspired by it.&lt;/p&gt;&lt;/div&gt;&lt;h3&gt;How did I do it?&lt;/h3&gt;&lt;div&gt;&lt;p&gt;That's a good question.&lt;/p&gt;&lt;p&gt;Originally, this blog was powered by Jekyll, which translated markdown to html, and hosted on Github Pages. 
I decided that I was going to build a templating engine and generate html to keep things simple.&lt;/p&gt;&lt;p&gt;But also, as a challenge, I made a simple HTTP server to use as a dev server when trying the blog locally.&lt;/p&gt;&lt;/div&gt;&lt;h3&gt;HTTP Server&lt;/h3&gt;&lt;div&gt;&lt;p&gt;For the purpose of having a custom HTTP server I had to add support for TCP sockets to the language. I wrapped the go standard library in some functions and exposed that to the Grotsky language. In Grotsky it looks something like this:&lt;/p&gt;&lt;pre class=&quot;triple-quote go&quot;&gt;
let socket = net.listenTcp(host + &quot;:&quot; + port)
let conn
while true {
	try {
		conn = socket.accept()
	} catch err {
		io.println(&quot;Error accepting new connection&quot;, err)
		continue
	}
	try {
		# Call function that handles connection
		handleConn(conn)
	} catch err {
		io.println(&quot;Error handling connection&quot;, err)
	}
	try {
		conn.close()
	} catch err {}
}
				&lt;/pre&gt;&lt;p&gt;This means that the server listens on a socket, accepts connections, writes some text/bytes to the connection and then closes the connection.&lt;/p&gt;&lt;/div&gt;&lt;h3&gt;Template Engine&lt;/h3&gt;&lt;div&gt;&lt;p&gt;The templating engine is built using the native support Grotsky provides for lists. A regular page for the blog looks like this:&lt;/p&gt;&lt;pre class=&quot;triple-quote go&quot;&gt;
let base = import(&quot;../base.gr&quot;)
# Create new Post Object
let post = base.Post(
	&quot;Title&quot;,
	&quot;Brief description of blog post.&quot;,
	&quot;Author Name&quot;,
	&quot;descriptive,tags&quot;,
	[
		[
			&quot;h2&quot;,
			[],
			[
				&quot;Title&quot;
			]
		],
		[
			&quot;div&quot;,
			[&quot;class&quot;, &quot;content&quot;],
			[
				&quot;Content line 1&quot;,
				&quot;Content line 2&quot;,
			]
		]
	]
)
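
# A nested element with attributes would follow the same convention
# (an illustrative sketch; the attribute-pair layout is assumed from the
# ["class", "content"] example above, not taken from the blog's source):
let link = [
	"a",
	["target", "_blank", "href", "https://github.com/mliezun/grotsky"],
	[
		"Grotsky source code"
	]
]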
			&lt;/pre&gt;&lt;p&gt;It's pretty straightforward: the first element of the list is the html tag, the second is an array of properties for the tag and the last one is a list that contains the content enclosed by the tags.&lt;/p&gt;&lt;/div&gt;&lt;h3&gt;Resources&lt;/h3&gt;&lt;div&gt;&lt;p&gt;If you want to take a peek, the source code for both projects is available on github:&lt;ul&gt;&lt;li&gt;&lt;a target=&quot;_blank&quot; href=&quot;https://github.com/mliezun/grotsky&quot;&gt;Grotsky Programming Language&lt;/a&gt;&lt;/li&gt;&lt;li&gt;&lt;a target=&quot;_blank&quot; href=&quot;https://github.com/mliezun/mliezun.github.io&quot;&gt;Blog Source Code&lt;/a&gt;&lt;/li&gt;&lt;/ul&gt;&lt;/p&gt;&lt;/div&gt;&lt;/article&gt;</content><author><name>Miguel Liezun</name></author><category term="posts"/><summary type="html">I created my own programming language using Go, then I built a blog engine and used that engine to build this blog.</summary></entry><entry><title type="html">Mlisp: My own lisp implementation compiled to WASM</title><id>https://mliezun.com/2021/04/01/mlisp-wasm.html</id><updated>2021-04-01T00:00:00Z</updated><link href="https://mliezun.com/2021/04/01/mlisp-wasm.html" rel="alternate" type="text/html"/><content type="html" xml:base="https://mliezun.com/2021/04/01/mlisp-wasm.html">&lt;article&gt;&lt;div&gt;&lt;/div&gt;&lt;h2&gt;Mlisp, My own lisp implementation&lt;/h2&gt;&lt;div&gt;&lt;a href=&quot;https://github.com/mliezun/mlisp&quot;&gt;Mlisp&lt;/a&gt; is a tiny lispy language based on the book &lt;a href=&quot;http://www.buildyourownlisp.com/&quot;&gt;Build Your Own Lisp&lt;/a&gt;. The interpreter is written in C and compiled directly to WASM. 
You can try it on this page by opening the developer console of your browser and typing &lt;span class=&quot;single-quote&quot;&gt;Mlisp.interpret(&quot;+ 2 2&quot;)&lt;/span&gt;, or by using the REPL shown below.&lt;/div&gt;&lt;h3&gt;Interface&lt;/h3&gt;&lt;div&gt;To be able to access C functions from your browser you have to export them. Let's see how we can define a function that is exported.&lt;pre class=&quot;triple-quote c&quot;&gt;#if __EMSCRIPTEN__
EMSCRIPTEN_KEEPALIVE
#endif
int mlisp_init();&lt;/pre&gt;When compiling to wasm with &lt;span class=&quot;single-quote&quot;&gt;emcc&lt;/span&gt;, the emscripten compiler, you have to add the &lt;span class=&quot;single-quote&quot;&gt;EMSCRIPTEN_KEEPALIVE&lt;/span&gt; macro before your function so it doesn't get optimized away. The exported functions in this project are:&lt;pre class=&quot;triple-quote c&quot;&gt;int mlisp_init();
char *mlisp_interpret(char *input);
void mlisp_cleanup();&lt;/pre&gt;The project is then compiled with: &lt;pre class=&quot;triple-quote &quot;&gt;emcc -std=c99  -Wall -O3 -s WASM=1 -s EXTRA_EXPORTED_RUNTIME_METHODS='[&quot;cwrap&quot;]'&lt;/pre&gt;That means you are able to access the exported functions using &lt;span class=&quot;single-quote&quot;&gt;cwrap&lt;/span&gt;, which lets you wrap a C function call in a Javascript function call. This compilation generates two files, &lt;span class=&quot;single-quote&quot;&gt;mlisp.js&lt;/span&gt; and &lt;span class=&quot;single-quote&quot;&gt;mlisp.wasm&lt;/span&gt;. The javascript file defines a &lt;span class=&quot;single-quote&quot;&gt;Module&lt;/span&gt; that provides useful tools to access the exported functions.&lt;/div&gt;&lt;h4&gt;Let's start using it&lt;/h4&gt;&lt;div&gt;&lt;pre class=&quot;triple-quote js&quot;&gt;const Mlisp = {
    init: Module.cwrap('mlisp_init', 'number', []),
    interpret: Module.cwrap('mlisp_interpret', 'string', ['string']),
    cleanup: Module.cwrap('mlisp_cleanup', 'void', []),
};

// Init interpreter
Mlisp.init();

// Run some commands
console.log(Mlisp.interpret(&quot;+ 2 2&quot;));

// Cleanup interpreter
Mlisp.cleanup();&lt;/pre&gt;&lt;/div&gt;&lt;h3&gt;Automated Build &amp; Release from github&lt;/h3&gt;&lt;div&gt;I made a github workflow for this project to automatically build and release it, so you can retrieve the builds from &lt;a href=&quot;https://github.com/mliezun/mlisp/releases/tag/refs%2Fheads%2Fmaster&quot;&gt;Github&lt;/a&gt;.&lt;/div&gt;&lt;h3&gt;REPL&lt;/h3&gt;&lt;div&gt;&lt;script src=&quot;/assets/mlisp/mlisp.js&quot;&gt;&lt;/script&gt;&lt;style&gt;.container-centered {  display: flex;  justify-content: center;}.vertical-centered {  display: block;}&lt;/style&gt;&lt;div class=&quot;container-centered&quot;&gt;    &lt;div class=&quot;vertical-centered&quot; style=&quot;width: 40vw&quot;&gt;        &lt;textarea id=&quot;show-repl&quot; disabled=&quot;true&quot; style=&quot;min-width: 40vw; max-width: 40vw; min-height: 20vh&quot;&gt;&lt;/textarea&gt;        &lt;input id=&quot;input-command&quot; type=&quot;text&quot; style=&quot;min-width: 40vw; max-width: 40vw&quot; placeholder=&quot;&gt; Input some commands&quot;/&gt;    &lt;/div&gt;&lt;/div&gt;&lt;script type=&quot;application/javascript&quot;&gt;var A = {    mlisp: null,    init () {        const node = document.getElementById('input-command');        node.addEventListener(&quot;keyup&quot;, (event) =&gt; {            if (event.key === &quot;Enter&quot;) {                this.handleInput(event);            }        });    },    handleInput(ev) {        if (!this.mlisp) {            window.Mlisp = {                init: Module.cwrap('mlisp_init', 'number', []),                interpret: Module.cwrap('mlisp_interpret', 'string', ['string']),                cleanup: Module.cwrap('mlisp_cleanup', 'void', []),            };            this.mlisp = window.Mlisp;            this.mlisp.init();        }        const node = ev.target;        const cmd = node.value;        if (!cmd) {            return;        }        const output = 
document.getElementById('show-repl');        const result = this.mlisp.interpret(cmd);        node.value = null;        output.value += `&gt; ${cmd}\n\t${result}\n`;    }};A.init();&lt;/script&gt;&lt;/div&gt;&lt;h3&gt;Interesting commands to try out&lt;/h3&gt;&lt;div&gt;&lt;li&gt;&lt;span class=&quot;single-quote&quot;&gt;foldl&lt;/span&gt;: Fold left (same as reduce left)    - &lt;span class=&quot;single-quote&quot;&gt;(foldl + 0 {1 2 3 4 5})&lt;/span&gt;: Sum of elements&lt;/li&gt;&lt;li&gt;&lt;span class=&quot;single-quote&quot;&gt;filter&lt;/span&gt;    - &lt;span class=&quot;single-quote&quot;&gt;(filter (\ {e} {&gt; e 3}) {1 2 3 4 5 6})&lt;/span&gt;: Elements bigger than 3&lt;/li&gt;&lt;li&gt;&lt;span class=&quot;single-quote&quot;&gt;map&lt;/span&gt;    - &lt;span class=&quot;single-quote&quot;&gt;(foldl * 1 (map (\ {e} {* e 2}) {1 1 1 1 1}))&lt;/span&gt;: Multiply elements by 2 and then multiply all elements&lt;/li&gt;&lt;/div&gt;&lt;/article&gt;</content><author><name>Miguel Liezun</name></author><category term="posts"/><summary type="html">Lisp implementation written in C that compiles to WASM with emscripten.</summary></entry><entry><title type="html">Grotsky Part 4: Writing a service to get your public IP</title><id>https://mliezun.com/2020/12/17/grotsky-getmyip.html</id><updated>2020-12-17T00:00:00Z</updated><link href="https://mliezun.com/2020/12/17/grotsky-getmyip.html" rel="alternate" type="text/html"/><content type="html" xml:base="https://mliezun.com/2020/12/17/grotsky-getmyip.html">&lt;article&gt;&lt;div&gt;&lt;/div&gt;&lt;h2&gt;Writing a service to get your public IP&lt;/h2&gt;&lt;div&gt;&lt;a href=&quot;https://github.com/mliezun/grotsky&quot;&gt;Grotsky&lt;/a&gt; (my toy programming language) can finally be used to make something useful. In this post I want to show you how I made a service that lets you retrieve your public IP as a response to an HTTP request.&lt;/div&gt;&lt;h3&gt;Show me the code&lt;/h3&gt;&lt;div&gt;Let's start by building 
the http request handler. The service will be deployed to Heroku. Heroku passes the port that the http server has to listen on as an environment variable named &lt;span class=&quot;single-quote&quot;&gt;PORT&lt;/span&gt;.&lt;/div&gt;&lt;h5&gt;Let's get the server up and running&lt;/h5&gt;&lt;div&gt;&lt;pre class=&quot;triple-quote &quot;&gt;let listen = &quot;:8092&quot;
let port = env.get(&quot;PORT&quot;)
if port != &quot;&quot; {
    listen = &quot;:&quot; + port
}
io.println(&quot;Listen &quot; + listen)
http.listen(listen)&lt;/pre&gt;We listen by default on port 8092, and if the environment variable is given we change it. Then we print the port and start the server with &lt;span class=&quot;single-quote&quot;&gt;http.listen&lt;/span&gt;. That blocks the execution and starts the server. The Grotsky interpreter is written in Go, and uses Go's standard http server. Each request is handled by a goroutine, but because Grotsky is single threaded only one goroutine executes at any given point in time. 
When a request is received the goroutine has to hold the GIL (Global Interpreter Lock) to be able to give control to the interpreter.&lt;/div&gt;&lt;h5&gt;Now let's add some code to handle requests&lt;/h5&gt;&lt;div&gt;&lt;pre class=&quot;triple-quote &quot;&gt;fn getIP(rq, rs) {
    io.println(&quot;Request from --&gt; &quot; + rq.address)
    rs.write(200, rq.address)
}

http.handler(&quot;/&quot;, getIP)

let listen = &quot;:8092&quot;
let port = env.get(&quot;PORT&quot;)
if port != &quot;&quot; {
    listen = &quot;:&quot; + port
}
io.println(&quot;Listen &quot; + listen)
http.listen(listen)&lt;/pre&gt;Now we have something interesting to try out! What we've done is log and write back as the response the address of the device making the request. To try it out you need to download Grotsky.&lt;pre class=&quot;triple-quote bash&quot;&gt;$ go get github.com/mliezun/grotsky/cmd/grotsky&lt;/pre&gt;Save the Grotsky code in a file called &lt;span class=&quot;single-quote&quot;&gt;getip.g&lt;/span&gt; and then execute it using the grotsky interpreter:&lt;pre class=&quot;triple-quote bash&quot;&gt;$ go run $(go env GOPATH)/src/github.com/mliezun/grotsky/cmd/grotsky getip.g&lt;/pre&gt;Output:&lt;pre class=&quot;triple-quote &quot;&gt;Listen :8092&lt;/pre&gt;Now you can make a request to see if it is working:&lt;pre class=&quot;triple-quote bash&quot;&gt;$ curl localhost:8092&lt;/pre&gt;Output:&lt;pre class=&quot;triple-quote &quot;&gt;[::1]:43464&lt;/pre&gt;We see that the address contains the port. We want to split it and show just the IP.&lt;/div&gt;&lt;h5&gt;Let's write a couple functions to do that&lt;/h5&gt;&lt;div&gt;&lt;pre class=&quot;triple-quote &quot;&gt;fn findReversed(string, char) {
    for let i = string.length-1; i &gt; -1; i = i - 1 {
        if string[i] == char {
            return i
        }
    }
    return -1
}

fn parseIP(address) {
    let ix = findReversed(address, &quot;:&quot;)
    return address[:ix]
}&lt;/pre&gt;The function &lt;span 
class=&quot;single-quote&quot;&gt;findReversed&lt;/span&gt; finds the first index where &lt;span class=&quot;single-quote&quot;&gt;char&lt;/span&gt; appears in &lt;span class=&quot;single-quote&quot;&gt;string&lt;/span&gt; starting from the end.The function &lt;span class=&quot;single-quote&quot;&gt;parseIP&lt;/span&gt; uses &lt;span class=&quot;single-quote&quot;&gt;findReversed&lt;/span&gt; to obtain the index where &quot;:&quot; splits the IP and the PORT, and uses that index to return just the IP address.&lt;/div&gt;&lt;h5&gt;Now we can send just the IP address&lt;/h5&gt;&lt;div&gt;&lt;pre class=&quot;triple-quote &quot;&gt;fn getIP(rq, rs) {    let address = parseIP(rq.address)    io.println(&quot;Request from --&gt; &quot; + address)    rs.write(200, address)}&lt;/pre&gt;Add the two functions at the beginning of the file and modify the getIP function.Restart the server and now if you make a request you should get just the IP.&lt;pre class=&quot;triple-quote &quot;&gt;$ curl localhost:8092[::1]&lt;/pre&gt;Voila!&lt;/div&gt;&lt;h5&gt;We have just one last issue: Proxies!&lt;/h5&gt;&lt;div&gt;Our service will probably sit behind a proxy, so we need to read the address from a special header, &lt;span class=&quot;single-quote&quot;&gt;X-Forwarded-For&lt;/span&gt;.Let's implement that!&lt;pre class=&quot;triple-quote &quot;&gt;fn getIP(rq, rs) {    let address = parseIP(rq.address)    let forwarded = rq.headers[&quot;X-Forwarded-For&quot;]    if forwarded != nil {        address = forwarded[0]    }    io.println(&quot;Request from --&gt; &quot; + address)    rs.write(200, address)}&lt;/pre&gt;We read the header from the request, and if &lt;span class=&quot;single-quote&quot;&gt;X-Forwarded-For&lt;/span&gt; is present we send that as the response to the user.&lt;/div&gt;&lt;h5&gt;Our work is complete. 
Let's try it!&lt;/h5&gt;&lt;div&gt;&lt;pre class=&quot;triple-quote &quot;&gt;$ curl localhost:8092 -H 'X-Forwarded-For: 8.8.8.8'8.8.8.8$ curl localhost:8092[::1]&lt;/pre&gt;Well done. Now you can deploy it to Heroku (that's up to you) or any other cloud platform.I have my own version running at: https://peaceful-lowlands-45821.herokuapp.com/&lt;/div&gt;&lt;p&gt;Deployed to Fly.io after Heroku killed the free plan: http://morning-breeze-4255.fly.dev&lt;/p&gt;&lt;h5&gt;Try it from your command line&lt;/h5&gt;&lt;div&gt;&lt;pre class=&quot;triple-quote bash&quot;&gt;$ curl http://morning-breeze-4255.fly.dev&lt;/pre&gt;&lt;/div&gt;&lt;/article&gt;</content><author><name>Miguel Liezun</name></author><category term="posts"/><summary type="html">Part 4 of building my own language series. This time I write and deploy a service to Heroku that lets you retrieve your public IP.</summary></entry><entry><title type="html">Executing python code from MySQL Server</title><id>https://mliezun.com/2020/04/19/mysql-python.html</id><updated>2020-04-19T00:00:00Z</updated><link href="https://mliezun.com/2020/04/19/mysql-python.html" rel="alternate" type="text/html"/><content type="html" xml:base="https://mliezun.com/2020/04/19/mysql-python.html">&lt;article&gt;&lt;div&gt;&lt;/div&gt;&lt;h2&gt;Executing python code from MySQL Server&lt;/h2&gt;&lt;div&gt;&lt;/div&gt;&lt;h3&gt;Trying &lt;span class=&quot;single-quote&quot;&gt;py_eval&lt;/span&gt;&lt;/h3&gt;&lt;div&gt;&lt;/div&gt;&lt;h5&gt;Generate a list of integers from 0 to 9&lt;/h5&gt;&lt;div&gt;&lt;pre class=&quot;triple-quote sql&quot;&gt;&gt; select py_eval('[i for i in range(10)]') list;+--------------------------------+| list                           |+--------------------------------+| [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] |+--------------------------------+&lt;/pre&gt;&lt;/div&gt;&lt;h5&gt;Generate a dictionary (json object) from a list of dicts&lt;/h5&gt;&lt;div&gt;&lt;pre class=&quot;triple-quote sql&quot;&gt;&gt; select 
replace(py_eval('{ str(user[&quot;id&quot;]) : user for user in [{&quot;id&quot;: 33, &quot;name&quot;: &quot;John&quot;}, {&quot;id&quot;: 44, &quot;name&quot;: &quot;George&quot;}] }'), &quot;'&quot;, '&quot;') dict;+------------------------------------------------------------------------+| dict                                                                   |+------------------------------------------------------------------------+| {&quot;33&quot;: {&quot;id&quot;: 33, &quot;name&quot;: &quot;John&quot;}, &quot;44&quot;: {&quot;id&quot;: 44, &quot;name&quot;: &quot;George&quot;}} |+------------------------------------------------------------------------+&lt;/pre&gt;Replace is needed here because Python renders dictionaries with single quotes, while JSON requires double quotes.&lt;/div&gt;&lt;h5&gt;Make a function that receives a json array and a key and sorts the array by key&lt;/h5&gt;&lt;div&gt;&lt;pre class=&quot;triple-quote sql&quot;&gt;DROP function IF EXISTS &lt;span class=&quot;single-quote&quot;&gt;sort_by_key&lt;/span&gt;;DELIMITER $$CREATE FUNCTION &lt;span class=&quot;single-quote&quot;&gt;sort_by_key&lt;/span&gt; (arr json, k text)RETURNS jsonBEGIN    RETURN replace(py_eval(CONCAT(&quot;sorted(&quot;, arr, &quot;, key=lambda e: e['&quot;, k, &quot;'])&quot;)), &quot;'&quot;, '&quot;');END$$DELIMITER ;&lt;/pre&gt;Test&lt;pre class=&quot;triple-quote sql&quot;&gt;&gt; select sort_by_key('[{&quot;a&quot;:2}, {&quot;a&quot;:1}, {&quot;a&quot;: 722}, {&quot;a&quot;: 0}]', 'a') sorted;+--------------------------------------------+| sorted                                     |+--------------------------------------------+| [{&quot;a&quot;: 0}, {&quot;a&quot;: 1}, {&quot;a&quot;: 2}, {&quot;a&quot;: 722}] |+--------------------------------------------+&lt;/pre&gt;&lt;/div&gt;&lt;h3&gt;How to write a MySQL UDF&lt;/h3&gt;&lt;div&gt;There is a pretty good guide at the &lt;a href=&quot;https://dev.mysql.com/doc/refman/8.0/en/adding-udf.html&quot;&gt;MySQL 8.0 Reference Manual&lt;/a&gt;. 
I'll give you a brief explanation so you can start quickly, but reading the full guide is highly recommended.MySQL's UDFs are written in C++ and need to follow certain conventions so they can be recognized as such.In our case, we want our MySQL function to be called &lt;span class=&quot;single-quote&quot;&gt;py_eval&lt;/span&gt;, so we have to define the following C++ functions:&lt;li&gt;py_eval_init and py_eval_deinit&lt;/li&gt;&lt;li&gt;py_eval&lt;/li&gt;**py_eval_init**: (Optional) Initializes memory and data structures for the function execution.**py_eval**: Executes the actual function, in our case evaluates a Python expression.**py_eval_deinit**: (Optional) If any memory was allocated in the init function, this is the place where we free it.For &lt;span class=&quot;single-quote&quot;&gt;py_eval&lt;/span&gt; we only need **py_eval_init** and **py_eval**.&lt;/div&gt;&lt;h4&gt;Function signatures&lt;/h4&gt;&lt;div&gt;&lt;pre class=&quot;triple-quote c&quot;&gt;bool py_eval_init(UDF_INIT *initid, UDF_ARGS *args,                             char *message);char *py_eval(UDF_INIT *, UDF_ARGS *args, char *result,                         unsigned long *res_length, unsigned char *null_value,                         unsigned char *);&lt;/pre&gt;These are the standard definitions for MySQL functions that return a string, as is the case for &lt;span class=&quot;single-quote&quot;&gt;py_eval&lt;/span&gt;. 
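Before diving into the C++ declarations, it may help to see the contract these functions implement, sketched as a hypothetical Python model (the names here are illustrative, not part of the real UDF):

```python
# Hypothetical model of the py_eval contract: receive the argument as a
# string, evaluate it as a Python expression, return str() of the result.
def py_eval_model(expression: str) -> str:
    return str(eval(expression))

print(py_eval_model("[i for i in range(10)]"))  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
# str() renders dicts with single quotes, which is why the SQL examples
# above pipe the result through REPLACE to get valid JSON:
print(py_eval_model("{'a': 1}").replace("'", '"'))  # {"a": 1}
```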
To be able to declare these functions, you need the definitions of &lt;span class=&quot;single-quote&quot;&gt;UDF_INIT&lt;/span&gt; and &lt;span class=&quot;single-quote&quot;&gt;UDF_ARGS&lt;/span&gt;; you can find them in the MySQL server source code -&gt; &lt;a href=&quot;https://github.com/mysql/mysql-server/blob/8.0/include/mysql/udf_registration_types.h&quot;&gt;right here&lt;/a&gt;.&lt;/div&gt;&lt;h3&gt;Evaluating Python expressions&lt;/h3&gt;&lt;div&gt;For evaluating Python expressions, we'll be using &lt;a href=&quot;https://github.com/pybind/pybind11&quot;&gt;pybind11&lt;/a&gt;. That gives us the ability to directly access the Python interpreter and execute code.&lt;/div&gt;&lt;h5&gt;Example&lt;/h5&gt;&lt;div&gt;Make sure you have &lt;span class=&quot;single-quote&quot;&gt;g++&lt;/span&gt; installed. Try executing: &lt;span class=&quot;single-quote&quot;&gt;g++ --help&lt;/span&gt;. You also need some version of Python running on your system; for this tutorial I'll be using version _3.8_.&lt;pre class=&quot;triple-quote bash&quot;&gt;$ mkdir py_eval &amp;&amp; cd py_eval$ git clone https://github.com/pybind/pybind11&lt;/pre&gt;Create a new file called &lt;span class=&quot;single-quote&quot;&gt;main.cpp&lt;/span&gt; with the following content:&lt;pre class=&quot;triple-quote c&quot;&gt;#include &quot;pybind11/include/pybind11/embed.h&quot;#include &quot;pybind11/include/pybind11/eval.h&quot;#include &lt;iostream&gt;namespace py = pybind11;py::scoped_interpreter guard{}; // We need this to keep the interpreter aliveint main(void) {    auto obj = py::eval(&quot;[i for i in range(10)]&quot;);    std::cout &lt;&lt; std::string(py::str(obj)) &lt;&lt; std::endl;}&lt;/pre&gt;To run the example we have to compile the file.First, we need the compilation flags.&lt;pre class=&quot;triple-quote bash&quot;&gt;$ pkg-config python-3.8 --libs --cflags-I/usr/include/python3.8&lt;/pre&gt;Then, we can compile and run our code with the following.&lt;pre class=&quot;triple-quote 
bash&quot;&gt;$ g++ main.cpp -I/usr/include/python3.8 -lpython3.8$ ./a.out[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]&lt;/pre&gt;&lt;/div&gt;&lt;h3&gt;Putting it all together&lt;/h3&gt;&lt;div&gt;Download the UDF type definitions to the project folder.&lt;pre class=&quot;triple-quote bash&quot;&gt;$ wget https://raw.githubusercontent.com/mysql/mysql-server/8.0/include/mysql/udf_registration_types.h&lt;/pre&gt;Create a new file called &lt;span class=&quot;single-quote&quot;&gt;py_eval.cpp&lt;/span&gt;, with the following content:&lt;pre class=&quot;triple-quote c&quot;&gt;#include &quot;pybind11/include/pybind11/embed.h&quot;#include &quot;pybind11/include/pybind11/eval.h&quot;#include &quot;udf_registration_types.h&quot;#include &lt;string.h&gt;namespace py = pybind11;py::scoped_interpreter guard{}; // We need this to keep the interpreter aliveextern &quot;C&quot; bool py_eval_init(UDF_INIT *initid, UDF_ARGS *args,                             char *message){    // Here we can check if we received one argument    if (args-&gt;arg_count != 1)    {        // The function returns true if there is an error,        // the error message is copied to the message arg.        
strcpy(message, &quot;py_eval must have one argument&quot;);        return true;    }    // Cast the passed argument to string    args-&gt;arg_type[0] = STRING_RESULT;    initid-&gt;maybe_null = true; /* The result may be null */    return false;}extern &quot;C&quot; char *py_eval(UDF_INIT *, UDF_ARGS *args, char *result,                         unsigned long *res_length, unsigned char *null_value,                         unsigned char *){    // Evaluate the argument as a python expression    auto obj = py::eval(args-&gt;args[0]);    // Cast the result to std::string    std::string res_str = std::string(py::str(obj));    // Copy the output string from py::eval to the result argument    strcpy(result, res_str.c_str());    // Set the length of the result string    *res_length = res_str.length();    return result;}&lt;/pre&gt;Then, we have to compile the project as a shared library, and move it to the plugin folder of mysql (in your case, it could be located in some other directory).&lt;pre class=&quot;triple-quote &quot;&gt;$ g++ -I/usr/include/python3.8 -lpython3.8 -shared -fPIC -o py_eval.so py_eval.cpp$ sudo cp py_eval.so /usr/lib/mysql/plugin/&lt;/pre&gt;Now, it's time to try it from mysql.First, connect to your server as root.&lt;pre class=&quot;triple-quote &quot;&gt;$ sudo mysql -uroot&lt;/pre&gt;Create and test the function.&lt;pre class=&quot;triple-quote &quot;&gt;&gt; create function py_eval returns string soname 'py_eval.so';Query OK, 0 rows affected (0.029 sec)&gt; select py_eval('[i for i in range(10)]') list;+--------------------------------+| list                           |+--------------------------------+| [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] |+--------------------------------+1 row in set (0.001 sec)&lt;/pre&gt;&lt;/div&gt;&lt;h3&gt;Future&lt;/h3&gt;&lt;div&gt;There is a lot to do, for example there is no error control on the function execution. The python expression that we are trying to evaluate could fail causing a server reboot. 
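The missing error control can be sketched in Python (for illustration only; in the C++ UDF the equivalent guard would have to wrap the py::eval call):

```python
# Hypothetical sketch: catch evaluation failures instead of letting them
# propagate, so a bad expression returns an error string rather than
# crashing the server process.
def guarded_eval(expression: str) -> str:
    try:
        return str(eval(expression))
    except Exception as exc:
        return "py_eval error: " + str(exc)

print(guarded_eval("[i for i in range(3)]"))  # [0, 1, 2]
print(guarded_eval("1/0"))                    # py_eval error: division by zero
```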
Also, there is some extra work to do to be able to use &lt;span class=&quot;single-quote&quot;&gt;import&lt;/span&gt;. And there are many concerns regarding concurrency issues.If you want to contribute to improving the execution of Python code on MySQL server, please go to my &lt;a href=&quot;https://github.com/mliezun/mysql-python&quot;&gt;GitHub project&lt;/a&gt; and make a PR.I hope you enjoyed this tutorial and come back soon for more.&lt;/div&gt;&lt;/article&gt;</content><author><name>Miguel Liezun</name></author><category term="posts"/><summary type="html">Evaluate Python expressions inside MySQL using a UDF that binds to the Python interpreter.</summary></entry><entry><title type="html">Writing your own C malloc and free</title><id>https://mliezun.com/2020/04/11/custom-malloc.html</id><updated>2020-04-11T00:00:00Z</updated><link href="https://mliezun.com/2020/04/11/custom-malloc.html" rel="alternate" type="text/html"/><content type="html" xml:base="https://mliezun.com/2020/04/11/custom-malloc.html">&lt;article&gt;&lt;div&gt;&lt;/div&gt;&lt;h2&gt;Writing your own C malloc and free&lt;/h2&gt;&lt;div&gt;&lt;/div&gt;&lt;h3&gt;Challenge&lt;/h3&gt;&lt;div&gt;This challenge comes from the book Crafting Interpreters by Bob Nystrom. It can be found in &lt;a href=&quot;http://www.craftinginterpreters.com/chunks-of-bytecode.html#challenges&quot;&gt;Chapter 14 - Challenge 3&lt;/a&gt;.The challenge goes:&gt; You are allowed to call malloc() once, at the beginning of the interpreter's execution, to allocate a single big block of memory which your reallocate() function has access to. It parcels out blobs of memory from that single region, your own personal heap. 
It's your job to define how it does that.&lt;/div&gt;&lt;h3&gt;Solution&lt;/h3&gt;&lt;div&gt;As stated in the challenge I'll be using a big chunk of _contiguous_ memory. The main idea of my solution is to store the blocks of memory in the array, prepending a header with metadata to each block.&lt;pre class=&quot;triple-quote &quot;&gt; _______________________________________________|head_0|block_0 ... |head_1|block_1    ...      | &lt;/pre&gt;The structure of the header is pretty similar to that of a linked list.&lt;pre class=&quot;triple-quote c&quot;&gt;struct block_meta{    size_t size;    struct block_meta *next;    int free;};#define META_SIZE sizeof(struct block_meta)&lt;/pre&gt;It stores the size of the block, a pointer to the next block and a flag to mark whether it's free or not.Then, a function is needed to traverse the list of blocks and find any freed block:&lt;pre class=&quot;triple-quote c&quot;&gt;void *first_block = NULL;struct block_meta *find_free_block(struct block_meta **last, size_t size){    struct block_meta *current = first_block;    while (current &amp;&amp; !(current-&gt;free &amp;&amp; current-&gt;size &gt;= size))    {        *last = current;        current = current-&gt;next;    }    return current;}&lt;/pre&gt;This function receives a double pointer to a block_meta struct called &lt;span class=&quot;single-quote&quot;&gt;last&lt;/span&gt;, which at the end of the execution should point to the last node of the list, and a size_t variable that indicates the minimum size that the block needs to be.&lt;/div&gt;&lt;h5&gt;Memory initialization&lt;/h5&gt;&lt;div&gt;Two functions are needed to handle the big chunk of memory, one to initialize it and the other to free it.&lt;pre class=&quot;triple-quote c&quot;&gt;void initMemory();void freeMemory();&lt;/pre&gt;To implement 
&lt;span class=&quot;single-quote&quot;&gt;initMemory&lt;/span&gt; I've decided that I would ask for the maximum amount of memory that I could get from the OS.&lt;pre class=&quot;triple-quote c&quot;&gt;#define MINREQ 0x20000// Big block of memoryvoid *memory = NULL;// Position where the last block endssize_t endpos = 0;void initMemory(){    size_t required = PTRDIFF_MAX;    while (memory == NULL)    {        memory = malloc(required);        if (required &lt; MINREQ)        {            if (memory)            {                free(memory);            }            printf(&quot;Cannot allocate enough memory\n&quot;);            exit(ENOMEM);        }        required &gt;&gt;= 1;    }}void freeMemory(){    free(memory);}&lt;/pre&gt;As you can see, &lt;span class=&quot;single-quote&quot;&gt;initMemory&lt;/span&gt; starts by trying to allocate the maximum amount of memory allowed, and divides that amount by 2 every time the allocation fails. If there isn't at least 128KB of memory available the program crashes with ENOMEM.Now that we have our chunk of memory ready to go, we can start giving blocks away.&lt;pre class=&quot;triple-quote c&quot;&gt;struct block_meta *request_block(size_t size){    struct block_meta *last = NULL;    struct block_meta *block = find_free_block(&amp;last, size);    if (block)    {        block-&gt;free = 0;        return block;    }    // Append new block to list    block = memory + endpos;    endpos += META_SIZE + size;    if (last)    {        last-&gt;next = block;    }    else    {        first_block = block;    }    block-&gt;free = 0;    block-&gt;next = NULL;    block-&gt;size = size;    return block;}&lt;/pre&gt;How &lt;span class=&quot;single-quote&quot;&gt;request_block&lt;/span&gt; works:1. Tries to find a free block with enough space. If there is one, it is set as occupied and returned.2. If there isn't a free block available, it adds a new block with enough space at the end of &lt;span class=&quot;single-quote&quot;&gt;memory&lt;/span&gt; (the big chunk).3. If this is the first call, points the head of the list to the recently created block, else points the last node to the new block.4. Sets the new block as occupied, sets its size and sets next to null. Then returns the new block.With this function, implementing &lt;span class=&quot;single-quote&quot;&gt;malloc&lt;/span&gt; and &lt;span class=&quot;single-quote&quot;&gt;free&lt;/span&gt; is pretty easy:&lt;pre class=&quot;triple-quote c&quot;&gt;void *my_malloc(size_t size){    struct block_meta *block = request_block(size);    return block + 1;}void my_free(void *ptr){    struct block_meta *block = ptr - META_SIZE;    block-&gt;free = 1;}&lt;/pre&gt;To finish the challenge, I have to implement realloc, which is a little bit trickier.&lt;pre class=&quot;triple-quote c&quot;&gt;void *my_realloc(void *ptr, size_t size){    if (!ptr)    {        return my_malloc(size);    }    struct block_meta *block = ptr - META_SIZE;    if (block-&gt;size &gt;= size)    {        return block + 1;    }    uint8_t *newptr = my_malloc(size);    size_t i;    for (i = 0; i &lt; (block-&gt;size &lt; size ? 
block-&gt;size : size); i++)    {        newptr[i] = ((uint8_t *)ptr)[i];    }    block-&gt;free = 1;    return newptr;}&lt;/pre&gt;How realloc works:&lt;li&gt;If the pointer to reallocate is null, it works just like malloc.&lt;/li&gt;&lt;li&gt;If the given size is bigger than the prior size, it allocates a bigger block, copies all data from the original block to the new block, and marks the original block as free.&lt;/li&gt;&lt;li&gt;If the given size fits in the current block, the same block is returned and no data is copied.&lt;/li&gt;&lt;/div&gt;&lt;h3&gt;New challenge&lt;/h3&gt;&lt;div&gt;In my implementation I used a linked list where each node holds a pointer to the next, but given that I have control over the _entire_ memory this actually isn't necessary.My challenge to you is that you remove the pointer to next from the &lt;span class=&quot;single-quote&quot;&gt;block_meta&lt;/span&gt; struct.&lt;/div&gt;&lt;h3&gt;Resources&lt;/h3&gt;&lt;div&gt;&lt;li&gt;https://danluu.com/malloc-tutorial/&lt;/li&gt;&lt;li&gt;http://www.craftinginterpreters.com/chunks-of-bytecode.html&lt;/li&gt;&lt;/div&gt;&lt;/article&gt;</content><author><name>Miguel Liezun</name></author><category term="posts"/><summary type="html">Challenge for writing your own implementation of malloc and free.</summary></entry><entry><title type="html">Grotsky Part 3: Interpreting</title><id>https://mliezun.com/2020/04/01/grotsky-part3.html</id><updated>2020-04-01T00:00:00Z</updated><link href="https://mliezun.com/2020/04/01/grotsky-part3.html" rel="alternate" type="text/html"/><content type="html" xml:base="https://mliezun.com/2020/04/01/grotsky-part3.html">&lt;article&gt;&lt;div&gt;&lt;/div&gt;&lt;h2&gt;Grotsky Part 3: Interpreting&lt;/h2&gt;&lt;div&gt;&lt;/div&gt;&lt;h3&gt;It's slow! &lt;/h3&gt;&lt;div&gt;My interpreter is really, really, wait for it... 
_Really slow_.An example of badly performing Grotsky code:&lt;pre class=&quot;triple-quote &quot;&gt;# fib: calculates the n-th fibonacci number recursivelyfn fib(n) begin    if n &lt; 2 return n    return fib(n-2) + fib(n-1)endprintln(fib(30))&lt;/pre&gt;&lt;/div&gt;&lt;h5&gt;Running the code&lt;/h5&gt;&lt;div&gt;&lt;pre class=&quot;triple-quote &quot;&gt;$ time ./grotsky examples/fib.g&lt;/pre&gt;Gives a whopping result of:&lt;pre class=&quot;triple-quote &quot;&gt;832040real    0m11,154suser    0m11,806ssys     0m0,272s&lt;/pre&gt;Almost twelve seconds!!! Comparing with similar Python code&lt;pre class=&quot;triple-quote &quot;&gt;def fib(n):    if n &lt; 2: return n    return fib(n-2) + fib(n-1)print(fib(30))&lt;/pre&gt;Gives a result of:&lt;pre class=&quot;triple-quote &quot;&gt;832040real    0m0,423suser    0m0,387ssys     0m0,021s&lt;/pre&gt;That means my interpreter is at least 20 times slower than CPython.&lt;/div&gt;&lt;h5&gt;Why is it so slow?&lt;/h5&gt;&lt;div&gt;&lt;a href=&quot;https://www.reddit.com/r/golang/comments/5kv2xx/why_is_golangs_performance_worse_than_javas_in/&quot;&gt;Here is an explanation&lt;/a&gt;.As the person from the first comment states, Go's garbage collector is not well suited for this kind of scenario with heavy allocation of objects.&gt; Go's GC is not generational, so allocation requires (comparatively speaking) much more work. It's also tuned for low latency (smallest pause when GC has to stop the program) at the expense of throughput (i.e. total speed). 
This is the right trade-off for most programs but doesn't perform optimally on micro-benchmarks that measure throughput.Setting the GC percent to 800 (100 by default) more than halves the time that the function takes to compute:&lt;pre class=&quot;triple-quote &quot;&gt;$ time GOGC=800 ./grotsky examples/fib.g832040real    0m5,110suser    0m5,182ssys     0m0,061s&lt;/pre&gt;&lt;/div&gt;&lt;h3&gt;Interpreting functions&lt;/h3&gt;&lt;div&gt;Callable interface&lt;pre class=&quot;triple-quote go&quot;&gt;type callable interface {	arity() int	call(exec *exec, arguments []interface{}) interface{}}&lt;/pre&gt;_All Grotsky functions must be objects that implement the callable interface._For that I defined two kinds of structs:&lt;pre class=&quot;triple-quote go&quot;&gt;type function struct {	declaration   *fnStmt	closure       *env	isInitializer bool}type nativeFn struct {	arityValue int	callFn  func(exec *exec, arguments []interface{}) interface{}}&lt;/pre&gt;&lt;/div&gt;&lt;h5&gt;nativeFn&lt;/h5&gt;&lt;div&gt;Lets you define standard functions available on all Grotsky interpreters. 
Like &lt;span class=&quot;single-quote&quot;&gt;println&lt;/span&gt;.&lt;pre class=&quot;triple-quote go&quot;&gt;func (n *nativeFn) arity() int {	return n.arityValue}func (n *nativeFn) call(exec *exec, arguments []interface{}) interface{} {	return n.callFn(exec, arguments)}&lt;/pre&gt;From that, println would be pretty straightforward:&lt;pre class=&quot;triple-quote go&quot;&gt;...var println nativeFnprintln.arityValue = 1println.callFn = func(exec *exec, arguments []interface{}) interface{} {    fmt.Println(arguments[0])    return nil}...&lt;/pre&gt;&lt;/div&gt;&lt;h5&gt;Ordinary Grotsky functions&lt;/h5&gt;&lt;div&gt;For ordinary Grotsky functions, things are a little bit messier.First I have to introduce the &lt;span class=&quot;single-quote&quot;&gt;environment&lt;/span&gt;, an object that holds a &lt;span class=&quot;single-quote&quot;&gt;map[string]interface{}&lt;/span&gt; as a dictionary for variables in the local scope and a pointer to another environment that contains variables for the outer scope.&lt;pre class=&quot;triple-quote go&quot;&gt;type env struct {	state *state	enclosing *env	values    map[string]interface{}}func newEnv(state *state, enclosing *env) *env {	return &amp;env{		state:     state,		enclosing: enclosing,		values:    make(map[string]interface{}),	}}func (e *env) get(name *token) interface{} {	if value, ok := e.values[name.lexeme]; ok {		return value	}	if e.enclosing != nil {		return e.enclosing.get(name)	}	e.state.runtimeErr(errUndefinedVar, name)	return nil}func (e *env) define(name string, value interface{}) {	e.values[name] = value}&lt;/pre&gt;As you can see, the define method creates a variable in the local scope, and the get method tries to retrieve a variable first from the local scope and then from the outer scope.Let's see how functions are implemented.&lt;pre class=&quot;triple-quote go&quot;&gt;func (f *function) arity() int {	return len(f.declaration.params)}func (f *function) call(exec *exec, arguments 
[]interface{}) (result interface{}) {	env := newEnv(exec.state, f.closure)	for i := range f.declaration.params {		env.define(f.declaration.params[i].lexeme, arguments[i])	}	defer func() {		if r := recover(); r != nil {			if returnVal, isReturn := r.(returnValue); isReturn {				result = returnVal			} else {				panic(r)			}		}	}()	exec.executeBlock(f.declaration.body, env)	return nil}&lt;/pre&gt;Function &lt;span class=&quot;single-quote&quot;&gt;arity&lt;/span&gt; is pretty simple.The function &lt;span class=&quot;single-quote&quot;&gt;call&lt;/span&gt; takes an &lt;span class=&quot;single-quote&quot;&gt;exec&lt;/span&gt; object, that is no more than an instance of the interpreter, and the arguments to the function as an array of objects. Then it creates a new environment that is enclosed by the environment local to the function definition, and defines all the function parameters. Then comes the tricky part: first there is a deferred call to an anonymous function (let's ignore that for a moment), and in the end the function &lt;span class=&quot;single-quote&quot;&gt;executeBlock&lt;/span&gt; gets called. Let's see what that function does:&lt;pre class=&quot;triple-quote go&quot;&gt;func (e *exec) executeBlock(stmts []stmt, env *env) {	previous := e.env	defer func() {		e.env = previous	}()	e.env = env	for _, s := range stmts {		e.execute(s)	}}&lt;/pre&gt;What's happening here is that the interpreter steps into the new environment, saving the previous environment in a variable, executes all the given statements, and after that restores the environment to the previous one. 
Exactly as a function does.&lt;/div&gt;&lt;h5&gt;What happens when you hit a &lt;span class=&quot;single-quote&quot;&gt;return&lt;/span&gt;&lt;/h5&gt;&lt;div&gt;&lt;pre class=&quot;triple-quote go&quot;&gt;type returnValue interface{}...func (e *exec) visitReturnStmt(stmt *returnStmt) R {	if stmt.value != nil {		panic(returnValue(stmt.value.accept(e)))	}	return nil}&lt;/pre&gt;When you get to a return node in the AST, the node panics with a return value. This has to do with the fact that you need to go up the call stack and finish the execution of the function, otherwise the function would keep executing.That's the reason for the deferred function we set aside a moment ago:&lt;pre class=&quot;triple-quote go&quot;&gt;func (f *function) call(exec *exec, arguments []interface{}) (result interface{}) {    ...    defer func() {		if r := recover(); r != nil {			if returnVal, isReturn := r.(returnValue); isReturn {				result = returnVal			} else {				panic(r)			}		}    }()    ...}&lt;/pre&gt;This function recovers from a panic. If the value recovered is of type &lt;span class=&quot;single-quote&quot;&gt;returnValue&lt;/span&gt; it recovers successfully and sets the result value of the function call to the return value, else it panics again.&lt;/div&gt;&lt;h3&gt;Hasta la vista, baby&lt;/h3&gt;&lt;div&gt;That's it for now. There is a lot of nifty stuff left to talk about. But I think it's enough for now.Remember to check out the &lt;a href=&quot;https://github.com/mliezun/grotsky&quot;&gt;source code&lt;/a&gt;. And stay tuned for more.&lt;/div&gt;&lt;/article&gt;</content><author><name>Miguel Liezun</name></author><category term="posts"/><summary type="html">Part 3 of building my own language series. 
Interpreting expressions and statements, traversing the Abstract Syntax Tree.</summary></entry><entry><title type="html">Grotsky Part 2: Parsing expressions</title><id>https://mliezun.com/2020/03/15/grotsky-part2.html</id><updated>2020-03-15T00:00:00Z</updated><link href="https://mliezun.com/2020/03/15/grotsky-part2.html" rel="alternate" type="text/html"/><content type="html" xml:base="https://mliezun.com/2020/03/15/grotsky-part2.html">&lt;article&gt;&lt;div&gt;&lt;/div&gt;&lt;h2&gt;Grotsky Part 2: Parsing expressions&lt;/h2&gt;&lt;div&gt;&lt;/div&gt;&lt;h3&gt;Expressions&lt;/h3&gt;&lt;div&gt;Parsing an expression like &lt;span class=&quot;single-quote&quot;&gt;1+2*3&lt;/span&gt; requires a complex representation in memory. Just looking at it we think that it's pretty simple, but there is some hidden &lt;span class=&quot;single-quote&quot;&gt;hierarchy&lt;/span&gt; that we have to pay attention to, like the fact that first we have to compute &lt;span class=&quot;single-quote&quot;&gt;2*3&lt;/span&gt; and then add &lt;span class=&quot;single-quote&quot;&gt;1&lt;/span&gt; to it.To represent that in a data structure, the best thing we can come up with is a tree, as seen in the next figure:![image](/assets/images/grotsky-part2/AST.png)As you can see the leaves of the tree are literals and the root and intermediate nodes are operations that have to be applied from the bottom up. That means that we traverse the tree until we reach the bottom and start computing the results by going up.&lt;/div&gt;&lt;h3&gt;Defining node types&lt;/h3&gt;&lt;div&gt;&gt; Not all operations are created equal.We have to define how each node fits into the tree.I'll use the following syntax: &lt;span class=&quot;single-quote&quot;&gt;Binary -&gt; left expr, operator token, right expr&lt;/span&gt;. 
Which means that a binary operation (as we have seen in the image before) links to 2 expressions (left and right) and stores 1 value (operator).&lt;/div&gt;&lt;h5&gt;Let's define all possible operations on literals&lt;/h5&gt;&lt;div&gt;&lt;pre class=&quot;triple-quote &quot;&gt;Literal -&gt; value object# 1, &quot;asd&quot;, 5.2, true, falseBinary -&gt; left expr, operator token, right expr# 1+2, 3*3, 4^2+1Grouping -&gt; expression expr# (1+2)Logical -&gt; left expr, operator token, right expr# true or false, false and trueUnary -&gt; operator token, right expr# not true, -5List -&gt; elements []expr# [1, 2, 3, [4], &quot;asd&quot;]Dictionary -&gt; elements []expr# {&quot;a&quot;: 1, &quot;b&quot;: 2, 3: 4}Access -&gt; object expr, slice expr# [1, 2, 3][0], {&quot;a&quot;:1}[&quot;a&quot;]Slice -&gt; first expr, second expr, third expr# [1, 2, 3, 4, 5, 6][1:4:2]&lt;/pre&gt;&lt;/div&gt;&lt;h3&gt;Traversing the abstract syntax tree&lt;/h3&gt;&lt;div&gt;To traverse the syntax tree we need a pattern that's uniform and easily scalable when we have to add other types of expressions and statements.For that we'll use the &lt;a href=&quot;https://en.wikipedia.org/wiki/Visitor_pattern&quot;&gt;Visitor Pattern&lt;/a&gt;.&lt;/div&gt;&lt;h4&gt;Visitor Pattern&lt;/h4&gt;&lt;div&gt;First we need an interface for the expression that allows a visitor to visit it.&lt;pre class=&quot;triple-quote go&quot;&gt;type expr interface {    accept(exprVisitor) interface{}}&lt;/pre&gt;An expression visitor should have a method for each kind of expression it has to visit.&lt;pre class=&quot;triple-quote go&quot;&gt;type exprVisitor interface {    visitLiteralExpr(expr expr) interface{}    visitBinaryExpr(expr expr) interface{}    visitGroupingExpr(expr expr) interface{}    visitLogicalExpr(expr expr) interface{}    visitUnaryExpr(expr expr) interface{}    visitListExpr(expr expr) interface{}    visitDictionaryExpr(expr expr) interface{}    visitAccessExpr(expr expr) interface{}    
    visitSliceExpr(expr expr) interface{}
}&lt;/pre&gt;Then we have to define a type for each kind of expression that implements the &lt;span class=&quot;single-quote&quot;&gt;expr&lt;/span&gt; interface. For example, this is the implementation for a binary expression:&lt;pre class=&quot;triple-quote go&quot;&gt;type binaryExpr struct {
    left expr
    operator *token
    right expr
}

func (s *binaryExpr) accept(visitor exprVisitor) interface{} {
    return visitor.visitBinaryExpr(s)
}&lt;/pre&gt;For all other expressions the definition is practically the same.&lt;/div&gt;&lt;h4&gt;String Visitor&lt;/h4&gt;&lt;div&gt;To finish this chapter, let's define a visitor that allows you to print the syntax tree in a lisp-like syntax, e.g. (+ 1 2). Here is the implementation of the string visitor for a binary expression:&lt;pre class=&quot;triple-quote go&quot;&gt;type stringVisitor struct{}

func (v stringVisitor) visitBinaryExpr(expr expr) interface{} {
    binary := expr.(*binaryExpr)
    return fmt.Sprintf(&quot;(%s %v %v)&quot;, binary.operator.lexeme, binary.left.accept(v), binary.right.accept(v))
}&lt;/pre&gt;&lt;/div&gt;&lt;h3&gt;Grotsky expression&lt;/h3&gt;&lt;div&gt;You can check out the state of the Grotsky project right here: &lt;a href=&quot;https://github.com/mliezun/grotsky&quot;&gt;https://github.com/mliezun/grotsky&lt;/a&gt;. Right now, Grotsky is able to parse and print all the types of expressions defined in this article.&lt;/div&gt;&lt;h4&gt;Expressions&lt;/h4&gt;&lt;div&gt;Examples of operations supported:&lt;pre class=&quot;triple-quote &quot;&gt;# Math operations
1+2*3^2-(4+123)/2.6
=&gt; (- (+ 1 (* 2 (^ 3 2))) (/ (+ 4 123) 2.6))

# Logical operations
true or false
=&gt; (or true false)

# Comparisons
1 == 1 and (1 &gt; 3 or 11/5.5 &lt;= 3+2^2 and 1 != 2)
=&gt; (and (== 1 1) (or (&gt; 1 3) (and (&lt;= (/ 11 5.5) (+ 3 (^ 2 2))) (!= 1 2))))

# Lists
[1, 2, [3], &quot;asd&quot;]
=&gt; (list 1 2 (list 3) &quot;asd&quot;)

# List slicing
[1,2,3,4][1:3][::2][0]
=&gt; (#0 (#::2 (#1:3 (list 1 2 3 4))))

# Dictionary
{
    1: 2,
    3: 4,
    &quot;asd&quot;: 3.14
}
=&gt; (dict 1=&gt;2 3=&gt;4 &quot;asd&quot;=&gt;3.14)

# Dictionary key lookup
{&quot;key&quot;:0.6}[&quot;key&quot;]
=&gt; (#&quot;key&quot; (dict &quot;key&quot;=&gt;0.6))&lt;/pre&gt;That's it for now. In the next chapter we'll traverse the tree, but instead of printing we'll execute the operations listed before. If you have questions or suggestions, please get in touch.&lt;/div&gt;&lt;/article&gt;</content><author><name>Miguel Liezun</name></author><category term="posts"/><summary type="html">Part 2 of building my own language series. Parsing expressions, traversing and printing the Abstract Syntax Tree.</summary></entry><entry><title type="html">Grotsky Part 1: Syntax</title><id>https://mliezun.com/2020/02/21/grotsky-part1.html</id><updated>2020-02-21T00:00:00Z</updated><link href="https://mliezun.com/2020/02/21/grotsky-part1.html" rel="alternate" type="text/html"/><content type="html" xml:base="https://mliezun.com/2020/02/21/grotsky-part1.html">&lt;article&gt;&lt;div&gt;&lt;/div&gt;&lt;h2&gt;Grotsky Part 1: Syntax&lt;/h2&gt;&lt;div&gt;&lt;/div&gt;&lt;h4&gt;Syntax Restrictions&lt;/h4&gt;&lt;div&gt;&lt;li&gt;No use of semicolon &lt;span class=&quot;single-quote&quot;&gt;;&lt;/span&gt;&lt;/li&gt;&lt;li&gt;Block statements delimited by &lt;span class=&quot;single-quote&quot;&gt;begin&lt;/span&gt; and &lt;span class=&quot;single-quote&quot;&gt;end&lt;/span&gt;&lt;/li&gt;&lt;li&gt;Function definition using &lt;span class=&quot;single-quote&quot;&gt;fn&lt;/span&gt; keyword&lt;/li&gt;&lt;li&gt;Logic operators in plain English: &lt;span class=&quot;single-quote&quot;&gt;or&lt;/span&gt;, &lt;span class=&quot;single-quote&quot;&gt;and&lt;/span&gt;, &lt;span class=&quot;single-quote&quot;&gt;not&lt;/span&gt;&lt;/li&gt;&lt;li&gt;Conditional statements use the following keywords: &lt;span class=&quot;single-quote&quot;&gt;if&lt;/span&gt;, &lt;span class=&quot;single-quote&quot;&gt;elif&lt;/span&gt;, &lt;span
class=&quot;single-quote&quot;&gt;else&lt;/span&gt;&lt;/li&gt;&lt;li&gt;There is no switch statement&lt;/li&gt;&lt;li&gt;Class definition with &lt;span class=&quot;single-quote&quot;&gt;class&lt;/span&gt; keyword&lt;/li&gt;&lt;li&gt;Arithmetic operations: &lt;span class=&quot;single-quote&quot;&gt;*&lt;/span&gt;, &lt;span class=&quot;single-quote&quot;&gt;/&lt;/span&gt;, &lt;span class=&quot;single-quote&quot;&gt;-&lt;/span&gt;, &lt;span class=&quot;single-quote&quot;&gt;+&lt;/span&gt;, &lt;span class=&quot;single-quote&quot;&gt;^&lt;/span&gt;&lt;/li&gt;&lt;li&gt;Grouping with parentheses &lt;span class=&quot;single-quote&quot;&gt;()&lt;/span&gt;&lt;/li&gt;&lt;li&gt;Native support for python-like lists and dictionaries: &lt;span class=&quot;single-quote&quot;&gt;[]&lt;/span&gt;, &lt;span class=&quot;single-quote&quot;&gt;{}&lt;/span&gt;&lt;/li&gt;&lt;li&gt;Support for enhanced for loop: &lt;span class=&quot;single-quote&quot;&gt;for i, el in array&lt;/span&gt;&lt;/li&gt;&lt;li&gt;Keywords and identifiers can only use alphabetic characters&lt;/li&gt;&lt;/div&gt;&lt;h4&gt;Primitives&lt;/h4&gt;&lt;div&gt;&lt;li&gt;&lt;span class=&quot;single-quote&quot;&gt;nil&lt;/span&gt;&lt;/li&gt;&lt;li&gt;Integers&lt;/li&gt;&lt;li&gt;Floats&lt;/li&gt;&lt;li&gt;Booleans&lt;/li&gt;&lt;li&gt;Strings&lt;/li&gt;&lt;li&gt;Lists&lt;/li&gt;&lt;li&gt;Dictionaries&lt;/li&gt;&lt;/div&gt;&lt;h4&gt;Example of functions and operations&lt;/h4&gt;&lt;div&gt;&lt;pre class=&quot;triple-quote ruby&quot;&gt;## Arithmetic
print(2^10 - 2323*3)
# Output: -5945
print(2^(12*3+400/-4+10*5/2))
# Output: 1.8189894035458565e-12

## Logic
print(true or false)
# Output: true (short circuit)
print(false and true)
# Output: false (short circuit)

## Conditionals
if 3 &gt; 2 or (1 &lt; 3 and 2 == 2) begin
    print('Condition is true')
end
elif 3 == 4 begin
    print('Condition 2 is true')
end
else begin
    print('Conditions are false')
end

## Lists
for i in [1, 2, 3, 4] begin
    print(i)
end
let lst = [1, 2, 3, 4]
lst[0] = -1
print(lst)
# Output: [-1, 2, 3, 4]
print(lst[1:3])
# Output: [2, 3]

## Dictionaries
# (dictionaries and lists not allowed as keys)
let dct = {
    &quot;Key1&quot;: &quot;Val1&quot;,
    2: &quot;Val2&quot;,
    true: false
}
for key, val in dct begin
    print(key, val)
end

## Functions
fn square(x)
begin
    return x^2
end

fn operate(x, operation)
begin
    return operation(x)
end

## Closures
fn makeCounter()
begin
    let n = 0
    return fn() begin
        n = n+1
        return n
    end
end

## Classes
class Counter
begin
    init(start) begin
        self.start = start
    end
    count() begin
        self.start = self.start+1
        return self.start
    end
end

class CounterTwo &lt; Counter
begin
    count() begin
        return super.count()*2
    end
end&lt;/pre&gt;&lt;/div&gt;&lt;h4&gt;Syntax definition&lt;/h4&gt;&lt;div&gt;Let's build a syntax definition in Backus-Naur form that will be easy to parse with a recursive descent parser.&lt;/div&gt;&lt;h5&gt;Expressions&lt;/h5&gt;&lt;div&gt;&lt;pre class=&quot;triple-quote &quot;&gt;expression       assignment;
list             &quot;[&quot; arguments? &quot;]&quot;;
dictionary       &quot;{&quot; dict_elements? &quot;}&quot;;
dict_elements    keyval (&quot;,&quot; keyval)*;
keyval           expression &quot;:&quot; expression;
assignment       (call &quot;.&quot;)? IDENTIFIER &quot;=&quot; assignment | access;
access           logic_or (&quot;[&quot; slice &quot;]&quot;)*;
logic_or         logic_and (&quot;or&quot; logic_and)*;
logic_and        equality (&quot;and&quot; equality)*;
equality         comparison ((&quot;!=&quot; | &quot;==&quot;) comparison)*;
comparison       addition ((&quot;&gt;&quot; | &quot;&gt;=&quot; | &quot;&lt;&quot; | &quot;&lt;=&quot;) addition)*;
addition         multiplication ((&quot;-&quot; | &quot;+&quot;) multiplication)*;
multiplication   power ((&quot;/&quot; | &quot;*&quot;) power)*;
power            unary (&quot;^&quot; unary)*;
unary            (&quot;not&quot; | &quot;-&quot;) unary | call;
call             primary (&quot;(&quot; arguments?
&quot;)&quot; | &quot;.&quot; IDENTIFIER)*;
arguments        expression (&quot;,&quot; expression)*;
slice            (&quot;:&quot; expression)
                | (&quot;:&quot; expression &quot;:&quot; expression)
                | (&quot;:&quot; &quot;:&quot; expression)
                | expression
                | (expression &quot;:&quot;)
                | (expression &quot;:&quot; expression)
                | (expression &quot;:&quot; &quot;:&quot; expression)
                | (expression &quot;:&quot; expression &quot;:&quot; expression);
primary          NUMBER
                | STRING
                | &quot;false&quot;
                | &quot;true&quot;
                | &quot;nil&quot;
                | IDENTIFIER
                | &quot;(&quot; expression &quot;)&quot;
                | fnAnon
                | list
                | dictionary;
fnAnon           &quot;fn&quot; &quot;(&quot; parameters? &quot;)&quot; block;&lt;/pre&gt;&lt;/div&gt;&lt;h5&gt;Statements&lt;/h5&gt;&lt;div&gt;&lt;pre class=&quot;triple-quote &quot;&gt;program         declaration* EOF;
declaration     classDecl | funDecl | varDecl | statement;
classDecl       &quot;class&quot; IDENTIFIER ( &quot;&lt;&quot; IDENTIFIER )? &quot;begin&quot; methodDecl* &quot;end&quot; NEWLINE;
methodDecl      &quot;class&quot;? function;
funDecl         &quot;fn&quot; function ;
function        IDENTIFIER &quot;(&quot; parameters? &quot;)&quot; block ;
parameters      IDENTIFIER ( &quot;,&quot; IDENTIFIER )* ;
varDecl         &quot;let&quot; IDENTIFIER (&quot;=&quot; expression)? NEWLINE;
statement       forStmt
                | ifStmt
                | returnStmt
                | whileStmt
                | exprStmt
                | block;
exprStmt        expression NEWLINE;
forStmt         &quot;for&quot;  (classicFor | newFor) statement;
classicFor      (varDecl | exprStmt | &quot;,&quot;) expression? &quot;,&quot; expression?;
newFor          IDENTIFIER (&quot;,&quot; IDENTIFIER)?
&quot;in&quot; expression;
ifStmt          &quot;if&quot; expression statement (&quot;elif&quot; expression statement)* (&quot;else&quot; statement)?;
returnStmt      &quot;return&quot; expression? NEWLINE;
whileStmt       &quot;while&quot; expression statement;
block           &quot;begin&quot; NEWLINE declaration* &quot;end&quot; NEWLINE;&lt;/pre&gt;That's it! The next step is to build a lexer and a parser.&lt;/div&gt;&lt;/article&gt;</content><author><name>Miguel Liezun</name></author><category term="posts"/><summary type="html">Part 1 of building my own language series. Defining the syntax of the Grotsky toy language.</summary></entry><entry><title type="html">Sudoku Solver</title><id>https://mliezun.com/2020/02/18/sudoku-solver.html</id><updated>2020-02-18T00:00:00Z</updated><link href="https://mliezun.com/2020/02/18/sudoku-solver.html" rel="alternate" type="text/html"/><content type="html" xml:base="https://mliezun.com/2020/02/18/sudoku-solver.html">&lt;article&gt;&lt;div&gt;&lt;/div&gt;&lt;h2&gt;Sudoku Solver&lt;/h2&gt;&lt;div&gt;I wanted to make my own sudoku solver to challenge myself. I'm not a sudoku player, so my approach is a brute-force scan of possible combinations, sort of. I just know the basic rules:&lt;li&gt;Numbers 1-9 are allowed.&lt;/li&gt;&lt;li&gt;Numbers in the same row cannot be repeated.&lt;/li&gt;&lt;li&gt;Numbers in the same column cannot be repeated.&lt;/li&gt;&lt;li&gt;Numbers in the 3x3 square cannot be repeated.&lt;/li&gt;The first thing I did was to build some classes that calculate the possible values a cell can have if it's empty, based on the constraints. I came up with 3 classes:&lt;li&gt;&lt;span class=&quot;single-quote&quot;&gt;Board&lt;/span&gt; that stores the entire board.&lt;/li&gt;&lt;li&gt;&lt;span class=&quot;single-quote&quot;&gt;BoardSlice&lt;/span&gt; that stores a slice of a board. 
An object of this type is returned when a &lt;span class=&quot;single-quote&quot;&gt;Board&lt;/span&gt; is sliced (method &lt;span class=&quot;single-quote&quot;&gt;__getitem__&lt;/span&gt;).&lt;/li&gt;&lt;li&gt;&lt;span class=&quot;single-quote&quot;&gt;Cell&lt;/span&gt; that stores the value of a single cell and calculates all possible values a cell can take.&lt;/li&gt;The class &lt;span class=&quot;single-quote&quot;&gt;Cell&lt;/span&gt; receives a board, the coordinates on the board, and the value it holds. It also has an options method that uses Python's &lt;span class=&quot;single-quote&quot;&gt;set&lt;/span&gt; data structure to calculate the possibilities. If you look at the following snippet you can see that the method &lt;span class=&quot;single-quote&quot;&gt;options&lt;/span&gt; generates the sets: &lt;span class=&quot;single-quote&quot;&gt;options&lt;/span&gt; that contains all possible options (1-9), &lt;span class=&quot;single-quote&quot;&gt;row&lt;/span&gt; that contains all the numbers that are in the same row, &lt;span class=&quot;single-quote&quot;&gt;column&lt;/span&gt; that contains all the numbers that are in the same column, and &lt;span class=&quot;single-quote&quot;&gt;square&lt;/span&gt; that contains all the numbers that are in the same 3x3 square. 
The return value is &lt;span class=&quot;single-quote&quot;&gt;options&lt;/span&gt; without all the used values.&lt;pre class=&quot;triple-quote python&quot;&gt;class Cell:
    def __init__(self, b, i, j, value):
        self.b = b
        self.value = value
        self.i = i
        self.j = j

    def options(self):
        if self.value != 0:
            return {self.value}
        options = set(range(1, 10))
        row = set(map(lambda x: x.value, self.b[self.i]))
        column = set(map(lambda x: x.value, self.b[:][self.j]))

        def to_square(k): return slice((k // 3) * 3, (k // 3) * 3 + 3)
        square = set(
            map(lambda x: x.value,
                self.b[to_square(self.i)][to_square(self.j)]))
        return options - row - column - square - {0}&lt;/pre&gt;To make the implementation of the square easier, I used the class &lt;span class=&quot;single-quote&quot;&gt;BoardSlice&lt;/span&gt;, which contains a slice of a board and implements the magic method &lt;span class=&quot;single-quote&quot;&gt;__getitem__&lt;/span&gt;.&lt;pre class=&quot;triple-quote python&quot;&gt;class BoardSlice:
    def __init__(self, board_slice):
        self.board_slice = board_slice

    def __getitem__(self, items):
        if type(items) == slice:
            return (el for row in self.board_slice for el in row[items])
        if type(items) == int:
            return (row[items] for row in self.board_slice)
        raise KeyError&lt;/pre&gt;The base class &lt;span class=&quot;single-quote&quot;&gt;Board&lt;/span&gt; contains the board and a copy method that copies all the values and creates a new &lt;span class=&quot;single-quote&quot;&gt;Board&lt;/span&gt; object. 
This is necessary to avoid messing with object references and to have a clean object when needed.&lt;pre class=&quot;triple-quote python&quot;&gt;class Board:
    def __init__(self, board):
        self.board = [[Cell(self, i, j, value)
                       for (j, value) in enumerate(row)] for (i, row) in enumerate(board)]

    def copy(self):
        return Board(((cell.value for cell in row) for row in self.board))

    def __getitem__(self, items):
        if type(items) == int:
            return self.board[items]
        if type(items) == slice:
            return BoardSlice(self.board[items])
        raise KeyError

    def __repr__(self):
        return repr(self.board)&lt;/pre&gt;With these tools the next step is to solve the problem! My idea was to use a mixed iterative-recursive algorithm. The first pass will be iterative, and if needed, the second pass will be recursive.&lt;/div&gt;&lt;h5&gt;Iterative pass&lt;/h5&gt;&lt;div&gt;It iterates over the whole board and calculates the options that each cell can have. 
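In plain Python set terms, the option computation for a single cell boils down to subtracting the digits already used in the cell's row, column, and 3x3 square from 1-9. A minimal standalone sketch (assuming a bare 9x9 list of lists with 0 for empty cells, rather than the Board/Cell classes from this post; the helper name cell_options is made up):

```python
def cell_options(board, i, j):
    # board is a plain 9x9 list of lists where 0 marks an empty cell.
    if board[i][j] != 0:
        return {board[i][j]}
    row = set(board[i])
    column = {board[r][j] for r in range(9)}
    si, sj = (i // 3) * 3, (j // 3) * 3
    square = {board[r][c] for r in range(si, si + 3) for c in range(sj, sj + 3)}
    # 0 never shows up as a candidate because candidates start at 1.
    return set(range(1, 10)) - row - column - square
```

The pass then just fills in any cell whose option set has exactly one element and repeats until nothing changes.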
If a cell has only one option, set that option on the cell and set a flag to repeat the iterative pass; if it has 0 options, return &lt;span class=&quot;single-quote&quot;&gt;None&lt;/span&gt;, meaning that the board has no solution; and if it has more than one option, store the options for the recursive pass. If the loop ends and no cell has more than one option, then we solved the board! The idea of this first pass is to solve an _easy_ board quickly.&lt;/div&gt;&lt;h5&gt;Recursive pass&lt;/h5&gt;&lt;div&gt;If the iterative pass ends and we found that a cell has more than one option, then we try all of those options and call solve again! If solve returns a board, that means we've found the solution! If solve returns None (back at the iterative pass), we have to try another option.&lt;/div&gt;&lt;h5&gt;SudokuSolver&lt;/h5&gt;&lt;div&gt;The class is pretty straightforward.&lt;pre class=&quot;triple-quote python&quot;&gt;class SudokuSolver:
    @staticmethod
    def solve(board):
        b = board.copy()
        # First pass: Iterative
        board_map = {}
        exhaust = False
        while not exhaust:
            exhaust = True
            for i in range(9):
                for j in range(9):
                    cell = b[i][j]
                    if cell.value == 0:
                        options = cell.options()
                        if len(options) == 1:
                            cell.value = options.pop()
                            exhaust = False
                        elif len(options) == 0:
                            return None
                        elif len(board_map) == 0:
                            board_map[(i, j)] = options
        # Second pass: Recursive
        for ((i, j), options) in board_map.items():
            for op in options:
                b[i][j].value = op
                solved = SudokuSolver.solve(b)
                if solved:
                    return solved
            return None
        return b&lt;/pre&gt;&lt;/div&gt;&lt;h5&gt;Conclusions&lt;/h5&gt;&lt;div&gt;Actually, my implementation is not a brute-force algorithm; it is a search algorithm that searches for the path to solving a board. It doesn't try all values on all cells nonsensically; rather, it tries _some_ options for a given cell and advances to the next option as _soon_ as it detects that it's not on the correct path.&lt;/div&gt;&lt;h4&gt;Source&lt;/h4&gt;&lt;div&gt;Take a look at the &lt;a href=&quot;https://github.com/mliezun/sudoku-solver&quot;&gt;source code&lt;/a&gt;.&lt;/div&gt;&lt;/article&gt;</content><author><name>Miguel Liezun</name></author><category term="posts"/><summary type="html">Iterative + recursive sudoku solver using python magic methods.</summary></entry><entry><title type="html">Crafting interpreters</title><id>https://mliezun.com/2020/02/12/crafting-interpreters.html</id><updated>2020-02-12T00:00:00Z</updated><link href="https://mliezun.com/2020/02/12/crafting-interpreters.html" rel="alternate" type="text/html"/><content type="html" xml:base="https://mliezun.com/2020/02/12/crafting-interpreters.html">&lt;article&gt;&lt;div&gt;&lt;/div&gt;&lt;h2&gt;Crafting interpreters&lt;/h2&gt;&lt;div&gt;I've just finished section 2 of the book _Crafting Interpreters_, and I wanted to upload it to GitHub right away. Take a look at the &lt;a href=&quot;https://github.com/mliezun/jlox&quot;&gt;source code&lt;/a&gt;. Besides the Lox specification I've added:&lt;li&gt;The keyword &lt;span class=&quot;single-quote&quot;&gt;until&lt;/span&gt; that is a variation of &lt;span class=&quot;single-quote&quot;&gt;while&lt;/span&gt; loops (as in Ruby).
&lt;/li&gt;&lt;li&gt;&lt;span class=&quot;single-quote&quot;&gt;print&lt;/span&gt; is a function instead of a statement.&lt;/li&gt;&lt;li&gt;&lt;span class=&quot;single-quote&quot;&gt;eval&lt;/span&gt; function that lets you evaluate source code at runtime.&lt;/li&gt;&lt;li&gt;Class methods.&lt;/li&gt;I'll implement another language interpreter, this time using &lt;span class=&quot;single-quote&quot;&gt;golang&lt;/span&gt; and with a syntax similar to Ruby.&lt;/div&gt;&lt;/article&gt;</content><author><name>Miguel Liezun</name></author><category term="posts"/><summary type="html">Lox language interpreter based on the book craftinginterpreters.com by Bob Nystrom.</summary></entry><entry><title type="html">Reinventing the Wheel: PHP Generators</title><id>https://mliezun.com/2020/01/24/php-generator.html</id><updated>2020-01-24T00:00:00Z</updated><link href="https://mliezun.com/2020/01/24/php-generator.html" rel="alternate" type="text/html"/><content type="html" xml:base="https://mliezun.com/2020/01/24/php-generator.html">&lt;article&gt;&lt;div&gt;&lt;/div&gt;&lt;h2&gt;Reinventing the Wheel: PHP Generators&lt;/h2&gt;&lt;div&gt;&lt;/div&gt;&lt;h3&gt;First things first: how does a generator work?&lt;/h3&gt;&lt;div&gt;&lt;/div&gt;&lt;h4&gt;Starting back at C&lt;/h4&gt;&lt;div&gt;Let's create a function that gives us the next number of the Fibonacci sequence each time we call it.&lt;pre class=&quot;triple-quote c&quot;&gt;int fibonacci()
{
    static int a = 0;
    static int b = 1;
    int aux = b;
    b = a + b;
    a = aux;
    return a;
}&lt;/pre&gt;If we call fibonacci(), the first time we'll get 1, the second time 1, the third 2, the fourth 3, and so on... This happens because we declared the variables &lt;span class=&quot;single-quote&quot;&gt;a, b&lt;/span&gt; to be static. This means that they maintain their value after the function returns. 
Normally, what happens (if we don't declare a variable as static) is that the variables inside the function don't maintain the values of the last execution.&lt;/div&gt;&lt;h4&gt;First generator for PHP&lt;/h4&gt;&lt;div&gt;The equivalent function in PHP is pretty similar to C's approach.&lt;pre class=&quot;triple-quote php&quot;&gt;function fibonacci()
{
    static $a = 0;
    static $b = 1;
    $aux = $b;
    $b = $a + $b;
    $a = $aux;
    return $a;
}

$out = [];
for ($i = 1; $i &lt;= 10; $i++) {
    $out[] = fibonacci();
}
echo implode(', ', $out) . &quot;\n&quot;;
/*
Output: 1, 1, 2, 3, 5, 8, 13, 21, 34, 55
*/&lt;/pre&gt;Let's compare this to the &lt;span class=&quot;single-quote&quot;&gt;real&lt;/span&gt; PHP version using &lt;span class=&quot;single-quote&quot;&gt;yield&lt;/span&gt;.&lt;pre class=&quot;triple-quote php&quot;&gt;function fibonacci($N)
{
    $a = 0;
    $b = 1;
    for ($i = 0; $i &lt; $N; $i++) {
        $aux = $b;
        $b = $a + $b;
        $a = $aux;
        yield $a;
    }
}

$out = [];
foreach (fibonacci(10) as $fib) {
    $out[] = $fib;
}
echo implode(', ', $out) .
&quot;\n&quot;;
/*
Output: 1, 1, 2, 3, 5, 8, 13, 21, 34, 55
*/&lt;/pre&gt;&lt;/div&gt;&lt;h4&gt;Creating a custom version of PHP &lt;span class=&quot;single-quote&quot;&gt;yield&lt;/span&gt;&lt;/h4&gt;&lt;div&gt;This is my own version, using the parallel library and channels (which probably uses yield internally).&lt;pre class=&quot;triple-quote php&quot;&gt;class MyGenerator implements Iterator
{
    private $chan;
    private $current;
    private $iteratorFn;
    private $runtime;
    private $key = -1;
    private $valid = true;

    public function __construct($iteratorFn)
    {
        $this-&gt;iteratorFn = $iteratorFn;
        $this-&gt;runtime = new \parallel\Runtime();
        $channel = new \parallel\Channel();
        $this-&gt;runtime-&gt;run(function() use ($iteratorFn, $channel) {
            $iteratorFn(function ($val) use ($channel) {
                $channel-&gt;send($val);
            });
            $channel-&gt;close();
        });
        $this-&gt;chan = $channel;
        $this-&gt;next();
    }

    public function current()
    {
        return $this-&gt;current;
    }

    public function next()
    {
        try {
            ++$this-&gt;key;
            $val = $this-&gt;chan-&gt;recv();
            $this-&gt;current = $val;
        } catch (\parallel\Channel\Error\Closed $e) {
            $this-&gt;valid = false;
        }
        return $this-&gt;current;
    }

    public function key() {return $this-&gt;key;}
    public function valid() {return $this-&gt;valid;}
    public function rewind() {}
}

function fibonacci($N)
{
    return new MyGenerator(function ($yield) use ($N) {
        $a = 0;
        $b = 1;
        for ($i = 0; $i &lt; $N; $i++) {
            $aux = $b;
            $b = $a + $b;
            $a = $aux;
            $yield($a);
        }
    });
}

$out = [];
foreach (fibonacci(10) as $fib) {
    $out[] = $fib;
}
echo implode(', ', $out) .
&quot;\n&quot;;&lt;/pre&gt;&lt;/div&gt;&lt;h4&gt;Performance comparison: PHP vs Custom&lt;/h4&gt;&lt;div&gt;&lt;/div&gt;&lt;h5&gt;Tested code&lt;/h5&gt;&lt;div&gt;&lt;pre class=&quot;triple-quote php&quot;&gt;for ($i = 0; $i &lt; 1000; ++$i) {
    foreach (fibonacci(100) as $fib) {
        $out[] = $fib;
    }
}&lt;/pre&gt;&lt;/div&gt;&lt;h5&gt;&lt;span class=&quot;single-quote&quot;&gt;yield&lt;/span&gt; version&lt;/h5&gt;&lt;div&gt;&lt;pre class=&quot;triple-quote &quot;&gt;real    0m0,083s
user    0m0,059s
sys     0m0,023s&lt;/pre&gt;&lt;/div&gt;&lt;h5&gt;&lt;span class=&quot;single-quote&quot;&gt;MyGenerator&lt;/span&gt; version&lt;/h5&gt;&lt;div&gt;&lt;pre class=&quot;triple-quote &quot;&gt;real    0m2,138s
user    0m1,426s
sys     0m1,363s&lt;/pre&gt;So, it's approximately 26 times slower :-)&lt;/div&gt;&lt;/article&gt;</content><author><name>Miguel Liezun</name></author><category term="posts"/><summary type="html">Attempt of a lunatic to recreate functionalities that a language already has using the same language, and failing.</summary></entry></feed>