<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:wfw="http://wellformedweb.org/CommentAPI/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:slash="http://purl.org/rss/1.0/modules/slash/" xmlns:webfeeds="http://webfeeds.org/rss/1.0" version="2.0">
  <channel>
    <title>Software Engineering News from Bruno Bernardino</title>
    <link>https://news.onbrn.com/</link>
    <atom:link href="https://news.onbrn.com/atom.xml" rel="self" type="application/rss+xml"/>
    <description>Get the latest updates on stuff I'm working on and thinking about. Mostly Software Engineering.</description>
    <lastBuildDate>Sun, 15 Feb 2026 11:55:04 GMT</lastBuildDate>
    <language>en</language>
    <generator>Lume v3.2.1</generator>
    <item>
      <title>Announcing Uruky</title>
      <link>https://news.onbrn.com/announcing-uruky-a-eu-based-simpler-alternative-to-kagi/</link>
      <guid isPermaLink="false">https://news.onbrn.com/announcing-uruky-a-eu-based-simpler-alternative-to-kagi/</guid>
      <content:encoded>
<![CDATA[<p>For the last couple of months I've been working with my wife on <a href="https://uruky.com/">Uruky</a>. It's an EU-based, simpler alternative to <a href="https://kagi.com/">Kagi</a> (privacy-focused, ad-free search with domain boosting/exclusion rules).</p>
<p>We've been using it with friends and family with great success, and hashbangs work for any edge cases we haven't fixed or improved yet.</p>
<p>It's already using EU-based search providers with EU-based indexes, like <a href="https://mojeek.com/">Mojeek</a> and <a href="https://marginalia-search.com/">Marginalia</a>. We're currently waiting for <a href="https://www.eu-searchperspective.com/">EUSP</a> (the new index from the Ecosia/Qwant joint effort) to provide us with an API key.</p>
<p>If you're interested in trying it for a few days and are a human, reach out with your account number and I'll give you a couple of weeks for free.</p>
<p>If you have any suggestions, comments, or recommendations, I'd love to hear them.</p>
<p>Thank you for your attention and kindness. I really appreciate it!</p>
]]>
      </content:encoded>
      <pubDate>Sun, 15 Feb 2026 00:00:00 GMT</pubDate>
    </item>
    <item>
      <title>Step-by-step guide for upgrading PostgreSQL Docker containers</title>
      <link>https://news.onbrn.com/step-by-step-guide-upgrading-postgresql-docker-containers/</link>
      <guid isPermaLink="false">https://news.onbrn.com/step-by-step-guide-upgrading-postgresql-docker-containers/</guid>
      <content:encoded>
<![CDATA[Recently I upgraded a couple of apps that were running an outdated [PostgreSQL](https://www.postgresql.org/) database in [Docker](https://www.docker.com/) containers (I use [Docker Compose](https://docs.docker.com/compose/) to manage them), and had a hard time finding a good guide on how to do it. Most were outdated, too complicated, or just unnecessarily long for me.

The process should be really simple, and I had done it before, but I believe in the power of checklists, and somehow, I didn't have one handy.

So I decided to write one, and share it with you here!

_**NOTE:** This guide takes the simplest approach to upgrading PostgreSQL: export the database, start a new container running the new version, and import the database into it. It's not the only way, and you can find [other, more involved official ways to do it](https://www.postgresql.org/docs/current/upgrading.html), but if your database isn't over ~50GB, this is, in my opinion, the easiest and simplest one._

## Step 1: Dump the database

First, we need a dump of the database. We'll need it to seed the new container. We don't need to stop the container for this, as we'll be using [`pg_dump`](https://www.postgresql.org/docs/current/app-pgdump.html).

```bash
# Note: don't use -t here, as a TTY would mangle the redirected output.
docker exec -i <container-name> pg_dump -U <username> <database-name> > db_backup.sql
```
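Before moving on, it's worth sanity-checking the dump, since a truncated dump is easy to miss until restore time. Here's a minimal check (the `check_dump` helper is just a sketch of mine, not part of the guide), assuming the `db_backup.sql` file from the command above:

```bash
# Plain-format pg_dump output ends with a completion marker, so we can
# catch empty or truncated dumps before stopping the old container.
check_dump() {
    local dump_file=$1
    if [ ! -s "${dump_file}" ]; then
        echo "ERROR: ${dump_file} is missing or empty" >&2
        return 1
    fi
    if ! tail -n 5 "${dump_file}" | grep -q "PostgreSQL database dump complete"; then
        echo "ERROR: ${dump_file} looks truncated" >&2
        return 1
    fi
    echo "OK: ${dump_file} looks complete"
}

check_dump db_backup.sql || echo "Do not stop the old container yet!"
```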

## Step 2: Stop the container

Now we need to stop the container, so we can upgrade the PostgreSQL version.

```bash
docker compose stop <service-name>
```

## Step 3: Upgrade the PostgreSQL version

Now we need to upgrade the PostgreSQL version. Assuming you were, for example, running PostgreSQL 15 and you want to upgrade to 17, you would make the following changes to the `docker-compose.yml` file:

```diff
-  image: postgres:15
+  image: postgres:17
```

> **NOTE:** If you're upgrading from <18 to >=18, you might need to also [change the mounted pgdata volume as per the documentation](https://hub.docker.com/_/postgres#pgdata). It changed from `/var/lib/postgresql/data` to `/var/lib/postgresql/<postgres-version>/docker`.
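If you do hit that, the change to the mounted volume in `docker-compose.yml` looks roughly like this (example paths for an upgrade to 18; adjust the volume name to your setup):

```diff
-      - pgdata:/var/lib/postgresql/data
+      - pgdata:/var/lib/postgresql/18/docker
```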

## Step 4: Start the container

Now we need to start the new container, running the new PostgreSQL version.

```bash
docker compose up -d <service-name>
```

## Step 5: Import the database

Now we need to import the database into the new container.

```bash
docker exec -i <container-name> psql -U <username> <database-name> < db_backup.sql
```
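To quickly eyeball that everything arrived, you can compare the number of tables the dump defines with what's in the new database (via `\dt` in `psql`). Counting them in the dump is plain shell (a sketch of mine, assuming the plain-format `db_backup.sql` from Step 1):

```bash
# Plain-format dumps write each table definition as a top-level
# "CREATE TABLE" statement, so counting those lines counts tables.
table_count=$(grep -c '^CREATE TABLE' db_backup.sql || true)
echo "Dump defines ${table_count} tables"
```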

That's it!

I hope this helps you.

Thank you so much for being here. I really appreciate you!
]]>
      </content:encoded>
      <pubDate>Mon, 09 Jun 2025 00:00:00 GMT</pubDate>
    </item>
    <item>
      <title>Bash script for daily backups from PostgreSQL Docker containers</title>
      <link>https://news.onbrn.com/bash-script-for-daily-backups-from-postgresql-docker-containers/</link>
      <guid isPermaLink="false">https://news.onbrn.com/bash-script-for-daily-backups-from-postgresql-docker-containers/</guid>
      <content:encoded>
<![CDATA[I use [Docker](https://www.docker.com/) with [Docker Compose](https://docs.docker.com/compose/) for my own apps. I've got a few apps that use [PostgreSQL](https://www.postgresql.org/), and a couple of years ago I was looking for an easy, automated way to back up the databases from those containers.

I found a few solutions, but they were all too complicated, inefficient, required more work to restore, or relied on the host machine having PostgreSQL installed (which I didn't want, because my apps run different PostgreSQL versions), so I wrote a script myself!

I've been using this bash script to back up the databases from the containers. It's a simple script that runs every day at a specific time (via a simple cron job) and backs up the databases to a specific directory (via [`pg_dump`](https://www.postgresql.org/docs/current/app-pgdump.html)), without any passwords in the script. It keeps 14 days of backups, compresses them, and cleans up old ones. Oh, and it supports multiple containers!

Here it is:

### `backup.sh`

```bash
#!/bin/bash

#
# Back up PostgreSQL databases into daily files, for multiple containers.
#

# Base backup settings
BASE_BACKUP_DIR=/home/www/backups
DAYS_TO_KEEP=14
FILE_SUFFIX=_backup.sql

# App container names
declare -A CONTAINERS=(
    ["amazing-app"]="/home/www/apps/amazing-app"
    ["another-app"]="/home/www/apps/another-app"
)

# Common PostgreSQL settings
POSTGRES_USER=postgres
POSTGRES_CONTAINER_USER=postgres

# Function to backup a single container
backup_container() {
    local container_name=$1
    local container_dir=$2
    local backup_dir="${BASE_BACKUP_DIR}/${container_name}"
    local file=$(date +"%Y%m%d%H%M")${FILE_SUFFIX}
    local output_file="${backup_dir}/${file}"
    local postgres_container="${container_name}-postgresql-1"
    local postgres_db="${container_name}"

    # Create backup directory if it doesn't exist
    mkdir -p "${backup_dir}"

    # Do the database backup (dump)
    cd "${container_dir}" && docker exec -u ${POSTGRES_CONTAINER_USER} ${postgres_container} pg_dump --dbname="${postgres_db}" --username="${POSTGRES_USER}" > "${output_file}"

    # gzip the database dump file
    gzip "${output_file}"

    # Show the result
    echo "${output_file}.gz was created:"
    ls -l "${output_file}.gz"

    # Prune old backups for this container
    find "${backup_dir}" -maxdepth 1 -mtime +${DAYS_TO_KEEP} -name "*${FILE_SUFFIX}.gz" -exec rm -f '{}' ';'
}

for container_name in "${!CONTAINERS[@]}"; do
    echo "Backing up ${container_name}..."
    backup_container "${container_name}" "${CONTAINERS[$container_name]}"
done

echo "Done backing up databases."

## Restore (plain-format pg_dump output is restored with psql, not pg_restore):
# gunzip -c "${BASE_BACKUP_DIR}/${container_name}/<timestamp>_backup.sql.gz" | docker exec -u ${POSTGRES_CONTAINER_USER} -i ${postgres_container} psql --dbname="${postgres_db}" --username="${POSTGRES_USER}"
```
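The retention logic is just `find -mtime`; if you want to see exactly what it will delete before trusting it, you can replay the same rule on throwaway files (a self-contained demo, not part of the script):

```bash
# Recreate the pruning rule from the script against temp files:
# anything matching the suffix and older than DAYS_TO_KEEP days goes.
DAYS_TO_KEEP=14
demo_dir=$(mktemp -d)

touch "${demo_dir}/old_backup.sql.gz"
touch -d "30 days ago" "${demo_dir}/old_backup.sql.gz"   # pretend it's a month old
touch "${demo_dir}/new_backup.sql.gz"                    # fresh backup

find "${demo_dir}" -maxdepth 1 -mtime +${DAYS_TO_KEEP} -name "*_backup.sql.gz" -exec rm -f '{}' ';'

remaining=$(ls "${demo_dir}")
echo "Remaining after prune: ${remaining}"
rm -rf "${demo_dir}"
```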

Obviously there are some things you could improve, like adding support for multiple databases per container, multiple PostgreSQL container names (per container), and multiple PostgreSQL users. I don't need any of that, so it'd be unnecessary complexity for me.

I run this with a simple cronjob:

```bash
# Every day at 3:07am
7 3 * * * /home/www/scripts/backup.sh > /home/www/logs/backup.log 2>&1
```

Which keeps a nice log of the last daily backup in `/home/www/logs/backup.log`.
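If you'd rather keep a history of logs instead of just the last run, a dated log file name is a small change (hypothetical paths, matching the ones above):

```bash
# One log file per day; these can be pruned with find -mtime,
# the same idea used for the backups themselves.
log_file="/home/www/logs/backup-$(date +%F).log"
echo "Today's log would be: ${log_file}"
```

In the crontab itself that would be `... > /home/www/logs/backup-$(date +\%F).log 2>&1`, with the `%` escaped, since cron treats unescaped `%` specially.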

I hope this helps you.

Thank you so much for being here. I really appreciate you!
]]>
      </content:encoded>
      <pubDate>Thu, 29 May 2025 00:00:00 GMT</pubDate>
    </item>
    <item>
      <title>Announcing bewCloud</title>
      <link>https://news.onbrn.com/announcing-bewcloud-a-simpler-alternative-to-nextcloud/</link>
      <guid isPermaLink="false">https://news.onbrn.com/announcing-bewcloud-a-simpler-alternative-to-nextcloud/</guid>
      <content:encoded>
        <![CDATA[<p><strong>tl;dr;</strong> Check out more information about the project at <a href="https://bewcloud.com/">bewCloud.com</a>!</p>
<p>For the last month or so I've been working on bewCloud, and today I'm making its source code public and open!</p>
<p>Today I also archived quite a few projects that weren't making me much money, after deciding to focus on building something for myself, long-term.</p>
<p>You can read more about it in its <a href="https://bewcloud.com/">dedicated website</a>.</p>
<p>If you have any suggestions, comments, or recommendations, I'd love to hear them.</p>
<p>Thank you for your attention and kindness. I really appreciate it!</p>
]]>
      </content:encoded>
      <pubDate>Sat, 16 Mar 2024 00:00:00 GMT</pubDate>
    </item>
    <item>
      <title>How to forward email to multiple addresses with Cloudflare Email Routing</title>
      <link>https://news.onbrn.com/how-to-forward-email-to-multiple-addresses-with-cloudflare-email-routing/</link>
      <guid isPermaLink="false">https://news.onbrn.com/how-to-forward-email-to-multiple-addresses-with-cloudflare-email-routing/</guid>
      <content:encoded>
<![CDATA[[Cloudflare](https://cloudflare.com) is pretty great. I use them as [a registrar](https://www.cloudflare.com/products/registrar/) and [DNS provider](https://www.cloudflare.com/application-services/products/#security-services) (not usually proxied, unless I'm actively mitigating a DDoS), and recently had to move over a few domains I use mostly for email forwarding, each with a specific forwarder to a couple of different email addresses.

While [Cloudflare Email Routing](https://developers.cloudflare.com/email-routing/) is simple and easy to set up for any number of one-to-one email forwarding policies, you [can't choose more than one destination email address](https://community.cloudflare.com/t/feature-email-routing-multiple-catch-all-addresses/333153). This was blocking me, but I was motivated to get all my domains under Cloudflare, so I looked into their [Email Workers](https://developers.cloudflare.com/email-routing/email-workers/) and found I could easily use them for that exact purpose!

Basically, I just create a new Email Worker and have Email Routing deliver to that worker instead of a single email address. Here's the simple code for the Email Worker:

```js
export default {
  async email(message, env, ctx) {
    await message.forward('email1@example.com');
    await message.forward('email2@example.com');
  },
};
```

And that's it! Nothing more.

**NOTE**: You can reuse the same email worker across domains, which is exactly what I did. Had I known I couldn't rename the email worker after it was created, I'd have given it a name based on where it was redirecting _to_, rather than where it was redirecting _from_. I'm sure my OCD won't make me create a brand new email worker with the same code and a different name for it to make more sense. Certainly.

Anyway, I hope this helps you.

Thank you so much for being here. I really appreciate you!
]]>
      </content:encoded>
      <pubDate>Sun, 10 Sep 2023 00:00:00 GMT</pubDate>
    </item>
    <item>
      <title>Migrating from Edge Cloud Computing to self-hosting with Caddy and Docker</title>
      <link>https://news.onbrn.com/migrating-from-edge-cloud-computing-to-self-hosting-with-caddy-and-docker/</link>
      <guid isPermaLink="false">https://news.onbrn.com/migrating-from-edge-cloud-computing-to-self-hosting-with-caddy-and-docker/</guid>
      <content:encoded>
        <![CDATA[In my quest to improve my privacy and control over the things I create and share, in the last few months I've achieved a couple of important milestones I had my sight on for a while:

1. Having my own git server, for my personal and private repos/apps/configs.
2. Deploying apps on my own server, simply.

The first one was an important step in achieving the second, because I didn't want to keep my personal server settings/configs and keys in GitHub or GitLab (somewhere I didn't control), so I've always had them backed up in an encrypted format on my home server and a couple of external disks (and cloud).

The main contenders were GitLab and [Gitea](https://gitea.com), and Gitea won because it's much lighter, faster, and simpler to run. Additionally, I found its Actions easier to set up and get working (especially for some personal repos I migrated from GitHub).

After I got that going with, effectively, a couple of `docker-compose.yml` files (I already had one for my personal Nextcloud instance) and a `Caddyfile` ([`caddy`](https://caddyserver.com) is so much better than `nginx` and comes with automated certificate handling for SSL) to properly reverse proxy domains to the right docker containers, I had a nice boilerplate for my apps server, too.

Since most of what I build personally nowadays is with [Deno](https://deno.land), I've had most of these apps deployed on [Deno Deploy](https://deno.com/deploy) with a lot of ease. While I have found it a bit too unstable (more than a couple of times, for example, apps stopped working because a new engine version wasn't backwards-compatible) and with slower cold starts than I would expect, my main reason for moving the apps to my own server was to have more control over them. The fact that they load much faster now is a bonus.

So what I had to do in these repos is basically create a `docker-compose.yml` file and a `Dockerfile`, so that `docker-compose up` brings the app up and running.

Those are simple.

The `Dockerfile`:

```Dockerfile
FROM denoland/deno:alpine-1.34.1

EXPOSE 8000

WORKDIR /app

# Prefer not to run as root.
USER deno

# These steps will be re-run upon each file change in your working directory:
ADD . /app

# Cache and compile the dependencies ahead of time, so it doesn't happen on each startup.
RUN deno cache --reload main.ts

CMD ["run", "--allow-all", "main.ts"]
```

The `docker-compose.yml`:

```yaml
version: '3'
services:
  website:
    build: .
    restart: always
    ports:
      - 127.0.0.1:3000:8000
```

Each app does require me to use a specific port, which I keep track of in a repo for my server settings/config/keys. I only open it up to the localhost (`127.0.0.1`) so that you can't access the app without `caddy` in front of it. I want that mostly for SSL, but also to potentially [throttle/rate-limit](https://caddyserver.com/docs/modules/http.handlers.rate_limit) more easily.

The server's `Caddyfile` entry for such a Deno app is simple too:

```caddy
example.com {
	tls you@example.com
	encode gzip zstd

	reverse_proxy localhost:3000 {
		header_up Host {host}
		header_up X-Real-IP {remote}
		header_up X-Forwarded-Host {upstream_hostport}
	}
}
```

And that's it.

Now I have complete control over the Deno version/engine running the app, and can use dynamic imports or even `npm:` specifiers if I wished (though I don't).

To have it deployed automatically on a push to the `main` branch, I have a `.gitea/workflows/deploy.yml` action, which basically SSHes into the server (I've created a key for it, shared with the apps server), pulls the `main` branch, and restarts the docker container(s). It only looks complicated because of the SSH setup, but in reality it's simple:

```yaml
name: Deploy

on:
  push:
    branches:
      - main
  workflow_dispatch:

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: https://gitea.com/actions/checkout@v3
      - name: Configure SSH
        run: |
          mkdir -p ~/.ssh/
          echo "$SSH_KEY" | tr -d '\r' > ~/.ssh/server.key
          chmod 600 ~/.ssh/server.key
          cat >>~/.ssh/config <<END
          Host server
            HostName server.example.com
            User root
            IdentityFile ~/.ssh/server.key
            StrictHostKeyChecking no
          END
          cat ~/.ssh/config
        env:
          SSH_KEY: ${{ secrets.APPS_SSH_KEY }}

      - name: Deploy via SSH
        run: ssh server 'cd apps/website && git pull origin main && git remote prune origin && docker system prune -f && docker-compose up -d --build && docker-compose ps && docker-compose logs'
```

That works really well for my use case (a few Deno apps, some with some specific needs for Redis or Postgres).

As for my apps server, I have it on a [DigitalOcean](https://digitalocean.com) droplet, just because I trust their ability to keep things running, including backups, and I only have "code" there, not really private keys. All of that is in my git server, which is on my home server, and backed up to a couple of different places, physically, and encrypted.

Thank you for your attention and kindness. I really appreciate it!
]]>
      </content:encoded>
      <pubDate>Sat, 08 Jul 2023 00:00:00 GMT</pubDate>
    </item>
    <item>
      <title>Migrating a Node/NPM package to Deno</title>
      <link>https://news.onbrn.com/migrating-a-node-npm-package-to-deno/</link>
      <guid isPermaLink="false">https://news.onbrn.com/migrating-a-node-npm-package-to-deno/</guid>
      <content:encoded>
        <![CDATA[<p><strong>tl;dr;</strong> You can <a href="https://github.com/BrunoBernardino/shurley/commit/1a55ee6f69ad3fb7794d4809b6895d2b70664016">see the main commit of the migration here</a>. You can also <a href="https://github.com/BrunoBernardino/shurley">see the whole repo and inspect at will</a>.</p>
<h3>Why migrate?</h3>
<p>I've wanted to write a package that's published in both NPM and Deno, but wanted to focus on the development and publishing process rather than the package itself, so migrating an existing package made a lot more sense.</p>
<h3>Why <a href="https://github.com/BrunoBernardino/shurley">shurley</a>?</h3>
<p>Even though it had no production dependencies, the development dependencies still required security updates, which was annoying, as they didn't affect anyone <em>using</em> the package. Anyway, it's a very simple package/function, so it seemed like the perfect candidate for my intentions.</p>
<h3>Why Deno?</h3>
<p>It allows TypeScript by default, and it includes formatter, linter, and test commands (replacing the <code>devDependencies</code> I had installed)!</p>
<h3>What changed?</h3>
<p><a href="https://github.com/BrunoBernardino/shurley/commit/1a55ee6f69ad3fb7794d4809b6895d2b70664016">You can see it in the commit</a>: I removed a lot of stuff, mainly everything that enabled <code>prettier</code>, <code>eslint</code>, and <code>mocha</code>.</p>
<p>I then added a new script, <a href="https://github.com/BrunoBernardino/shurley/commit/1a55ee6f69ad3fb7794d4809b6895d2b70664016#diff-47b0591aa425dd5166c1cfeecd1f73eb4c29b7280e3ca7e7e46755354deb6366"><code>build-npm.ts</code></a>, which uses <code>dnt</code> to generate the Node-compatible code for <code>npm</code>.</p>
<p>Finally, I just added a <code>Makefile</code> and tweaked the <code>README.md</code> to make the development and publishing process easier.</p>
<p>I was really happy and surprised with how easy the process was, though arguably this was a very simple package to migrate.</p>
<p>You can find the package in <a href="https://deno.land/x/shurley">Deno</a> and in <a href="https://www.npmjs.com/package/shurley">NPM</a> now.</p>
<p>I hope you enjoy it.</p>
<p>Thank you so much for being here. I really appreciate you!</p>
]]>
      </content:encoded>
      <pubDate>Sun, 01 Jan 2023 00:00:00 GMT</pubDate>
    </item>
    <item>
      <title>Simple Browser Message Encryption</title>
      <link>https://news.onbrn.com/simple-browser-message-encryption/</link>
      <guid isPermaLink="false">https://news.onbrn.com/simple-browser-message-encryption/</guid>
      <content:encoded>
        <![CDATA[<p><strong>tl;dr;</strong> You can <a href="https://simple-browser-message-encryption.onbrn.com/">encrypt and decrypt messages securely here</a>. You can also <a href="https://github.com/BrunoBernardino/simple-browser-message-encryption">see the simple code for it</a>.</p>
<h3>Why AES-GCM-256?</h3>
<p>Because it's a <a href="https://en.wikipedia.org/wiki/Galois/Counter_Mode">very secure symmetric cryptographic algorithm</a> that is natively supported in <a href="https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto">Browsers and in Deno</a>.</p>
<h3>Why for the browser?</h3>
<p>Because you generally have it on every device. Note this is also a PWA that works offline, so you can install it and use it to encrypt/decrypt messages on most devices, even without a network connection.</p>
<h3>Why not just use Signal or Threema?</h3>
<p>Setting aside the problem of having to trust an entity, sometimes people just don't have them.</p>
<h3>Why not just use PGP?</h3>
<p>I prefer it whenever I can, but it's not trivial to set up and use for non-technical people. Its asymmetric encryption is much better, though.</p>
<h3>But you still need to securely share the encryption password.</h3>
<p>Indeed. And you can do it in person once, or perhaps describe it to the other person (e.g. &quot;use my phone number&quot; or &quot;use your email address&quot;) instead of sharing it explicitly.</p>
<p>Anyway, I hope you enjoy it.</p>
<p>Thank you for your attention and kindness. I really appreciate it!</p>
]]>
      </content:encoded>
      <pubDate>Fri, 30 Dec 2022 00:00:00 GMT</pubDate>
    </item>
    <item>
      <title>Using TypeScript and Web Components without bundling, with Deno</title>
      <link>https://news.onbrn.com/using-typescript-and-web-components-without-bundling-with-deno/</link>
      <guid isPermaLink="false">https://news.onbrn.com/using-typescript-and-web-components-without-bundling-with-deno/</guid>
      <content:encoded>
        <![CDATA[<p><strong>tl;dr;</strong> You can see <a href="https://simple-deno-website-boilerplate.onbrn.com/web-component">an example of using a Web Component in my simple Deno website boilerplate</a>. You can also <a href="https://github.com/BrunoBernardino/deno-boilerplate-simple-website/blob/ab99bfb993485b796028af49006acb31f6a6e162/public/ts/app-button.ts">see the code for it</a>.</p>
<h3>Why Deno?</h3>
<p>It's no surprise I've been quite enamored with <a href="https://deno.land/">Deno</a> and the old-school techniques of building apps and websites, especially the idea of not bundling and not using frameworks.</p>
<p>There's a lot to like, from the lack of large dependencies (in number and size), to the speed of deployment (it's literally up and available in production in a few seconds, as there's no build step!). It's also much easier to maintain.</p>
<h3>Why TypeScript?</h3>
<p><a href="https://github.com/BrunoBernardino/budgetzen-web">Budget Zen</a> and <a href="https://github.com/BrunoBernardino/loggit-web">Loggit</a> are currently built with Deno and both started off from <a href="https://github.com/BrunoBernardino/deno-boilerplate-simple-website">my simple Deno website boilerplate</a>, but as I converted the apps from Next.js to Deno and vanilla JavaScript, I did lose the benefits from TypeScript, which meant I wanted to eventually bring back those benefits to the client-side code.</p>
<p>Still, I didn't want to introduce a build or bundling step, and since Deno is always server-rendered, I just needed to transpile the TypeScript into JavaScript once the browser requested those files. Luckily, <a href="https://github.com/ayame113/ts-serve">such a project already existed</a>, and while <a href="https://github.com/BrunoBernardino/deno-boilerplate-simple-website/blob/ab99bfb993485b796028af49006acb31f6a6e162/lib/utils.ts#L108-L136">I ended up building a simplified version</a>, it was helpful in getting me what I needed.</p>
<h3>Why Web Components?</h3>
<p>One arguable benefit of React and other user interface libraries (or frameworks) is that you can build reusable components, but I've found React quickly loses performance on more complicated UIs, and I don't like all the boilerplate and client-side code it requires. So, as expected, I looked into Web Standards, and that's where <a href="https://developer.mozilla.org/en-US/docs/Web/Web_Components">Web Components</a> are helpful and beautiful!</p>
<p>Web Components are perfect... web components! You can easily extend an HTML element, which modern browsers will interpret without problems, add shadow DOM styles (so they don't clash with global styles), and include local JavaScript.</p>
<p>If you give it a try, you'll see you can get a lot of the benefits people generally associate with React or Vue, without any library!</p>
<p>With Just-In-Time (JIT) TypeScript transpilation, we can then build a really nice Web Component, like what I have in <a href="https://github.com/BrunoBernardino/deno-boilerplate-simple-website/blob/ab99bfb993485b796028af49006acb31f6a6e162/public/ts/app-button.ts">my simple Deno website boilerplate</a> (and <a href="https://github.com/BrunoBernardino/deno-boilerplate-simple-website/blob/ab99bfb993485b796028af49006acb31f6a6e162/public/ts/web-component.ts">this is how it's loaded</a>).</p>
<p>Easy, right?</p>
<p>I hope you enjoy it.</p>
<p>Thank you for your attention and kindness. I really appreciate it!</p>
]]>
      </content:encoded>
      <pubDate>Tue, 02 Aug 2022 00:00:00 GMT</pubDate>
    </item>
    <item>
      <title>Convert 1Password .1pux export file to .csv</title>
      <link>https://news.onbrn.com/convert-1password-1pux-export-file-to-csv/</link>
      <guid isPermaLink="false">https://news.onbrn.com/convert-1password-1pux-export-file-to-csv/</guid>
      <content:encoded>
        <![CDATA[<p><strong>tl;dr;</strong> You can check out <a href="https://www.npmjs.com/package/1pux-to-csv">the package I published here</a>.</p>
<p>For the last 6 months, since Apple forced me to ditch them entirely (because they were the true owners of my devices, and due to many privacy and security breaches), I've been re-evaluating all the software I use and try to replace it with end-to-end encrypted Open Source, ideally built by &quot;small tech&quot;. In the process, I noticed that 1Password changed their export format, and in order to test other tools, I had to easily convert it to CSV. So I built a simple CLI tool that takes their new <code>.1pux</code> files and converts them to <code>.csv</code> (and you can use the exposed functions and types to parse and export directly to whatever you want/need). It's suprisingly named <code>1pux-to-csv</code>.</p>
<p>You can find it in <a href="https://www.npmjs.com/package/1pux-to-csv">NPM</a> and in <a href="https://github.com/BrunoBernardino/1pux-to-csv">GitHub</a>, along with instructions on how to use it.</p>
<p>I hope it adds value to your life.</p>
<p>Thank you so much for being here. I really appreciate you!</p>
<p>P.S.: If you're wondering why there hasn't been an &quot;update&quot; here for almost half a year, it's because I've been <a href="https://blackandbluedrawings.onbrn.com/">focusing on art</a>, more than engineering software, in my free time.</p>
]]>
      </content:encoded>
      <pubDate>Sun, 19 Dec 2021 00:00:00 GMT</pubDate>
    </item>
  </channel>
</rss>