Ignacio Casal Quinteiro: Mecalin

15 January 2026 at 19:13

Many years ago when I was a kid, I took typing lessons where they introduced me to a program called Mecawin. With it, I learned how to type, and it became a program I always appreciated, not because it was fancy, but because it showed step by step how to work with a keyboard.

Now the circle of life comes around: my kid will turn 10 this year. So I started searching for a good typing tutor for Linux. I installed and tried all of them, but didn't like any. I also tried a couple of applications on macOS; some were OK-ish, but they didn't work properly with Spanish keyboards. At this point, I decided to build something myself. Initially, I hacked on Keypunch, which is a very nice application, but I didn't like the UI I came up with by modifying it. So in the end, I decided to write my own. Or better yet, let Kiro write an application for me.

Mecalin is meant to be a simple application. The main purpose is teaching people how to type, and the Lessons view is what I'll be focusing on most during development. Since I don't have much time these days for new projects, I decided to take this opportunity to use Kiro to do most of the development for me. And to be honest, it did a pretty good job. Sure, there are things that could be better, but I definitely wouldn't have finished it in this short time otherwise.

So if you are interested, give it a try; head over to Flathub and install it: https://flathub.org/apps/io.github.nacho.mecalin

In this application, you’ll have several lessons that guide you step by step through the different rows of the keyboard, showing you what to type and how to type it.

This is an example of the lesson view.

You also have games.

The falling keys game: keys fall from top to bottom, and if one reaches the bottom of the window, you lose. This game can clearly be improved, and if anybody wants to enhance it, feel free to send a PR.

The scrolling lanes game: you have 4 rows where text moves from right to left. You need to type the words before they reach the leftmost side of the window, otherwise you lose.

If you want to add support for your language, there are two JSON files you'll need to add:

  1. The keyboard layout: https://github.com/nacho/mecalin/tree/main/data/keyboard_layouts
  2. The lessons: https://github.com/nacho/mecalin/tree/main/data/lessons

Note that the Spanish lesson is the source of truth; the English one is just a translation done by Kiro.

If you have any questions, feel free to contact me.

Asman Malika: Think About Your Audience

14 January 2026 at 12:07

When I started writing this blog, I didn’t fully understand what “think about your audience” really meant. At first, it sounded like advice meant for marketers or professional writers. But over time, I’ve realized it’s one of the most important lessons I’m learning, not just for writing, but for building software and contributing to open source.

Who I’m Writing (and Building) For

When I sit down to write, I think about a few people.

I think about aspiring developers from non-traditional backgrounds, people who didn’t follow a straight path into tech, who might be self-taught, switching careers, or learning in community-driven programs. I think about people who feel like they don’t quite belong in tech yet, and are looking for proof that they do.

I also think about my past self from just a few months ago. Back then, everything felt overwhelming: the tools, the terminology, the imposter syndrome. I remember wishing I could read honest stories from people who were still in the process, not just those who had already “made it.”

And finally, I think about the open-source community I’m now part of: contributors, maintainers, and users who rely on the software we build.

Why My Audience Matters to My Work

Thinking about my audience has changed how I approach my work on Papers.

Papers isn’t just a codebase, it’s a tool used by researchers, students, and academics to manage references and organize their work. When I think about those users, I stop seeing bugs as abstract issues and start seeing them as real problems that affect real people’s workflows.

The same applies to documentation. Remembering how confusing things felt when I was a beginner pushes me to write clearer commit messages, better explanations, and more accessible documentation. I’m no longer writing just to “get the task done”. I’m writing so that someone else, maybe a first-time contributor, can understand and build on my work.

Even this blog is shaped by that mindset. After my first post, someone commented and shared how it resonated with them. That moment reminded me that words can matter just as much as code.

What My Audience Needs From Me

I’ve learned that people don’t just want success stories. They want honesty.

They want to hear about the struggle, the confusion, and the small wins in between. They want proof that non-traditional paths into tech are valid. They want practical lessons they can apply, not just motivational quotes.

Most of all, they want representation and reassurance. Seeing someone who looks like them, or comes from a similar background, navigating open source and learning in public can make the journey feel possible.

That’s a responsibility I take seriously.

How I’ve Adjusted Along the Way

Because I’m thinking about my audience, I’ve changed how I share my journey.

I explain things more clearly. I reflect more deeply on what I'm learning instead of just listing achievements. I'm more intentional about connecting my experiences (debugging a feature, reading unfamiliar code, asking questions in the GNOME community) to lessons others can take away.

Understanding the Papers user base has also influenced how I approach features and fixes. Understanding my blog audience has influenced how I communicate. In both cases, empathy plays a huge role.

Moving Forward

Thinking about my audience has taught me that good software and good writing have something in common: they’re built with people in mind.

As I continue this internship and this blog, I want to keep building tools that are accessible, contributing in ways that lower barriers, and sharing my journey honestly. If even one person reads this and feels more capable, or more encouraged to try, then it’s worth it.

That’s who I’m writing for. And that’s who I’m building for.

Flathub Blog: What's new in Vorarbeiter

14 January 2026 at 00:00

It has been almost a year since the switch to Vorarbeiter for building and publishing apps. We've made several improvements since then, and it's time to brag about them.

RunsOn

In the initial announcement, I mentioned we were using RunsOn, a just-in-time runner provisioning system, to build large apps such as Chromium. Since then, we have fully switched to RunsOn for all builds. The free GitHub runners available to open source projects are heavily overloaded, and there are limits on how many concurrent builds can run at a time. With RunsOn, we can request an arbitrary number of CPU threads, and as much memory and disk space as we need, for less than if we were to use paid GitHub runners.

We also rely more on spot instances, which are even cheaper than the usual on-demand machines. The downside is that jobs sometimes get interrupted. To avoid spending too much time on retry ping-pong, builds retried with the special "bot, retry" command use on-demand instances from the get-go. The same catch applies to large builds, which are unlikely to finish before spot instances are reclaimed.

The cost breakdown since May 2025 is as follows:

[Chart: cost breakdown]

Once again, we are not actually paying for anything thanks to the AWS credits for open source projects program. Thank you RunsOn team and AWS for making this possible!

Caching

Vorarbeiter now supports caching downloads and ccache files between builds. Everything is an OCI image if you are feeling brave enough, and so we are storing the per-app cache with ORAS in GitHub Container Registry.

This is especially useful for cosmetic rebuilds and minor version bumps, where most of the source code remains the same. Your mileage may vary for anything more complex.

End-of-life without rebuilding

One of the Buildbot limitations was that it was difficult to retrofit pull requests marking apps as end-of-life without rebuilding them. Flat-manager itself has exposed an API call for this since 2019, but we could not really use it, as apps had to be in a buildable state just to deprecate them.

Vorarbeiter will now detect that a PR modifies only the end-of-life keys in the flathub.json file, skip test and regular builds, and directly use the flat-manager API to republish the app with the EOL flag set post-merge.

Web UI

GitHub's UI isn't really built for a centralized repository building other repositories. My love-hate relationship with Buildbot made me want to have a similar dashboard for Vorarbeiter.

The new web UI uses PicoCSS and HTMX to provide a tidy table of recent builds. It is unlikely to be particularly interesting to end users, but kinkshaming is not nice, okay? I like to know what's being built, and now you can too.

Reproducible builds

We have started testing binary reproducibility of x86_64 builds targeting the stable repository. This is possible thanks to flathub-repro-checker, a tool that does the necessary legwork to recreate the build environment and compare the result of the rebuild with what is published on Flathub.

While these tests have been running for a while now, we have recently restarted them from scratch after enabling S3 storage for diffoscope artifacts. The current status is on the reproducible builds page.

Failures are not currently acted on. When we collect more results, we may start to surface them to app maintainers for investigation. We also don't test direct uploads at the moment.

Jussi Pakkanen: How to get banned from Facebook in one simple step

13 January 2026 at 18:06

I, too, have (or as you can probably guess from the title of this post, had) a Facebook account. I only ever used it for two purposes.

  1. Finding out what friends I rarely see are doing
  2. Getting invites to events
Facebook has over the years made usage #1 pretty much impossible. My feed contains approximately 1% posts by my friends and 99% ads for image meme "humor" groups whose expected amusement value seems to be approximately the same as punching yourself in the groin.

Still, every now and then I get a glimpse of a post by the people I actively chose to follow. Specifically, a friend was pondering the behaviour of people who post happy birthday messages on the profiles of deceased people. Like, if you have not kept up with someone enough to know that they are dead, why would you feel the need to post congratulations on their profile page?

I wrote a reply which is replicated below. It is not accurate as it is a translation and I no longer have access to the original post.

Some of these might come via recommendations by AI assistants. Maybe in the future AI bots from people who themselves are dead carry on posting birthday congratulations on profiles of other dead people. A sort of a social media for the deceased, if you will.

Roughly one minute later my account was suspended. Let that be a lesson to you all. Do not mention the Dead Internet Theory, for doing so threatens Facebook's ad revenue and is thus taboo. (A more probable explanation is that using the word "death" is prohibited by itself regardless of context, leading to idiotic phrasing in the style of "Person X was born on [date] and d!ed [other date]" that you see all over IG, FB and YT nowadays.)

Apparently to reactivate the account I would need to prove that "[I am] a human being". That might be a tall order given that there are days when I doubt that myself.

The reactivation service is designed in the usual deceptive way where it does not tell you all the things you need to do in advance. Instead it bounces you from one task to another in the hopes that the sunk cost fallacy makes you submit to ever more egregious demands. I got out when they demanded a full video selfie where I look around in different directions. You can make up your own theories as to why Meta, a known advocate for generative AI and all that garbage, would want high-resolution scans of people's faces. I mean, surely they would not use it for AI training without paying a single cent for usage rights to the original model. Right? Right?

The suspension email ends with this ultimatum.

If you think we suspended your account by mistake, you have 180 days to appeal our decision. If you miss this deadline your account will be permanently disabled.

Well, Mr. Zuckerberg, my response is the following:

Close it! Delete it! Burn it down to the ground! I'd do it myself this very moment, but I can't delete the account without reactivating it first.

Let it also be noted that this post is a much better way of proving that I am a human being than some video selfie thing that could be trivially faked with genAI.

Arun Raghavan: Accessibility Update: Enabling Mono Audio

13 January 2026 at 00:09

If you maintain a Linux audio settings component, we now have a way to globally enable/disable mono audio for users who do not want stereo separation of their audio (for example, due to hearing loss in one ear). Read on for the details on how to do this.

Background

Most systems support stereo audio via their default speaker output or 3.5mm analog connector. These devices are exposed as stereo devices to applications, and applications typically render stereo content to these devices.

Visual media use stereo for directional cues, and music is usually produced using stereo effects to separate instruments, or provide a specific experience.

It is not uncommon for modern systems to provide a “mono audio” option that allows users to have all stereo content mixed together and played to both output channels. The most common scenario is hearing loss in one ear.

PulseAudio and PipeWire have supported forcing mono audio on the system via configuration files for a while now. However, this is not easy to expose via user interfaces, and unfortunately remains a power-user feature.

Implementation

Recently, Julian Bouzas implemented a WirePlumber setting to force all hardware audio outputs to mono (MR 721 and 769). This lets the system run in stereo mode, but configures the audioadapter around the device node to mix down the final audio to mono.

This can be enabled using the WirePlumber settings via API, or using the command line with:

wpctl settings node.features.audio.mono true

The WirePlumber settings API allows you to query the current value, as well as clear the setting and restore the default state.

I have also added (MR 2646 and 2655) a mechanism to set this using the PulseAudio API (via the messaging system). Assuming you are using pipewire-pulse, PipeWire’s PulseAudio emulation daemon, you can use pa_context_send_message_to_object() or the command line:

pactl send-message /core pipewire-pulse:force-mono-output true

This API allows for a few things:

  • Query existence of the feature: when an empty message body is sent, if a null value is returned, the feature is not supported
  • Query current value: when an empty message body is sent, the current value (true or false) is returned if the feature is supported
  • Setting a value: the requested setting (true or false) can be sent as the message body
  • Clearing the current value: sending a message body of null clears the current setting and restores the default

Looking ahead

This feature will become available in the next release of PipeWire (both 1.4.10 and 1.6.0).

I will be adding a toggle in Pavucontrol to expose this, and I hope that GNOME, KDE and other desktop environments will be able to pick this up before long.

Hit me up if you have any questions!

Khrys'presso for Monday, 12 January 2026

12 January 2026 at 06:42

As every Monday, here is a look in the rear-view mirror to catch up on the news you may have missed last week.

All the links listed below should be freely accessible. If that is not the case, consider enabling your favourite JavaScript blocker or switching to "reader mode" (Firefox) ;-)

Brave New World

Special: AI

Facepalms of the week

Special: Renée Nicole Good

Special: women around the world

Special: Palestine and Israel

Special: France

RIP

  • Feminist genealogies and political fractures: in memory of Eleni Varikas (blogs.mediapart.fr)

    On Friday 9 January 2026, Eleni Varikas passed away in Paris. Her work focused on feminist theory, colonialism, the origins of racism and questions of exclusion. Through a demanding reading of modern universalism, Eleni Varikas never stopped interrogating its blind spots, its constitutive exclusions and the hierarchies it nonetheless claims to abolish.

Special: women in France

Special: media and power

Special: irresponsible pains in the neck managing things appallingly (and neoliberally)

Special: erosion of rights and freedoms, police violence, rise of the far right

Special: resistance

Special: resistance tools

Special: GAFAM and co.

Other reads of the week

Comics/graphics/photos of the week

Videos/podcasts of the week

Nice things of the week

You can find previous web reviews in the Libre Veille category of the Framablog.

The articles, comments and other images that make up these "Khrys'presso" posts reflect my views alone (Khrys).

Allan Day: GNOME Foundation Update, 2026-01-09

9 January 2026 at 15:56

Welcome to the first GNOME Foundation update of 2026! I hope that the new year finds you well. The following is a brief summary of what’s been happening in the Foundation this week.

Trademark registration renewals

This week we received news that GNOME’s trademark registration renewals have been completed. This is an example of the routine legal functions that the GNOME Foundation handles for the GNOME Project, and is part of what I think of as our core operations. The registration lasts for 10 years, so the next renewal is due in 2036. Many thanks to our trademark lawyers for handling this for us!

Microsoft developer account

Another slow registration process that completed this week was getting verified status on our Microsoft Developer Account. This was primarily being handled by Andy Holmes, with a bit of assistance on the Foundation side, so many thanks to him. The verification is required to allow those with Microsoft 365 organizational accounts to use GNOME Online Accounts.

Travel Committee

The Travel Committee had its first meeting of 2026 this week, where it discussed travel sponsorships for last month’s GNOME.Asia conference. Sadly, a number of people who were planning to travel to the conference had their visas denied. The committee spent some time assessing what happened with these visa applications, and discussed how to support visa applicants better in future. Thanks in particular to Maria for leading that conversation.

GNOME.Asia Report

Also related to GNOME.Asia: Kristi has posted a very nice report on the event, including some very nice pictures. It looks like it was a great event! Do make sure that you check out the post.

Audit preparation

As I mentioned in previous posts, audit preparation is going to be a major focus for the GNOME Foundation over the next three months. We are also finishing off the final details of our 2024-25 accounts. These two factors resulted in a lot of activity around the books this week. In addition to a lot of back and forth with our bookkeeper and finance advisor, we also had our regular monthly bookkeeping call yesterday, and will be having an extra meeting to make more progress in the next few weeks.

New payments platform rollout

With it being the first week of the month, we had a batch of invoices to process and pay this week. For this we made the switch to a new payments processing system, which is going to be used for reimbursement and invoice tracking going forward. So far the system is working really well, and provides us with a more robust, compliant, and integrated process than what we had previously.

Infrastructure

Over the holiday, Bart cleared up the GNOME infrastructure issues backlog. This led him to write a service which will allow us to respond to GitLab abuse reports in a better fashion. On the Flathub side, he completed some work on build reproducibility, and finished adding the ability to re-publish apps that were previously marked as end of life.

FOSDEM

FOSDEM 2026 preparations continued this week. We will be having an Advisory Board meeting, for which attendance is looking good, so good that we are currently in the process of booking a bigger room. We are also in the process of securing a venue for a GNOME social event on the Saturday night.

GNOME Foundation donation receipts

Bart added a new feature to donate.gnome.org this week, to allow donors to generate a report on their donations over the last calendar year. This is intended to provide US taxpayers with the documentation necessary to allow them to offset their donations against their tax payments. If you are a donor, you can generate a receipt for 2025 at donate.gnome.org/help.

That’s it for this week’s update! Thanks for reading, and have a great weekend.

Jussi Pakkanen: AI and money

9 January 2026 at 13:56

If you ask people why they are using AI (or want other people to use it) you get a ton of different answers. Typically none of them contain the real reason, which is that using AI is dirt cheap. Between paying a fair amount to get something done and paying very little to give off an impression that the work has been done, the latter tends to win.

The reason AI is so cheap is that it is being paid for by investors. And the one thing we know for certain about those kinds of people is that they expect to get their money back. Multiple times over. This might get done by selling the system to a bigger fool before it collapses, but eventually someone will have to earn that money back from actual customers (or from government bailouts, i.e. taxpayers).

I'm not an economist and took a grand total of one economics class at university, most of which I have forgotten. Still, using just that knowledge we can get a rough estimate of the money flows involved. For simplicity, let's bundle all AI companies into a single entity and assume a business model based on flat monthly fees.

The total investment

A number that has been floated around is that AI companies have invested approximately one trillion (one thousand billion or 1e12) dollars. Let's use that as the base investment we want to recover.

Number of customers

Sticking with round figures, let's assume that AI usage becomes ubiquitous and that there are one billion monthly subscribers. For comparison the estimated number of current Netflix subscribers is 300 million.

Income and expenses

This one is really hard to estimate. What seems to be the case is that current monthly fees are not enough to even pay back the electricity costs of providing the service. But let's again be generous and assume that some sort of efficiency breakthrough happens in the future and that the monthly fee is $20 with expenses being $10. This means a $10 profit per user per month.

We ignore one-off costs such as buying several data centers' worth of GPUs every few years to replace the old ones.

The simple computation

With these figures you get $10 billion per month or $120 billion per year. Thus paying off the investment would take a bit more than 8 years. I don't personally know any venture capitalists, but based on random guessing this might fall in the "takes too long, but just about tolerable" level of delay.

So all good then?

Not so fast!

One thing to keep in mind when doing investment payback calculations is the time value of money. Money you get in "the future" is not as valuable as money you have right now. Thus we need to discount them to current value.

Interest rate

I have no idea what a reasonable discount rate for this would be. So let's pick a round number of 5%.

The "real-er" numbers

At this point the computations become complex enough that you need to break out the big guns. Yes, spreadsheets.
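
For those without a spreadsheet at hand, here is a small Python sketch of the same discounted payback computation, using the round numbers assumed above ($1T invested, $120B of yearly profit, a 5% discount rate); it is an illustration, not the author's actual spreadsheet:

# Discounted payback sketch using the round numbers from this post.
investment = 1_000_000_000_000      # $1T
yearly_profit = 120_000_000_000     # $120B per year
discount_rate = 0.05                # 5%
recovered = 0.0
years = 0
while recovered < investment:
    years += 1
    # profit earned in year N is worth less in today's money
    recovered += yearly_profit / (1 + discount_rate) ** years
print(years)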

Here we see that it actually takes 12 years to earn back the investment. Doubling the investment to two trillion would take 36 years. That is a fair bit of time for someone else to create a different system that performs maybe 70% as well but which costs a fraction of the old systems to get running and operate. By which time they can drive the price so low that established players can't even earn their operating expenses let alone pay back the original investment. 

Exercises for the reader

  • This computation assumes the system has one billion subscribers from day one. How much longer does it take to recoup the investment if it takes 5 years to reach that many subscribers? What about 10 years?
  • How long is the payback period if you have a mere 500 million paid subscribers?
  • Your boss is concerned about the long payback period and wants to shorten it by increasing the monthly fee. Estimate how many people would stop using the service and its effect on the payback time if the fee is raised from $20 to $50. How about $100? Or $1000?
  • What happens when the ad revenue you can obtain by dumping tons of AI slop on the Internet falls below the cost of producing said slop?

Engagement Blog: GNOME ASIA 2025-Event Report

9 January 2026 at 11:35

GNOME ASIA 2025 took place in Tokyo, Japan, on 13–14 December 2025, bringing together the GNOME community for the flagship annual GNOME conference in Asia.
The event was held in a hybrid format, welcoming both in-person and online speakers and attendees from across the world.

GNOME ASIA 2025 was co-hosted with the LibreOffice Asia Conference community event, creating a shared space for collaboration and discussion between open-source communities.

Photo by Tetsuji Koyama, licensed under CC BY 4.0

About GNOME.Asia Summit

The GNOME.Asia Summit focuses primarily on the GNOME desktop while also covering applications and platform development tools. It brings together users, developers, foundation leaders, governments, and businesses in Asia to discuss current technologies and future developments within the GNOME ecosystem.

The event featured 25 speakers in total, delivering 17 full talks and 8 lightning talks across the two days. Speakers joined both on-site and remotely.

Photo by Tetsuji Koyama, licensed under CC BY 4.0

Around 100 participants attended in person in Tokyo, contributing to engaging discussions and community interaction. Session recordings were published on the GNOME Asia YouTube channel, where they have received 1,154 total views, extending the reach of the event beyond the conference dates.

With strong in-person attendance, active online participation, and collaboration with the LibreOffice Asia community, GNOME ASIA 2025 once again demonstrated the importance of regional gatherings in strengthening the GNOME ecosystem and open-source collaboration in Asia.

Photo by Tetsuji Koyama, licensed under CC BY 4.0

This Week in GNOME: #231 Blueprint Maps

9 January 2026 at 00:00

Update on what happened across the GNOME project in the week from January 02 to January 09.

GNOME Core Apps and Libraries

Maps ↗

Maps gives you quick access to maps all across the world.

mlundblad announces

Thanks to work done by Jamie Gravendeel, Maps has now been ported to use Blueprint to define the UI templates. Also, Hari Rana ported the share locations (“Send to”) dialog to AdwDialog.

Third Party Projects

Giant Pink Robots! says

Version 2026.1.5 of the Varia download manager was released with automatic archive extraction, improvements to accessibility, and tons of bug fixes and small improvements. The biggest part of this new release, however, is macOS support, albeit in an experimental state for now. With this, Varia now supports all three big desktop OS platforms: Linux, Windows and macOS. https://giantpinkrobots.github.io/varia/

francescocaracciolo announces

Newelle, AI Assistant for GNOME, received a new major update!

  • Added MCP server support, enabling integration with thousands of apps
  • Added Tools; extensions can now add new tools very easily
  • Added the possibility to set some models as favourites
  • You can now trigger recording and stop TTS with keyboard shortcuts

Download it on Flathub

Phosh ↗

A pure wayland shell for mobile devices.

Guido announces

Phosh 0.52 is out:

We’ve added a QR code to the Wi-Fi quick setting so clients can connect easily by scanning it and there’s a new gesture to control brightness on the lock screen.

There’s more — see the full details here.

Flare ↗

Chat with your friends on Signal.

schmiddi announces

Version 0.18.0-beta.1 of Flare has now been released on flathub-beta. This release includes fixes for using Flare as a primary device, which I have done successfully for a while now. Feel free to test it out and provide feedback. Note that if you want to try it out, I would heavily encourage linking Signal-Desktop to Flare in order to set your profile information and to start new chats. If you run into any issues with this beta, please give feedback in the Matrix room or the issue tracker.

Emergency Alerts ↗

Receive emergency alerts

Leonhard reports

Emergency Alerts 2.0.0 has been released! It finally brings the long-awaited weather alerts for the U.S. and air raid alerts for Ukraine. Location selection is now also more powerful, allowing you to choose any point on Earth, and the new map view lets you see active alerts and affected areas at a glance. Please note that to make all this possible, the way locations are stored had to be updated. When you first launch the app after updating, it tries to migrate your existing locations automatically. In rare cases, this may not work and you might need to re-add them manually. If that happens a notification will be sent.

Highlights:

  • Weather alerts now available across the U.S.
  • Air raid alerts now available for Ukraine
  • Pick any point on Earth as a location
  • New map view showing active alerts and impacted areas

GNOME Websites

Sophie (she/her) says

The www.gnome.org pages are now available in English, Bulgarian, Basque, Brazilian Portuguese, Swedish, Ukrainian, and Chinese. You can contribute additional translations on l10n.gnome.org.

Miscellaneous

Guillaume Bernard reports

Damned Lies has been refreshed during the last weeks of 2025.

Many of you complained that refreshing the statistics of branches was a synchronous task that ended in timeouts. I have reworked this part in anticipation of ticket #409 (asynchronous git pushes), and the refresh now delegates the statistics work to a Celery worker. For git pushes, we'll use Celery tasks the same way!

In short, this means that every time you click the refresh statistics button, it will start a job in the background, and a progress bar will show you the status of the refresh in real time. There will be a maximum of three concurrent refreshes at a time, which should be enough :-).
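
As a rough illustration of this pattern (a generic sketch, not the actual Damned Lies code; the app name, broker URL and step count are made up), delegating a refresh to Celery and reporting progress can look like this:

from celery import Celery

app = Celery("damnedlies_sketch", broker="redis://localhost:6379/0")

@app.task(bind=True)
def refresh_branch_statistics(self, branch_id):
    # Long-running refresh, split into steps so the web UI can poll the
    # task state and draw a progress bar.
    total_steps = 10
    for step in range(total_steps):
        # ... recompute part of the statistics here ...
        self.update_state(state="PROGRESS", meta={"done": step + 1, "total": total_steps})
    return {"branch": branch_id, "status": "done"}

# The view only enqueues the job and returns immediately:
#   result = refresh_branch_statistics.delay(42)
# A cap of three concurrent refreshes can then be enforced by running the
# worker with: celery -A damnedlies_sketch worker --concurrency=3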

In addition to these major changes, I reworked the presentation of languages and POT files in modules:

  1. The date & time of the POT file generation is now shown with the number of messages.

  2. Your languages are shown on top of the list; it will no longer be necessary to scroll down to find your language in the language list.

Arjan reports

PyGObject 3.55.1 has been released. It’s the second development release (it’s not available on PyPI) in the current GNOME release cycle.

Notable changes include:

  • A fix so that do_dispose() is always called on your object (see the sketch below).
  • You can define a do_constructed() method that will be called after the object is initialised.
  • A regression in 3.55.0 has been fixed: instance data is now saved and outlives the garbage collector.
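
A minimal sketch of what these two hooks look like in practice (assuming PyGObject 3.55.1 or newer for do_constructed(); the Sample class and its property are made up for illustration):

from gi.repository import GObject

class Sample(GObject.Object):
    name = GObject.Property(type=str, default="")

    def do_constructed(self):
        # Called once construction has finished, after construct-time
        # properties such as "name" have been set.
        print("constructed:", self.name)

    def do_dispose(self):
        # With the 3.55.1 fix this should be called reliably when the
        # object is torn down; drop references to other objects here.
        print("disposed")

sample = Sample(name="demo")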

All changes can be found in the Changelog

This release can be downloaded from GitLab and the GNOME download server. If you use PyGObject in your project, please give it a spin and see if everything works as expected.

That’s all for this week!

See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!

Les Accords du Lion d'Or, a third place with a cultural dimension in the process of de-GAFAM-isation

7 January 2026 at 09:00

Because it still seems just as important to us to promote organisations' moves towards ethical digital tools, here is a new instalment in our series of Dégooglisation testimonies. Many thanks to Étienne for taking the time to tell us how Les Accords du Lion d'Or, the third place he is involved in, changed its digital life.

Hello, can you briefly introduce yourself for the Framablog?

Hello, I'm Étienne, a stage manager and live-performance technician in transition. I've been passionate about computing for a very long time, and I'm now leaving the performing arts to devote myself to my first passion. I met the association Les Accords du Lion d'Or in 2016, a third place with a cultural vocation freshly set up in my home village, right next to the town I moved back to after my training and a few years of work in Brussels.
It's a project with many facets: live performance, a place of memory for the village, a food-forest project, research on digital practices, all in connection with the inhabitants…
I had been invited to co-host a session with schoolchildren about the village's old photos and postcards, and how to do memory work. It was a project very similar to what I had experienced myself at the Simandre school in 2003: digitising part of these photos and sorting them into a rudimentary database. That's how I met the association.
Located in an emblematic spot at the heart of the village, and given its desire to go out and meet the inhabitants, many stories and materials (postcards, pictures, menus, accounts and more) found their way to the Lion d'Or, and the need to record and preserve these memories grew.

Header of the Les Accords du Lion d'Or website

We then chose to start a database, with images as its subject. True to my convictions it would run on GNU/Linux; that choice was in my hands and the team's trust was there.
To date, we are a multi-faceted collective: a collegial board, one employee at 80% as project officer, one employee at 70% as nature educator, one employee at 25% in charge of developing digital practices since 2023 (that's me), plus a visual artist and three performing artists fully involved in the life of the association.
Over the course of our projects it turned out that several people on the team were sensitive to questions of digital sovereignty. We quickly became aware of the skills I had built up over the years, and of the value for the association of making this a shared topic.

 

What was the trigger for your de-GAFAM-isation?

Actually, there wasn't really a single trigger; it happened gradually, following the needs of the association's employees. Step by step we made increasingly significant choices, always in a spirit of research and experimentation, which are important values at the Lion d'Or. For example, the association's website is eco-designed: frugality and inclusion. That first step had been taken even before I arrived.
My meeting the association was probably one of the triggers all the same, because I arrived with a commitment I had held personally for a long time: exploring self-hosting. I brought my digital experience to several projects: assisting the team, including the artistic duo « Scénocosme », in creating an escape game, building the image database, producing documents for exhibitions in cooperation with the inhabitants… And one thing leading to another, we wove a bond of trust with a different kind of digital life.

Banner of the artist duo Scénocosme

Since we are a small team of employees (working with a board that trusts us too!), the question of de-GAFAM-isation concerned us directly. Being few in number was clearly an asset for speed and simplicity at every stage of this transition; we'll come back to that often.
Everyone was aware of the subject in one way or another, and some had already made choices for their personal digital lives (it must be said that among the books lying around the third place you'll find Yggdrasil, Pablo Servigne, Cyril Dion, Socialter ;-)). When I proposed taking a first decisive step, moving from Google Drive to Nextcloud on a small NAS, the decision was made quickly. The few concerns raised were discussed right away, and voilà! They mainly concerned keeping the data safe and not losing work in progress. We lost nothing, and it was even an opportunity to give a new structure to the working folder, which already held 3 years of data.
We later organised a meeting with the board members to present the tools and how they work; they were received with mixed but trusting opinions at the time, since the benefit for them was not direct.

 

How did you organise your de-GAFAM-isation?

For us it really happened step by step, gradually. The association is always researching and experimenting on every subject that concerns it, so each time the question came up we could make a choice in that direction.
I knew about the C.H.A.T.O.N.S. network and we contacted Hadoly for advice; it's thanks to them that we use YunoHost, which is an important technical element of this experience.

The logo of HADOLY, a CHATONS member from Lyon that has just celebrated its 10th anniversary.

The main stages, detailed further below, can be summed up as:

  • 2018 – building the website following eco-design principles
  • 2019 – starting the image database: GNU/Linux and digiKam
  • 2019 – a NAS for backups and the first switch for file sharing and calendars
  • 2022 – setting up a dedicated server to bring more services in-house
  • 2023 – switching the operating system for 2 employees from macOS to GNU/Linux
  • 2024 – changing the accounting tool

 

Did you run into resistance you hadn't anticipated, that took you by surprise? Conversely, were there changes you were afraid of that went off without a hitch?

Frankly, it went smoothly. I think that for everyone on the team the transition was fluid, even if it required time to adapt, and whenever an adjustment was needed we could react straight away. For the calendar migration, for example, we were all in the same room and I guided each person through the steps.
Once again, being few in number was an asset. Another far from negligible point is having someone « dedicated » to the question, regularly around to answer questions or help with technical difficulties. It's almost continuous training. Things happened gradually, which allowed everyone to get to grips with each tool little by little. One big step was still the change of operating system for the project officer, Véro, during our first install party: after 30 years with macOS, moving to Kubuntu took a lot of energy and adaptability. She showed a lot of flexibility and determination in changing a whole working environment in one go (contacts, e-mail, office suite, filing…).

Kubuntu

We could talk about the technical issues, but things actually worked quite well on that side; it's also thanks to the arrival of optical fibre in the village, which allowed us to take the self-hosting step.

 

Now let's talk tools! Which tools or services did you replace, with what, and based on which criteria?

Here is a summary table, which I'll go through in detail below:

Phase | Service | Previous tool | Replaced by
NAS (2019) | Shared calendar | Google Calendar | Nextcloud Calendar
NAS (2019) | File sharing | Google Drive | Nextcloud Files
Self-hosted server (2022) | E-mail | Gmail | YunoHost
Self-hosted server (2022) | Polls | Doodle | Nextcloud Polls
Self-hosted server (2022) | Forms | Google Forms | Nextcloud Forms
2024 | Membership tracking | Excel | Paheko
2024 | Accounting | Numbers | Paheko

The criteria were simple:

  • we did not want to give money to a company like Alphabet (Google's parent company);
  • we needed something open, interoperable and able to last over time;
  • we wanted collaboration.

It was when the Google account started displaying « your storage space is low » that things really started moving. We had two choices: pay to enlarge the cloud, or find another solution. We had just bought a NAS to back up our image database, so storage we had! That answered our first need: we already had the resource, no need to pay.
I had started testing systems with ownCloud for myself, even before the fork that gave birth to Nextcloud, and I found these tools « crazy », really powerful. Nextcloud had appeared in 2016 with clearly stated values and a very active community. So I proposed installing it on our NAS. Everyone here is always up for an experiment. It clearly met our second criterion, one you find in all free software: we could import our existing data and we knew we could get it back at any time, to put it somewhere else if our at-home experiment didn't work out.
Nextcloud was chosen for its ease of deployment. Once installed, many applications are available in one click. We needed file sharing, and the calendar came along at the same time.
What followed flowed from that: we had Nextcloud, so it was easy to bring our polls and forms back in-house.
Bringing our e-mail in-house was not an easy choice, but the will to do it was definitely there. Technically, I had poked around in the e-mail system, but it is really complex and fragile. When Hadoly told us about YunoHost I spent a few months testing, and then proposed a new experiment to the association: since then, our e-mail lives on our own server.
Following the move to a collegial board in 2023 and the changes that flowed from it, I made the following observation: Denis recorded members in Paheko, Marie-Line made the bank deposits and then noted her work in a spreadsheet, Gilles ticked off the bank statements with a highlighter, Bénédicte sorted invoices into a binder, Véro kept track of all of it at once with her own spreadsheets, and Pierre tracked cash flow in yet another spreadsheet; all of this cost everyone a lot of energy, and pooling the information was laborious. I had trialled Paheko in a smaller association and quickly realised it could be the ideal tool for everyone to keep doing what they were doing while reducing the heavy burden of pooling. So it is the collaboration criterion that enabled this last switch.

Logo of Paheko, free software for managing an association.

 

Are there still tools for which you have not yet been able to find a free alternative, and why?

Yes, two remain, and they are related: the e-mails for our newsletter, and a way to communicate about our events (Facebook).
The main reason for not changing is the time needed for the transition and for learning a new tool. We looked at an alternative on our server (listmonk), but there is a lot of work involved in migrating from MailChimp and getting to grips with this new program. We have just hit the 2000-subscriber limit of a free account with that provider, so we will look into the question in 2025, once we have completed the accounting transition to Paheko.
We made the strong choice to leave Facebook, after realising that we were merely providing raw material for that company so it could place its ads; news feeds no longer resemble anything these days, and the information doesn't even reach its recipients anymore. We looked at Mastodon, but what we need is not really a virtual social network but a space where we can share our events and invite the public. We do still post our digital-themed events on the Agenda Du Libre.
Questioning our communication raises, more broadly, the question of the attention people have available in general.
There are also more or less abstract technical considerations. In the world of e-mail, independents are actively hunted down by the companies that monopolise the field: e-mails can fail to reach their destination for no valid reason, and an arbitrary exclusion can fall at any moment and prevent all your e-mails from getting through. I believe e-mail is no longer used wisely these days, which makes it an over-solicited system under pressure. Unfortunately it is still a precious channel for communication.
Until not so long ago you could not find a single C.H.A.T.O.N.S. member in the e-mailing campaign category, and those that offer it now do not guarantee delivery of the e-mails, only their creation.

What human and financial resources did you have for this transition towards ethical digital tools?

Regarding hardware, we had obtained a grant from the Bourgogne-Franche-Comté region for the purchase of the NAS and the PC that would host the image database.
We were also supported by the CAF, one of our partners, for the small investment represented by buying the second-hand server for phase 2.

As for the human side, the first setup phase was done on a volunteer basis; there is a lot of room for experimentation here at the Lion d'Or, and it also coincided with the COVID period, when I had quite a bit of free time. For the second phase we obtained funding from the FNADT (Fonds National d'Aménagement et de Développement du Territoire) for my quarter-time position (35h/month) dedicated to the « development of digital practices », which included, among other things, time set aside for putting these new tools in place.

Étienne during a one-to-one support session. (source: Les Accords du Lion d'Or website)

 

Does your de-GAFAM-isation have a direct impact on your audience, or do you use free services only internally? If the public does come into contact with free solutions, how do they react? Are they told that the tools are free software?

As I said above, this is really what gave us our strength in this transition: the fact that I am involved in several projects here made it possible to support the employees and the other users on a regular basis.
I also run a monthly workshop we have called Causeries, where we go through many topics around digital technology and where I regularly get the chance to present our tools and explain how they work.

The "informalion" causerie (source: Les Accords du Lion d'Or website)

What advice would you give to organisations comparable to yours that would also like to de-GAFAM-ise?

This de-GAFAM-isation has mostly had an internal impact on the association. We have communicated a little about it, but our audience is rarely confronted with the change: a few shared folders, a few polls, mostly aimed at members. The feedback is neutral.
That is something we might see change: we have had absolutely no problems so far, and we are debating the possibility of opening up access to other nearby organisations or to members. Getting across the experimental nature of the project, and putting front and centre the fact that the services offered are modest in scale and therefore fallible, is a question not to be taken lightly, but it fits entirely with the association's values: « perfectly imperfect », as we often say here. Bringing that fallibility to the fore means questioning our practices and our dependence on our tools; finding fallback solutions and recovering a more flexible relationship to time are values we carry into the future.

A final word, to make people want to migrate to free tools?

By turning towards the free-software world and accepting the questioning that comes with it, you gain freedom (of means and of movement) and humanity.
And don't hesitate to come by the Lion d'Or to talk about it!

Daiki Ueno: GNOME.Asia Summit 2025

7 January 2026 at 08:36

Last month, I attended the GNOME.Asia Summit 2025 held at the IIJ office in Tokyo. This was my fourth time attending the summit, following previous events in Taipei (2010), Beijing (2015), and Delhi (2016).

As I live near Tokyo, this year’s conference was a unique experience for me: an opportunity to welcome the international GNOME community to my home city rather than traveling abroad. Reconnecting with the community after several years provided a helpful perspective on how our ecosystem has evolved.

Addressing the post-quantum transition

During the summit, I delivered a keynote address on post-quantum cryptography (PQC) and the desktop. The core of my presentation focused on the “Harvest Now, Decrypt Later” (HNDL) type of threat, where encrypted data is collected today with the intent of decrypting it once quantum computing matures. The talk then covered the history and current status of PQC support in crypto libraries including OpenSSL, GnuTLS, and NSS, and concluded with the next steps recommended for users and developers.

It is important to recognize that classical public key cryptography, which is vulnerable to quantum attacks, is integrated into nearly every aspect of the modern desktop: from secure web browsing and apps using libsoup (Maps, Weather, etc.) to the underlying verification of system updates. Given that major government timelines (such as NIST and the NSA’s CNSA 2.0) are pushing for a full migration to quantum-resistant algorithms between 2027 and 2035, the GNU/Linux desktop should prioritize “crypto-agility” to remain secure in the coming decade.

From discussion to implementation: Crypto Usage Analyzer

One of the tools I discussed during my talk was crypto-auditing, a project designed to help developers identify and update legacy cryptography usage. At the time of the summit, the tool was limited to a command-line interface, which I noted was a barrier to wider adoption.

Inspired by the energy of the summit, I spent part of the recent holiday break developing a GUI for crypto-auditing. By utilizing AI-assisted development tools, I was able to rapidly prototype an application, which I call “Crypto Usage Analyzer”, that makes the auditing data more accessible.

Conclusion

The summit in Tokyo had a relatively small audience, which resulted in a cozy and professional atmosphere. This smaller scale proved beneficial for technical exchange, as it allowed for more focused discussions on desktop-related topics than is often possible at larger conferences.

Attending GNOME.Asia 2025 was a reminder of the steady work required to keep the desktop secure and relevant. I appreciate the efforts of the organizing committee in bringing the summit to Tokyo, and I look forward to continuing my work on making security libraries and tools more accessible for our users and developers.

Sebastian Wick: Improving the Flatpak Graphics Drivers Situation

5 January 2026 at 23:30

Graphics drivers in Flatpak have been a bit of a pain point. The drivers have to be built against the runtime to work in the runtime. This usually isn’t much of an issue but it breaks down in two cases:

  1. If the driver depends on a specific kernel version
  2. If the runtime is end-of-life (EOL)

The first issue is what the proprietary Nvidia drivers exhibit. A specific user space driver requires a specific kernel driver. For drivers in Mesa, this isn’t an issue. In the medium term, we might get lucky here and the Mesa-provided Nova driver might become competitive with the proprietary driver. Not all hardware will be supported though, and some people might need CUDA or other proprietary features, so this problem likely won’t go away completely.

Currently we have runtime extensions for every Nvidia driver version which gets matched up with the kernel version, but this isn’t great.

The second issue is even worse, because we don’t even have a somewhat working solution to it. A runtime which is EOL doesn’t receive updates, and neither does the runtime extension providing GL and Vulkan drivers. New GPU hardware just won’t be supported and the software rendering fallback will kick in.

How we deal with this is rather primitive: keep updating apps, and don't depend on EOL runtimes. This is in general a good strategy. An EOL runtime also doesn't receive security updates, so users should not use them. Users will be users though, and if they have a goal which involves running an app which uses an EOL runtime, that's what they will do. From a software archival perspective, it is also desirable to keep things working, even if they should be strongly discouraged.

In all those cases, the user most likely still has a working graphics driver, just not in the flatpak runtime, but on the host system. So one naturally asks oneself: why not just use that driver?

That’s a load-bearing “just”. Let’s explore our options.

Exploration

Attempt #1: Bind mount the drivers into the runtime.

Cool, we got the driver’s shared libraries and ICDs from the host in the runtime. If we run a program, it might work. It might also not work. The shared libraries have dependencies and because we are in a completely different runtime than the host, they most likely will be mismatched. Yikes.

Attempt #2: Bind mount the dependencies.

We got all the dependencies of the driver in the runtime. They are satisfied and the driver will work. But your app most likely won’t. It has dependencies that we just changed under its nose. Yikes.

Attempt #3: Linker magic.

Until here everything is pretty obvious, but it turns out that linkers are actually quite capable and support what's called linker namespaces. In a single process one can load two completely different sets of shared libraries which will not interfere with each other. We can bind mount the host shared libraries into the runtime, and dlmopen the driver into its own namespace. This is exactly what libcapsule does. It does have some issues though, one being that libc can't be loaded into multiple linker namespaces because it manages global resources. We can use the runtime's libc, but the host driver might require a newer libc. We can use the host libc, but now we contaminate the app's linker namespace with a dependency from the host.

Attempt #4: Virtualization.

All of the previous attempts try to load the host shared objects into the app. Besides the issues mentioned above, this has a few more fundamental issues:

  1. The Flatpak runtimes support i386 apps; those would require an i386 driver on the host, but modern systems only ship amd64 code.
  2. We might want to support emulation of other architectures later
  3. It leaks an awful lot of the host system into the sandbox
  4. It breaks the strict separation of the host system and the runtime

If we avoid getting code from the host into the runtime, all of those issues just go away, and GPU virtualization via Virtio-GPU with Venus allows us to do exactly that.

The VM uses the Venus driver to record and serialize the Vulkan commands and sends them to the hypervisor via the virtio-gpu kernel driver; the host uses virglrenderer to deserialize and execute them.

This makes sense for VMs, but we don’t have a VM, we might not have the virtio-gpu kernel module, and we might not be able to load it without privileges. Not great.

It turns out, however, that the developers of virglrenderer also don’t want to have to run a VM to run and test their project, and thus added vtest, which uses a Unix socket to transport the commands from the Mesa Venus driver to virglrenderer.

It also turns out that I’m not the first one who noticed this, and there is some glue code which allows Podman to make use of virgl.

You can most likely test this approach right now on your system by running two commands:

rendernodes=(/dev/dri/render*)
virgl_test_server --venus --use-gles --socket-path /tmp/flatpak-virgl.sock --rendernode "${rendernodes[0]}" &
flatpak run --nodevice=dri --filesystem=/tmp/flatpak-virgl.sock --env=VN_DEBUG=vtest --env=VTEST_SOCKET_NAME=/tmp/flatpak-virgl.sock org.gnome.clocks

If we integrate this well, the existing driver selection will ensure that this virtualization path is only used if there isn’t a suitable driver in the runtime.

Implementation

Obviously the commands above are a hack. Flatpak should automatically do all of this, based on the availability of the dri permission.

We actually already start a host program and stop it when the app exits: xdg-dbus-proxy. It’s a bit involved, because we have to wait for the program (in our case virgl_test_server) to provide the service before starting the app. We also have to shut it down when the app exits, but Flatpak is not a supervisor: you won’t see it in the output of ps, because it just execs bubblewrap (bwrap) and ceases to exist before the app has even started. So instead we have to rely on the kernel’s automatic cleanup of resources to signal to virgl_test_server that it is time to shut down.

The way this is usually done is via a so-called sync fd. If you have a pipe and poll the file descriptor of one end, it becomes readable as soon as the other end writes to it, or when the other end is closed. Bubblewrap supports this kind of sync fd: you can hand in one end of a pipe and it ensures the kernel will close the fd once the app exits.
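To make the mechanics concrete, here is a tiny, self-contained sketch (not Flatpak or Bubblewrap code) of what polling such a pipe looks like: the read end reports data when the other side writes, and end-of-file once every write end has been closed by the kernel:

#include <poll.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    int fds[2];
    if (pipe(fds) < 0)
        return 1;

    pid_t pid = fork();
    if (pid == 0) {                    /* stand-in for the sandboxed app */
        close(fds[0]);
        (void) write(fds[1], "x", 1);  /* "I'm alive" */
        sleep(1);
        _exit(0);                      /* exiting closes fds[1] for us */
    }

    close(fds[1]);
    struct pollfd pfd = { .fd = fds[0], .events = POLLIN };
    for (;;) {
        poll(&pfd, 1, -1);
        char c;
        ssize_t n = read(fds[0], &c, 1);
        if (n > 0) {
            printf("the other end wrote to the pipe\n");
        } else if (n == 0) {           /* EOF: every write end is closed */
            printf("the other end is gone, time to shut down\n");
            break;
        }
    }
    return 0;
}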

One small problem: only one of those sync fds is supported in bwrap at the moment, but we can add support for multiple in Bubblewrap and Flatpak.

To wait for the service to start, we can reuse the same pipe: the service writes to its end, and Flatpak waits for the fd to become readable before exec’ing bwrap with the same fd. Also not too much code.
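A minimal sketch of what the waiting side could look like, assuming the service signals readiness by writing a single byte to the other end of the pipe (the helper name and fd plumbing are made up, not Flatpak’s actual implementation):

#include <poll.h>
#include <stdbool.h>
#include <unistd.h>

/* Hypothetical helper: block until the service has written its
 * readiness byte, then keep the fd open so it can be handed to
 * bwrap as a sync fd afterwards. */
static bool wait_for_service_ready(int readiness_fd)
{
    struct pollfd pfd = { .fd = readiness_fd, .events = POLLIN };

    if (poll(&pfd, 1, -1 /* no timeout */) <= 0)
        return false;

    char byte;
    if (read(readiness_fd, &byte, 1) != 1)
        return false;   /* EOF: the service died before becoming ready */

    /* From here we would exec bwrap, passing readiness_fd along; once
     * the app exits, the kernel closes it and the service notices. */
    return true;
}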

Finally, virglrenderer needs to learn how to use a sync fd. Also pretty trivial. There is an older MR which adds something similar for the Podman hook, but it misses the code which allows Flatpak to wait for the service to come up, and it never got merged.

Overall, this is pretty straightforward.

Conclusion

The virtualization approach should be a robust fallback for all the cases where we don’t get a working GPU driver in the Flatpak runtime, but there are a bunch of issues and unknowns as well.

It is not entirely clear how forwards and backwards compatible vtest is, whether it is even supposed to be used in production, and whether it provides a strong security boundary.

None of these are fundamental issues though, and we could work them out.

It’s also not optimal to start virgl_test_server for every Flatpak app instance.

Given that we’re trying to move away from blanket dri access to a more granular and dynamic access to GPU hardware via a new daemon, it might make sense to use this new daemon to start the virgl_test_server on demand and only for allowed devices.

Andy Wingo: pre-tenuring in v8

5 January 2026 at 15:38

Hey hey happy new year, friends! Today I was going over some V8 code that touched pre-tenuring: allocating objects directly in the old space instead of the nursery. I knew the theory here but I had never looked into the mechanism. Today’s post is a quick overview of how it’s done.

allocation sites

In a JavaScript program, there are a number of source code locations that allocate. Statistically speaking, any given allocation is likely to be short-lived, so generational garbage collection partitions freshly-allocated objects into their own space. In that way, when the system runs out of memory, it can preferentially reclaim memory from the nursery space instead of groveling over the whole heap.

But you know what they say: there are lies, damn lies, and statistics. Some programs are outliers, allocating objects in such a way that they don’t die young, or at least not young enough. In those cases, allocating into the nursery is just overhead, because minor collection won’t reclaim much memory (because too many objects survive), and because of useless copying as the object is scavenged within the nursery or promoted into the old generation. It would have been better to eagerly tenure such allocations into the old generation in the first place. (The more I think about it, the funnier pre-tenuring is as a term; what if some PhD programs could pre-allocate their graduates into named chairs? Is going straight to industry the equivalent of dying young? Does collaborating on a paper with a full professor imply a write barrier? But I digress.)

Among the set of allocation sites in a program, a subset should pre-tenure their objects. How can we know which ones? There is a literature of static techniques, but this is JavaScript, so the answer in general is dynamic: we should observe how many objects survive collection, organized by allocation site, then optimize to assume that the future will be like the past, falling back to a general path if the assumptions fail to hold.

my runtime doth object

The high-level overview of how V8 implements pre-tenuring is based on per-program-point AllocationSite objects, and per-allocation AllocationMemento objects that point back to their corresponding AllocationSite. Initially, V8 doesn’t know what program points would profit from pre-tenuring, and instead allocates everything in the nursery. Here’s a quick picture:

[Figure: a linear allocation buffer containing objects interleaved with allocation mementos]

Here we show that there are two allocation sites, Site1 and Site2. V8 is currently allocating into a linear allocation buffer (LAB) in the nursery, and has allocated three objects. After each of these objects is an AllocationMemento; in this example, M1 and M3 are AllocationMemento objects that point to Site1 and M2 points to Site2. When V8 allocates an object, it increments the “created” counter on the corresponding AllocationSite (if available; it’s possible an allocation comes from C++ or something where we don’t have an AllocationSite).

When the free space in the LAB is too small for an allocation, V8 gets another LAB, or collects if there are no more LABs in the nursery. When V8 does a minor collection, as the scavenger visits objects, it will look to see if the object is followed by an AllocationMemento. If so, it dereferences the memento to find the AllocationSite, then increments its “found” counter, and adds the AllocationSite to a set. Once an AllocationSite has had 100 allocations, it is enqueued for a pre-tenuring decision; sites with 85% survival get marked for pre-tenuring.

If an allocation site is marked as needing pre-tenuring, the code in which it is embedded will get de-optimized, and the next time it is optimized, the code generator arranges to allocate into the old generation instead of the default nursery.

Finally, if a major collection collects more than 90% of the old generation, V8 resets all pre-tenured allocation sites, under the assumption that pre-tenuring was actually premature.
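A rough sketch of that bookkeeping in C, with invented names and the thresholds quoted above (the real V8 code is C++ and considerably more involved):

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical per-site counters; field names are illustrative. */
struct allocation_site {
    uint32_t created;   /* bumped at allocation time */
    uint32_t found;     /* bumped when the scavenger finds a memento */
    bool     pretenure;
};

enum { DECISION_THRESHOLD = 100 };          /* allocations before deciding */
static const double SURVIVAL_RATIO = 0.85;  /* survivors needed to pre-tenure */

/* Called for the sites gathered during a minor collection. */
static void maybe_pretenure(struct allocation_site *site)
{
    if (site->created < DECISION_THRESHOLD)
        return;

    double survival = (double) site->found / (double) site->created;
    if (survival >= SURVIVAL_RATIO)
        site->pretenure = true;   /* the embedding code gets deoptimized and
                                     recompiled to allocate in the old space */

    /* Assumption for this sketch: start a fresh observation window. */
    site->created = site->found = 0;
}

/* If a major collection frees most of the old generation, assume
 * pre-tenuring was premature and undo all the decisions. */
static void maybe_reset(struct allocation_site *sites, size_t n,
                        double old_gen_freed_ratio)
{
    if (old_gen_freed_ratio <= 0.90)
        return;
    for (size_t i = 0; i < n; i++)
        sites[i].pretenure = false;
}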

tenure for me but not for thee

What kinds of allocation sites are eligible for pre-tenuring? Sometimes it depends on object kind; wasm memories, for example, are almost always long-lived, so they are always pre-tenured. Sometimes it depends on who is doing the allocation; allocations from the bootstrapper, literals allocated by the parser, and many allocations from C++ go straight to the old generation. And sometimes the compiler has enough information to determine that pre-tenuring might be a good idea, as when it generates a store of a fresh object to a field in a known-old object.

But otherwise I thought that the whole AllocationSite mechanism would apply generally, to any object creation. It turns out, nope: it seems to only apply to object literals, array literals, and new Array. Weird, right? I guess it makes sense in that these are the ways to create objects that also create the field values at creation time, allowing the whole block to be allocated to the same space. If instead you make a pre-tenured object and then initialize it via a sequence of stores, this would likely create old-to-new edges, preventing the new objects from dying young while incurring the penalty of copying and write barriers. Still, I think there is probably some juice to squeeze here for pre-tenuring of class-style allocations, at least in the optimizing compiler or in short inline caches.

I suspect this state of affairs is somewhat historical, as the AllocationSite mechanism seems to have originated with typed array storage strategies and V8’s “boilerplate” object literal allocators; both of these predate per-AllocationSite pre-tenuring decisions.

fin

Well that’s adaptive pre-tenuring in V8! I think the “just stick a memento after the object” approach is pleasantly simple, and if you are only bumping creation counters from baseline compilation tiers, it likely amortizes out to a win. But does the restricted application to literals point to a fundamental constraint, or is it just an accident? If you have any insight, let me know :) Until then, happy hacking!
