skoop.dev

  • About
  • @skoop@phpc.social
  • PHPStorm 2026.1 dark overlay fix

    April 1, 2026
    jetbrains, phpstorm, wayland, wsl2

    Yesterday my PHPStorm updated to the latest version, 2026.1, and since then I’d had issues with it. There seemed to be a dark overlay, or the shadow of a window, over part of the window or the whole thing (depending on whether the window was maximized or not).

    I could still work in PHPStorm, but it was really annoying and made my code hard to read.

    After posting on Mastodon and then searching around the Internet for a bit, I found out that this release is the first that turns on Wayland support by default. And this seems to be what is causing the problem. Luckily, JetBrains already had a support issue with the information I needed!

    Basically, when you have PHPStorm open, go to the Help menu and click Edit Custom VM Options. That opens a file in the editor. There, you simply add


    -Dawt.toolkit.name=XToolkit

    Save, restart PHPStorm, and the issue is gone.

    Yes, this does mean you’re not on Wayland anymore. But hey, if that makes sure I can do my work, I’m happy.

    Technical information: I run PHPStorm inside WSL2, on an Ubuntu image. PHPStorm is not running on my Windows host, but inside WSL2. This might have something to do with it. I’ve received reports that this issue does not seem to happen on Linux Mint.

  • Dutch PHP Conference 2026

    March 24, 2026
    conferences, DPC, Dutch PHP Conference, php

    It took me a while to really sit down and reflect on it, but March 13 was the day of Dutch PHP Conference aka WebDevCon aka AppDevCon. What I love about this trio of conferences is that it isn’t limited to just PHP, while you can still enjoy a whole lot of PHP content. Funny enough though, my planning this year did not even include that much PHP. Yet I still had a blast and learned a lot. If you can ever make it to Amsterdam, take that chance. You won’t regret it.

    The day started with PHP, and mostly the power of PHP today and tomorrow. Johan Janssens of Joomla! fame told us about the olden days of PHP, including some key moments and key people. But then he switched to the now, and to the bright future of PHP. He went into FrankenPHP, what it means to PHP today and what it could mean in the future. About the possibility of distributing executables of PHP applications that can run without dependencies, without even having PHP installed. I knew most of what Johan was saying, but it really boosted my morale. It confronted me with YES, PHP is awesome and it has a very bright future ahead.

    Next up was my second and also last PHP-related talk of the day. How could I not go to a talk about another part of the future of PHP: PIE. This talk had a structure similar to the one Johan just did. We got a bit of history (PEAR, PECL, etc.) but then moved on to the now and the future. PIE is a new installer for PHP extensions (think: PECL on steroids). James gave a clear overview of what it can already do and where it’s heading. This was very inspiring (to me) and really motivated me to make some time in the coming months to replace my PECL usage with PIE in several projects. This talk really made me feel YES, PHP is awesome and it has a very bright future ahead. Again.

    Crime is bad. Let’s do crime. In How to Git away with murder, Sergès Goma presented her usual combination of serious learning with a great sense of humor. There is no way you’re ever going to fall asleep when Sergès is speaking. During this talk, we learned about some of the crimes of Git and version control, and ways to either avoid those or get away with them. Good stuff to know for any developer.

    After lunch, it was my time to hit the stage. I presented my sociocratic decision making talk, in which I first present sociocracy in general and then zoom in on the sociocratic decision making method. I could elaborate, but I’m planning to take some time to write a full post on this subject at a later moment.

    They always say silence is golden, right? Nope, wrong! Helvira Goma presented a very engaging talk on the power of music to support your work. I really wanted to see this talk because I realised long ago how important music is for my ability to focus and perform certain tasks, but I had no idea about the science behind that. Helvira gave me exactly what I was hoping for: explanations of why music helps (or well, can help, not everyone is the same) and even how different styles of music are a better fit for different types of tasks. I can now be more conscious about this in the future.

    The last talk of the day was by Derick Rethans, a DPC regular and someone I really respect, talking about a subject I also care deeply about: owning your content. Any content you post on one of the big tech platforms, whether that is Xitter, Facebook, or any of the others, is basically not yours anymore. It’s extremely hard to control that content: who reads it, where it goes, how it’s used. Derick introduced the ActivityPub protocol and the many applications that the Fediverse already offers as an alternative to big tech applications. As someone who made the move away from big tech (nope, not completely done yet) I really enjoyed Derick’s talk. On my personal blog, I also wrote about this topic. I really hope that Derick’s talk will inspire more people to join the Fediverse and consider using more open platforms and protocols.

    Concluding

    Dutch PHP Conference was awesome as ever. It was very interesting to hear so much critique, from both speakers and attendees, on the usage of (generative) AI at a conference that started a sister conference on AI this year. Having said that, most conversations on the topic were constructive and not simply bashing for the sake of bashing.

    In terms of line-up, there were several timeslots where it was really hard to choose between different speakers. I really had FOMO several times during the day. That is a sign of an excellent conference with an excellent line-up.

    Of course, I also spent some time in the hallway track, aka talking to people and visiting sponsor booths. There were some interesting sponsors this year to talk to. I really enjoyed my conversation with Rentman for instance.

    I do have to conclude that from a social perspective, a single-day conference is really too short. There are so many people that I would’ve loved to catch up with that I only saw in passing or just could say “hi” to between talks and other conversations. This partially had to do with me as well, since I had very little time around the conference to hang out with people. Ah well. I’m sure I’ll see some people again soon(-ish). It was a wonderful day. I will certainly be there again next year.

  • Deploying Mattermost to Nexaa

    February 20, 2026
    hosting, kubernetes, mattermost, nexaa, slack

    This post is long overdue. We made the switch from Slack to Mattermost early last year, in our effort to have more control over the systems that we rely on, and also to try and remove the dependency on the US given the current (geo)political climate there.

    I’d been wanting to try out Nexaa for a while. Some of my friends work there, and their serverless containers solution seemed quite easy. I had not realized how incredibly easy it would be, but I’d find out soon enough. In this post I want to document what I did, share the lessons I learned, and give you some tips in case you want to host your own Mattermost on Nexaa.

    The plan

    Our initial plan was to do a full user-and-data migration from our Slack. We’d read that this could be done and since our history felt important (even if we only rarely searched through our history) we wanted to try and do a full migration.

    Did I reach all the goals set in this plan? Nope. Let’s dissect what happened.

    First of all: we need a database

    Before we can even set up Mattermost we need a database. Mattermost supports both Postgres and MySQL, but since I plan on deploying more apps on Nexaa and most of those use MySQL, I set up a managed database in the Nexaa portal. It’s literally just filling in a form, and the database server is set up for you. I decided on a 1-node setup. Well, that was easy.

    Setting up Mattermost

    This is probably the second easiest step of this whole setup (after the database). Nexaa makes it super easy to deploy containers, so all you need to do is configure the registry or, in this case, use a publicly available image. Since we’re going to deploy Mattermost, it’s as simple as using the Mattermost image. Right now we’re on mattermost/mattermost-team-edition:release-10.12. We still need to upgrade to Mattermost 11, it’s on our TODO list. But when we started, we were on an even lower version. Upgrading is as simple as editing the container and changing the tag (unless there are other, manual, steps to be done as part of the upgrade).

    The rest of the form for creating a new container is relatively straightforward, although some things require some research. I looked up the minimum system requirements for Mattermost and decided to run our server on a 1 CPU / 2 GB RAM container.

    It took me a bit to figure out which environment variables I should set. The most important one is MM_SQLSETTINGS_DATASOURCE, which contains the DSN for your database. DOMAIN is also important: it’s the URL that your Mattermost instance will be available on. Other settings I configured: MM_LOGSETTINGS_CONSOLEJSON, MM_SQLSETTINGS_DRIVERNAME (which is mysql in our case) and MM_LOGSETTINGS_CONSOLELEVEL. The latter was mostly useful during initial debugging (set to debug), so you can figure out what goes wrong if anything does.
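    To make that concrete, here is a sketch of the variables as you might enter them. All values are placeholders (host, credentials and domain are made up); the DSN follows the Go-style MySQL format that Mattermost expects:

```shell
# Placeholder values -- substitute your own database host, credentials and domain.
export MM_SQLSETTINGS_DRIVERNAME='mysql'
# DSN shape: user:password@tcp(host:port)/database?params
export MM_SQLSETTINGS_DATASOURCE='mmuser:secret@tcp(db.example.internal:3306)/mattermost?charset=utf8mb4,utf8'
export DOMAIN='chat.example.com'
export MM_LOGSETTINGS_CONSOLEJSON='true'
export MM_LOGSETTINGS_CONSOLELEVEL='DEBUG'   # drop back to INFO once everything works
```

    In the Nexaa portal these go into the environment variables part of the container form rather than a shell, but the names and values are the same.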

    Nexaa: setting ports and Internet access

    By default, your container is not publicly reachable, so you need to explicitly enable that. You also add a port mapping: in this case, public port 80 maps to port 8065 of the Mattermost container.
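    This is not how Nexaa itself works (there you just fill in the form), but if you want to sanity-check the same image and port mapping locally first, a rough docker equivalent would look like this. The container name and the mattermost.env file are hypothetical; the env file would hold the MM_* variables discussed above:

```shell
# Rough local equivalent of the Nexaa container settings:
# public port 80 maps onto Mattermost's internal port 8065.
docker run -d --name mattermost \
  -p 80:8065 \
  --env-file mattermost.env \
  mattermost/mattermost-team-edition:release-10.12
```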

    Under step 8, you can also configure the ingress to use TLS. It will do the SSL termination for you, so your containers only need to listen to port 80. You also configure the URL on which your container is available.

    Next up is volumes. Some data needs to be kept even if the container restarts. In the above screenshot you can see how we configured this. As you can see, we configured waaaaay too big volumes. Smaller volumes are also possible. Better yet: start out small and grow the volumes when needed. Why? Because you cannot shrink volumes, but you can always make them bigger. The usage shown is after a year of using Mattermost with about 10 users. That might give you an idea of what kind of volume size you may need.

    The last step is to configure the scaling. Nexaa has autoscaling and manual scaling. Since our Mattermost is only used by a few people, manual scaling will do the trick, and a single replica is fine for us.

    Once I saved this, the container was set up and booted. And indeed, things worked. Well… of course, I still needed to configure some things.

    Configuring Mattermost

    On the system console of Mattermost, I can now start configuring things. Most of the settings are fine by default, but some things you might want to change.

    You can click all the options to customize stuff as needed, but some things should be configured:

    • Environment -> SMTP: You need to configure this to allow Mattermost to send email. We use Mailgun for this.
    • Site configuration -> Customization: I guess this is not required, but fun: give your server a name, a description, a custom logo, and configure some other things.
    • Authentication -> Signup: Unless you want your server to be open to everyone, make sure that Enable open server is set to False.
    • Integrations -> GIF: Very important: make sure the GIF picker is enabled.
    • User management -> Teams: Make sure to create your first team!

    Teams

    Now this perhaps warrants an explanation. Within a Mattermost server, you can have multiple teams. Consider a team similar to a workspace in Slack: every team has its own set of channels and its own environment. Teams in Mattermost can be public (anyone on the server can join the team) or private (if you have an account on the server, you still need an invite to be part of the team). In our case, we only needed a single team: Ingewikkeld.

    Importing from Slack

    Now this is where things failed. Slack allows you to easily export all your data (if you have a paid plan). Mattermost has great documentation on how to migrate. So we exported all the users, the channels and the messages and used the mmetl tool to convert it all to the Mattermost format. You do that on your local machine.
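    For reference, the local conversion step looks roughly like this. This is a sketch only: the file names are placeholders, and the flags are as I recall them from the Mattermost migration docs, so verify them against mmetl --help for your version before running:

```shell
# slack-export.zip: the export downloaded from Slack (paid plan required)
# mattermost_import.jsonl: the converted bulk-import file to upload to Mattermost
mmetl transform slack \
  --team ingewikkeld \
  --file slack-export.zip \
  --output mattermost_import.jsonl
```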

    For me, that was when the trouble started. We’ve used Slack for over 10 years within Ingewikkeld, so the archive was huge. And it seems the Nexaa ingress has a maximum request/upload size that you cannot configure. The result for us was that we got errors when trying to import into Mattermost.

    I spent way too long trying to solve this and eventually gave up. The import had succeeded for users and channels, and that was the most important part for me. The message archive… we’ve backed it up somewhere so we can always search it, but we have not imported it into our Mattermost. A fresh start is nice as well, right?

    Using Mattermost

    Now you can use Mattermost on the web (through the configured URL, assuming you’ve set up the DNS) or using the Mattermost app on your computer or smartphone. And so far, it works really well for us. The only downtime so far was when I updated the configuration and made a mistake, but that was easily fixed.

    Fun fact

    Fun fact: I did most of the setup of our Mattermost on the train from SymfonyCon Vienna to Klosters, Switzerland (where I would be visiting my sister and her family). If I can do this on an international train in the Alps, you can do it in your (home) office as well 😉

  • Firefox NS_ERROR_CORRUPTED_CONTENT confusion

    November 19, 2025
    403, 404, firefox, NS_ERROR_CORRUPTED_CONTENT

    OK, so this is a short and quick post just to clear up some confusion. I spent hours and hours trying to figure out what the hell was going on when I was running into this NS_ERROR_CORRUPTED_CONTENT for JavaScript files.

    For the life of me I couldn’t figure out the cause of this error. It turns out that, at least in some situations, the error is more confusing than it should be. When I opened the URL directly in my browser, the response was simply a 404 or a 403 (I encountered this in two different situations). There is nothing corrupted about the response, even though the error seems to imply that. Don’t be fooled by the weird error string: inspect the response and check the status code, and you’ll probably quickly figure out the actual solution.
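    The quickest way to check is to request the file outside the browser and look only at the status code. A minimal sketch with curl; the local python3 web server here just stands in for a site where the script is missing, and against a real site you would use the URL of the failing script instead:

```shell
# Stand-in for the broken site: serve an empty directory, so /app.js is a 404.
cd "$(mktemp -d)"
python3 -m http.server 8901 >/dev/null 2>&1 &
sleep 1
# -s: quiet, -o /dev/null: discard the body, -w: print only the HTTP status code
curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:8901/app.js
kill $!
```

    Here that prints 404; a 403 would instead point at permissions or authentication on the server side.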

  • My first Azure Pipeline

    September 12, 2025
    azure, ci, php, pipeline

    Last week I started at a new customer and they’re fully committed to the Azure DevOps environment. They use their Git repositories, their project boards and also their pipelines. Since the project I’m working on is currently their only PHP project, they did not yet have any experience with setting up pipelines for a PHP project. So I set out to do just that, and found it surprisingly easy.

    Just like many other Git hosting sites, you can configure the pipelines using a YAML file, in this case azure-pipelines.yml, in the root of your project. They do also offer an online editor, but I haven’t actually tried that, preferring the YAML format for configuring pipelines.

    If you have experience with Bitbucket pipelines or Gitlab pipelines, then configuring Azure pipelines will be a breeze. Most things make a lot of sense. There’s a few things that I did need to take into account while setting up, and in this article I want to share these things.

    Tasks

    The first thing I found out: the predefined tasks beat creating your own custom scripts any day. Azure Pipelines offers a huge list of predefined tasks that can make your life with pipelines a lot easier. For instance, this project uses private Composer packages that are located inside Git repositories. So I have to insert an SSH key into the runner to make sure that composer install can fetch those packages. A pretty common situation. Before I knew about the tasks, I tried to do this manually. But I have very little knowledge of the internals of the runner, so it’s pretty hard to figure out exactly how to do that. Luckily, there is a task for that. This will save you a lot of time.

    Secure files

    Speaking of SSH keys (or other secrets that need to be injected and stored securely): the Secure files feature is the answer. It’s really easy to insert those secure files into your runner context, and tasks such as the earlier-mentioned InstallSSHKey@0 allow you to simply reference the secure file to use. Again, this will save you a lot of time and frustration.
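    A sketch of how this could look in azure-pipelines.yml. The names are placeholders: composer-deploy-key is whatever you named the Secure file when uploading it, and known_hosts_entry / ssh_public_key are assumed pipeline variables:

```yaml
steps:
  - task: InstallSSHKey@0
    inputs:
      knownHostsEntry: $(known_hosts_entry)   # known_hosts line for your Git host
      sshPublicKey: $(ssh_public_key)         # public half of the deploy key
      sshKeySecureFile: composer-deploy-key   # the Secure file with the private key
  - script: composer install --no-interaction --no-progress
    displayName: Install dependencies
```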

    Similar to secure files, there are also variables that you can configure, which are stored securely and can then be used in the azure-pipelines.yml file.

    Conditions

    By default the whole pipeline is triggered on each push of each branch. You can limit when things run by using triggers. But there is another way of limiting when things run: by using conditions. Triggers are configured at the top level and so apply to your whole pipeline configuration. But sometimes you want certain stages, jobs or steps to run only in specific situations. For instance, you only want to build and push the image when all previous steps have succeeded, and only for your main branch.

    This is where conditions come in. Let’s take the above example of wanting to run a stage only when all previous stages have succeeded and only when the branch is main. You can add a relatively simple condition to do that:

    condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/main'))

    By adding the above to a stage, job or step, that item will only run when the previous items have succeeded (succeeded()) and when the branch this runs on (Build.SourceBranch) is main (refs/heads/main).
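    To show where such a condition lives, here is a sketch of a pipeline with a stage-level condition. The stage and job names and the scripts are made up; only the condition line is the real syntax:

```yaml
stages:
  - stage: Test
    jobs:
      - job: RunTests
        steps:
          - script: vendor/bin/phpunit
  - stage: BuildImage
    # only runs when Test succeeded and we're on main
    condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/main'))
    jobs:
      - job: Build
        steps:
          - script: echo "build and push the image here"
```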

    And you can start doing pretty fancy stuff with this. There’s a lot of logic you can add, based on variables, attributes of the current build, success or failure of previous steps, etc.

    This is pretty cool

    My experience with Microsoft products has always been a bit hit or miss, but so far I must say I’m actually quite impressed by Azure Pipelines and the ease and speed with which it all works. And not just that: their Git repository hosting in general works pretty well (except that pipelines work on branches and are not directly linked to pull requests, which is unfortunate). Their Jira replacement also seems to work pretty well. So yes, I quite like how this works.

  • Explicit code

    August 8, 2025
    code, explicit code, inclusive code, php

    One of the most discussed subjects during code reviews or in projects is not a functional thing but more a style thing: How does the code look? What coding style do we adopt?

    And while it feels like mostly a cosmetic thing, the code style is actually quite important. It determines not just how you write code, but also how you can read it. And since you usually read the code a lot more often than you write it, this is quite important.

    The race for shorter code

    More and more I see that developers opt for the shortest bit of code they can possibly write. And PHP as a language is also changing to make things a lot shorter. Think of the Null Coalescing Assignment Operator or the Pipe Operator, for instance.

    Now, in essence, the longing for shorter code is understandable. A lazy developer is a good developer. And if anything, wanting to type fewer characters is a great property of a lazy developer.

    Write once, read often

    Up to a certain point I can follow the idea of a lazy developer. Hell, I’m a lazy developer. However, as I mentioned before, you read the code a lot more than you write it. And if you think about that, it’s much more important to optimize your code for reading than it is for writing.

    Combine that with the fact that our modern tooling, with IDEs that help us write the code, makes it very easy to write code. With templates, macros and auto-complete, we don’t actually have to type all our characters anymore. Our laziness is supported by our tooling. So is it really necessary to shorten our code that much anymore?

    Optimize for reading

    I don’t know about you, but I spend a lot of time reading code. Whether that is brand new code or older legacy code, if I need to modernize a codebase or simply implement a new feature into some existing code, I need to read what is there, determine what it does, then make whatever change I was going to make. And usually this involves stepping into called methods to see what these do, sometimes several levels deep. The most important thing for me, therefore, is to be able to quickly read what code does. How it behaves, how it alters values, what it returns.

    Let’s borrow an example from the pipe operator RFC:

    function getUsers(): array {
        return [
            new User('root', isAdmin: true),
            new User('john.doe', isAdmin: false),
        ];
    }
     
    function isAdmin(User $user): bool {
        return $user->isAdmin;
    }

    So far so good, right? Two functions. Now to figure out how many users are admins, I’d write a pretty simple bit of code:

    $numberOfAdmins = 0;
    
    foreach (getUsers() as $user) {
        if (isAdmin($user) === true) {
            $numberOfAdmins++;
        }
    }

    The new pipe operator would turn this into the following piece of code:

    $numberOfAdmins = getUsers()
        |> fn ($list) => array_filter($list, isAdmin(...)) 
        |> count(...);

    Granted, this is indeed shorter. But does this really make it more readable? I understand that this is subjective, but if you just take a quick glance, which gives you more information in less time, the top or the bottom code snippet?

    Let’s borrow another example from an RFC, this time the earlier-referenced Null Coalescing Assignment Operator. This change to the PHP language is a shortening of an earlier syntax shortening. Old man speaking: back in the old days, we’d write this bit of code as follows:

    if (!isset($this->request->data['comments']['user_id'])) {
        $this->request->data['comments']['user_id'] = 'value';
    }

    Earlier on, this was shortened to:

    $this->request->data['comments']['user_id'] = $this->request->data['comments']['user_id'] ?? 'value';

    and now, it’s become:

    $this->request->data['comments']['user_id'] ??= 'value';

    I would like to repeat my question from the previous set of code snippets: does this really make it more readable? I understand that this is subjective, but if you just take a quick glance, which gives you more information in less time, the top or the bottom snippet?

    And it doesn’t stop at that. I still encounter the exclamation mark a lot. For instance:

    if (!$variable) {
        // do something
    }

    That exclamation mark is easily overlooked when quickly scanning code. And how much more typing does it really take to write this instead:

    if ($variable === false) {
        // do something
    }

    As a bonus, the two are not even equivalent: !$variable is also true for 0, an empty string, an empty array and null, while the explicit comparison only matches an actual false. Being explicit here doesn’t just read better, it also communicates exactly what you’re checking.

    Inclusive coding

    Even if you don’t have an issue with reading these shorter versions, it might be good to adopt a more explicit style of coding. Because most probably, you won’t be the only person reading the code. And the shorter the code and the more complex the syntax, the heavier the mental load on the developers reading it. Not just you (or you in six months, after focussing on a lot of other stuff), but also other developers (including those under high stress, developers with mental health issues, junior developers who just don’t have that much experience yet, etc.). By keeping the code explicit and basic, you help those developers understand what is happening. Doing so makes your code more inclusive.

    Make your code more explicit

    I would love to see more people adopt a more explicit coding style. Keep in mind that someone, possibly you, might be reading this code in a few months’ or years’ time, trying to figure out what’s going on. Do you really want them to have to take a long time to understand what is happening?

  • The lost art of training?

    July 4, 2025
    learning, lessons, php, training

    Disclaimer: I deliver courses and organize training sessions myself as part of Ingewikkeld Trainingen. As such, I have written this with a certain bias.

    With the risk of sounding old: When I was young, if you wanted to learn something, you basically had two options: Read a book and start self-teaching, or book a training session and learn through that.

    Since then, a lot has changed. And that’s a good thing. Video tutorials, podcasts, blogs, magazines, monthly usergroups, conferences, even LLMs… there are so many ways to learn these days. It’s fantastic! Because not everyone learns well from reading a book or just trying something. Not everyone learns well from attending a training session.

    What I’ve noticed is that all these new ways of learning have caused the original way of learning to almost be lost. Having a teacher come in to teach a course or booking a seat on a classroom training. It’s something that rarely happens anymore. And I can sort of see why: Booking a course means committing one, two or even three(+) days to that course. It costs a set amount of money (either per student or per course). It’s cheaper to recommend some podcasts, get a subscription to Udemy or give someone a ChatGPT subscription. That also takes less time. I get that.

    Advantages of the course

    Actually attending a physical course with a trainer teaching you things does have certain advantages that all the other forms of learning don’t have. Let’s have a look at four of those advantages.

    1. Focus

    A lot of things are happening in your work, your life and the world, and a lot of them are constantly asking for our attention. Whether that is your manager, your social media notifications, your team or the latest BREAKING NEWS, keeping focus while learning is hard. Yet focussing on the topic at hand is instrumental in actually learning something. As Cal Newport wrote:

    To remain valuable in our economy, therefore, you must master the art of quickly learning complicated things. This task requires deep work. If you don’t cultivate this ability, you’re likely to fall behind as technology advances.

    A physical teacher cannot simply be paused every time something asks for your attention. They can also see you and call you out on any distractions they notice, adding motivation to pay attention and focus on the material you are learning.

    2. Interactivity

    Most of the “modern” training options have no interactivity. There is material, and you can consume that material. You cannot ask questions or discuss certain topics. You get what you get. This can be nice because it’s predictable, but once you don’t understand something or would like to dive deeper into a topic, you’ll have to figure that out by yourself.

    Having someone that delivers the content also allows you to ask questions, get clarification or even steer the content that you’re given into a certain direction (within the limits of the course).

    This can be invaluable in understanding the topic and not getting stuck somewhere without knowing how to move forward.

    3. Customization

    In line with the previous point, customization is also great. With “standard” content such as a podcast, video tutorial or book, you get what you get. But when you book a training course for your team, the trainer will be able to prepare custom content for your team. If you book a PHPUnit Masterclass and your team has a pre-existing codebase, you can ask the trainer to use that for the exercises on testing untestable code instead of the default exercises. If you book a training on Docker and Kubernetes and your company uses a specific cloud provider, you can ask for the course to be customized for that specific platform. This is priceless: your team is getting exactly the content they need.

    4. The hallway track

    Do not underestimate the informal contact students have during breaks when attending a training course. While the formal content of the course is of course very important, the social aspect of sharing experiences, lessons learned and also non-course-related topics is also very important. Your team can reflect, learn from each other and get to know each other better. If you booked seats on an external classroom training, they can also do this with people with completely different backgrounds and experiences, which could make the learning experience even richer.

    Book a course?

    Nope. I’m not going to tell you that you should now immediately book a course. First of all, in recent years it seems that, especially in the programming world, courses have become rarer. Unfortunately, they’re not offered as much anymore. But what I do want to ask is that you seriously consider booking a course when one is available for the topic that you or your team wants or needs to learn more about.

  • Migrating to e/OS

    May 30, 2025
    android, apple, e/OS, fairphone, gadget bridge, ios, pinetime, pocket casts, technology

    For as long as I can remember, I’ve been an Apple Fanboi. Well, no, not that long, but ever since I started working at Ibuildings in the ’00s and got my first Mac, I was sold. Unfortunately, at some point the laptops started becoming less interesting for a power user like me (and also way too expensive), but for my phone I stuck with Apple, because iOS was far superior to Android in terms of UX and consistency.

    In recent times, however, I’ve become more privacy-aware. Aside from that, with the current geopolitical situation and the “mightiest country in the world” being led by an unstable and unpredictable leader, I want to reduce my dependency on (US-based) big tech.

    Some months ago I got introduced to the de-Googled GrapheneOS and e/OS. I got to try out e/OS on a secondary phone for a while and had to admit: Android has improved a lot since the last time I played with it. It compares quite well with iOS these days in terms of usability. So that’s when I started considering the move.

    It’s not an easy move though. Migrating from one platform to the other requires some planning and serious thought. When you’ve invested so many years into a platform, switching means figuring out how to replace certain platform-specific features. To my surprise, however, this turned out to not be that hard. Over time I had mostly switched to apps that support both iOS and Android, and I’d been storing files in, for instance, Proton Drive rather than iCloud, so… hey, switching might not be as hard as I thought.

    Fairphone

    The first thing, then, is to think about which hardware to get. I was using an iPhone 12 mini, which is a very small phone, and quite quickly I came to the conclusion that it wouldn’t be easy to get a similar-sized device. Especially since I had to keep in mind that the device needed to be compatible with e/OS. I preferred a phone with official support instead of community support, so that I could use the official installer. A sustainable and European company would also be nice. I’d heard a lot of good things about Fairphone, and given that their focus is on sustainability and right to repair, and they’re Amsterdam-based… that seemed like a fine choice.

    Installing e/OS

    My previous experience with installing e/OS on an old Samsung device was horrible. Hence my mention of wanting to use an officially supported device. The Fairphone I ordered came with a standard Android instead of e/OS, but hey, I’ve gone through the ordeal of installing it on that Samsung device, I can do this.

    So, connect the phone to the laptop, go through the setup steps so that I can start using the official installer, go through the first couple of steps… so far so good. One step was confusing: the e/OS documentation mentions having to unlock the bootloader, but when I did that it asked for a code, which the documentation did not mention anywhere. Turns out this is a Fairphone-specific thing. Fairphone offers a tool to calculate that code based on your IMEI and serial number.

    The phone was recognized and I could connect to it. But at the second point where I needed to connect with the laptop… nope. It didn’t connect. Searching around a bit, this turned out to be related to the phone being locked. I found this blogpost, and by just going through the first couple of steps (up to and including the fastboot flashing unlock_critical step), I got it to work. Now the second connect step of the installer did work.
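    For reference, the unlock steps boiled down to something like the following. This is a sketch, not official documentation; exact prompts may differ per Fairphone model, and the unlock code is the one calculated with Fairphone’s tool:

```shell
# Reboot the phone into fastboot mode (developer options and USB debugging enabled)
adb reboot bootloader

# Unlock the bootloader; on the Fairphone this is where the code
# calculated from the IMEI and serial number comes in
fastboot flashing unlock

# Unlock the critical partitions as well -- this was the step
# that made the installer's second connect work for me
fastboot flashing unlock_critical
```

    Note that unlocking the bootloader wipes the device, so do this before setting anything up on the phone.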

    One thing to note in the official installer: Your progress is usually at the bottom of the screen, but any errors are shown in tiny letters at the top of the screen. So while you might be waiting for things to finish, there might already be errors. Keep your eyes on the top of the screen as well!

    After this, the installer was able to finish all the way to the end, and I had a Fairphone with e/OS. Yay!

    Another thing I noticed, however, is that the documentation says nothing about putting the lock back on after finalizing the install, using the fastboot flashing lock_critical and fastboot flashing lock commands. I did that regardless.
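    In case it helps anyone: with the phone rebooted into fastboot mode once more, re-locking is the mirror image of the unlock steps, critical partitions first. Again a sketch of what I ran, not something from the official documentation:

```shell
# Re-lock the critical partitions, then the bootloader itself
fastboot flashing lock_critical
fastboot flashing lock
```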

    App lounge

    After booting into e/OS I had another issue. The App lounge, the app with which you install other apps, would not load in anonymous mode. It would just keep on loading without letting me do anything. Unfortunately clearing the cache and storage, as recommended by the official documentation, did not solve it either. The official documentation suggested that if it couldn’t be solved, to use a Google account after all. Which kind of defeats the purpose of using a de-Googled Android, imho. On the Internet I found some people who suggested just waiting a bit, however, and a friend made the same recommendation. So I waited, and lo and behold: it started working. I’m still not sure why it didn’t work before, but who am I to complain.

    Migrating

    Next step: migration. I had to install a whole bunch of apps of course. The first one was my password manager, because after installing all those apps, I would have to log in to most of them. This was a boring but very straightforward task. A big shout-out to Pocket Casts that after I logged in even remembered exactly where I was in the podcast I was listening to. Talk about a seamless migration experience!

    Contacts

    One thing I was quite scared of was how to ensure my contacts were migrated. Turns out that fear was based on nothing. The e/OS documentation offers a very simple tutorial: Download a vcf-file from iCloud, transfer that to the new phone, then import that file into your Contacts app. Everything was transferred!
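    One way to do the “transfer that to the new phone” step, assuming USB debugging is enabled on the new phone and the iCloud export is saved as contacts.vcf (an example name), is with adb:

```shell
# Copy the iCloud contacts export into the phone's Download folder
adb push contacts.vcf /sdcard/Download/contacts.vcf
```

    From there, the Contacts app can import the file.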

    The watch

    My trusty iPhone had a trusty companion: the Apple Watch. In my evaluation of my Apple usage, I’d come to the conclusion that while the Apple Watch can do a whole lot of things, all I really used it for on a daily basis was keeping track of my exercise and getting notifications of important things happening on my phone. I don’t really need such a complex watch for that. After looking around I found the PineTime watch: a very simple “sort of smart” watch. After posting on Mastodon about this I got a response from someone one village away who had one lying around that I could test-drive. I’ve been wearing it for three days now and I haven’t even had to charge it yet! And yes, it is not as fancy as an Apple Watch, but it tracks my steps (not all my exercise, sure, but my steps) and after setting up the Gadgetbridge app on my Fairphone I do get notifications. So hey, this works just fine!

    AirPods

    The only thing I still have from the Apple ecosystem is a set of AirPods. I hardly use those, though, since I also own a pair of Bose QC-35 headphones. And the AirPods connect just fine with the Fairphone, so I have no real need to replace them. And even if I wanted to, Fairphone has a solution for that as well.

    Concluding

    I had expected that escaping the Apple ecosystem would be hard. But, some minor setbacks aside, it was pretty much smooth sailing. Whether I will bump into other things, only time will tell, of course, but so far my experience with the Fairphone, with e/OS and with the PineTime has been great!

  • Dutch PHP Conference 2025

    March 18, 2025
    amsterdam, conferences, DPC, php

    With only a few more days to go until Dutch PHP Conference 2025 it’s time to look forward to the conference. DPC is always a good conference (and has always been so), but I’m going to put focus on some talks that I’m really looking forward to.

    Sacred Syntax: Dev Tribes, Programming Languages, and Cultural Code

    One of the best and most entertaining talks at DPC last year was by Serges Goma and was titled Evil Tech: How Devs Became Villains. In it, Serges put focus on ethics in software development in a very fun and accessible way.

    With the great way that topic was approached I’m really looking forward to Serges’ take on tribalism, which is the subject of the talk this year!

    Small Is Beautiful: Microstacks Or Megadependencies

    Bert Hubert has been big in the media in recent times talking about the EU’s dependency on US-based cloud providers, the risks, and even the potential unlawfulness of doing that. However, Bert is also someone with a long history in tech, being the founder of PowerDNS.

    In his closing keynote, he will talk about microstacks and megadependencies. I’m really curious to hear about this from Bert.

    Don’t Use AI! (For Everything)

    After my recent blogpost on AI I got a response from Ivo Jansch, one of the organizers of DPC but also a really smart person that I’ve enjoyed working with in the past, recommending that I attend this talk by Willem Hoogervorst. I am really looking forward to the considerations Willem will share on when to use AI and when not to.

    Parallel Futures: Unlocking Multithreading In PHP

    Multithreading and PHP is not a good combination? Apparently it is! Florian Engelhardt will be talking about ext-parallel and I do want to hear more. I’m intrigued!

    Our tests instability Prevent Us From Delivering

    I’ve seen this situation with several of my customers: tests that can not be trusted. Either tests turned out green when something was wrong, or tests would be red while everything was fine. I’m looking forward to hearing from Sofia Lescano Carroll on what can lead to this and how to mitigate this.

  • Post-mortem

    March 14, 2025
    accountability, development, incidents, post-mortem, programming

    It is hard to imagine a world where nothing goes wrong. Especially in software development, which is not an exact science, things will go wrong. As far as I am aware, no definitive research has been done on this, and different sources give different numbers: Security Week talks about 0.6 bugs per 1000 lines of code, while Gray Hat Hacking mentions 5-50 bugs per 1000 lines of code. I am sure things like this also depend on your QA process. But it’s impossible to write bug-free code.

    So when things inevitably go wrong and your production environment goes down or errors out, it is important to figure out what went wrong. If you know what went wrong, you can figure out how to prevent that issue the next time. Part of that is a good post-mortem. A post-mortem usually includes a meeting where the event is discussed openly and freely, and a written report of the findings (a summary of which you could and should send to your customers).

    In the past days I’ve seen this blogpost from Stay Saasy do the rounds on social media and in communities. As I already said on Mastodon, I couldn’t disagree more. I feel the need to expand on just that statement, so I’ll focus on some statements from the blogpost and why I disagree so much.

    Frequency

    The first thing that I noticed in the blogpost is an assumption that shocked me a bit:

    Many companies do weekly incident reviews

    Hold up. Weekly? I realize the statistics vary wildly, and if you indeed have 50 bugs per 1000 lines of code you’ll have a lot of bugs, but I would hope that you have a QA process that weeds out most of them. I am used to having several steps between writing code and it going to production. Those may include:

    • Code reviews by other developers
    • Static analysis tools
    • Automated tests (unit tests and functional tests)
    • Manual tests by QA
    • Acceptance tests by customers

    Let’s go from that worst-case number of 50/1000. I would expect, with the above steps, that the majority of bugs are caught before the code even ends up on production servers. If this is true, why would you have weekly incident reviews? I mean, that’s OK if it is indeed needed, but if you need weekly incident reviews, I’d combine looking at the incident with looking at your overall QA process, because something is wrong then.
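    To put a purely illustrative number on that, assume the worst-case 50 bugs per 1000 lines and five QA steps that each catch, say, 60% of the bugs that reach them. That catch rate is my own assumption, not a measured figure:

```shell
awk 'BEGIN {
  bugs  = 50     # worst-case bugs per 1000 lines of code
  catch = 0.60   # assumed fraction caught at each QA step
  steps = 5      # review, static analysis, automated tests, manual QA, acceptance
  for (i = 0; i < steps; i++)
    bugs *= (1 - catch)
  printf "%.3f bugs per 1000 lines reach production\n", bugs
}'
# prints: 0.512 bugs per 1000 lines reach production
```

    Even from the worst-case starting point, only about half a bug per 1000 lines would slip through, which is why weekly incident reviews strike me as a signal that the process itself needs a look.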

    Is it somebody’s fault?

    In the blogpost, Stay Saasy states that it is always somebody’s fault.

    it must be primarily one person or one team’s fault

    No. Just no. If you look back at the different ways you can catch bugs that I described earlier, you can already see that it is impossible to blame a single person. One or more developers write the code, one or more other developers review the code, one or more people set up and configured the static analysis tools and one or more people interpreted the results of those, the tests were written, the QA team did manual tests where needed, and the customer did acceptance testing. Bugs going into production is, aside from just being something that happens sometimes, a shared responsibility of everyone. It is impossible and unfair to blame a single person or even a single team.

    Accountability

    It feels like Stay Saasy mixes up blameless post-mortems with non-accountability. But these are two different things, with two different motivations. The post-mortem is not about laying blame. It is about figuring out what went wrong and how we can prevent it in the future. It is a group effort of all involved. The accountability part is something that is best handled in a private meeting between the people who were involved in the cause of the issue. To mix these two up would indeed be a mistake, which is why blameful post-mortems are such a bad idea.

    On the flip side, if you really messed up, you might get fired. If we said we’re in a code freeze and you YOLOed a release to try to push out a project to game the performance assessment round and you took out prod for 2 days, you will be blamed and you will be fired.

    While I agree up to a certain point with this statement, I think in this case you might also want to fire the IT manager, CTO or whoever is responsible for the fact that an individual developer could even YOLO a release and push it to production during a code freeze. Again, have a look at the process please.

    But yes, even if it is possible to do this on your own, you should not actually do this. So if you do this, it might warrant repercussions up to and including termination of your contract.

    Fear as an incentive

    There is one main incentive that all employees have – act with high integrity or get fired.

    I can’t even. Really. If fear is your only tactic to get people to behave, you should really have a good look at your hiring policy, because you’re hiring the wrong people.

    In every role where I was (partially) responsible for hiring people, my main focus would be to hire people with the right mindset. Skills were not even the main focus; mindset was. People who are highly motivated to write quality code, who will take the extra effort of double-checking their code, who welcome comments from other developers that will improve the code. People who are always willing to learn new things that will improve their skills. You do not need fear to keep people in check when you hire the right people, because they are already motivated by their own yearning to write good code and to deliver high-quality software.

    So how to post-mortem?

    It might not be a surprise to you, after all of the above, that I am a big supporter of blameless post-mortems. Why? Because of the goal of a post-mortem. The main goal (in my humble opinion) is to find out what went wrong, and to brainstorm about ways to prevent it from happening again. There are four main phases in a post-mortem process:

    • Figure out what went wrong
    • Think of ways to prevent this from happening again
    • Assign follow-up tasks
    • Document the meeting results

    Figure out what went wrong

    The first phase of the meeting is to figure out what went wrong. This phase should be about facts, and facts alone. Figure out which part of your code or infra was the root cause of the incident. Focus not just on that offending part of your software, but also on how it got there. Reproduce the path of the offending bit from the moment it was written to the moment things went wrong.

    In the first phase, it is OK to use names of team members, but only in factual statements. So Stefan started working on story ABC-123 to implement this feature, and wrote that code or Tessa took the story from the Ready For Test column and started running through testcases 1, 2 and 5. Avoid opinions or blame. Everyone should be free to add details.

    Think of ways to prevent this from happening again

    Now that you have your facts straight, you can look at the individual steps the cause took from the keyboard to your production server, and figure out at which steps someone or something could’ve prevented the cause from proceeding to the next step. It can also be worth it to not just look at individual steps, but also the big picture of your process to identify if there are things to be changed in multiple steps to prevent issues.

    Initially, in this phase, it works well to just brainstorm: put the wildest ideas on the table, and then look at which have the most impact and/or take the least effort to implement. Together, you then identify which steps to take to implement the most promising measures to prevent the issue in the future.

    Let everyone speak in this meeting. Involve your junior developers, your product manager, your architect, your QA and whoever else is a stakeholder or in another way involved in this. You might be surprised how creative people can get when it comes to preventing incidents.

    Assign follow-up tasks

    Now that you have a list of tasks to do to prevent future issues, it’s time to assign who will do what. Someone (usually a lead dev or team lead, sometimes a scrum master, manager or CTO) will follow up on whether the tasks are done, to make sure that we don’t just talk about how to fix things, but we actually do.

    Document the meeting results

    Aside from talking about things and preventing future issues, you should also document your findings. Pretty extensively for internal usage, but preferably also in a summarized way for publication. Customers will notice issues, and even if they don’t notice, they will want to be informed. Honest and transparent communication about the things that go wrong will help your customers trust you more: you show that you care about problems, and that you do all you can to solve them and to prevent them in the future. Things will go wrong; that’s inherent in software development. The way you handle the situation when things go wrong is where you can show your quality. In all documentation, try to avoid blaming as well. That isn’t important. What’s important is that you care and put in effort to prevent future issues.

    So what about accountability?

    Blameless post-mortems do not stop you from also holding people accountable for the things they do. If someone messes up, they should be spoken to directly. But it should not happen in a lynch-mob setting, but preferably in a one-on-one setting where two individuals evaluate the situation. And yes, there can be consequences. The most important thing is that the accountability is completely separate from the post-mortem. It is not the focus of a post-mortem to hold someone accountable. That is a completely separate process.

