<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="4.4.1">Jekyll</generator><link href="https://softwaremaxims.com/feed.xml" rel="self" type="application/atom+xml" /><link href="https://softwaremaxims.com/" rel="alternate" type="text/html" /><updated>2026-01-11T10:52:27+00:00</updated><id>https://softwaremaxims.com/feed.xml</id><title type="html">Musings about software</title><subtitle>The Blog of Thomas Depierre, Elixir and DevOps consultant.</subtitle><entry><title type="html">The Hobbyist Maintainer Economic Gravity Well</title><link href="https://softwaremaxims.com/blog/hobbyist-gravity-well" rel="alternate" type="text/html" title="The Hobbyist Maintainer Economic Gravity Well" /><published>2026-01-11T00:00:00+00:00</published><updated>2026-01-11T00:00:00+00:00</updated><id>https://softwaremaxims.com/blog/hobbyist-gravity-well</id><content type="html" xml:base="https://softwaremaxims.com/blog/hobbyist-gravity-well"><![CDATA[<p>In the OpenSource Supply Chain discourse in the past few years, we got many
versions of the same article. The title is usually something like “unpaid
maintainer of library X demands Big Company to shut up or pay them money”. There
are variations on that theme, like GitHub Sponsors launching, or pieces that
explain how the CRA will magically make companies pay maintainers, etc. It is
usually cheered on by the peanut gallery, which applauds making the Evil
Big Tech pay for the abuse they impose through their “exploitation of the
Commons”.<!--more--></p>

<p>Funnily enough, it seems no one sat down to ask whether this could help the
maintainers or not. And whether it was solving a problem at all. Everyone seems
convinced that there is a problem, and that this lack of money transfer is the
root of it. In my previous post, <a href="https://www.softwaremaxims.com/blog/how-foss-won-consequences">How FOSS Won and Why It
Matters</a>, I
pointed out that forcing this kind of Commercial Supply Chain relationship on
FOSS would make it unusable for corporations.</p>

<p>But I did not talk about the other side of the equation. Would it help
maintainers? And why are hobbyist maintainers not trying to get this kind of
relationship going? After all, it works for some people. Let’s try to tackle it
in this blogpost. Spoiler alert: the amounts of money involved are neither large
nor stable enough to make it a viable endeavour.</p>

<h2 id="the-income-level-needed">The Income Level Needed</h2>

<p>What I am going to talk about here is pretty “spherical cow”. We are going to
build a relatively average profile of a software engineer. That means it will
represent no one well. The cost of living in your country, disabilities,
personal situations, average income, and more will vary from maintainer to
maintainer. But this “spherical cow” engineer profile will still be useful for
our model here.</p>

<p>So let’s assume someone in North America or Western Europe, with a family, and
an income of something like 5k USD-or-equivalent per month post-tax. Taking into
consideration the cost of a family, a car, housing, food, and the rest, they
probably have some leftover money, but not that much. There is some wiggle room
if their partner works, but we will still consider “5k per month” as our income
level goal.</p>

<p>This is both low and high. From a “crowdsourced donation” point of
view, this would be a highly successful project, probably in the top 1% of
projects using this kind of model. If not the top 0.1%. And that is for a single
maintainer. So we can put it in the bucket of “possible in theory, but needs a
lot of work, risky, and unrealistic”. So we can forget this model as a
way to get the hobby out of the hobbyist maintainer.</p>

<p>From a “get a stipend or funding by a government or charity” point of view,
things get more complex. A lot of charities, foundations, or government stipends
would not cover that much. There are understandable reasons for it, which we
will explore later. But it could still go in a bucket we will call “could work
in combination with another part-time income stream”. More about that in a bit.</p>

<p>But there are also stipends that do reach this income level. Sounds good, right?
Except they tend to be limited in time, from 6 months to a year. And they tend
to be for specific projects implementing a feature or something comparable. Let’s
keep this one for last, because it is the most complex.</p>

<h2 id="the-time-problem">The Time Problem</h2>

<p>The biggest problem when trying to justify paying a full-time salary for
hobbyist maintainers is that full-time work is, in a lot of the world,
considered to be something around 38 to 40 hours of work per week. The
thing is, most hobbyist-maintained packages do not have enough work to
justify that much time spent on them. As such, even if we could secure some income
streams for this (which is dubious), these would probably be unstable. After
all, it is easy to cut spending if you feel that you are not getting enough
bang for your buck.</p>

<p>What about Part-Time, then? You work on commercial software most of the week,
but you spend your Fridays on FOSS. Seems like a cool gig, I would go for it…
if Part-Time jobs existed in software engineering. I genuinely invite you
to try to get one. I have tried for the past 5 years. Impossible. Part-Time jobs do
not exist in software. The only way to make this kind of arrangement work is through
freelancing. Which brings us to the next problem.</p>

<h2 id="stable-income-is-not-negotiable">Stable Income Is Not Negotiable</h2>

<p>Well yeah, duh. Let’s start with the programs that pay an acceptable income, but
only for 6 to 12 months. If a maintainer wants to go full time on their
maintainership through these programs, it means they need to leave their current
full-time employment. And they know that after the program ends, they will be out
of income.</p>

<p>Except, that is not acceptable. Our maintainer has a mortgage or a rent to pay,
a loan on the family car, and other fixed costs. If the program ends,
they will still need a stable income. So they would need to go on another grant
or find new full-time employment. Here is the thing. And you will not
believe me, but please try.</p>

<p>Finding a new source of income as a software engineer takes 6 months to a year of
near full-time work. When the market is looking good. And lately it does not
look good at all. My friends are regularly searching for 2 years. And you may
not believe me on the near full-time part, but let me tell you. Software hiring
is an effed-up process, which demands massive homework and learning for no
good reason and is grueling on the mind.</p>

<p>Now, let’s go back to the 6-month grant. Well. I leave an FTE job for a 6-month
grant, which I will need to spend finding a new FTE position because I need
income after the 6 months. I am not sure that grant helped me maintain my package.
Even if it is 12 months, 6 months in I need to switch to searching for a
job if I have nothing yet. Because I cannot afford to have no income after that
grant ends.</p>

<p>And freelancing is comparable. If I want to leave my current FTE for
freelancing, I need to already have clients onboard. And I will spend a lot of
my hours doing marketing, client management, and all that stuff, because I
cannot afford to go without income. Which means the time I have left to do my
maintainership is not going to be that large. When I do freelancing, I usually
consider that only half of my workweek is done in software engineering. The rest
is client management and prospecting. Not an efficient way to get out of the
“Hobbyist” situation.</p>

<h2 id="aside-the-paid-feature-implementation-trap">Aside: The Paid Feature Implementation Trap</h2>

<p>A few orgs will pay a fixed grant to implement a specific feature. This
can be a large grant over many years. Sounds good, right? Except there is a
small catch. In order to get that, you need to provide a compelling case of what
the feature will be, what it will achieve and why you have a high chance of
getting it to work.</p>

<p>Seems great, we don’t want to spend our money on frivolous things after all!
Except building that kind of case demands a lot of work and research, usually
over many years of building expertise, testing prototypes and maybe a mock
implementation. Then you need to write the whole grant application, which is also
time-consuming.</p>

<p>And that is time that will not be compensated and that hobbyist
maintainers usually do not have. One of the shared characteristics of hobbyist
maintainers, one that we observe again and again, is that they are heavily
resource-constrained, in particular on time. Asking them for a massive time
investment to maybe get a grant that will barely cover their
living expenses is not a realistic solution. They will not apply. They can’t
afford it.</p>

<h2 id="so-what-we-cannot-pay-you">So What? We Cannot Pay You?</h2>

<p>Yeah, you can’t. Being a Full-Time Software Engineer combined with being a
Hobbyist Maintainer puts you into an economic gravity well. In theory, you can
get out. But it takes a lot of energy, a lot of work, and a lot of luck. You
could design a program to help these maintainers get out of that situation, but
it would be hard with current funding sources. And at any point it could fail and
they would fall back to the bottom of the gravity well.</p>

<p>You would need something like a 3-year program, at Silicon Valley levels of
yearly income, with you as the program manager doing all the work of vetting
the potential recipients, with no demands placed on them. There are a few
people doing comparable things in other charity areas. <a href="https://www.vox.com/future-perfect/470404/mackenzie-scott-amazon-trust-based-philanthropy-explained">MacKenzie Scott comes to
mind</a>.</p>

<p>But she is definitely an exception. And this is why being a hobbyist maintainer
is an economic gravity well. You can get out of it, be it with help or by
yourself. But it takes a lot of effort, energy, and risk. And you probably are
going to end up back at the bottom anyway. Please keep this in mind when you
offer “solutions” to the hobbyist maintainer problem.</p>]]></content><author><name>Thomas Depierre</name></author><summary type="html"><![CDATA[In the OpenSource Supply Chain discourse in the past few years, we got many versions of the same article. The title is usually something like “unpaid maintainer of library X demand Big Company to shut up or pay them money”. There are variations on that theme, like Github Sponsors launching, pieces that explains how the CRA will magically make companies pay maintainers, etc. It is usually cheered on by the peanuts gallery, which applaud making the Evil Big Tech pays for the abuse they impose over their “exploitation of the Commons”.]]></summary></entry><entry><title type="html">How FOSS Won and Why It Matters</title><link href="https://softwaremaxims.com/blog/how-foss-won-consequences" rel="alternate" type="text/html" title="How FOSS Won and Why It Matters" /><published>2025-11-16T00:00:00+00:00</published><updated>2025-11-16T00:00:00+00:00</updated><id>https://softwaremaxims.com/blog/how-foss-won-consequences</id><content type="html" xml:base="https://softwaremaxims.com/blog/how-foss-won-consequences"><![CDATA[<p>I regularly comment on the Internet on my views on most schemes proposed to fix
FOSS problems. They are mostly negative. I think that most of these schemes
cannot achieve any meaningful impact. It seems that most of these
disagreements come from the fact that I seem to work on a different model of how
FOSS works. Over the years, I have tried to share parts of my model. This is part
of this endeavor.<!--more--></p>

<h2 id="enforcing-cost-control">Enforcing Cost Control</h2>

<p>I will not argue here that FOSS has won at this point. If you want to discuss
it, I have other blog posts more appropriate for that. Let’s agree, for the sake
of the argument, that FOSS won. The vast majority of code shipped,
even in commercial products, and running on computing devices is now some flavor
of FOSS. What we talk about less these days is how we got there.</p>

<p>The usual model of FOSS victory I see thrown around is pretty simple.
Corporations don’t like to pay for things, there was a free-as-in-beer offering,
so they simply took it and used it. This is a pretty direct anti-capitalist
argument, and it is one that resonates a lot with people. I do not fundamentally
disagree with it… except that I find it too reductive, to the point of
being harmful.</p>

<p>See, it is true that corporations do not like to pay for things. After all, if
you want to make a profit, you can raise your price or you can reduce your
costs. So, not paying is great, isn’t it? Yes. So I have a question for you. How
do corporations achieve this goal? How do you make your employees stop
buying software? Especially when, for so long, corporations were afraid of FOSS
licenses.</p>

<p>The answer resides in Cost Control. Cost Control is the business term for “making
buying something as hard, painful and time-consuming as possible, so our
employees will not do it”. It manifests itself particularly through Procurement,
through approval from Accounting, and through Legal validation of contracts. Let’s
look at an example.</p>

<h2 id="cost-control-in-action">Cost Control in Action</h2>

<p>Let’s say that I need to get a good Date Picker for our website. I got assigned
a ticket to create a form in which I will need to use a “fancy” Date Picker
experience. My corporation doesn’t like FOSS, so I have to use a commercially
licensed software. What would it look like to solve that ticket?</p>

<p>First, I would have to collect my requirements and go to Procurement. Do they
already have an approved provider for this? They will probably not understand my
request, so this is going to need a few rounds of meetings. After a few
weeks, they will confirm that we do not already have an approved provider for
this. Or my manager got tired of this and decided we would ignore Procurement.
So now, I would have to collect all existing solutions and a rough estimation of
their cost. This could be a bit complex, especially if different products have
different ways to calculate their price. Once equipped with this knowledge,
I would need to validate with Accounting, my manager and probably their manager
that this line in the budget is approved.</p>

<p>Once that is done, it is now time to move back to Procurement. Indeed, we went
with a supplier that is not yet approved. As such, Procurement needs to enter it
in the system with all the details needed so that the payments can be made on
time. That one is tedious, with a lot of paperwork and back and forth with the
supplier, but relatively straightforward.</p>

<p>The next step is to sign the contract. This means reviewing it with Legal. Legal
probably got involved early, because they also had to validate all
kind of stuff. After all, if we take a supplier in, liability becomes a thing. We
have to protect our corporation. You need to check all the relevant
legislation, all the ways a commercial contract can go wrong, and bake it all
in. This is an important step. We don’t want this contract to be badly worded
and end up costing money to the company.</p>

<p>And don’t forget that we need this to be ironclad and complex, because software
is under complex Copyright Laws. This is not something you can easily reuse and
buy, like a few screws using expired patents. No, you <em>need a detailed contract
for every single piece of software</em> because of how Copyright laws around
software work.</p>

<p>Ok, so we got Accounting approval, the supplier is in the Procurement system and
Legal signed off on the contract. We can now start integrating the Date Picker.
It took us 3 to 6 months to get there in most cases. Until next year, when we
will have to renegotiate the contract, sit down with their salesperson to hear
their potential upsell, and of course amend the contract because the law changed
since last year. This is now going to take us a dozen engineering and manager
hours every year.</p>

<h2 id="foss-enters-the-scene">FOSS enters the scene</h2>

<p>As you can see, this is a tedious, painful and even expensive exercise for the
corporation. And as an engineer or a product manager, this is also really
painful. I am now 6 months into a ticket to produce a simple form that needs a
Date Picker. And that is without counting the recurring cost of maintaining the license. Is
there a better solution? One that could allow me to just… ship software
tomorrow so that my feature is live in time for the launch?</p>

<p>Yes, of course there is. It is called Free and Open Source Software. You see,
FOSS is an amazing hack for Cost Control and Copyright Licensing law problems.
It allows us to decentralize and eliminate the whole process. Legal pre-approved
a handful of well-known licenses. If the code uses one of these licenses, we do not need
to go through the whole copyright contract, because approval is baked into the
code itself.</p>

<p>It also costs nothing and no one restricts our access, so we do not have to talk to
Procurement. And Accounting does not have to be involved either. In terms of
liability, there is no need to decide who is responsible for what and write it down in
a contract; the license says clearly that we are the only ones responsible for
everything. Not the best in terms of Risk Management, but it makes Legal’s work
easier.</p>

<p>At the cost of nothing, we got back 6 months of velocity on this
ticket. This is what FOSS provides to corporations. The cost of the software
itself would usually not be a huge budget line. But the costs of procuring that
software are pretty high. Multiply this by the thousands if not hundreds of
thousands of different dependencies in the code stack you use, and the
organizational cost of managing the licenses spirals out of control.</p>

<p>FOSS solves that problem far more than it solves “we don’t want to pay”. Most
corporations would be surprised by how little it would cost them to pay
for the FOSS software they use. But due to copyright, liability law, and the
procurement rules of an actual Supply Chain, they cannot survive building a
Software Supply Chain. The tools are not adapted. FOSS was the hack to bypass
having to create these tools.</p>

<h2 id="it-works-both-ways">It Works Both Ways</h2>

<p>Of note, this works both ways. See, if you are a FOSS maintainer, by the miracle
of your software bypassing copyright laws, liability laws and the procurement
process, anyone can be a FOSS maintainer with very little overhead. You don’t
need a legal entity, you do not need lawyers, you do not need liability
insurance, you do not need sales people, you do not need to make a profit.</p>

<p>All you need is to want to write some code and be a pretty good expert in a
domain. The result is that we get high quality software libraries from FOSS,
because experts now can produce the code, instead of being blocked by the
massive framework of the corporate supply chain. Once free of the trappings of
the Supply Chain, you allow anyone with the expertise to produce.</p>

<p>And this is what made FOSS win. This hack around the whole Supply Chain
trappings allowed the users to go faster and massively reduce their costs in the
management of dependencies. And for creators, it allowed them to write the
code that they could not write or make exist in the limits of the Supply Chain
framework.</p>

<h2 id="you-cannot-go-back">You Cannot Go Back</h2>

<p>Here is the problem though. You cannot go back. If the solution you offer
recreates any of the Supply Chain framework problems, then your solution is Dead
On Arrival. Even if you manage to impose it on a small part of the market or
for some time, the system will revert to the mean. FOSS qualities will not
disappear and you will be back to where we are today.</p>

<p>Making licenses suddenly commercial for big entities? You are forcing people to
go back through the Procurement process. They will shift over time back
to a FOSS solution, even if it is less good. The pain of the Procurement process
is too high. And it slows down velocity too much.</p>

<p>Want to make corporations introduce a big process for managing their FOSS,
like SBOMs or mandatory upgrades or management of liability? Good job, you
made using FOSS even harder than using a commercial license. So the response will
be to hide the dependencies from everyone. Engineers don’t like to
lie, I promise you. But between that and not shipping, they will pick shipping.</p>

<p>But don’t forget the other side of the coin either. The hack that is FOSS not
only frees the user side from all these complex and expensive processes. It also
means the creator side doesn’t have to deal with these processes either.</p>

<p>It significantly lowered the bar to production. It is how we got the whole of
society to run on software. If you make it harder for hobbyist maintainers, you
are going to crash society. I don’t think that is the goal either. Another day,
we will talk about other parts of the model and how they can help us actually fix
the problem. Instead of just telling people that their solution will not work.</p>

<h2 id="if-you-are-only-here-for-the-ranting-ignore-this-section">If you are only here for the ranting, ignore this section</h2>

<p>Want me to make this part of a talk that I could give to policy-makers? I want
to do it, because I think it is important if we want to make FOSS work for
society. FOSDEM has hosted a room with people from the different EU entities for
the past few years. And I know they would love to host that talk.</p>

<p>The problem? I cannot afford to go to Brussels and spend a night or two there
for FOSDEM. That is the problem of hobbyist developers. We have to pay for our
lobbying from our hobby budget. So I will probably not do it. But if you are an
organisation that thinks this kind of lobbying (it is lobbying) is necessary and
wants to foot the bill, feel free to contact me. I will disclose your
involvement, I have a legal entity to invoice you if you want, and I can help make
this work. But to be honest, I don’t think you exist. I would love to be proven
wrong.</p>]]></content><author><name>Thomas Depierre</name></author><summary type="html"><![CDATA[I regularly comment on the Internet on my views on most schemes proposed to fix FOSS problems. They are mostly negative. I think that most of these schemes cannot achieve any meaningful impact. It seems that most of these disagreements come from the fact that I seem to work on different models of how FOSS work. Over the years, I have tried to share parts of my model. This is part of this endeavor.]]></summary></entry><entry><title type="html">You Are All On The Hobbyists Maintainers’ Turf Now</title><link href="https://softwaremaxims.com/blog/open-source-hobbyists-turf" rel="alternate" type="text/html" title="You Are All On The Hobbyists Maintainers’ Turf Now" /><published>2024-04-01T00:00:00+00:00</published><updated>2024-04-01T00:00:00+00:00</updated><id>https://softwaremaxims.com/blog/open-source-hobbyists-turf</id><content type="html" xml:base="https://softwaremaxims.com/blog/open-source-hobbyists-turf"><![CDATA[<p>For quite some time, I have felt some unease at the public discourse around
OpenSource. In the past few years, we have seen a growing discourse around the
sustainability and security of the large body of OpenSource software.<!--more-->
The Software Supply Chain discourse, moves by some startups to leave the
OpenSource movement with their code, moves by entities like the Sovereign
Tech Fund to support the maintenance of critical infrastructure, etc., etc.</p>

<p>But throughout this discourse, I have had a feeling of unease. It
felt like my own experience as a maintainer and as a developer using opensource
dependencies was quite different from what everyone was talking about. And not
only me, but also all the network of maintainers and developers I regularly
interact with.</p>

<p>The solutions offered seemed never to meet the problems we had. They were all
profoundly impractical, if not totally useless. So I was wondering. Am I out of
touch, or are the movement’s elders wrong?</p>

<h2 id="it-is-the-elders-who-are-wrong">It is the Elders Who Are Wrong</h2>

<p>But I needed more solid proof. I had a bunch of shreds of evidence and clues
supporting my hunch, but nothing really directly supporting it. Until I stumbled
upon <a href="https://www.synopsys.com/software-integrity/resources/analyst-reports/open-source-security-risk-analysis.html">Synopsys’ 2024 Open Source Security and Risk Analysis
Report</a>.
In the middle of this report, there is an amazing statistic.</p>

<blockquote>
  <p>77% of all code in the total codebase originated from open-source</p>
</blockquote>

<p>Well, that does feel like open-source won, and the vast majority of software out
there in every app is open-source. Commercial software has already
lost if it is less than 25% of the total amount of software out there. “Closing
back down” some codebases is probably not going to endanger OpenSource any time
soon.</p>

<p>While this provided me with solid support that nearly all of the software out there
is made of opensource dependencies, with a bit of glue code and a layer of commercial
code on top, it was still not fully answering my hunch.</p>

<p>But then, the <a href="https://tidelift.com/open-source-maintainer-survey-2023">2023 Tidelift state of the open source maintainer
report</a> gave me the
evidence I lacked.</p>

<ul>
  <li>60% of maintainers describe themselves as unpaid hobbyists.</li>
  <li>Only 13% describe themselves as professional maintainers earning most or all of
their income from maintaining projects.</li>
  <li>23% of maintainers describe themselves as semi-professionals, earning some of
their income from maintaining projects.</li>
</ul>

<p>If we combine these two sets of data <sup id="fnref:1"><a href="#fn:1" class="footnote" rel="footnote" role="doc-noteref">1</a></sup> we obtain a fascinating result<sup id="fnref:2"><a href="#fn:2" class="footnote" rel="footnote" role="doc-noteref">2</a></sup>.</p>

<ul>
  <li>46% of all code out there, in every app, is maintained by hobbyists</li>
  <li>13.8% is maintained by “I sometimes get a bit of pocket money for my code”</li>
  <li>40% of all code out there is maintained by an industry-paid person</li>
</ul>
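<p>As a back-of-the-envelope check on the first of these figures (a rough
combination only, for the reasons given in the first footnote), the hobbyist
share falls straight out of the two headline survey numbers. The variable names
are mine; the two inputs are the percentages quoted above:</p>

```python
# Rough combination of the two survey figures. Caveat: the two samples are not
# comparable, so this is an order-of-magnitude estimate, not proper statistics.
foss_share = 0.77            # Synopsys: share of all shipped code that is open source
hobbyist_maintainers = 0.60  # Tidelift: share of maintainers who are unpaid hobbyists

hobbyist_code = foss_share * hobbyist_maintainers
print(f"{hobbyist_code:.0%}")  # → 46%
```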

<p>So, nearly 60% of all code being actively shipped in an app or
product in the wild is hobbyist-maintained open-source. And that probably
undercounts all the build systems and compilers that support this.</p>

<h2 id="how-long-is-your-weekend">How long is your weekend?</h2>

<p>Now, here is the thing. We do not know how much time these “weekend maintainers”
spend on their OpenSource codebase. But I can give you an idea. Probably around
1h to 2h a month.</p>

<p>There are also hundreds of thousands of them, spread across ecosystems,
dependency trees that go wider than you think, and more.</p>

<p>It means that anything you offer must fit in 1h per month. That is it. And if
it does not, if it needs more involvement than that, we, as maintainers, will
not do it. At all. And then what will you do? Throw away the 60% of the code
the world depends on <em>in every software product</em>?</p>

<p>No. You will discover that you made nothing better.</p>

<h2 id="welcome-to-my-world">Welcome To My World</h2>

<p>If your plans for open-source sustainability or security do not align first and
foremost with this population <em>it is not going to achieve anything</em>. Forget
everything you think you know about security, paying for software, maintenance,
tools, etc.</p>

<p>This is a community that evolved in parallel to yours. And that evolved to deal
with its own constraints, which you know nothing about.</p>

<p>And no. If you participated in the Free Software movement of the 90s or early
00s, if you are a Libre/Free Software Activist, if you believe in Digital Rights
or anything like that. You do not know anything about it. This is not the same
world that you were part of. The complexity is off the charts; we are hidden
layers and layers under the scaffolding. And we are used everywhere.</p>

<p>So sit down. Learn. Shut up. Please stop trying to bring solutions, thinking you
get it. You do not. If you did, you would not offer what you are offering. You would
understand what I say here. You would be among the people who just read what
you post and shake their heads. Before going back trying to keep everyone’s
machine still running after Apple botched another release of their filesystem.
Or of Autoconf.</p>

<p>You are on our turf now. Hobbyists Maintainers’ turf. My turf. You all depend
on what we do and how we do it. And you
need to internalize that you are not the natives here. So observe. Ask
questions. And more importantly, please listen to us. If we tell you that you
are spewing nonsense, if we do not react to what you offer, if we seem not to
respect you, it is not because we are pricks. Not because we believe in
shunning outsiders.</p>

<p>If we do not respect you, it is because you are showing your ass.</p>

<h2 id="we-need-you-here">We Need You Here</h2>

<p>You are the ones that depend on us. You do not know the rules. You do not know
the systems. You do not understand their sharp edges. You need us. You need the
60%. Everyone in this world now depends, one way or another, on us.</p>

<p>And we know that. And we are terrified of this. Because we know how broken it
is. How fragile. That I could wake up tomorrow and discover that the whole world
is on fire because of my code. We don’t like it, trust us.</p>

<p>We are not shirking responsibilities. If we did <em>we would not keep the world
running</em>. Respect our work, please. But yes. We need help. We want help. We want
you here to help us.</p>

<p>What we ask you, while we are growing the part that we maintain for all of you,
because yes, that percentage is growing every year, is to start by understanding
us. We want your help, but it needs to be helpful. Otherwise, it is just more
stuff we need to handle in 1h per month, on top of keeping the world running.</p>

<p>And the easiest way to not lose our precious time, the time we have so little
of, is to ignore you all. Because we have a world to keep running. And we only
have 1 hour. Please don’t waste this time. Who knows what the impact could be?</p>

<hr />

<p>PS: This has been cooking in my head for the past three weeks. At least. This is
not particularly linked to the XZ situation. And yet.</p>

<hr />

<div class="footnotes" role="doc-endnotes">
  <ol>
    <li id="fn:1">
      <p>Do not do this at home; they are not the same thing and cannot be combined
this way if you want to do proper work. However, it is not too bad for this kind of thought
leadership piece, to get a rough idea of the whole field. <a href="#fnref:1" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
    <li id="fn:2">
      <p>Beware, I am really conservative here. There is a strong possibility that
the Tidelift report’s sample of respondents is biased toward paid open-source
maintainers, as that is their business. The same is true for Synopsys; their estimates are probably
quite conservative. <a href="#fnref:2" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
  </ol>
</div>]]></content><author><name>Thomas Depierre</name></author><summary type="html"><![CDATA[For quite some time, I have felt some unease at the public discourse around OpenSource. In the past few years, we have seen a growing discourse around the sustainability and security of the large body of OpenSource software.]]></summary></entry><entry><title type="html">Where did the Rust go?</title><link href="https://softwaremaxims.com/blog/memory-safety-end-history" rel="alternate" type="text/html" title="Where did the Rust go?" /><published>2023-08-23T00:00:00+00:00</published><updated>2023-08-23T00:00:00+00:00</updated><id>https://softwaremaxims.com/blog/memory-safety-end-history</id><content type="html" xml:base="https://softwaremaxims.com/blog/memory-safety-end-history"><![CDATA[<p>There is a term that is on a lot of lips lately. “Memory Safety”. The theme of
the early 10s for software security is “Move to memory-safe languages”. You hear
and see it everywhere <!--more-->in <a href="https://www.youtube.com/watch?v=Gh79wcGJdTg">C++
events</a>; <a href="https://www.whitehouse.gov/wp-content/uploads/2023/03/National-Cybersecurity-Strategy-2023.pdf">CyberSecurity
professionals employed by some governments put it in every
document</a>;
<a href="https://arxiv.org/abs/2306.08127">Foundations and academic researchers put it in tons of
papers</a>; <a href="https://security.googleblog.com/2022/12/memory-safe-languages-in-android-13.html">the Android team boasts of how
focusing on Memory Safety drastically enhances the security of
everyone</a>.</p>

<h2 id="victory-lap">Victory Lap</h2>

<p>And you know what? About. Damn. Fucking. Time. It only took what? Three decades?
For the mainstream to realize what has been said all along by the people who
keep mopping up the bodies our field leaves around. Memory Safety, the thing
everyone told us could not be the silver bullet for safety we were claiming it
was, actually works. It is indeed a pretty good bullet.</p>

<p>And no, it does not solve everything, but it happens to solve many things. And
as someone who has been fighting that fight for the last decade<sup id="fnref:1"><a href="#fn:1" class="footnote" rel="footnote" role="doc-noteref">1</a></sup>, I could
not be happier about it. And smug. I need to get a proper
<a href="https://enet4.github.io/rust-tropes/rust-evangelism-strike-force/">RESF</a> pin to
wear, nicely enameled. Except… There is a small problem. One that keeps
nagging in the back of my head. What happened to make us win suddenly? How did
we go from decades of pointing out that the Emperor Has No Clothes, but no one
was listening, to suddenly the “mode du jour” is to move to memory-safe language
ASAP? And what does this tell us about ways to make significant changes in
CyberSecurity, in order to improve the field?</p>

<h2 id="the-giant-corroded-crab-in-the-room">The Giant Corroded Crab In The Room</h2>

<p>Oh yes. Rust. That is the change. Suddenly, everyone loves to talk about moving
to Memory Safe languages because something changed in 2018. We got a memory-safe
language that can be used by your average developer. And not only was it Memory
Safe. It is better than every. Single. Other. Language. Targeting. The. System.
Level. It is more expressive. The compiler has better UX. It has a build and
packaging tool that works. It can use 3rd party packages. It can compose code.
Its code is performant. It has a helpful type system. It can handle union types.
It can <em>pattern match</em>. It can do polymorphism without all the pain of OOP based
on classes. It comes with the ability to write tests. It has a community writing
state-of-the-art implementations of parsers, command line interface frameworks,
and all the other things that we need these days.</p>
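<p>To make two items from that list concrete, here is a tiny illustrative Rust
sketch of union types (enums) and pattern matching. The shapes and names are
hypothetical, not from any real library:</p>

```rust
// A union type (a Rust "enum"): a value is exactly one of these variants.
enum Shape {
    Circle { radius: f64 },
    Rect { w: f64, h: f64 },
}

// Pattern matching: the compiler verifies every variant is handled, so
// adding a new Shape variant later turns forgotten cases into compile errors.
fn area(s: &Shape) -> f64 {
    match s {
        Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
        Shape::Rect { w, h } => w * h,
    }
}

fn main() {
    let shapes = [Shape::Rect { w: 2.0, h: 3.0 }, Shape::Circle { radius: 1.0 }];
    let total: f64 = shapes.iter().map(area).sum();
    println!("total area: {total:.2}");
}
```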

<p>Someone finally decided to fund the engineering resources needed to take a
research language from the late 80s and <em>industrialize it</em>. You know, to make it
into something the industry can use. Something that has a good User Experience
(UX). An actually helpful tool. It took the team working on it ten years, at a
cost that I, conservatively, (over-)estimate at 10 Million USD total, invested
over that period. Of course, I am talking of Rust. What else?</p>

<p>Rust’s whole idea was to take all the research ideas around these domains from
the 70s, 80s, and 90s. And try to give them a fair chance to compete against the
mainstream. It was an experiment. One informed by a deep interest in the work
done in Programming Language and Computer Science since then. And damn, it is
paying back well, isn’t it? It. Worked.</p>

<h2 id="where-is-the-d-in-rd">Where Is The D In R&amp;D?</h2>

<p>Except for one tiny thing. Everyone seems to forget that if this team and
community had not done the work, none of these new converts to Memory Safety
would be able to talk about it. They would not even know it was possible.
The reason we are suddenly moving to Memory Safety is not because we did not
know how to do it before. As mentioned above, nearly all the tools we needed
were deeply researched and developed in the 80s. Rust really does not bring a
whole lot of new research (initially) in the PL domain. But until this team, no
one had been funded to move these ideas from the research domain to the
production one. What was needed was not research. It was engineering. And we do
not invest in that.</p>

<p>As a field, at least for what concerns our tools of the trade, we stopped
investing in engineering a few decades ago. For a field mainly paid out of budgets
labeled as R&amp;D, we put really little of it into our tools. Oh, don’t get me
wrong. There is plenty of Research. Even today, you can find plenty of
programs, faculties, endowments, and other grants, for <em>research</em> and prototypes
in PL. There is an active academic and prototype scene that is relatively
well-funded.</p>

<p>But are they turning these prototypes and concepts into production tools? That
would be the domain of Development. You know, the second part in R&amp;D. Except we
stopped doing it. And when a few teams and individuals manage to get some of it
done, through the sheer sacrifice of their personal time or rare financing, we
get massive impacts on the industry.</p>

<h2 id="game-changers">Game Changers</h2>

<p>Bundler and lock files. eBPF. Typescript. VScode and the Language Server
Protocol. Rust. Swift. The LLVM (which enables a lot of the developments we are
seeing today). All of these come from small teams, which managed to keep trucking
at Development for a decade, in order to bring these ideas to production. And when
they finally release their tool, it changes the whole discourse.</p>

<p>And yet. Do you know what happened a few months after Rust was finally released
in a stable form and started to get real traction? The whole Rust team was laid
off. Whoopsie. Well, yes, you see. We do not fund this work. Of course, people
use a memory-safe language now. What else would they do, really? Use some kind
of unsafe language? We are better than that!</p>

<p>I mean, except for the fact that for three decades, if not more, all the same
people kept telling us it was not possible. That we were barking up the wrong
tree. Refused to fund this work. Laughed at these ideas. Dismissed them as
“impractical”. But of course, now that a few of the renegades did the job, now
Oceania was at war with Eurasia: therefore, Oceania had always been at war with
Eurasia. Memory Safety was always there. Nothing to see here about how we were all
suddenly enabled to ask everyone to rewrite everything in a memory-safe language.</p>

<h2 id="retrospective-anyone">Retrospective Anyone?</h2>

<p>And no talk of the fact we could have had it three decades ago. No
retrospective on how a whole industry slept on well-researched, well-prototyped
ideas that are more productive and safer. Ideas that allow us to write more code, faster,
with better quality. We will ham-fist pattern matching into every language now.
There is no need to think about the fact that this has been a well-known tool
since the 80s. No need to look at what in our industry made us ignore all of
these tools for decades.</p>

<p>Let’s not question the experts that never knew about all these possibilities
before we made it so big they could not ignore it anymore. Let’s not change the
policymakers that kept us in this state. Let’s not question the Infosec
industry. And more importantly. Let’s never ask what else we missed. What other
massive systemic problems could we solve by investing small sums for a decade
into Development teams for these tools? It is all Research and Startups. This is
known. The End of Programming Language Engineering was proclaimed in the 90s.</p>

<p>Or. Maybe. We could sit down. Stop advocating to “move to Memory Safe Language
Yesterday Already”. Stop publishing tons of policy papers and ways to enhance
the “Safety of the Software Supply Chain”. Stop inventing new impractical models
to send a few peanuts to FOSS Maintainers through subscriptions.</p>

<h2 id="i-am-not-bitter-i-promise">I Am Not Bitter, I Promise</h2>

<p>And maybe we could do a proper retrospective of how we got into this rut. As a
field and an industry, we lost our capability to take a prototype from academia
and turn it into a product. And maybe, after we are better informed about our
current system and why it is a problem. And only after that. Perhaps we could
then consider systemic changes, maybe at a government level, maybe at the
industry level, that would allow us not to spend another 30 years ignoring the
<em>other</em> problems that need new paradigm-changing tools to fix.</p>

<p>Like, I don’t know. Why do engineers keep using “curl | sh” despite everyone
knowing it is highly dangerous? Or why we are still fighting worms using a
file-sharing protocol that was deprecated a couple of decades ago? You know.
Small things. No real impact on the world. It is not like this kind of stuff
infected hundreds of highly safety-critical systems all over our society.
Definitely not.</p>

<p>So if any of you with power wants to fund this kind of work? If you really
want to have an impact and not just make the headlines? Then feel free to
contact me. I can probably fill a whole workshop for a week in a hotel to do
that retrospective and offer some paths forward. And you would be surprised by
the result. It would probably also be quite cheap all around.</p>

<p>But we need someone with actual financing and power to do it. Because we are
drowning and desperate down there. And we are not bitter. Definitely not. I
promise. The punching bag next to my desk is definitely not because we are fed
up with this. Definitely.</p>

<div class="footnotes" role="doc-endnotes">
  <ol>
    <li id="fn:1">
<p>Oh yes, I am part of the <em>young</em> generation of people doing this. The
oldest has already left the field. That is how long it took us to finally get
something through. <a href="#fnref:1" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
  </ol>
</div>]]></content><author><name>Thomas Depierre</name></author><summary type="html"><![CDATA[There is a term that is on a lot of lips lately. “Memory Safety”. The theme of the early 10s for software security is “Move to memory-safe languages”. You hear and see it everywhere]]></summary></entry><entry><title type="html">The Cloud Is Not Optional</title><link href="https://softwaremaxims.com/blog/cloud-not-optional" rel="alternate" type="text/html" title="The Cloud Is Not Optional" /><published>2023-07-14T00:00:00+00:00</published><updated>2023-07-14T00:00:00+00:00</updated><id>https://softwaremaxims.com/blog/cloud-not-optional</id><content type="html" xml:base="https://softwaremaxims.com/blog/cloud-not-optional"><![CDATA[<p>When you hear that one of the vendors responsible for keeping government
organizations safe had a security breach, you can easily decide that this is
unacceptable. When you hear that it is hard to know who is affected and how
much, you may start to feel a bit panicked. This is bad; it would be far better
if it never happened.<!--more--> These are dangerous breaches. So you go find
out who made the mistake of allowing a system designed-to-be-safe to be broken
into. And you fire them or blast them for being idiots. And for making the world
worse for all of us.</p>

<p>After this, you can breathe a bit. The bad thing got fixed. The responsible
party has been sacked or punished. We moved away from the affected vendors. We added new
systems to ensure we know who does the wrong thing and who is affected next
time. Everything got fixed. Nothing to see here; we can go back to sleep.</p>

<p>Or can we? Did we fix anything? Could we change these systems and punish these
people? Is it even possible to know who is affected? Did anyone do something
wrong? Was the previous system the best we could do, and did we make ourselves
more vulnerable by overreacting?</p>

<h2 id="the-story-of-a-security-breach">The story of a security breach</h2>

<p>A few days ago, by the time I wrote this, Microsoft had to deal with a breach of
some of the encryption systems they use for Office 365, particularly for the US
government. The details are unknown, but it seems the attackers used a
combination of extracting a Root key and some bugs and misconfiguration to
generate access tokens.</p>

<p>This has made a few people mad on the Internet, particularly around the question
of Cloud Security and trusting Cloud providers. There have been a lot of calls
in the press for organizations to not mindlessly move to the cloud, and to ask
themselves if centralizing all this security in a few vendors is not more
dangerous than keeping it on-premise, securely separated from others. After all,
it would be a less juicy target. You would need to breach multiple networks and
systems to get the same amount of information.</p>

<p>This is what I call “pink fluffy unicorn” solutions. Solutions and ideas that
make total sense in theory. I see where these people come from; I understand
what they are trying to do, and I see why they think it makes sense and could
help. And it could! If it was possible.</p>

<p>Just like I would be delighted if I could have a pink fluffy unicorn jumping on
rainbows, but that is not possible. I cannot have a pink fluffy unicorn for all
kinds of reasons. And most of these calls for “choosing the right balance
between centralized in the cloud and separated” make a lot of sense. If they were
possible.</p>

<h2 id="why-are-saas-and-cloud-use-exploding-at-the-infrastructure-level">Why are SaaS and Cloud use exploding at the infrastructure level?</h2>

<p>Running an Internet-facing Digital Infrastructure service in 2023 is a widespread
need. Most organizations require some, from the local group of plumbers to the US
Federal government. Even if you limit yourself to services that matter for safety and
national-level security, you still get thousands of organizations across the US
and the world that have real pressing needs for this kind of service. And these
organizations will transfer and handle, through these services, a lot of
secrets.</p>

<p>Running these services well, with high uptime, well configured, and safely with
good security -both active and passive- is a tough job that necessitates a
certain set of skills and knowledge. It also needs an organization that can
hire, reward, support, and manage the teams and individuals doing this work. The
operators need knowledge and skills, and their management structure up to the
top has to be designed to support them and understand their needs. Otherwise,
they will be left to work with the wrong tools, budget, and constraints.</p>

<p>So what do we see when we look for individuals with this knowledge? That there
are not a lot of them. Every analysis of the tech job market talks of massive
deficits, with hundreds of thousands, if not millions, of unfilled jobs. With
an enormous shortage of talent and education pipelines that cannot train what
is needed. And this holds for every part of the field that would work on the
Digital Infrastructure, be it software engineers, sysadmins, operators,
Infosec specialists, CyberSecurity specialists, SREs, etc.</p>

<p>In 2023, if everyone that needed an Internet-facing Digital Infrastructure
service tried to run it themselves, nearly none of them could get the necessary
people to do it. Those people do not exist. The supply is too small. Nice try,
it made sense, but you are chasing a pink fluffy unicorn.</p>

<p>This is why we have all moved to the Cloud and SaaS vendors. Because it means
the limited supply of people with the skills to operate these services can be
shared between all the organizations that need them. These are outsourcing
shops. It is not outsourcing to reduce cost. It is outsourcing in order to share
rare skills, that we could not grow but that everybody needs.</p>

<h2 id="the-cloud-is-the-least-bad-option">The Cloud is the least bad option</h2>

<p>This is what all those yelling at Microsoft are missing. Did Microsoft do a
perfect job? No, of course not. Could they do better? Sure, they could. Maybe.
In a different universe. Would anyone running their office server do better, get
better forensics and attribution, or perhaps not have made the same
misconfiguration?</p>

<p>No. Come on. We all know this. All of our experience with organizations that run
their own IT is atrocious. Hell, there are MMO guilds with better IT than the vast
majority of the organizations under these attacks. Good forensics? No-one has
them.</p>

<p>So yes. Let’s do a proper system analysis of what happened here. What made the
people that operate these systems think they were doing the right thing? What
assumptions have the designers made about the world that is not true anymore?
Can we devise better taxonomies for our problems than “misconfiguration”? Do
these reports and “cause analysis” really help us get safer?</p>

<p>These are all questions worth asking. And if you want answers to these, there is
a whole community of people working on this in software in the shadows, I can
put you in touch if you care about getting results. But blasting a Cloud Vendor
for not doing “enough” and organizations for using it instead of running their
own? Send me a living and breathing pink fluffy unicorn first, and maybe I will
take you seriously. I mean it. Until then, please shut up, stay on the side, and
let the people trying to keep us all safe alone. You are taking up space.</p>]]></content><author><name>Thomas Depierre</name></author><summary type="html"><![CDATA[When you hear that one of the vendors responsible for keeping government organizations safe had a security breach, you can easily decide that this is unacceptable. When you hear that it is hard to know who is affected and how much, you may start to feel a bit panicked. This is bad; it would be far better if it never happened.]]></summary></entry><entry><title type="html">Remove Constraints To Get Results</title><link href="https://softwaremaxims.com/blog/remove-constraints" rel="alternate" type="text/html" title="Remove Constraints To Get Results" /><published>2023-06-06T00:00:00+00:00</published><updated>2023-06-06T00:00:00+00:00</updated><id>https://softwaremaxims.com/blog/remove-constraints</id><content type="html" xml:base="https://softwaremaxims.com/blog/remove-constraints"><![CDATA[<p>We look at the world and make decisions for our actions through models.
Depending on the context, some models will be more fruitful to apply than
others. There is a model that I have found tremendously helpful, in particular,
when discussing “open source supply chain” but also more regularly as an SRE. I
dub this model Goals/Capability/Constraints. It evaluates actions far differently
than most models applied to these domains. The main recommendation it nearly
always offers is to “remove constraints”.
<!--more--></p>

<p>While this is sometimes hard to do, it has the advantage of being particularly
empathetic to the needs of the people that do the work. It also has the
inconvenience of pointing out that most of our great ideas will not help. These
characteristics mean this model tends to be neglected, as it is far easier to
feel right but be wrong than to accept we were wrong.</p>

<h2 id="all-models-are-wrong">All Models Are Wrong</h2>

<p>If you listen to the thought leadership around Safety, infosec, or even
management, you tend to get offered two action levers. Changing the Incentives,
making some actions more or less rewarded. And adding regulations or controls,
which translates to punishment for people and organizations that do The Bad Thing. If
you are lucky, “showing what good looks like” will be offered as a third option.
It is also known as “aligning on objectives”.</p>

<p>Equipped with your trio of tools, you can now modify complex social systems to
make “bad” outcomes happen far less. It is a particularly useful trifecta of
tools if you think that the humans in your systems are making bad decisions.
After all, if they make bad decisions, all you need is to reward the good,
punish the bad, and ensure everyone knows what is good and bad. Easy peasy, we
can wrap that up and be home before tea time.</p>

<p>In this model, decision-making is a spherical cow. A human - having to make a
decision - floats freely in the space of all possible choices they can make. And
they will pick the most rewarded path, avoiding the punished one while trying to
do the “right thing”, which we explained to them.</p>

<p>Well, despite doing this all the time, people keep making bad decisions. People
seem to be quite the problem. They keep shipping insecure software. Using all
the dependencies. Not vetting all their software dependencies. The FOSS
maintainers keep refusing to sign all their commits cryptographically. They keep
not doing crypto right. They keep refusing to use memory-safe languages. It
seems that despite us trying to be nice, explaining it all, and punishing them
if they do the wrong thing … they keep stubbornly doing the Wrong Thing. Maybe
they are just impossible to fix. Perhaps it is time to bring in the regulators.
Let’s double down and up the ante. Or maybe. Just maybe. Maybe the model is just not
right.</p>

<p>If a model fails to deliver, it may be because it is not adapted to the problem.
That is not to say it is never suitable, but it does not apply well right now. I
think the “Incentives/Punishment/Goals” model is definitely in that situation.
Despite all our attempts to apply it, we keep getting the same system and results
as before it. That is usually a telltale sign of using the wrong model.</p>

<h2 id="some-models-are-useful">Some Models Are Useful</h2>

<p>The “Goals/Capabilities/Constraints” model is slightly different. It is still a model
that analyses how people make decisions. It starts with where the decision maker
is today, in the present. Then we look at what Goals we want to achieve. Goals
represent where we want to be in the future. Once we know where we are now and
where we want to be in the future, we move to how to get there.</p>

<p>Capabilities are the tools, knowledge, skillset, and resources we have access
to. These define the possible paths toward our goals. Their combinations,
through time, give us all the different branching trees of possible routes from
here and now to there in the future. These paths start now, and every choice we
will make branches off until, at some point, we reach the Goals we want. That makes
a lot of branches, so let’s see how we choose by pruning some of them.</p>

<p>Constraints are all the things that limit our choices. Constraints are the realm
of ethics, regulations, laws, punishments, cultural norms, time constraints,
resource limitations, burnout, bankruptcy, or budgets. Anything that could make
us choose not to take a path we are <em>capable</em> of taking but cannot accept to
take. Constraints are applied to the tree of paths generated by Capabilities to
reach Goals and prune these paths. The end result offers a far smaller set of
routes.</p>

<p>Where the previous model considered that you have to push and prod the decision
maker, this model believes that the person’s choices are defined by what they
have available. These choices are then refined through the limitations they have
to deal with. The Goals/Capabilities/Constraints model is built on frustration.</p>
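<p>The mechanics of that model can be sketched in a few lines of code. This is a
toy illustration only; the actions, the goal, and the constraints below are all
hypothetical, chosen just to show how Capabilities generate a tree of paths and
Constraints prune it down to what is actually acceptable:</p>

```rust
// Toy sketch of the Goals/Capabilities/Constraints model. All names here
// are hypothetical: Capabilities generate candidate paths, Constraints
// prune them, and what survives is the space of acceptable routes.

type Path = Vec<&'static str>;

/// Enumerate every path of up to 3 actions and keep the viable ones.
fn viable_paths() -> Vec<Path> {
    let capabilities = ["write code", "review", "ship"]; // what we can do
    let goal = |p: &Path| p.last() == Some(&"ship");     // where we want to be

    // Constraints: paths we are capable of taking but cannot accept.
    let constraints: Vec<Box<dyn Fn(&Path) -> bool>> = vec![
        Box::new(|p: &Path| p.len() <= 3),          // time budget
        Box::new(|p: &Path| p.contains(&"review")), // policy: must review
    ];

    let mut viable = Vec::new();
    let mut frontier: Vec<Path> = vec![vec![]];
    for _ in 0..3 {
        let mut next = Vec::new();
        for p in &frontier {
            for action in capabilities {
                let mut q = p.clone();
                q.push(action);
                // Keep only paths that reach the Goal without breaking
                // any Constraint.
                if goal(&q) && constraints.iter().all(|c| c(&q)) {
                    viable.push(q.clone());
                }
                next.push(q);
            }
        }
        frontier = next;
    }
    viable
}

fn main() {
    let viable = viable_paths();
    // Tighten the constraints enough (say, "at most 1 step") and this list
    // becomes empty: the over-constrained state the model warns about,
    // where workers must break some rule to get anything done.
    println!("{} viable paths, shortest: {:?}", viable.len(), viable[0]);
}
```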

<h2 id="when-there-are-no-way-out">When There Is No Way Out</h2>

<p>But the frustration gets worse. Because the set of Constraints could be so large
that after pruning by Constraints, there are no paths left toward the Goals with
our Capabilities. The Constraints are too numerous and strict, while our
Capabilities are too limited to reach our Goal. Well, that is frustrating.</p>

<p>Things get worse. See, as far as research on Safety tells us, this situation,
with no path forward due to over-constraints, is pretty universally the default
state for workers. Everyday regular work in these situations means having no
good path forward. And these situations are ubiquitous. So what do you do when
you end up in this situation? Well, it is simple, right? You break the rules!
You usually do not control all the goals (after all, if you are employed, you do
not set them), and your capabilities are generally relatively static.</p>

<p>Constraints seem to be the only thing that can change when everything else is
fixed. That is what we mean by a trade-off. If we want workers to reach these
goals with the tools and resources they have, they will have to not respect some
of the constraints fully. Vetting all 3rd party dependencies? Yeah no. Signing
my commits? No one cares; not a significant constraint. Working code? That one I
cannot ignore; otherwise, we cannot reach the goal. Having CI? Non-essential. A
reproducible build? Let’s first try to be able to build it at all, and then
maybe one day, sure.</p>

<p>This is the reality of working at the “sharp edge”. This is where every action
is a balancing act, trying to stay at the edge of what is acceptable, breaking
the rules just in the ways that will be enough to achieve goals without getting
too much into the instability that awaits you if you go too far into trading
off constraints for results.</p>

<h2 id="changing-things-for-the-better">Changing Things For The Better</h2>

<p>So, if we use this model to explain our systems work today, how can we use it to
try to change how they will work tomorrow? We will take for granted that right
now, the combination of Capabilities and Constraints gives us no path
toward our Goals, or at best a narrow one. If we take this for granted, then we have
four different levers. We could convince people to want another goal. However,
there is a low chance of impact on the outcome because the constraints will
probably limit the paths as much. We could provide new capabilities, but that is
usually complicated or too expensive to consider. We could add more Constraints,
like adding regulations, but if the problem is already over-constrained, adding
more will have no effect other than forcing the workers to break more of the
constraints just to get things done.</p>

<p>Or we could remove some constraints. This removal may not open a lot of new
possible paths, but at the very least, it would open space for different
trade-offs. After the constraints are removed or loosened, we can trade off the
newly opened space to find a new path forward that may break less of the
Constraints we had.</p>

<p>For example, if we have heavy resource constraints, like a couple of hours of
work per week, then any project that needs sustained attention and memory for
dozens of hours is impossible. Dozens of hours would take us a dozen weeks to
reach. By that point, it is doubtful that we would have maintained sustained
attention for that long, with many interruptions and unrelated work in
between. We are pretty far from Flow-State. As such, the worker will never
consider this option. Suppose this is the only possibility to get rid of a
legacy, unsafe behavior in our software. In that case, we will simply mark the
behavior as deprecated but never do the work to get rid of it. Not because we do
not want to get rid of it or do not prioritize it. But simply because we <em>cannot
do it</em>. We are too constrained. So we traded off the security constraints.<sup id="fnref:1"><a href="#fn:1" class="footnote" rel="footnote" role="doc-noteref">1</a></sup></p>

<p>We could attack the constraint from two angles. First, we could find a way to
work multiple hours per day on this project. This would reduce the
implementation duration to a few days, allowing serious attention and memory. We
could also introduce new tools and techniques, allowing us to reduce the period
of engagement needed. Intermediate states. Tools and languages that would
support doing the work faster. Anything that can reduce the constraints imposed
on the worker would change the trade-offs. And at some point, if we reduce the
Constraints enough, the worker can eliminate the unsafe behavior.</p>

<h2 id="if-you-are-not-reducing-constraints-stop-and-reevaluate">If You Are Not Reducing Constraints, Stop And Reevaluate</h2>

<p>So what have we learned? That a model of Goals/Capabilities/Constraints can
explain how workers make decisions that may seem “wrong” from the outside. In
this case, the model tells us that the situation had so many constraints that
the worker had to trade off some of the goals and constraints to achieve partial
success. If we want workers in these situations to achieve “good” outcomes, we
have four levers.</p>

<ol>
  <li>Change goals, which are usually hard to achieve and necessitate a lot of convincing</li>
  <li>Provide new Capabilities, which is usually complicated as it means training people</li>
  <li>Add new Constraints, which will be traded off, as there was already no
successful path to a “good” outcome. Reducing the options does not help that
much, does it?</li>
  <li>Remove Constraints, allowing a path to “good” outcomes to become possible
and, as such, used.</li>
</ol>

<p>We can classify all actions to influence the outcome “for the
better” under these four categories. I will leave to the reader the exercise of
mapping their organization’s action plan to reduce “bad” outcomes into these
categories. If you do it, I would be interested to know what the distribution of
actions into categories looks like for you. I can give you a bet, though. I bet
the fourth category is nearly empty for all of my readers.</p>

<p>We seldom offer actions that remove Constraints. And yet, this is the most
impactful, if not the only effective, category of actions we described
today, based on this Goals/Capabilities/Constraints model. So here is my plea.
If you imagine an action to make the system better, please try to see which of
the four categories it corresponds to. And if it is not a fourth-category action,
consider not doing it. Why not try to spend all your energy and time doing
the most effective and impactful activities? Remove Constraints instead.</p>

<hr />

<div class="footnotes" role="doc-endnotes">
  <ol>
    <li id="fn:1">
<p>Any similarity to actual events, particularly to specific Java libraries,
is allegedly fortuitous. <a href="#fnref:1" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
  </ol>
</div>]]></content><author><name>Thomas Depierre</name></author><summary type="html"><![CDATA[We look at the world and make decisions for our actions through models. Depending on the context, some models will be more fruitful to apply than others. There is a model that I have found tremendously helpful, in particular, when discussing “open source supply chain” but also more regularly as an SRE. I dub this model Goals/Capability/Constraints. It evaluates action far differently than most models applied to these domains. The main recommendation it nearly always offers is to “remove constraints”.]]></summary></entry><entry><title type="html">What Security Tokens For 2FA Say About FOSS Consumers</title><link href="https://softwaremaxims.com/blog/2fa-community-participation" rel="alternate" type="text/html" title="What Security Tokens For 2FA Say About FOSS Consumers" /><published>2023-05-27T00:00:00+00:00</published><updated>2023-05-27T00:00:00+00:00</updated><id>https://softwaremaxims.com/blog/2fa-community-participation</id><content type="html" xml:base="https://softwaremaxims.com/blog/2fa-community-participation"><![CDATA[<p>Recently, PyPI announced that they would force everyone that maintains a
project or an organization on the platform <a href="https://blog.pypi.org/posts/2023-05-25-securing-pypi-with-2fa/">to enable
2FA</a>. This is
one more step in the direction of strongly protecting the package providers and
their users. I am not opposed to it. But it made me think of the discussions we
have around FOSS about reciprocity and unfair burden<sup id="fnref:1"><a href="#fn:1" class="footnote" rel="footnote" role="doc-noteref">1</a></sup>. And about double
standards. And how it is hard to make corporations understand the upside of
Open Source, and how diffuse it is. Let’s talk about security tokens, 2FA, and
how corporations do not understand their place in the FOSS ecosystem.
<!--more--></p>

<h2 id="about-security-tokens">About Security Tokens</h2>

<p><img src="../assets/img/20230528_yubikey.jpg" alt="My yubikey, getting old and soon retired. Beaten up really." /></p>

<p>This is my yubikey. It is… not new. I bought it in 2016, 7 years ago, nearly
to this day. I bought it to secure… I do not remember, but I think I wanted to
secure my password manager at the time. And probably my email. I bought it
because I was losing my mind using OTP codes. It meant I would not have to think
about my passwords leaking, and I could secure my password manager on my phone (at
the time, only LastPass supported this on Android). More importantly, I
would not have to find my phone, unlock it, find a code, and copy it <em>every time
I logged into something</em>.</p>

<p>This may seem like nothing to you, but to someone like me with ADHD, doing that a
couple of times a day would make me lose my mind. To the point that I would avoid
2FA. This key changed my life. I would just have to plug it into the device I
used and, whenever needed, just press the button. No need to think. Nothing to
do. It just worked.</p>

<p>In the coming week, I will replace it with a newer version. One that supports FIDO2,
USB-C, and so much more. But I do not bring this up to show off my new
yubikeys. They are not here yet anyway. No, the reason I talk of Yubikeys and
security tokens is all due to PyPI… And the corporations I have worked for.
See, everywhere in their communication about the mandatory 2FA, PyPI pushes
strongly for you to use a security token as your second factor<sup id="fnref:2"><a href="#fn:2" class="footnote" rel="footnote" role="doc-noteref">2</a></sup>. I strongly
agree with them on that: it is by far the most secure and practical choice.</p>

<h2 id="the-growing-cost-of-participating-in-foss">The Growing Cost of Participating in FOSS</h2>

<p>This is not limited to PyPI. Github is quickly mandating it, and I would not be
surprised if we see this demand propagate. I expect NPM to follow soon. On
the “supply chain” side of the discussion, this makes a lot of sense. A
credential-stuffing attack on the maintainer of a well-used package would allow
uploading a nefarious version, and enable all kinds of other attacks. And yes, there are
other solutions and defenses, but honestly, we all agree we should 2FA all the
things right? So that is a pretty uncontroversial decision. Right?</p>

<p>Well actually. I am not going to yell too much about this being bad, or undue
burden. After all, I bought one when I was a student with no income. And we have
seen some campaigns to equip maintainers, so there has been some real investment
by concerned parties in reducing the onboarding cost. Still, it is not available
to everyone, it has a real cost, and it is not what people expected when they put
their code online.</p>

<p>It also means that if you live in a less wealthy situation than the tech
industry in the Western side of the world, you now probably face a relatively
steep cost if you want to be a “good citizen” of FOSS. But it is ok. You
can use your phone with an authenticator App.<sup id="fnref:3"><a href="#fn:3" class="footnote" rel="footnote" role="doc-noteref">3</a></sup></p>
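<p>For illustration, what an authenticator app computes is a Time-based One-Time Password (TOTP, RFC 6238): an HMAC over a time-step counter, truncated to a short code. Here is a minimal sketch in Python, using only the standard library and the well-known RFC 4226 test secret; this is illustrative, not tied to any particular provider:</p>

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HOTP (RFC 4226): HMAC-SHA1 of a big-endian counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # low nibble of the last byte picks the 4-byte window
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, step: int = 30, digits: int = 6) -> str:
    """TOTP (RFC 6238): HOTP over the current 30-second time step."""
    return hotp(secret, int(time.time()) // step, digits)

# RFC 4226 test vector: counter 1 with this secret yields "287082"
print(hotp(b"12345678901234567890", 1))
```

<p>The server stores the same shared secret and accepts codes from adjacent time steps to tolerate clock drift. A security token replaces this shared-secret scheme with per-site key pairs and an origin check, which is why it resists phishing and credential stuffing in a way TOTP cannot.</p>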

<h2 id="double-standard">Double Standard</h2>

<p>On the other hand, a lot of the organizations that are part of this push for
“supply chain” security are also in the tech industry, regularly offering tools
supposed to be used by software engineers. And for a lot of them, the support
for security tokens is … spotty. At best.<sup id="fnref:4"><a href="#fn:4" class="footnote" rel="footnote" role="doc-noteref">4</a></sup> As mentioned above, I have used
a yubikey everywhere I can for 7 years now. And I can tell you, there are some
providers I have a personal beef with, due to their poor support for them.</p>

<p>It is especially jarring to see this coming from tech corporations. They are
right that maintainers are a juicy target for all kinds of attackers. But
that is also true of their software engineers. These are <em>known</em> to be
“risky” targets and as such we also have seen a push for 2FA toward employees in
these corporate entities. My employer has been battening down the hatches on
this aspect recently. Developers have broad access, push code that could be
“infected”, and hold credentials for both their own machines and production machines.
You would expect that if PyPI mandates it, the internal security teams of
corporations would do the same.</p>

<p>And yet, bar some exceptions, I know of nearly no corporation that distributes
security tokens to their employees. Even less at the SMB level. And yet, these
are not that expensive. At least compared to some other programs and licenses
that corporations happily pay for, also at a per-employee cost. They have high
benefits, being both more secure and easier to use. They seem like a no-brainer.
Even less often have I seen them distributed as “swag”<sup id="fnref:5"><a href="#fn:5" class="footnote" rel="footnote" role="doc-noteref">5</a></sup> for personal use, despite
seeing far stranger<sup id="fnref:6"><a href="#fn:6" class="footnote" rel="footnote" role="doc-noteref">6</a></sup> gifts from employers to employees.</p>

<h2 id="security-at-home-is-security-everywhere">Security at Home is Security Everywhere</h2>

<p>And we have a precedent here. It is not rare today that the password manager you
use for work is also offered as a “benefit” to use for your family at home. This
is a win-win-win situation for everyone. The password manager provider gets new
users, with stickiness, and in general market penetration. The employer ensures
that your home network and account are secure, as they are another vector to
penetrate their internal network.<sup id="fnref:7"><a href="#fn:7" class="footnote" rel="footnote" role="doc-noteref">7</a></sup> And the employee gets a top-of-the-line
free password manager. Why not do that for security tokens? After all, they walk
hand in hand with a password manager and an SSO service. Give a key as swag. For
Christmas. And provide a way to buy them for the family, at a discount.</p>

<p>But there is also another aspect to this. If corporations are so scared of
supply chain attacks through credential stuffing, they should create an environment
that helps fight it. Make it easy to use a security token on your own services.
So that everyone has incentives to do it. But also, maintainers are nearly all
employed by a corporate entity, in a lot of cases in IT. Especially the often
talked about “burned out, working on it in their free time” maintainer, who usually
has a day job in one of these corporations. By distributing security tokens that
they can use at home, you would automatically get that many maintainers
equipped.</p>

<p>Of course, it may not be the maintainers you depend on. But if this was a
practice as widespread as providing a “family” version of password managers as
a benefit, then we would cover a large part of the maintainers. Your employee may
maintain a dependency used by a corporation on the other side of the planet, but
there is a high chance you depend on a package maintained by someone on their
side too. I feel that this porosity between commercial
and FOSS engineers is not well understood.</p>

<p>At least, what we can see is a strong double standard at play here. The reason
FOSS maintainers are getting forced into 2FA is not because they are more at
risk. Or because they cannot lobby. It is because they <em>can accept this change</em>,
while corporations cannot. It is not that corporations are forcing this on the
FOSS world. It is that corporations have far worse security practices than most of
FOSS, and are far less aware of the impact of their security failures on the
wider world.</p>

<h2 id="we-are-not-that-different">We Are Not That Different</h2>

<p>I regularly talk to managers and engineers in these companies who imagine that
OSS developers are not at all <em>like them</em> but live a far different life. That
they are not employed in their own teams. That they are “out there” but
definitely not around here. I hear discussions that FOSS security is a “consumer
of OSS” problem, not a maintainer problem. The security token and
2FA push shows that there is no difference. The consumers of OSS are the
same people as the ones that build it. One side is organized, on the corporate
side of their life. The other may not be. But if we want a safer FOSS, maybe we
should start by making the tech industry at large practice what it asks of us.</p>

<p>These two worlds are highly porous. If you want things to get better
on the supply side, simply provide your employees with these practices and
tools, and watch how well they translate. Engineering for FOSS is no different
than for commercial software. We have different incentives, resources, and goals.
But deep down? The practices are the same. And the upside of doing better is as
hard to realize and invest in, on both sides. After all, all this 2FA
support and the tooling around it are Open Source.</p>

<hr />

<div class="footnotes" role="doc-endnotes">
  <ol>
    <li id="fn:1">
      <p>I do not think PyPI did this here; they seem to have made the decision
they believe is right. But it is something I feel in the
discussions around the Supply Chain. <a href="#fnref:1" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
    <li id="fn:2">
      <p>Realistically, yubikeys are the main answer here. Yes, there are others but please. <a href="#fnref:2" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
    <li id="fn:3">
      <p>For now. But seeing how fast this moves, I doubt we will end the decade
with these still being accepted. <a href="#fnref:3" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
    <li id="fn:4">
      <p>And I am being polite here. Extremely polite. I am trying to swear less in
this blog, as some of my readers are Americans these days. <a href="#fnref:4" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
    <li id="fn:5">
      <p>Think t-shirt, USB keys, backpack, cap, and other “branded” stuff. <a href="#fnref:5" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
    <li id="fn:6">
      <p>And expensive. And useless. <a href="#fnref:6" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
    <li id="fn:7">
      <p>As part of the LastPass attack, the machine of a DevOps engineer was
breached through their personal Plex server. <a href="#fnref:7" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
  </ol>
</div>]]></content><author><name>Thomas Depierre</name></author><summary type="html"><![CDATA[Recently, PyPI announced that they would force everyone that maintains a project or an organization on the platform will have to enable 2FA. This is one more step in the direction of strongly protecting the package providers and their users. I am not opposed to it. But it made me think of the discussions we have around FOSS about reciprocity and unfair burden1. And about double standards. And how it is hard to make corporations understand the upside of Open Source, and how diffuse it is. Let’s talk about security tokens, 2FA, and how corporations do not understand their place in the FOSS ecosystem. I do not think PyPI do this here, they seem to have taken the decision &#8617;]]></summary></entry><entry><title type="html">The Economics of Developer Tooling</title><link href="https://softwaremaxims.com/blog/economics-developer-tools" rel="alternate" type="text/html" title="The Economics of Developer Tooling" /><published>2023-05-25T00:00:00+00:00</published><updated>2023-05-25T00:00:00+00:00</updated><id>https://softwaremaxims.com/blog/economics-developer-tools</id><content type="html" xml:base="https://softwaremaxims.com/blog/economics-developer-tools"><![CDATA[<p>It would be a major boon to software velocity, maintenance burden and safety to
bring more attention to developer tooling, in particular bringing to
everyone’s toolkit the techniques and technologies developed since the 80s but
that were never mainstreamed. It is at least what I advocated for in <a href="/blog/process-engineering-software">We Need
More Process Engineering in Software</a>. Over
the past few years, I have explained to a lot of people the current state of
developer tooling development and how the economics of them work. This post aims
to summarize all of this in one place.
<!--more--></p>

<p>I am going to try to keep this as neutral as possible. This is not about how
broken or right these economics are, nor whether I think they are good. This is a
tentative sketch of the way I see the economics of developer tooling playing out
currently. You may have a different understanding of the ecosystem.
You may think I missed essential parts of it. You may think that I am wrong in
considering unimportant some actors that I will dismiss. Feel free to contact me
through the contacts in my footer to talk to me about it.</p>

<h2 id="why-should-we-even-care">Why should we even care?</h2>

<p>What does investing in developer tools bring us? That is a great question. I
recommend going back to <a href="/blog/process-engineering-software">We Need More Process Engineering in
Software</a> and the links I post there for a
deeper dive, but here I will try to summarize it in economic terms. Better
Developer Experience (DX) has an impact in multiple ways.</p>

<p>The first is that it makes it harder to write bugs, and easier to write working
software. Said otherwise, it leads to a rise in <em>Quality</em> of the software
produced, for the same cost. The second aspect, maybe the least obvious, is that
it allows for new <em>Capabilities</em>. Things that were hard or impossible to build
without this specific tooling are now possible with relative ease. The third
aspect is <em>Cost Reduction</em>. Better DX means that we can produce the software
faster, meaning cheaper, but also that it is easier to <em>Maintain</em>, reducing the
total cost of ownership. In addition, all of these combined mean that we also
get software that is <em>Safer</em> and <em>Easier to Understand</em>. This is because the
software written better matches the mental model the engineers have of it,
making it easier to spot when the software is not doing what is expected, and
harder to use it “the wrong way”.</p>

<p>When we combine all of these impacts with the vast amount of software out there
that we have to maintain and produce, it is easy to understand how impactful
better DX can be. A few percent increase in DX would translate nearly instantly
into <em>billions</em> of euros. <sup id="fnref:1"><a href="#fn:1" class="footnote" rel="footnote" role="doc-noteref">1</a></sup> And that is without counting the increase in
security, especially for heavily resource-constrained projects like the FOSS
that is our digital infrastructure. Any small increment that makes the
maintenance load easier on these has rippling effects, as they end up in
all the software running out in the world.</p>

<p>There is another aspect, on top of making the existing economy better, which is
that by reducing the cost and difficulty of writing software products, we also
lower the barrier to entry for a project. The more the cost of entry is lowered,
the more projects become possible, because broader groups of people can now
act on their ideas.<sup id="fnref:2"><a href="#fn:2" class="footnote" rel="footnote" role="doc-noteref">2</a></sup></p>

<h2 id="but-what-the-hell-is-developer-tooling-for-you">But what the hell is Developer Tooling for you?</h2>

<p>I consider Developer Tooling and DX everything that is in the direct toolkit of
developers. It is a long list, so instead I will give some illustrative
examples. Programming Languages themselves (their expressiveness, semantics, and
syntax as much as meta-programming abilities), but also compilers. Test
frameworks. Formatters. Package Managers (OS-specific or language-specific)
and other dependency handling. Terminal emulators. Shells. IDEs. Code Editors. Type
checkers. Type Hints. AutoComplete, both in editors and shells. REPLs.
Interpreters. Profilers. Debuggers. Linters. Scripting tools. Build Systems.
Development Environment managers. GUI Frameworks. Web Frameworks. Documentation
tooling. Fuzzers.</p>

<p>What does a better DX mean for them? Well, it can translate in multiple ways,
but I would define it as <em>“the ability to reduce both the length, the number,
and the difficulty of understanding the feedback loop between the developer
writing code and this code being declared good or bad”</em>. That means that the
speed of said tool is part of DX. So is the ease with which its messages can
be understood. How well are these messages and tools integrated into the developer's
process of writing the code? How much work needs to be done to integrate these
tools into this process? How much help do these tools provide at the right
moment? All these aspects of ergonomics, and more, participate in DX.</p>

<h2 id="how-do-we-get-this-out-in-the-world">How do we get this out in the world</h2>

<p>My point in <a href="/blog/process-engineering-software">We Need More Process Engineering in
Software</a> was that getting this improvement
in DX to the actual toolkit that developers use is a long process.</p>

<p>The first part of this process is to find a problem in the current toolkit, then
develop an idea on how to fix it and prototype it. The economics of finding a
problem are relatively well known. You only have to do enough ethnographic and
user studies. We have had multiple decades of work on that domain, and that
means that even if not well funded, the problems have been relatively well
defined at this point. Producing that much software, with that many engineers,
over multiple decades, you will manage to build a few solid theories and
experiments to find out what needs to be solved. Or at least, you will have
experimented with enough things that a few problems will emerge at some point.
This has indeed happened.</p>

<p>The second part is then to prototype the solutions. The economics are relatively
straightforward here too. This is the domain of academics, theoretical or
applied. For multiple reasons, multiple actors have funded that work. It helps
that prototypes are relatively easy to achieve compared to a full-fledged
product. The academic sector and the enthusiast communities have been happily
churning out ideas, prototypes, and refinement over them for the past few
decades. Multiple philanthropic organizations, governments, and industry
organizations have funded this domain over the years. They have generated a bevy
of ideas and progress, tested them, prototyped them, and validated a few of them
as having interesting futures. Some of them have been integrated, with more or
less success and skill, into some niche communities.</p>

<p>Indeed, this is where the economics change. Once ideas have been generated,
validated with prototypes, and filtered through experiments, the
Process Engineering section of the pipeline starts. These ideas need to be analyzed and
transformed into the shape expected by developers. They will need to be heavily
engineered and adapted to different contexts, oftentimes not considered in the
prototype phase. This may sometimes need new inventions or a total
re-architecture. Sometimes, the old tools need to be completely thrown away and
new ones need to be created that fit the new techniques better. A typical
example of this is Rust. Rust did not invent a lot of its techniques. They are
coming directly from research from the decades before, up to the early 00s. On
the other hand, Rust has necessitated a lot of engineering in the guts of the
compiler and diverse tooling, to the point that they probably ended up inventing
quite a lot of techniques to <em>adapt</em> the engineering toolkit to the needs of the
new technique.</p>

<p>All this engineering takes time and money and skill. All of these elements
impact the economics of Developer Tooling. The cost is in general relatively low
<sup id="fnref:3"><a href="#fn:3" class="footnote" rel="footnote" role="doc-noteref">3</a></sup> compared to the general cost of producing software, and in particular low
compared to the upside. This is good, but ends up not factoring that much into
the economics here. Time is more problematic, and constrains a lot of the
domain. The cost, while low, needs to be paid for multiple years before seeing
the impact of the investment. That is bad risk management. A low-cost bet, but
with years before return on investment, means multiple years in which this money
may be lost. This puts a heavy limit on the economics of developer tooling.</p>

<p>Magnifying the problem, the impact of this progress, while massive in the
aggregate, tends to be relatively small in relative proportion. A few percent in
cost reduction translates into a few thousand in real-world currency per
software project. This has the interesting effect of making the upside near
invisible to smaller actors. The impact is only visible in aggregate. As such,
the biggest organizations, with hundreds or thousands of software projects, are
the only ones that can meaningfully justify the size of the bet and the time to
get a return on investment. We could imagine a reality in which a lot of smaller
actors band together to participate in funding a project, pro rata
to the upside expected after a few years, discounted. But in practice, that does
not happen. The industry has not found a way for these smaller organizations to
participate in cooperative efforts around this tooling.</p>

<p>It also means that selling these tools is hard. The community has mostly settled
on a FOSS model for the vast majority of developer tooling, in part because the
upside is limited in the small, and only big due to the magnitude of the
software industry. This is a typical example of a Commons, where everyone
benefits in ways that they do not realize, but each only needs to contribute an even
smaller amount to maintain the shared resource. It means that these tools need
to be produced only for their cost reduction, without the hope of <em>selling
them</em>. This severely limits the number of financial instruments that could be
applied to fund the work on these.</p>

<h2 id="the-emptiness-of-a-professional-field">The emptiness of a professional field</h2>

<p>The combination of these produces an interesting field. A field with a lot of
ideas, prototypes, and techniques that have been developed. With ample proof of
the possible impact and a limited, if risky, need for investment. But a field
that has nearly no investment, outside of some outsized organization that can
realize the upside. There are a few commercial projects that succeed. JetBrains
of course comes to mind. And some highly driven people have volunteered work to
bridge the gap between prototype and production. They tend to have had an outsized
impact<sup id="fnref:4"><a href="#fn:4" class="footnote" rel="footnote" role="doc-noteref">4</a></sup>.</p>

<p>On the other hand, you have larger organizations that invested heavily in their
developer tooling. Google, Facebook, LinkedIn, Microsoft themselves, and
Bloomberg, are examples brought up regularly. The world outside these giant
organizations has benefited from this multiple times. But there is also a
reality that the context and needs of these organizations may simply not be the
same as those of the rest of the world. The maintainer of a fundamental FOSS
library, which is a 30-year-old codebase, with a few hours per week at most to
spend on it, has fairly different needs from a member of the Start Menu team
at Microsoft.</p>

<p>There are also some enthusiasts that tend to write tools that they <em>think</em>
should exist in the world. These are usually adjacent to real user needs in
terms of DX, but written more to fix the problems imagined by the author than
properly researched and designed tools. This is how most of the programming
languages of the past few decades came to be. PHP, Perl, Python, Ruby, Elixir,
Go, C, C++… These are all great programming languages, written first to fit
the wants of their author, to fill the platonic form of a perfect language the
author imagined. Only after they found success have they slowly been learning
about user needs. Under growing pressure to better fit the needs of their users,
they tend over time to adapt some of their principles toward better DX. This
process is usually limited, as the fundamental shape of the tool limits its
ability to implement the solution needed without extensive and potentially
compatibility-breaking changes.<sup id="fnref:5"><a href="#fn:5" class="footnote" rel="footnote" role="doc-noteref">5</a></sup></p>

<p>The impact of this landscape is that even if some organizations had the money to
invest in better DX for the FOSS maintainer and could accept waiting for a few
years to see a return on investment… There is a knowledge and skill gap. For
the past 30 years, the only people that developed that skill were in extreme
niches, already flooded with demand and work in their chosen domain, or working
in a context that does not match the needs of the general developer population.
The industry of developer tooling for small organizations, small teams, and FOSS
does not exist. This means that the techniques needed to conduct user research,
define needs and requirements, port an academic prototype and ideas to a usable
tool, and then iterate on it with user feedback need to be rediscovered every
time one of these projects manages to happen. There is a really limited amount
of institutional knowledge about these.</p>

<p>And this brings the last aspect of the economics of developer tooling. When time
and money are already limited, the skills available are also limited,
compounding the two other elements. Acquiring these skills is of course
possible, but it needs time and as such money, to ensure the stability of this
career choice for the individuals that do the work. This raises the ticket of
entry for anyone wanting to invest in the field. The upside
makes it a compelling business case. But it will take years, if not a decade, to
see it realized. The bets are still low, but less so than expected. And the time
for them to pay off, or possibly fail, is now far longer. This makes it harder
to launch projects, making the career even riskier and less attractive, reducing
the amount of skill… which lengthens the time needed for a project to get
launched, as the skill set needs to be re-discovered and learned, and also
raises the cost of hiring the rare few with the knowledge.</p>

<h2 id="the-cynical-conclusion">The Cynical Conclusion</h2>

<p>And such are the economics of Developer Tooling. A field with a lot of ideas
researched and prototyped, a low cost to start, a long time to return on investment,
a high collective upside that is hard to commercialize, and a lack of institutional
knowledge and engineering resources, where the skill set is usually only adapted to
niches far from the vast majority of the needs. Does it still make economic
sense to invest in this domain? Yes, the upside would be massive. But the
financial instruments adapted to this particular set of conditions seem to be
lacking. Neither commercial, non-profit, philanthropic, nor governmental
organizations have found a sustainable way to contribute after the prototype
phase. The only exception seems to be a few massive organizations, which can
justify the investment on their internal cost reduction alone.</p>

<hr />

<div class="footnotes" role="doc-endnotes">
  <ol>
    <li id="fn:1">
      <p>Estimates state that EU-based companies already invested around <a href="https://de.statista.com/statistik/daten/studie/1178441/umfrage/umfrage-zum-einsatzvon-open-source-software-in-deutschen-unternehmen-nach-branchen/">€1
billion in OSS in 2018, impacting the European economy to the tune of €65-95
billion.</a> <a href="#fnref:1" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
    <li id="fn:2">
      <p>This can be seen as the reverse of the <a href="https://en.wikipedia.org/wiki/Boots_theory">Sam Vimes “Boots” theory of
socioeconomic unfairness</a>. GNU
Terry Pratchett. <a href="#fnref:2" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
    <li id="fn:3">
      <p>My estimate for the total cost of the Rust team at Mozilla until 2018 is
under 10 million USD over 10 years. I expect that my estimate is over-padded
here, in typical pessimistic engineer fashion. The real cost was probably
far under that, but it is a good high estimate. <a href="#fnref:3" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
    <li id="fn:4">
      <p>Some obvious names come to mind, like Yehuda Katz, the Rust community, the
diverse people maintaining package managers and repositories, etc. <a href="#fnref:4" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
    <li id="fn:5">
      <p>Examples of these, and how painful they can be, are numerous: the Python 2 to
Python 3 changes to string syntax and semantics, Ruby's expansion to
Fibers and types, PHP acquiring an AST in version 7, Golang acquiring
modules and generics. <a href="#fnref:5" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
  </ol>
</div>]]></content><author><name>Thomas Depierre</name></author><summary type="html"><![CDATA[It would be a major boon to software velocity, maintenance burden and safety to bring more attention to developer tooling, in particular bringing to everyone’s toolkit the techniques and technologies developed since the 80s but that was never mainstreamed. It is at least what I advocated for in We Need More Process Engineering in Software. Over the past few years, I have explained to a lot of people the current state of developer tooling development and how the economics of them work. This post aims to summarize all of this in one place.]]></summary></entry><entry><title type="html">We Need More Process Engineering in Software</title><link href="https://softwaremaxims.com/blog/process-engineering-software" rel="alternate" type="text/html" title="We Need More Process Engineering in Software" /><published>2023-04-25T00:00:00+00:00</published><updated>2023-04-25T00:00:00+00:00</updated><id>https://softwaremaxims.com/blog/process-engineering-software</id><content type="html" xml:base="https://softwaremaxims.com/blog/process-engineering-software"><![CDATA[<p>When you peruse the depth of software engineering as a discipline, you find a
lot of techniques and tools lying around in corners. Pattern matching, tighter
type-checking compilers, property-based testing, snappy IDEs, debuggers, dynamic
tracing, Result types, effect handlers, capabilities, model checkers, fuzzers,
etc. And yet, they are not in use in the industry. I posit that this is because
software engineering dedicated nearly all of its energy to the product-invention
part of engineering, while neglecting the Process Engineering part of
the discipline.
<!--more--></p>

<p>If you know what Process Engineering is, feel free to skip the first part, as I
will start by trying to explain through personal experience what it is. To try
to give you an idea of where it sits in engineering practice and what it is that
Process Engineers do. You can directly skip to the second part, in which I try
to highlight why Process Engineering matters for software. In particular, I will
try to show how Process Engineering is a lot of work. And I hope I can conclude
on something useful, but mostly: if you see the same things I do after
reading this, please talk about it. Blog about it.</p>

<h2 id="once-upon-a-time">Once Upon A Time</h2>

<p>Before moving to software, I worked in electronics engineering (EE), in a
factory producing all kinds of coils and power electronics parts. We would
create new power delivery parts, we would create new coils, we would adapt old
ones, all kinds of R&amp;D stuff, right? Except R&amp;D was only half of all the
engineers working in this factory. What the hell was the other half doing?</p>

<p>As it turns out, Process Engineering. They would figure out how to make the
product that R&amp;D devised: how to set up the factory, which tools were
needed, which procedures would be used, which operations would be handled
concurrently with others. Regularly they would even have to design a new tool
from scratch to make assembly possible, or more efficient. They would then get
this tool built, sometimes going through multiple design passes on it. Sometimes
they would still be modifying a production line and a process years after the
start of production, as the experience of actually using the process informed
changes.</p>

<p>In the same way, when I worked in the car manufacturing industry and we
talked of “starting a factory” or starting production of a new model, the
problem was not only designing the model itself but also how to produce it.
Even more, we considered that a new factory would need months if not years
to “ramp up” and adapt its process before being considered good enough at
producing said model.</p>

<p>What should you learn from this? That Process Engineering, the discipline of
adapting a product or tool to how people and processes will use it in practice,
is a skill set of its own. That it takes as much work as inventing the product,
if not more. And that it has a tremendous impact on the result.</p>

<h2 id="process-engineering-in-software">Process Engineering in Software</h2>

<p>So why do I talk about this at all? I am a Software Engineer these days,
right? Well, because some recent examples have helped me put into perspective
the impact Process Engineering could have on software. But let’s start by
setting the stage.</p>

<p>Type systems are not new to software. Nor are pattern matching, trait-based
polymorphism, affine type systems, type inference, Result and Option types, Unit
Tests, Package Management, Versioning, Property-Based Testing, Compiler Errors
and checks, Linters, Fuzzers, … I could keep going all day. And we mostly
agree that these are good inventions, with a positive impact on our software.
And yet … most of the tools, programming language ecosystems and products we
release out there simply do not use them. At all. What gives?</p>

<p>Well, the way I look at it, I take the few recent tools that do some of
these things and managed to get adoption, and compare them to the ones that
failed to get adoption. What were the innovations? What did they do
differently? What did Rust do that Ada did not? What did Typescript do that
Flow did not? It happens that they have one thing in common.</p>

<p>They spent an inordinate amount of time, for software tools targeting
developers, engineering the interface their users have. They put a ton of
effort, if not most of their effort, into actually being usable in practice.
Rust prides itself on not having invented most of its underlying principles.
What Rust considers itself to bring to the table is an environment in which you
can actually use them. You have a working build system. The standard library
makes sense. The error messages point to real problems and offer solutions. And
they fit on your screen. Tests and asserts use colors and
formatting. You can actually use the File API on Windows and it works.</p>
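
<p>To make this concrete, here is a small sketch of my own (not taken from the
Rust documentation; the <code>parse_port</code> and <code>describe</code> names
are invented for the illustration) of the kind of ergonomics this buys you: a
<code>Result</code> forces the error path to exist, and an exhaustive
<code>match</code> means the compiler, not production, tells you about the case
you forgot.</p>

```rust
use std::num::ParseIntError;

// Parse a port number, returning a Result instead of crashing or
// silently defaulting: the caller is forced to handle the error path.
fn parse_port(input: &str) -> Result<u16, ParseIntError> {
    input.trim().parse::<u16>()
}

fn describe(input: &str) -> String {
    // `match` is exhaustive: delete either arm and rustc rejects the
    // program with an error that names the missing case.
    match parse_port(input) {
        Ok(port) => format!("listening on port {port}"),
        Err(e) => format!("invalid port {input:?}: {e}"),
    }
}

fn main() {
    println!("{}", describe("8080"));
    println!("{}", describe("not-a-port"));
}
```

<p>None of this is novel type theory. The Process Engineering is in what
happens around it: the missing-arm error points at the exact spot, suggests a
fix, and fits on your screen.</p>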

<p>Does it all seem like small things compared to the fundamental progress of a
memory-safe systems language? Maybe, but we had memory-safe systems languages
before. What Rust brings is packaging it with an ecosystem that you, as a
human, can use. And that is rare. A lot of type system research and type
checking tools exist out there that are far more advanced than what we have in
our current languages. They are usually designed to work on our current
languages. And yet they do not see adoption. Because it happens that the hard
problem is not to type-check the program (even if that can be atrociously hard
and need a lot of engineering and research). The hard problem is to check
something useful, in a setting that corresponds to how the language is used, to
find impactful problems, in a way that makes sense to the developer.</p>

<p>And you know what nearly none of the tools we give to developers or ops
people do? Any of this. We do not spend much time doing Process Engineering on
our inventions. We invent them, we offer them to other developers. And when
they end up not using them, or telling us how crap they are, we lament that
this industry is stuck in the dark ages.</p>

<h2 id="step-into-the-light">Step into the light</h2>

<p>As an industry, we regularly lament the state of security in software.
Nothing is well done, nothing respects good engineering principles. Why does
no one run Valgrind? Why isn’t everyone running fuzzers? Why don’t we have
capabilities, or better firewalls? Why is everyone leaving S3 buckets open? I
could keep going for hours.</p>

<p>And yet, the reality is that in a lot of cases we have the knowledge and
inventions to fix this. We could have tools for IaC that are far smarter and
easier to use. We know how to type-check a lot of this. Making firewalls that
are easier to configure, and not open to everything by default, is something we
know how to do. We can invent ways to add capabilities to our programs. There
are great property-based testing engines out there. We know how to package
runtime environments better and deploy them faster. It does not have to be the
pain that is current Python package management or running Kubernetes.</p>
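
<p>As an illustration of the last point, here is a minimal, hand-rolled sketch
of my own of the idea behind property-based testing: instead of a few
hand-picked cases, generate many inputs and check an invariant on each. This is
a toy, not how the real engines work; engines like QuickCheck, proptest, or
Hypothesis add smarter generation and shrinking of failing cases, which is
exactly the Process Engineering part.</p>

```rust
// Tiny deterministic pseudo-random generator (xorshift), so the
// example needs no external crates.
fn xorshift(state: &mut u64) -> u64 {
    *state ^= *state << 13;
    *state ^= *state >> 7;
    *state ^= *state << 17;
    *state
}

// The function under test.
fn reverse(v: &[u32]) -> Vec<u32> {
    v.iter().rev().copied().collect()
}

// Property: reversing a vector twice gives back the original.
// Checked against `cases` pseudo-random inputs of varying length.
fn holds_for_random_inputs(cases: u32) -> bool {
    let mut state = 0x2545_F491_4F6C_DD1Du64;
    (0..cases).all(|_| {
        let len = (xorshift(&mut state) % 32) as usize;
        let input: Vec<u32> = (0..len)
            .map(|_| (xorshift(&mut state) & 0xFFFF) as u32)
            .collect();
        reverse(&reverse(&input)) == input
    })
}

fn main() {
    assert!(holds_for_random_inputs(1000));
    println!("property held for 1000 random inputs");
}
```

<p>The invariant here is trivial, but the shape is the point: you state the
property once and the machine does the drudgery of coming up with cases.</p>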

<p>Do you know what none of these tools and techniques have? A developer
experience (DX) that is … well, that actually exists. At all. Because as it
turns out, the problem is not inventing these tools or finding out how to tell
the computer to do it. The problem is giving them a good DX. And funding that
work. Because it takes a long time and a lot of engineering. This is the lesson
I take from the success of Typescript and Rust. The bar to make our tooling
better is low, because we have this swath of inventions waiting to be deployed.
And at the same time it is high, because making the computer run the algorithm
is not the hardest engineering problem to solve here. It is making the tool
legible in a way that works for the person using it.</p>

<h2 id="good-dx-for-the-99-is-how-we-get-better-software">Good DX for the 99% is how we get better software</h2>

<p>Jean Yang has an amazing blog post, <a href="https://future.com/software-development-building-for-99-developers/">Building for the 99%
Developers</a>,
which I wholeheartedly endorse. In particular, I think we underestimate how
much the current state of software is due to the tooling we have access to. We
create unsafe, broken, painful-to-use, inaccessible, and slow software because
it is the only kind we can build with our current tooling. Doing otherwise
forces so much pain on the engineers that most developers simply cannot
do it.</p>

<p>I do not think this is a fundamental reality. I think applying Process
Engineering to our amazing inventions of the past few decades in Computer
Science would heavily change the game. It would allow us to get better
software on all these dimensions. Not because we would force people into
building things well, but because we would make doing so easy, painless, and
the obvious road.</p>

<p>It is not an easy solution, and we first need to acknowledge that we have a
problem. It means acknowledging that we have been atrocious at Process
Engineering for our tools, from Programming Languages and compilers to
deployment tools. And then it means doing thankless, long, and painful work
over years to study what engineers do and what they need, and to change our
tools to adapt to them. Process Engineering is never done.</p>

<p>But here is my hope. My hope is that the 2020s will be about DX and Process
Engineering for the tools of the craft of Software Engineering. There are
reasons to hope. It is at least what I take from the successes of Typescript,
Rust, Elixir, and all the other projects coming out of the woodwork, from the
thankless work of low-level engineers during the 2010s. They showed us
“we can have nice things”. We can choose to look in the mirror in shame because
we could not do it ourselves. Or we can choose a brighter future and thank them for the
lesson. What do you choose?</p>]]></content><author><name>Thomas Depierre</name></author><summary type="html"><![CDATA[When you peruse the depth of software engineering as a discipline, you find a lot of techniques and tools laying around in corners. Pattern matching, tighter type-checking compilers, property-based testing, snappy IDE, debuggers, dynamic tracing, Result types, effect handlers, capabilities, model checkers, fuzzers, etc. And yet, they are not in use in the industry. I posit that this is because software engineering dedicated nearly all of its energy toward the invention of product part of engineering, while neglecting the Process Engineering part of the discipline.]]></summary></entry><entry><title type="html">The devs that the front-end crowd left on the side of the road</title><link href="https://softwaremaxims.com/blog/the-browser-forgotten" rel="alternate" type="text/html" title="The devs that the front-end crowd left on the side of the road" /><published>2023-03-19T00:00:00+00:00</published><updated>2023-03-19T00:00:00+00:00</updated><id>https://softwaremaxims.com/blog/the-browser-forgotten</id><content type="html" xml:base="https://softwaremaxims.com/blog/the-browser-forgotten"><![CDATA[<p>A few things in the world of Web front-end developers have caught my attention lately. Two things mainly. The first is around how defining the front end as centered around JS is problematic, at least if we want people to use our stuff. The other is around Interop 2023 and in particular Declarative Shadow DOM. And I feel that both are more linked than we think. They reflect the reality of the evolution of WHATWG and W3C and by reflection the browser vendors in the past decade.
<!--more--></p>

<h2 id="wait-no-one-can-use-this">Wait, no one can use this?</h2>

<p>There has been a bit of movement on the Internet lately, around the realization that equating the Web front end with JavaScript has produced websites that are slow, heavy, and at this point actively hamper your ability to serve users.</p>

<p>For a bit of back-story, the best place to start is probably <a href="https://infrequently.org/2022/12/performance-baseline-2023/">Alex Russell’s state of browser clients 2023</a>, followed by <a href="https://infrequently.org/2023/02/the-market-for-lemons/">his long and eloquent assault on SPA stacks, which is informed by it</a>. I think the state of the browser clients is the most important thing to keep in mind, and the less controversial one. Yes, you can build amazing UIs with any of the SPA stacks, but the reality is that you would be building for an audience that does not exist. The reality of what the machines of our users can run is a strong constraint. One that we cannot wish away.</p>

<p><a href="https://seldo.com/posts/the_case_for_frameworks">Laurie Voss’ “The case for frameworks”</a> was an interesting response, and I think you should read it too. I think it is quite representative of the model of the world that is currently pervasive among front-end developers. I have a lot of problems with it, but I will not try to attack all of them today. I do need to point out one thing, though. In their table of tools that are SPAs and successful, most are <em>not economically successful</em>, and I could even argue that most of them <em>should not be SPAs</em> to deliver that experience. We could also ask whether that “small subset” is representative of most of the value produced through the Web, and of where we want to invest developers’ time.</p>

<p>More interestingly to me, Laurie makes the point that the JS frameworks <em>save developer time</em>. This is a regularly touted advantage of all these front-end frameworks. They allegedly reduce the cost to build and the time to market. I would argue that if no end user can use the amazing tool you just built, as the state of the client landscape shows is probably the case, then the Developer Experience (DX) does not matter. Going faster to produce something useless does not help. But I think there is another interesting argument here, one that matters a lot.</p>

<h2 id="how-can-we-make-the-dx-better-for-the-browser">How can we make the DX better for the browser?</h2>

<p>My argument here is that JS, while native to the browser stack, is not the “native” way to render a UI in the browser. The native way is HTML combined with CSS. JS (and WASM) are supposed to be used for two things: enhancing the presentation with some dynamic behavior when needed (a sprinkling of dynamic elements on top of a mostly static page) and compensating for the missing features of the “native” stack.</p>

<p>The reason we use JS, a lot of the time, is to compensate for what the browser APIs, or the back-ends that generate the native markup, are missing. We have calendar widgets because the native HTML ones are crap. We have complex form enhancements because styling the default HTML ones was hard. We import a whole WASM image decoder because we do not have JPEG XL support in our browsers. We use components in JSX or with Web Components because HTML does not have ways to encapsulate and compose elements. Etc, etc.</p>

<p>We can see that relatively well when we look at how Phoenix LiveView enables people to get dynamic elements and componentization on the back end, without needing to ship massive amounts of JS. And we see that JS was a crutch compensating for a handicap, because LiveView ends up with far less bandwidth demand and far less work on the client. Does it solve <em>all</em> of the problems the JS frameworks tackle? No, of course not. But I think it shows that closing the DX gap lies in bringing the capabilities that the JS frameworks have explored and polished for us into the browsers.</p>

<p>In this model, the JS frameworks play the role of a trailblazer. Exploring the design space until they find out what problems need to be solved. But for this model to work, we need the browser vendors to constantly play catch up. To add to the “native” stack the capabilities that have proven to be game changers in the JS world.</p>

<p>This has not happened as much in the past 5 to 10 years as we would like. It is time for the pendulum to swing. Partly, this is because the browser vendors first had to catch up to make the browsers <em>usable</em>. We may forget it now that we all use Grid, Flexbox, HSL, Fetch, and all. But this space used to be dire. It is a space where jQuery, another JS framework, was the crutch and trailblazer.</p>

<p>And I want to acknowledge the tremendous work done by Developer Advocates, WG members at the W3C and at WHATWG before it was merged back, developers at the vendors, and all the other people who keep going to work every day to make the web easier to use and nicer for everyone. This is hard work, it has been hard work, and we rarely hear about the constant effort they put in to move things forward.</p>

<p>And thanks to their work, the browsers caught up with jQuery, making the DX for the web far better. But now we need to do the same with the path React and co have traced.</p>

<h2 id="can-we-bring-that-dx-to-the-rest-of-us">Can we bring that DX to the rest of us?</h2>

<p>This brings us to the wishlist. I am not a front-end person by choice. This is not my specialty. But it happens that I regularly have to write front-ends, because tools need to interact with humans, and front-ends are kind of needed for that. And the web is the easiest platform to build this on when you need remote tooling. But it also means that I am <em>ruthless</em> with my tools.</p>

<p>I need tools for the front end that are featureful, easy to use, and adapted to developers with limited front-end experience doing it all. I need things that give me instant feedback because I will not have a designer. The only way I can make things pretty is through constant incremental change based on how it looks to me. Also, it means I cannot bring in a full design system, nor a JS framework, because just keeping this stuff properly plumbed in and up to date is more time than I can allocate to it.</p>

<p>So I am an SSR person. Fully static sites are done with Jekyll, Soupault, or an equivalent. Semi-dynamic ones, that need to adapt the data they show, are done with Phoenix. And if I need a dynamic page, it is going to be LiveView. My CSS is nearly always Tailwind.</p>

<p>Do you know what I cannot use? Any of the CSS-in-JS stuff. No JSX. No HTMX. No CSS Modules. No Web Components. None of it. And yet, I can see the needs. And I see how things are being done on the JS side. I believe we <em>can have nice things</em> here. We understand the needs better thanks to all the work done in the JS ecosystem. And there are reasons for hope if you look into the depths of the WGs. People are working there to try to bring nice things to the SSR crowd.</p>

<p>But if we want things to change, if we want to help them make the Web better for everyone, not just people who can afford the latest iPhone and its walled garden, then we need to help them. We need to show we exist. We need to realize we can have nice things. And we need to start explaining why we need them, to help convince the vendors to inject engineering time into them. I am already working on a draft of my wishlist. Could you do yours? Let’s show that the Web also exists on the server side.</p>