[Image: food shelves at a small grocery store]

Layoff doomsday prep: the technical side

Tech workers have found themselves in a world where they could be laid off without warning, regardless of their performance or seniority. It’s unsettling, and kind of scary.

A lot of advice about dealing with layoffs focuses on financial concerns, like having an emergency fund built up, paying off debt, and so on – all excellent advice, and arguably more important than what I want to discuss here. (Actually, if you haven’t even started any of that, stop reading now and work on that instead.)

Tech companies that lay off their employees are often generous with severance, but they are also often cruel in vanishing them from their workplace within a few scant hours – or even instantaneously. When that happens to you, you’re permanently locked out of your (um, former) work computer and network. All of your files and accounts become unreachable.

This can be a big problem, because tech professionals accumulate artifacts – scripts and other customized, personal tooling – that make them more effective at their jobs. If those artifacts live only in your employer’s systems and you get laid off, you lose them all.

So, let’s discuss what you should do to avoid losing your work artifacts due to a layoff.

Continue reading

Keep on moving

A few weeks ago I posted about my team’s work toward moving our application teams to using Argo CD for Kubernetes deployments. The work entailed a months-long set of changes to our deployment tool to gradually push users from the old way to the new way. At the time of that post, before our final(-ish) transition to requiring Argo CD usage, I’d just sent out a message designed to sound a little scary for those who hadn’t been paying attention, to get them moving. Despite a little bit of negative reception, I was hopeful that the transition would nevertheless go smoothly.

It went smoothly.

Continue reading

Tactics for small code reviews: Pure formatting changes

Coders have differing opinions about seemingly inconsequential aspects of their work, like code style:

  • whether to use spaces or tabs for indentation
  • where to introduce blank lines
  • how to format comments: end-of-line, single line, block
  • whether to comment at all
  • which bracket style to use: GNU or 1TBS or Java or Stroustrup or …
  • how to order import / include statements
  • how long lines may be, and where to break them

As you gain experience (read: get older), you realize that what’s important isn’t which choice is “correct”, but using the same choice consistently across your team’s code. (If you are a lone coder, it’s your world, go nuts!) Establishing a house style makes it easier, mentally, to read over all of the code, regardless of who wrote it.
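
One common way to make a house style stick is to encode it in tooling instead of memory. Here’s a minimal sketch using an EditorConfig file – the specific values are illustrative choices, not recommendations:

# .editorconfig – pin the house style so editors apply it automatically
root = true

[*]
indent_style = space
indent_size = 2
insert_final_newline = true
trim_trailing_whitespace = true
max_line_length = 100

With something like this checked in at the repository root, most editors pick up the rules automatically, and nobody has to argue about them in review.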

Continue reading

Send error events using defer statements in Go

Note: In my current job I code in Go, after a long, long time primarily using Java. So I’m still learning Go, through the prism of Java experience. I might write once in a while about it … like this.


Suppose you’ve got a big old function that returns an error if there’s trouble. Commonly, you’d log the error if it comes back.

err := bigOldFunc(input)
if err != nil {
  log.Errorf("Oh no: %v", err)
}

Another option could be to send an event about the error to a collection system. That’s like logging, but the event can store structured details about the context of the call beyond the error string.

err := bigOldFunc(input)
if err != nil {
  analytics.SendEvent(analytics.Event{
    Message: "Oh no",
    Properties: analytics.NewProperties().
      Set("input", input).
      Set("error", err.Error()),
  })
}

What if you want the event to include more about what the function was doing at the time? For example, suppose the function was looping through a list, and you’d like to include the current item in the event. The calling code doesn’t know that by the time the function has returned.
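
The title gives away where this is going: a defer statement inside the function itself can send the event, because a deferred closure can see both the function’s local state and its (named) return error. A minimal sketch, reusing the hypothetical analytics package from above; process is a made-up helper standing in for the real work:

func bigOldFunc(input []string) (err error) {
  var current string // the item being processed

  // Deferred functions run after the return value is set, so this
  // closure can report both the error and the function's local state.
  defer func() {
    if err != nil {
      analytics.SendEvent(analytics.Event{
        Message: "Oh no",
        Properties: analytics.NewProperties().
          Set("input", input).
          Set("item", current).
          Set("error", err.Error()),
      })
    }
  }()

  for _, current = range input {
    if err = process(current); err != nil {
      return err
    }
  }
  return nil
}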

Continue reading

Tactics for small code reviews: No-effect changes

In this little series of posts, I’ll describe techniques that you, humble coder, can use to make your code reviews smaller. Why do that? Justifications abound, so I’ll just link you to a couple of good articles.

In my own writing, I’ll get into the details of how to accomplish the goal of smaller code reviews. I will use git terminology, but the ideas should apply to any version control system where human review is involved.


I’ll start off with one of my favorite tricks for forming a small code review, which is what I’ll call here a “no-effect” change.

Suppose that you have a set of changes spread across a ton of files. All of the changes are related to one idea (great!), but there are simply a lot of them. Find a small set of files that could be split into their own review and do so, even if merging them doesn’t change program behavior.
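
In git terms, one way to carve those files out looks like this sketch – the branch and path names are hypothetical:

# Starting from main, pull a few files out of a big working branch
git checkout -b no-effect-part main
git checkout big-feature -- pkg/util/ docs/
git commit -m "Extract shared helpers (no effect on behavior)"
# Open a small review for no-effect-part; once it merges,
# rebase big-feature on top of it to shrink the remaining diff.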

Continue reading

Phrase to avoid: We need to

A big part of all-hands meetings (not the only part, to be fair) is leaders trying to convince the rank and file that they should do such-and-such things because reasons. Aligning the entire org and all that, you know.

I tend to focus on the words that speakers use. In the arguments for doing new things, I often hear sentences starting with “we need to”. For example!

  • We need to reach profitability by the end of 20xx …
  • We need to ensure our product has the highest quality …
  • We need to cut our expenditures by y percent …
  • We need to function as a single, unified team …

Usually there are reasons why “we need to” do these things, and they are often sensible. So the goals are not silly. But, I’ve realized that in all of these cases it’s not true that “we need to” do any of these things.

Continue reading

bashrcd

In my new job (started middle of 2021) I’m gaining a reputation as quite the bash hacker. It’s possible that I am turning into one of those sage old-timers that I’ve heard legends about, although it doesn’t feel like it personally. Still, it’s nice that the “old” tools like bash, sed, and awk seem to be inspiring awe and respect when they are used … well, appropriately really. I guess there is a reason why the elegant one-liners for those tools are called “incantations” even when they are just ordinary usage. To folks unfamiliar with the tools, they can look magical.

As a bash nerd, then, you know I’ve spent a non-trivial amount of time finessing my .bashrc file. Near the beginning of my new job, in the middle of transitioning my old development environment to my new laptop, I realized just how big my standard file had gotten. So, I devised a better organizational scheme for it using the quasi-standard of stowing scripts in a “.bashrc.d” directory and sourcing them. After fiddling a little more with it today, I’ve posted it for fellow bash fans to use if they like.

https://github.com/bhavanki/bashrcd
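
The heart of the quasi-standard is a short loop in .bashrc itself. A minimal sketch – the version in the repo may differ in the details:

# In ~/.bashrc: source every script stowed in ~/.bashrc.d
if [ -d "$HOME/.bashrc.d" ]; then
  for rc in "$HOME/.bashrc.d"/*.sh; do
    [ -r "$rc" ] && . "$rc"
  done
  unset rc
fi

Each chunk of configuration then lives in its own small, named file, which is much easier to maintain (and to port to a new laptop) than one monster .bashrc.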

By the way, I’m fully aware of the idea that you should abandon shell scripts when they get too complicated for a better language, like Python. I have considered this idea and decided that it’s good advice for a lot of people and situations, but I’m personally fine with continuing to use bash in unholy ways. Sometimes you’ve got to embrace your dark side a little.

Flaky test Kessler syndrome

I have been working within a monorepo (that is, a gigundo Git repository with lots of separate projects in it) and participating in maintaining its test health. This includes unit tests.

  • After every commit to the repository, all of the unit tests across the whole thing are run by our CI system. The idea is that the entire monorepo should always build successfully.
  • Also, for every pull request (proposed change), all of the unit tests across the whole thing are run by our CI system. Same idea as above.

It isn’t feasible for individual contributors to do this themselves, as the monorepo is just too big and varied. I don’t even try to compile the entire thing myself, let alone run all of its unit tests. All I do, and expect from others, is work within the projects that are relevant to the task at hand (and their dependencies), including making sure that existing and new tests pass.

We really need existing tests to be reliable. A flaky test – one that fails once in a while for reasons unrelated to the code under test – crops up at random in CI test runs. For a PR test, it’s annoying because that test failure seldom has anything to do with the proposed change. For a post-commit test, it’s just noise. In both circumstances, someone tasked to investigate test failures has to spend time discovering that the flaky test is at fault – a false alarm. Additionally, someone proposing a PR has to re-run tests so that, hopefully, the flaky test doesn’t flake out again. This all produces friction in the development process.

Say you have multiple flaky tests. Now, a test run may fail if one or more of them fails. If there are only a few flaky tests, the effect isn’t much worse than having one. But, as the number of flaky tests grows, probability starts to be your enemy.
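
To put rough numbers on it, assume (purely for illustration) that each flaky test fails independently 1% of the time. Then the chance that a full run passes is 0.99 raised to the number of flaky tests:

package main

import (
  "fmt"
  "math"
)

func main() {
  // Assume each flaky test fails independently 1% of the time.
  const flakeRate = 0.01
  for _, n := range []int{1, 5, 10, 20, 50} {
    passProb := math.Pow(1-flakeRate, float64(n))
    fmt.Printf("%2d flaky tests: full run passes %.1f%% of the time\n", n, passProb*100)
  }
}

With 20 such tests, nearly one run in five fails spuriously; with 50, it’s about two in five.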

Continue reading