<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>Nishiki Liu</title><description>Blog</description><link>https://nshki.com/</link><item><title>Vibe check, November 2025</title><link>https://nshki.com/vibe-check-november-2025/</link><guid isPermaLink="true">https://nshki.com/vibe-check-november-2025/</guid><description>Some stream of consciousness things in November 2025, over half a year since I&apos;ve last written anything.</description><pubDate>Sat, 22 Nov 2025 18:52:00 GMT</pubDate><content:encoded>&lt;p&gt;It&apos;s been half a year since I&apos;ve written anything on this blog. Time is passing by faster every day, and I&apos;m reminded by John Mayer&apos;s lyrics from &lt;em&gt;Stop This Train&lt;/em&gt;:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Stop this train&lt;br /&gt;
I wanna get off&lt;br /&gt;
And go home again&lt;br /&gt;
I can&apos;t take the speed&lt;br /&gt;
It&apos;s movin&apos; in&lt;br /&gt;
I know I can&apos;t&lt;br /&gt;
But honestly&lt;br /&gt;
Won&apos;t someone stop this train?&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;I&apos;m now solidly in my mid-30s. My body loudly reminds me of its mortality through lower back pain. I thought it was sciatica; turns out it&apos;s arthritis. All those years of poring over lines of code didn&apos;t do my back any favors, but this led me to give tai chi a serious try, and that&apos;s been a delight.&lt;/p&gt;
&lt;p&gt;I&apos;ve been learning how to attune to myself better. When I feel some spidey sense tingling, I don&apos;t just bottle it up and throw it into the sea of my subconscious anymore. I&apos;ve been learning how to identify what it is. Happiness, sadness, anger, serenity, shame, apprehension. I&apos;m learning how to listen to and trust my emotions for the first time since throwing them all out in my childhood. My incredible therapist has been a guiding compass for me, and I can&apos;t thank her enough.&lt;/p&gt;
&lt;p&gt;I used to identify strongly as a certain type of technologist: a Rails developer who strives for open, empathetic software. I&apos;ve since loosened that a bit. I&apos;m just someone who wants to help people, and that can take so many different shapes. The last time I wrote in Rails professionally was over 5 years ago, and being away from the ecosystem has made me come to terms with things like:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Tooling that comes out of the box without convention over configuration is just fine, actually. It invites collaboration with your team to develop your own conventions that work best for you all.&lt;/li&gt;
&lt;li&gt;Omakase-less development is good, actually. It forces deliberate decision-making for the tooling that you do use, and it allows you to decide together. When you&apos;re in charge of implementing a new abstraction layer, you&apos;re writing code specifically for your team, and what an amazing avenue to practice empathy that is.&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://okayfail.com/2025/in-praise-of-dhh.html#fnref-brain-worms&quot;&gt;I&apos;m grateful that Hamburger Helper helped me truly learn full-stack development, but the brain worms have since taken over&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Being relentless about choosing technologies was also &lt;em&gt;so tiring&lt;/em&gt;. I used to run flavors of desktop Linux on a Dell XPS 13 in the spirit of only running open software. I then transitioned to an Apple Silicon Mac running Asahi Linux because the hardware was great, but I still only wanted to run open software. Now I&apos;m just running macOS on the same machine. I think past me would&apos;ve yelled out, &quot;But this isn&apos;t running open and ethical software!&quot; I was a little too idealistic and dogmatic. There&apos;s no pragmatic end to this line of thought. Instead of being sucked into the void that&apos;ll eventually lead to something like trying to consider how the physical materials of my transistors were sourced or if a threat actor tampered with the network drivers of my device, I&apos;m just choosing to use a readily available tool that enables me to build, collaborate, and connect.&lt;/p&gt;
&lt;p&gt;I&apos;m learning how to choose my battles and know my own limits.&lt;/p&gt;
</content:encoded></item><item><title>My ZSA layouts</title><link>https://nshki.com/my-zsa-layouts/</link><guid isPermaLink="true">https://nshki.com/my-zsa-layouts/</guid><description>Talking about my ZSA Moonlander and Voyager keyboards and the layouts I configured for them.</description><pubDate>Tue, 13 May 2025 04:00:00 GMT</pubDate><content:encoded>&lt;p&gt;I&apos;ve been a huge fan of &lt;a href=&quot;https://zsa.io&quot;&gt;ZSA&lt;/a&gt; for 4 years and counting. They are primarily a keyboard company, and they sell three flagship split keyboard series; I own boards from two of them. I use a &lt;a href=&quot;https://www.zsa.io/moonlander&quot;&gt;Moonlander&lt;/a&gt; for the home office and a &lt;a href=&quot;https://www.zsa.io/voyager&quot;&gt;Voyager&lt;/a&gt; for when I&apos;m on the go. Both are designed to be ergonomic and aesthetic powerhouses that are highly customizable.&lt;/p&gt;
&lt;p&gt;I don&apos;t want this to be a blog post about the merits of split keyboards or the joy of mechanical keyboards even though this could easily spiral out of control into one. I want to talk about how I landed on the custom layouts that I&apos;ve flashed on both keyboards and my thought process for their key placements. ZSA allows folks to customize and flash their own key layouts using their tool &lt;a href=&quot;https://configure.zsa.io/home&quot;&gt;Oryx&lt;/a&gt;, and it&apos;s been a lifesaver. I give props to anyone who uses the default layouts, but they are abhorrently unnatural.&lt;/p&gt;
&lt;h2&gt;Voyager&lt;/h2&gt;
&lt;p&gt;I want to start by talking about the Voyager. I&apos;ve adjusted both my keyboard layouts to mostly be based on this board because it has the fewest keys, making it a common denominator of sorts. The Voyager handles three layers, and I&apos;ve landed on the following:&lt;/p&gt;
&lt;h3&gt;Layer 0: Main (Voyager)&lt;/h3&gt;
&lt;p&gt;&lt;img src=&quot;https://nshki.com/assets/posts/my-zsa-layouts/voyager-layer-0.png&quot; alt=&quot;Layer 0: Main&quot; /&gt;&lt;/p&gt;
&lt;p&gt;The first thing to address is the thumb keys. Unlike a traditional keyboard, the Voyager has 4 total keys available for your thumbs. I&apos;m right-handed, so it only made sense for the top-right thumb key to be &lt;code&gt;Return/Enter&lt;/code&gt; and the top-left to be &lt;code&gt;Space&lt;/code&gt;. I set the secondary thumb keys to be &lt;code&gt;Command&lt;/code&gt; on the left and &lt;code&gt;Right Alt/Option&lt;/code&gt; on the right due to how frequently I use them and to somewhat mirror their placement on a more standard keyboard.&lt;/p&gt;
&lt;p&gt;Due to its compact nature, I had to get a little creative with the placement of modifier keys. I have &lt;code&gt;Command&lt;/code&gt; and &lt;code&gt;Alt&lt;/code&gt; available via the thumbs, but I still needed access to &lt;code&gt;Control&lt;/code&gt;. Oryx lets us customize a key&apos;s function based on action taken: tapped, held, double-tapped, and tapped then held. So to closely mirror a standard keyboard, I&apos;ve set &lt;code&gt;Z&lt;/code&gt; to be &lt;code&gt;Left Control&lt;/code&gt; when held and &lt;code&gt;X&lt;/code&gt; to be &lt;code&gt;Left Option/Alt&lt;/code&gt; when held. That way I don&apos;t have to twist my brain when switching between ZSA and standard keyboards.&lt;/p&gt;
&lt;p&gt;And the last thing I want to point out are the layer keys. These are keys that allow me to switch into the other two layers when held. My &lt;code&gt;Tab&lt;/code&gt; key switches to layer 1 when held and &lt;code&gt;Esc&lt;/code&gt; switches to layer 2.&lt;/p&gt;
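&lt;p&gt;For the curious, ZSA&apos;s boards run QMK-based firmware under the hood, and Oryx generates the equivalent keymap code from its GUI. As a rough sketch (the aliases below are my own illustrative names, not pulled from my actual Oryx export), the tap/hold behavior above maps to QMK&apos;s mod-tap and layer-tap keycodes:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;/* Hypothetical QMK aliases; Oryx generates similar code for you. */
#define Z_CTL  LCTL_T(KC_Z)   /* Z on tap, Left Control on hold */
#define X_ALT  LALT_T(KC_X)   /* X on tap, Left Option/Alt on hold */
#define TAB_L1 LT(1, KC_TAB)  /* Tab on tap, momentary layer 1 on hold */
#define ESC_L2 LT(2, KC_ESC)  /* Esc on tap, momentary layer 2 on hold */
&lt;/code&gt;&lt;/pre&gt;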
&lt;h3&gt;Layer 1: Symbols (Voyager)&lt;/h3&gt;
&lt;p&gt;&lt;img src=&quot;https://nshki.com/assets/posts/my-zsa-layouts/voyager-layer-1.png&quot; alt=&quot;Layer 1: Symbols&quot; /&gt;&lt;/p&gt;
&lt;p&gt;This layer enables three main things for me: common symbols, function keys, and the numpad.&lt;/p&gt;
&lt;p&gt;The numpad lives on the left-most side of the right split, with &lt;code&gt;0&lt;/code&gt; living on the top-right thumb key. There&apos;s an argument to be made that the numpad should be shifted one column over to the right to be better aligned with home row hand placement, but I decided against that mainly to provide more comfortable access to the &lt;code&gt;-&lt;/code&gt; key with my right ring finger.&lt;/p&gt;
&lt;p&gt;I&apos;ve duplicated the &lt;code&gt;`&lt;/code&gt; key: one in the very top-left of the board and one where &lt;code&gt;&apos;&lt;/code&gt; normally is. The top-left placement more closely emulates a standard placement of the key, and the &lt;code&gt;&apos;&lt;/code&gt; replacement is because that just made sense in my head. &lt;code&gt;&apos;&lt;/code&gt; becomes &lt;code&gt;&quot;&lt;/code&gt; when using &lt;code&gt;Shift&lt;/code&gt;, so it should evolve into &lt;code&gt;`&lt;/code&gt; when using layer 1.&lt;/p&gt;
&lt;h3&gt;Layer 2: Media (Voyager)&lt;/h3&gt;
&lt;p&gt;&lt;img src=&quot;https://nshki.com/assets/posts/my-zsa-layouts/voyager-layer-2.png&quot; alt=&quot;Layer 2: Media&quot; /&gt;&lt;/p&gt;
&lt;p&gt;And finally, the media key layer. I don&apos;t actually use media keys all that often, so this one is fairly sparse. The top row is lined with &lt;code&gt;Volume Down&lt;/code&gt;, &lt;code&gt;Volume Up&lt;/code&gt;, &lt;code&gt;Mute&lt;/code&gt;, &lt;code&gt;Brightness Down&lt;/code&gt;, and &lt;code&gt;Brightness Up&lt;/code&gt;, the only media keys I actually use.&lt;/p&gt;
&lt;p&gt;On the right split, I have arrow keys lined up with my fingers in their natural position to somewhat follow the placement of arrow keys on standard boards. On the left split, I have mouse movement keys in case I ever end up in a situation where I don&apos;t have a mouse or trackpad available to me. The primary right thumb key is &lt;code&gt;Left Click&lt;/code&gt; and the secondary is &lt;code&gt;Right Click&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Finally, the left split&apos;s primary thumb key is &lt;code&gt;SysRq/Print Screen&lt;/code&gt; just so I have access to it at all and the secondary is &lt;code&gt;Change Animation&lt;/code&gt; which is a ZSA-specific key that lets you cycle through its lighting styles.&lt;/p&gt;
&lt;h2&gt;Moonlander&lt;/h2&gt;
&lt;p&gt;Now for the chonker, the Moonlander. &lt;a href=&quot;#the-voyager&quot;&gt;As I mentioned above&lt;/a&gt;, I&apos;ve heavily based this layout on the Voyager&apos;s so that I don&apos;t have to completely context-switch when moving between the two keyboards.&lt;/p&gt;
&lt;h3&gt;Layer 0: Main (Moonlander)&lt;/h3&gt;
&lt;p&gt;&lt;img src=&quot;https://nshki.com/assets/posts/my-zsa-layouts/moonlander-layer-0.png&quot; alt=&quot;Layer 0: Main&quot; /&gt;&lt;/p&gt;
&lt;p&gt;As you can see, it&apos;s pretty similar to the Voyager&apos;s main layout. The main differences come from the additional keys available. The thumb keys are arranged slightly differently and there are also more of them. Instead of 4 total thumb keys, we have 6, so I use them primarily for modifiers.&lt;/p&gt;
&lt;p&gt;On the left split, I moved &lt;code&gt;Left Control&lt;/code&gt; and &lt;code&gt;Left Alt/Option&lt;/code&gt; down one row because the space is available, and they&apos;re accompanied by &lt;code&gt;`&lt;/code&gt; and left/right arrow keys which come standard on the Moonlander. The additional column on the right holds volume controls.&lt;/p&gt;
&lt;p&gt;On the right split, the additional row contains up/down arrows, brackets, and &lt;code&gt;-&lt;/code&gt;, which I believe is default. The additional column is configured with brightness controls and &lt;code&gt;SysRq/Print Screen&lt;/code&gt; for convenience.&lt;/p&gt;
&lt;h3&gt;Layer 1: Symbols (Moonlander)&lt;/h3&gt;
&lt;p&gt;&lt;img src=&quot;https://nshki.com/assets/posts/my-zsa-layouts/moonlander-layer-1.png&quot; alt=&quot;Layer 1: Symbols&quot; /&gt;&lt;/p&gt;
&lt;p&gt;This layer is, I believe, identical to the Voyager&apos;s, so not much to say here!&lt;/p&gt;
&lt;h3&gt;Layer 2: Media (Moonlander)&lt;/h3&gt;
&lt;p&gt;&lt;img src=&quot;https://nshki.com/assets/posts/my-zsa-layouts/moonlander-layer-2.png&quot; alt=&quot;Layer 2: Media&quot; /&gt;&lt;/p&gt;
&lt;p&gt;Similar to layer 1, this one&apos;s also identical to the Voyager&apos;s!&lt;/p&gt;
&lt;h2&gt;Downloads&lt;/h2&gt;
&lt;p&gt;If you&apos;re a Voyager or Moonlander owner, you can download my layouts at the following links: &lt;a href=&quot;https://configure.zsa.io/voyager/layouts/VR9dX/&quot;&gt;Voyager&lt;/a&gt; and &lt;a href=&quot;https://configure.zsa.io/moonlander/layouts/Jvry7/&quot;&gt;Moonlander&lt;/a&gt;. At the time of writing, I haven&apos;t written any detailed revision messages, but I will make sure to do so from now on!&lt;/p&gt;
&lt;p&gt;If you have any questions or would just like to start a conversation with me, I&apos;m happy to engage over &lt;a href=&quot;mailto:hello@nshki.com&quot;&gt;email&lt;/a&gt; or the &lt;a href=&quot;https://ruby.social/@nshki&quot;&gt;fediverse&lt;/a&gt;.&lt;/p&gt;
</content:encoded></item><item><title>Vibe check, April 2025</title><link>https://nshki.com/vibe-check-april-2025/</link><guid isPermaLink="true">https://nshki.com/vibe-check-april-2025/</guid><description>Starting a new, non-technical stream-of-consciousness series on the blog.</description><pubDate>Mon, 21 Apr 2025 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;It&apos;s a sunny Sunday in SoCal. I&apos;m sitting on a dark rattan chair right outside a local coffee shop with my laptop on my lap, as its namesake intended, angling away from the rays. There&apos;s a slight breeze and I&apos;m watching the leaves of a nearby tree quietly dance. There&apos;s been quite a bit of change in my life as of late, and I feel like I haven&apos;t had a chance to process it all due to being in go-go-go mode. This is me slowing down and organizing my thoughts &amp;amp; feelings in writing.&lt;/p&gt;
&lt;p&gt;On the career front, I &quot;graduated&quot; from &lt;a href=&quot;https://atomic.vc&quot;&gt;Atomic&lt;/a&gt;, a Silicon Valley venture studio, to be a full-time employee of &lt;a href=&quot;https://elly.ai&quot;&gt;Elly&lt;/a&gt;, one of its portfolio companies. I was at Atomic for almost 4 years, the second longest time I&apos;ve stayed at one company. I ended up making the jump for a few reasons, but the biggest amongst them was that I really like the team. At Atomic, one of my responsibilities included augmenting engineering teams of portfolio companies, and I&apos;ve been helping build Elly&apos;s product as well as its engineering team for almost two years. That was more than enough time for me to decide I wanted to join full-time.&lt;/p&gt;
&lt;p&gt;Elly is built on a tech stack that I&apos;ve traditionally shied away from: full-stack, serverless TypeScript. As someone who has preferred to stay in the Ruby ecosystem as much as possible, I&apos;d imagine past me would be shocked at this transition, but it&apos;s the people that I work with that matter the most. I don&apos;t want this post to become a technical one, so I&apos;ll spare the details for now, but the way ES6 enables having small, modular files with their associated tests sitting alongside them is also really quite nice.&lt;/p&gt;
&lt;p&gt;On the personal front, I moved back closer to LA proper after being on its outskirts for a couple years. There really is something for everyone here. When friends tell me that they don&apos;t like LA, I tend to believe it&apos;s only because they haven&apos;t found their community in the city yet. As an Asian American third culture kid who has lived in less diverse places, it&apos;s comforting to live somewhere where a vast variety of cuisines are a short drive away and I have the opportunity to practice multiple languages. I&apos;d love to find a Mandarin as well as Japanese tutor so I can be at least business-level in both. I&apos;m natively conversational, but my vocabulary is sorely, sorely lacking.&lt;/p&gt;
&lt;p&gt;I also started taking Brazilian jiu-jitsu &amp;amp; MMA classes. I&apos;ve never practiced martial arts before, so this has been a breath of fresh air. I wanted a way to stay active while learning a new skill, so why not self-defense? It&apos;s been challenging, an amazing workout, and a great way to build community.&lt;/p&gt;
&lt;p&gt;As for everything else going on in the world, all I can really say is I&apos;m worried. The US is leaning heavily fascist, we&apos;re at a point where open corruption is seemingly without consequences, there are multiple military conflicts occurring outside the borders, and international relations seem generally shaky.&lt;/p&gt;
&lt;p&gt;I realize having this small corner of the Internet isn&apos;t much, but I love it as my outlet. I migrated it from &lt;a href=&quot;https://jekyllrb.com&quot;&gt;Jekyll&lt;/a&gt; to &lt;a href=&quot;https://nuxt.com&quot;&gt;Nuxt&lt;/a&gt; and also from &lt;a href=&quot;https://pages.github.com&quot;&gt;GitHub Pages&lt;/a&gt; to &lt;a href=&quot;https://www.netlify.com&quot;&gt;Netlify&lt;/a&gt;. The title of this post is inspired by &lt;a href=&quot;https://daverupert.com&quot;&gt;Dave Rupert&lt;/a&gt;&apos;s blog.&lt;/p&gt;
&lt;p&gt;I&apos;d love to write more frequently here.&lt;/p&gt;
</content:encoded></item><item><title>Finding boosts</title><link>https://nshki.com/finding-boosts/</link><guid isPermaLink="true">https://nshki.com/finding-boosts/</guid><description>Musings on &quot;boosts&quot;, what they are, and how I think they&apos;re important things to think about.</description><pubDate>Thu, 10 Oct 2024 01:30:00 GMT</pubDate><content:encoded>&lt;p&gt;When I wrote my first line of HTML back in the early 2000s and saw text appear on the screen, I was elated. It felt like magic; this wasn’t just any instance of mashing a keyboard and seeing something on the screen, it was a deliberate act of formatting and producing a document to be shared on the web. I created this text. I experienced my first boost.&lt;/p&gt;
&lt;p&gt;Something that I’ve found so magnetic about writing software is that there are so many opportunities to find little boosts no matter what I might be working on. In my early days, pointing a browser at a local HTML file was my gateway. Nowadays, I make it a point to try and maximize boosts through things like:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Refactoring files so they’re easier to read without language servers or AI copilots.&lt;/li&gt;
&lt;li&gt;Writing comments to help future me and my team.&lt;/li&gt;
&lt;li&gt;Reading code and comments later and quickly understanding what’s going on.&lt;/li&gt;
&lt;li&gt;Optimizing server response times and seeing latency graphs go down.&lt;/li&gt;
&lt;li&gt;Writing tests and making them green.&lt;/li&gt;
&lt;li&gt;Fixing bugs and supplying tests to guard against regressions.&lt;/li&gt;
&lt;li&gt;Pair-programming with teammates and reaching ah-ha moments.&lt;/li&gt;
&lt;li&gt;Taking advantage of newly shipped CSS features available on all major browsers.&lt;/li&gt;
&lt;li&gt;Using CSS instead of JavaScript where possible.&lt;/li&gt;
&lt;li&gt;Using HTML instead of JavaScript where possible.&lt;/li&gt;
&lt;li&gt;...and the list goes on and on.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Boosts are different for everyone, but they’re important. They’re critically important, maybe. I think cultivating an environment where the team can find boosts directly affects retention and overall happiness. I’ve definitely left jobs where boosts were discouraged.&lt;/p&gt;
&lt;p&gt;I’ve started using that vision as a guiding rule of thumb of sorts. Will doing this thing help build a culture of finding boosts? Yes? Okay, I’m going to do it. You don’t need to be in a leadership position to contribute, although that certainly helps.&lt;/p&gt;
&lt;p&gt;What do the wider team’s boosts look like? Do they align well? Do they encourage boosts? Communicating to learn each other’s boosts could unlock something.&lt;/p&gt;
&lt;p&gt;Find boosts.&lt;/p&gt;
</content:encoded></item><item><title>A moment of joy with CSS grid</title><link>https://nshki.com/a-moment-of-joy-with-css-grid/</link><guid isPermaLink="true">https://nshki.com/a-moment-of-joy-with-css-grid/</guid><description>Reflecting on how CSS grid made implementing a specific design so much easier than without.</description><pubDate>Wed, 07 Feb 2024 18:56:00 GMT</pubDate><content:encoded>&lt;p&gt;I was recently implementing a seemingly simple layout that could be boiled down to the following:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://nshki.com/assets/posts/a-moment-of-joy-with-css-grid/design.png&quot; alt=&quot;Design mockup&quot; /&gt;&lt;/p&gt;
&lt;p&gt;Some content, then some more content with an image to the right. Seems pretty standard. The catch here, though, is that the image must extend all the way to the edge of the viewport while the text must be contained in a center-aligned container with a &lt;code&gt;max-width&lt;/code&gt;. Most of the site&apos;s other content must adhere to this invisible container as well.&lt;/p&gt;
&lt;p&gt;The markup for this looked something like this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;&amp;lt;div class=&quot;image-section&quot;&amp;gt;
  &amp;lt;div class=&quot;image-section__content&quot;&amp;gt;
    &amp;lt;h2 class=&quot;image-section__heading&quot;&amp;gt;...&amp;lt;/h2&amp;gt;
    &amp;lt;p class=&quot;image-section__paragraph&quot;&amp;gt;...&amp;lt;/p&amp;gt;
  &amp;lt;/div&amp;gt;&amp;lt;!-- .image-section__content --&amp;gt;

  &amp;lt;img class=&quot;image-section__image&quot; src=&quot;...&quot; alt=&quot;...&quot; /&amp;gt;
&amp;lt;/div&amp;gt;&amp;lt;!-- .image-section --&amp;gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;At this point, I was already using &lt;code&gt;.container&lt;/code&gt; in multiple sections, and it was styled like:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;.container {
  max-width: var(--container-width);
  padding-inline: var(--container-padding);
  margin-inline: auto;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;But that wasn&apos;t going to fly for this particular section since the image must break out of the container. I couldn&apos;t just slap some positioning on the image and call it a day.&lt;/p&gt;
&lt;p&gt;I ended up implementing that section&apos;s layout in CSS grid:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;.image-section {
  --grid-container-width: calc(min(var(--container-width), 100vw) - (var(--container-padding) * 2));

  width: 100%;
  display: grid;
  grid-template-columns: 
    minmax(var(--container-padding), 0.5fr)
    calc(var(--grid-container-width) * 0.33)
    calc(var(--grid-container-width) * 0.67)
    minmax(var(--container-padding), 0.5fr);
  align-items: center;
}

.image-section__content {
  grid-column: 2 / 3;
}

.image-section__image {
  grid-column: 3 / 5;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;With &lt;code&gt;grid-template-columns&lt;/code&gt;, I was able to break down the column structure as two columns for the left and right padding, one for the text content, and one for the image. I used &lt;code&gt;minmax()&lt;/code&gt; for the padding columns so that the layout was responsive—the normal container&apos;s padding is applied on thinner viewports but should center the content and image on wider viewports.&lt;/p&gt;
&lt;p&gt;I used a &lt;code&gt;--grid-container-width&lt;/code&gt; variable here to essentially say: I want the content columns to have a &lt;code&gt;max-width&lt;/code&gt; of &lt;code&gt;--container-width&lt;/code&gt; on wider viewports but otherwise should take the entire viewport width on smaller screens. I subtracted the amount of &lt;code&gt;--container-padding&lt;/code&gt; twice to account for the left and right padding columns.&lt;/p&gt;
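&lt;p&gt;To make the arithmetic concrete, here&apos;s a worked example with hypothetical values (these aren&apos;t necessarily the site&apos;s real numbers):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;/* Assume --container-width: 1200px and --container-padding: 24px.

   On a 1440px-wide viewport:
     min(1200px, 100vw)     = 1200px
     --grid-container-width = 1200px - (24px * 2) = 1152px
     content column (0.33)  = 380.16px
     image column (0.67)    = 771.84px
     padding columns        = the leftover 288px split via 0.5fr, i.e. 144px each

   On an 800px-wide viewport:
     min(1200px, 100vw)     = 800px
     --grid-container-width = 800px - (24px * 2) = 752px
     content column (0.33)  = 248.16px
     image column (0.67)    = 503.84px
     padding columns        = 24px each, the minmax() floor */
&lt;/code&gt;&lt;/pre&gt;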
&lt;p&gt;And finally, &lt;code&gt;grid-column&lt;/code&gt; on the content and image specified which columns they should live in. The content should only take one column but the image needs to extend all the way to the right of the screen.&lt;/p&gt;
&lt;p&gt;I think without CSS grid, implementing this would&apos;ve been a lot jankier. This is responsive, doesn&apos;t add unnecessary markup, and doesn&apos;t add any &lt;code&gt;position: absolute;&lt;/code&gt; or other hard-to-maintain properties in the styles.&lt;/p&gt;
&lt;p&gt;The little joys.&lt;/p&gt;
</content:encoded></item><item><title>WordPress, the Great Divide, and a build-less experience</title><link>https://nshki.com/wordpress-the-great-divide-and-a-build-less-experience/</link><guid isPermaLink="true">https://nshki.com/wordpress-the-great-divide-and-a-build-less-experience/</guid><description>How a small WordPress project got me thinking about the Great Divide and feeling the joys of a build-less development experience.</description><pubDate>Thu, 04 Jan 2024 05:06:00 GMT</pubDate><content:encoded>&lt;p&gt;I recently worked on a small WordPress project for a client and it felt like the 2000s/2010s again. Vanilla HTML, CSS, JavaScript, PHP, MariaDB, &lt;a href=&quot;https://www.advancedcustomfields.com/&quot;&gt;Advanced Custom Fields&lt;/a&gt;, and SFTP. NPM wasn&apos;t needed. A build step wasn&apos;t needed. In some ways, the &quot;DX&quot; in the lens of common workflows today wasn&apos;t great, but in other ways, it was &lt;em&gt;amazing&lt;/em&gt;. And most importantly, all of that actually didn&apos;t matter as long as the end result for the user was good. There are many voices in the web industry that touch on this already, but I wanted to chip in with my own 2¢ here.&lt;/p&gt;
&lt;h2&gt;Broaching &lt;a href=&quot;https://css-tricks.com/the-great-divide/&quot;&gt;the Great Divide&lt;/a&gt;, again&lt;/h2&gt;
&lt;p&gt;While it&apos;s not strictly front-end development, building a fully custom WordPress theme got me thinking about topics that Chris Coyier wrote about back in 2019.&lt;/p&gt;
&lt;p&gt;I could have decided to front-load the build process for this site with some JavaScript-based toolchains, Sass, TypeScript, and other goodies, but I deliberately decided against it. The client often works with other shops that specialize in managing a portfolio of websites. After being in contact with some of the stakeholders, it was clear that it&apos;s important to lower the barrier of entry for this project as much as possible, especially when I won&apos;t be the one maintaining it. As a result, everything on the front end was using vanilla, native technologies. HTML, CSS, and sprinkles of JavaScript.&lt;/p&gt;
&lt;p&gt;It felt &lt;em&gt;really&lt;/em&gt; good, and it&apos;s a testament to how powerful the foundational technologies are. It&apos;s also a reminder that the front-of-the-front end is just as important as, if not more important than, the back-of-the-front end. With this particular project, for example, there was no back-of-the-front end at all.&lt;/p&gt;
&lt;h2&gt;Build-less CSS&lt;/h2&gt;
&lt;p&gt;I generally use vanilla CSS for most of the projects I touch nowadays, but I wanted to touch on a couple of small moments of CSS joy during this project. One of my gut reactions to using a build-less setup was that separating stylesheets like in a Sass or CSS module project was not possible. But it absolutely is. &lt;code&gt;&amp;lt;link&amp;gt;&lt;/code&gt;-ing individual stylesheets works perfectly well, especially in today&apos;s HTTP/2 world.&lt;/p&gt;
&lt;p&gt;I structured top-level page fields for this project with ACF&apos;s (Advanced Custom Fields) Flexible Content field, which I named &quot;components.&quot; Editors can just pick which components they want to use for a page and in whatever order they like. I wrote the markup for each component in its own template part file and fired off a &lt;code&gt;wp_enqueue_style()&lt;/code&gt; for each component&apos;s stylesheet in the theme&apos;s &lt;code&gt;functions.php&lt;/code&gt;. This produced several &lt;code&gt;&amp;lt;link&amp;gt;&lt;/code&gt; tags in the end markup, and it didn&apos;t require any build step whatsoever. Anything that needed to be shared across stylesheets was tucked away nicely in CSS variables.&lt;/p&gt;
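&lt;p&gt;As a rough sketch of what that looked like (the component names and file paths here are made up for illustration, not the client&apos;s actual ones):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;// In the theme&apos;s functions.php (illustrative sketch).
add_action(&apos;wp_enqueue_scripts&apos;, function () {
  // One stylesheet per Flexible Content component.
  $components = [&apos;hero&apos;, &apos;image-text&apos;, &apos;gallery&apos;];

  foreach ($components as $component) {
    wp_enqueue_style(
      &quot;component-{$component}&quot;,
      get_template_directory_uri() . &quot;/components/{$component}.css&quot;
    );
  }
});
&lt;/code&gt;&lt;/pre&gt;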
&lt;p&gt;(I thought about using WordPress&apos;s new block editor and creating a bunch of custom blocks, but I decided against it since that would raise the barrier to entry for any future developers. At the time of writing, &lt;a href=&quot;https://developer.wordpress.org/block-editor/getting-started/tutorial/&quot;&gt;every block requires its own &lt;code&gt;node_modules/&lt;/code&gt; and build process&lt;/a&gt;, and that&apos;s far too much of a headache to deal with.)&lt;/p&gt;
&lt;h2&gt;Build-less JavaScript&lt;/h2&gt;
&lt;p&gt;Only a handful of small JavaScript files were needed for this project. There was no need to use anything that required a build step. The most involved JavaScript file I wrote for this project was 23 lines long, and native browser APIs handled everything that I needed.&lt;/p&gt;
&lt;p&gt;I wanted to make the website&apos;s behavior declarative in the markup as &lt;a href=&quot;https://nshki.com/reflections-on-a-custom-component&quot;&gt;I&apos;ve written before&lt;/a&gt;, so I used &lt;code&gt;data-component&lt;/code&gt; attributes in the markup as hooks for each JavaScript file. Each file was structured something like this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;document.addEventListener(&apos;DOMContentLoaded&apos;, function () {
  const instances = document.querySelectorAll(&apos;[data-component=&quot;&amp;lt;component-name&amp;gt;&quot;]&apos;);
  for (let instance of instances) {
    // ...
  }
});
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;For things like dynamic image galleries, I reached for dependencies via CDNs which helped me to continue avoiding a build step.&lt;/p&gt;
&lt;h2&gt;Deploying like the old days&lt;/h2&gt;
&lt;p&gt;Finally, I got to use an SFTP client again. This was thrilling. Yes, it&apos;s not very efficient and is prone to all sorts of human error, but it felt &lt;em&gt;great&lt;/em&gt;. Deploying changes is really as easy as drag and drop, and that&apos;s exactly what I did. I gave the ol&apos; &lt;a href=&quot;https://filezilla-project.org/&quot;&gt;FileZilla&lt;/a&gt; another download and dragged a directory over to the production server. Magic.&lt;/p&gt;
&lt;h2&gt;Closing thoughts&lt;/h2&gt;
&lt;p&gt;This project was a breath of fresh air since most projects I work on nowadays require some kind of build step, whether that&apos;s through NPM or asset preprocessing, and this one didn&apos;t. It felt so freeing to write vanilla code and drag it into production. The big thing for me is that I didn&apos;t have to think as hard as I normally do for projects.&lt;/p&gt;
&lt;p&gt;I&apos;m not saying projects today need to be absolved of build steps or deployed via SFTP. There are very good reasons to not do either for many projects. But I do think that it&apos;s a great idea to always evaluate how much complexity is necessary for any project before a single line of code is written. Modern browsers are powerful, and as a result, vanilla HTML, CSS, and JavaScript are powerful. Not all projects are going to be managed by large teams.&lt;/p&gt;
</content:encoded></item><item><title>Migrating Rails from Postgres to SQLite</title><link>https://nshki.com/migrating-rails-from-postgres-to-sqlite/</link><guid isPermaLink="true">https://nshki.com/migrating-rails-from-postgres-to-sqlite/</guid><description>My experience migrating a Rails project from Postgres/Redis to SQLite and vastly simplifying production infrastructure.</description><pubDate>Wed, 13 Dec 2023 06:51:00 GMT</pubDate><content:encoded>&lt;p&gt;I recently migrated a personal Rails project off of Postgres and Redis (for Sidekiq and caching) to SQLite (powered by &lt;a href=&quot;https://github.com/oldmoe/litestack&quot;&gt;Litestack&lt;/a&gt;). I&apos;ve been running this app on &lt;a href=&quot;https://fly.io/&quot;&gt;Fly.io&lt;/a&gt; powered by a handful of Docker containers for several months, and it&apos;s been accumulating costs in the magnitude of medium American french fries every month. This has been, and remains to this day, a purely for-fun project, so I wanted to reduce costs as much as humanly possible. At the time of writing, Fly.io waives bills that are under the $5 USD mark, so I started brainstorming some ways of making this happen.&lt;/p&gt;
&lt;p&gt;My initial thought was to perhaps rewrite the app with something like &lt;a href=&quot;https://nuxt.com/&quot;&gt;Nuxt&lt;/a&gt; or &lt;a href=&quot;https://nextjs.org/&quot;&gt;Next.js&lt;/a&gt;, deploy it to &lt;a href=&quot;https://www.netlify.com/&quot;&gt;Netlify&lt;/a&gt; or &lt;a href=&quot;https://vercel.com/&quot;&gt;Vercel&lt;/a&gt;, and use some NoSQL database. If the app stayed under 100k serverless function calls a month, I could probably safely stay in a free tier. I started toying with this idea, but quickly ditched it since finding a database provider in free tier land proved to be a fruitless endeavor. I also didn&apos;t like the idea of separating application hosting from where data was being persisted, and Ruby is much more of a joy to work in than JavaScript/TypeScript.&lt;/p&gt;
&lt;p&gt;SQLite has had a resurgence in popularity as of late, and to my delight, I discovered &lt;a href=&quot;https://github.com/oldmoe/litestack&quot;&gt;Litestack&lt;/a&gt;, an open-source Ruby gem that packages up all data-related infrastructure in SQLite. For Rails, it integrates with Active Record, Active Support, Active Job, and Action Cable, which would remove my need for separate Postgres and Redis containers in my Fly.io infrastructure. After seeing how easy it looked to integrate into an existing project, I decided to give it a whirl.&lt;/p&gt;
&lt;h2&gt;Installation and revisiting the schema&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://github.com/oldmoe/litestack#installation&quot;&gt;Installing Litestack&lt;/a&gt; was pretty straightforward for a Rails project:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;bundle add sqlite3
bundle add litestack
bin/rails generate litestack:install
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Once the generator did its work, I then checked to see if I was able to load my existing schema into SQLite. Sadly, I wasn&apos;t able to.&lt;/p&gt;
&lt;p&gt;Up till this point, I was using Postgres UUIDs as the primary key for all of my tables, so I needed to tweak the schema to use default integer primary keys. I also discovered that Active Record array columns aren&apos;t quite supported out-of-the-box, so something like:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;t.string :my_array_column, default: [], array: true
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;...won&apos;t work with SQLite.&lt;/p&gt;
&lt;p&gt;Since my previous migrations were all no longer compatible with SQLite and there were a handful of column types I needed to rethink, I decided to scrap all &lt;code&gt;db/migrate/*.rb&lt;/code&gt; files and start with a brand new migration. I referenced &lt;code&gt;db/schema.rb&lt;/code&gt; to scaffold out the bulk of what I needed, and started reading up on supported column types in SQLite.&lt;/p&gt;
&lt;p&gt;As a replacement for array columns, it turns out a JSON column with a SQLite check constraint can be used:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;t.json :my_array_column, default: []
t.check_constraint &quot;json_type(my_array_column) = &apos;array&apos;&quot;, name: &quot;my_array_column_is_array&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;So I went ahead and replaced all array types with this. I also had a handful of &lt;code&gt;t.jsonb&lt;/code&gt; columns for Postgres, but replaced them with &lt;code&gt;t.json&lt;/code&gt; for SQLite. It didn&apos;t take long before I was able to create a schema that was compatible with the new database and my existing models.&lt;/p&gt;
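&lt;p&gt;As a rough illustration of the data side of that change, here&apos;s a small, self-contained sketch (the helper name is mine, not from the actual migration) that converts a Postgres array literal, as it appears in a CSV export, into a JSON string for the new &lt;code&gt;t.json&lt;/code&gt; column:&lt;/p&gt;

```ruby
require "json"

# Hypothetical helper: turn a Postgres array literal from a CSV export
# (e.g. "{foo,bar}") into a JSON string for the new t.json column.
# Simplified sketch: assumes no embedded commas or braces in elements.
def pg_array_to_json(literal)
  inner = literal.delete_prefix("{").delete_suffix("}")
  values = inner.empty? ? [] : inner.split(",").map { |v| v.delete_prefix('"').delete_suffix('"') }
  JSON.generate(values)
end

pg_array_to_json("{ruby,sqlite}") # returns '["ruby","sqlite"]'
pg_array_to_json("{}")            # returns '[]'
```

&lt;p&gt;Anything fancier, like quoted elements containing commas, would need a real parser, but the simple case is enough to show the shape of the conversion.&lt;/p&gt;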
&lt;h2&gt;Refactoring caching usage&lt;/h2&gt;
&lt;p&gt;Prior to Litestack, I was making heavy use of &lt;a href=&quot;https://github.com/rails/kredis&quot;&gt;Kredis&lt;/a&gt; for caching. Migrating away from the Kredis APIs to &lt;code&gt;litecache&lt;/code&gt; was just a matter of switching to vanilla Rails APIs:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Before
Kredis.flag(&quot;my_flag&quot;).mark(expires_in: 12.hours, force: false)
Kredis.flag(&quot;my_flag&quot;).marked?

# After
Rails.cache.write(&quot;my_flag&quot;, true, expires_in: 12.hours)
Rails.cache.read(&quot;my_flag&quot;).present?
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;To ensure that my test and development environments were also exercising Litestack, I needed to declare &lt;code&gt;litecache&lt;/code&gt; as the cache store in &lt;code&gt;config/application.rb&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;config.cache_store = :litecache
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Once I got to this point, my test suite was green again, and I was a pretty happy camper.&lt;/p&gt;
&lt;h2&gt;Migrating production data&lt;/h2&gt;
&lt;p&gt;The last piece of the puzzle was migrating production data. Getting production Postgres data into a production SQLite database was a bit tricky since the process involved changing primary key types and column types. I was also making use of Active Record encryption in some models. That meant I couldn&apos;t just export a SQL dump and import it into the new schema.&lt;/p&gt;
&lt;p&gt;So naturally, I exported each Postgres table into a CSV using the &lt;code&gt;\copy&lt;/code&gt; command:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;psql -c &quot;\copy table_name to &apos;/path/to/export.csv&apos; delimiter &apos;,&apos; csv header;&quot; postgres://user:password@host:port/db_name
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This preps the data in a format where I can easily consume it with a Rake task and import it into the new database, but I still needed to handle encrypted columns. I had some trouble figuring this out. I would have thought that importing the serialized object would let Active Record decrypt at runtime, especially considering the &lt;code&gt;active_record_encryption&lt;/code&gt; values in my &lt;code&gt;config/credentials.yml.enc&lt;/code&gt; weren&apos;t touched.&lt;/p&gt;
&lt;p&gt;Regrettably, I didn&apos;t end up deep-diving into the nitty-gritty of Active Record encryption to uncover the root cause. I instead ended up exporting the decrypted column values into these CSVs to use in my Rake task, which worked, and I was happy with that for the time being. (I &lt;em&gt;would&lt;/em&gt; like to poke around Active Record to learn how encryption is implemented at some point.)&lt;/p&gt;
&lt;p&gt;Part of the magic of SQLite is that the database is literally just a file. I ended up using a Rake task on my local machine to import the production data and then SCP&apos;d the database to my production container&apos;s persisted volume.&lt;/p&gt;
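&lt;p&gt;To make the primary key change a bit more concrete, here&apos;s a simplified, self-contained sketch of the idea behind that Rake task (the CSV content and column names are hypothetical): each old UUID gets remapped to a sequential integer id, and the lookup table makes it possible to rewrite foreign key columns in other tables the same way.&lt;/p&gt;

```ruby
require "csv"

# Hypothetical sketch of the import step: remap old UUID primary keys from a
# Postgres CSV export to sequential integer ids. id_map can later be used to
# rewrite foreign key columns in related tables.
csv_data = "id,name\n6f9619ff-8b86-d011-b42d-00c04fc964ff,Alice\n0e37df36-f698-11e6-8dd4-cb9ced3df976,Bob\n"

id_map = {}
rows = CSV.parse(csv_data, headers: true).map.with_index(1) do |row, new_id|
  id_map[row["id"]] = new_id
  row.to_h.merge("id" => new_id)
end

rows.first # returns {"id" => 1, "name" => "Alice"}
```

&lt;p&gt;From there, each hash can be handed to the corresponding Active Record model (for example via &lt;code&gt;insert_all&lt;/code&gt;) against the new SQLite database.&lt;/p&gt;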
&lt;h2&gt;SQLite is rock solid&lt;/h2&gt;
&lt;p&gt;After having migrated the app, I&apos;m feeling pretty excited. Not only is the app still stable, but my production infrastructure is drastically simpler. I no longer have four containers for the app, Sidekiq, Postgres, and Redis. Instead, I just have one container with a persisted volume. It runs Rails and Litestack with a database, queue, and cache in three separate SQLite files on the volume. I&apos;m able to run everything on a less powerful CPU, reduce RAM, and still have my app as snappy as before.&lt;/p&gt;
&lt;p&gt;With projects like &lt;a href=&quot;https://fly.io/docs/litefs/&quot;&gt;LiteFS&lt;/a&gt; and &lt;a href=&quot;https://litestream.io/&quot;&gt;Litestream&lt;/a&gt; being actively worked on, the future of supporting Rails apps with SQLite replicas is looking promising. SQLite doesn&apos;t just work for smaller projects—it can scale.&lt;/p&gt;
&lt;p&gt;For my current needs, SQLite perfectly fit the bill. It supports my for-fun project in a much cheaper way, simplifies the infrastructure, and if I have to, scaling it up bit by bit doesn&apos;t seem daunting at all.&lt;/p&gt;
</content:encoded></item><item><title>Replacing ChromeOS with Linux on the 2017 Pixelbook (Eve)</title><link>https://nshki.com/replacing-chromeos-with-linux-on-the-2017-pixelbook-eve/</link><guid isPermaLink="true">https://nshki.com/replacing-chromeos-with-linux-on-the-2017-pixelbook-eve/</guid><description>A step-by-step guide for installing Linux on the Pixelbook and replacing ChromeOS.</description><pubDate>Sat, 15 Jul 2023 03:37:00 GMT</pubDate><content:encoded>&lt;p&gt;I admit, I absolutely nerd sniped myself this week when I told myself I wanted to install Linux on my 2017 Pixelbook (Eve). This particular Pixelbook is, to this day, one of my favorite pieces of computer hardware. It&apos;s incredibly thin and I love how unique it is with its sleek white slate on the back.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://nshki.com/assets/posts/replacing-chromeos-with-linux-on-the-2017-pixelbook-eve/pixelbook.jpg&quot; alt=&quot;My Pixelbook&quot; /&gt;&lt;/p&gt;
&lt;p&gt;Unfortunately, Google put the Pixelbook on its Auto Update Expiration (AUE) list a while back, meaning that after June 2024, the device will no longer receive OS updates. It&apos;s such a bummer since it still performs well with a lighter OS like ChromeOS, and I honestly could see myself still using the machine as my daily driver if the M2 MacBook Air hadn&apos;t come around.&lt;/p&gt;
&lt;p&gt;So naturally, I wanted to put Linux on it.&lt;/p&gt;
&lt;p&gt;Not just run Linux inside ChromeOS via Crostini, but completely replace ChromeOS with it. Chromebooks are generally known to be fairly closed devices since you can&apos;t boot from USB drives out of the box. No one thinks of a Chromebook when they&apos;re looking for their next Linux machine. Luckily, there are folks out there who have figured it out. Here are the steps I followed to remove ChromeOS and install Linux on the Pixelbook.&lt;/p&gt;
&lt;h2&gt;1. Enable developer mode on the device&lt;/h2&gt;
&lt;p&gt;First things first, we have to enable developer mode on the Chromebook to access things like Crosh (Chrome shell) which let us modify the system firmware.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Turn the Chromebook off.&lt;/li&gt;
&lt;li&gt;Hold &lt;code&gt;Esc&lt;/code&gt; + &lt;code&gt;Refresh&lt;/code&gt; as you turn it back on.&lt;/li&gt;
&lt;li&gt;Wait for a scary prompt that says ChromeOS is missing or damaged. Press &lt;code&gt;Ctrl&lt;/code&gt; + &lt;code&gt;D&lt;/code&gt; on this screen.&lt;/li&gt;
&lt;li&gt;Follow the steps to reboot the device. The rest is a standard ChromeOS setup process, so go through that as well.&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;2. Enable booting from USB&lt;/h2&gt;
&lt;p&gt;I stumbled upon &lt;a href=&quot;https://mrchromebox.tech/&quot;&gt;mrchromebox.tech&lt;/a&gt;, which is run by someone called MrChromebox. After glancing at the updates on the site, I saw that there have been active posts since 2016 and the latest release was in May of this year. An active project was a very good sign. (The website has a ton of information that gets into the internals of Chromebooks, so check it out if you&apos;re so compelled.)&lt;/p&gt;
&lt;p&gt;I then checked the &lt;a href=&quot;https://mrchromebox.tech/#devices&quot;&gt;supported devices page&lt;/a&gt; to see if the Pixelbook (2017, Eve) was listed, and to my delight, it was. After digging around various Linux-on-Chromebook articles, it seemed like modifying the device firmware was important to allow booting from a USB drive, so I clicked into the &lt;a href=&quot;https://mrchromebox.tech/#fwscript&quot;&gt;firmware utility script page&lt;/a&gt; where he outlines some commands that pull &lt;a href=&quot;https://github.com/MrChromebox/scripts&quot;&gt;Bash scripts which are mirrored on GitHub&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;From here, I cracked open a Crosh shell. Here&apos;s how to do that:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;code&gt;Ctrl&lt;/code&gt; + &lt;code&gt;Alt&lt;/code&gt; + &lt;code&gt;T&lt;/code&gt; to open a Crosh window.&lt;/li&gt;
&lt;li&gt;Execute &lt;code&gt;shell&lt;/code&gt; to jump into a proper shell from there.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;I then executed the firmware utility script:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;cd; curl -LO mrchromebox.tech/firmware-util.sh &amp;amp;&amp;amp; sudo bash firmware-util.sh
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Luckily, this script provides some nice step-by-step prompts in the CLI. I picked &lt;code&gt;1) Install/Update RW_LEGACY Firmware&lt;/code&gt; and further options that enabled boot-from-USB as the default device behavior.&lt;/p&gt;
&lt;h2&gt;3. Plug in the Linux USB drive&lt;/h2&gt;
&lt;p&gt;Presumably you have a USB drive with your preferred flavor of Linux on it already. If not, go do that. I ended up grabbing a copy of &lt;a href=&quot;https://elementary.io/&quot;&gt;Elementary OS&lt;/a&gt; for this project and put it on an ancient 8 GB USB drive. Plug it into the Chromebook.&lt;/p&gt;
&lt;p&gt;Now, reboot the device. You should see a scary screen that warns you about the device being in developer mode. Ignore it and press &lt;code&gt;Ctrl&lt;/code&gt; + &lt;code&gt;L&lt;/code&gt; to let the device boot from the USB drive. Once the device successfully boots from the drive, the rest of the way should be a standard Linux installation process!&lt;/p&gt;
&lt;p&gt;&lt;em&gt;The screen will be upside down on the Pixelbook, so flip your device around to get the orientation right or just use your finger to tap through the flow on the touchscreen.&lt;/em&gt;&lt;/p&gt;
&lt;h2&gt;4. Fix screen orientation and brightness (Pixelbook and Ubuntu-specific)&lt;/h2&gt;
&lt;p&gt;&lt;em&gt;This step can be skipped if you&apos;re not on a Pixelbook or not using an Ubuntu-based distribution.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;There is some further tweakage needed to fix the screen orientation and enable screen brightness adjustments on the Pixelbook on an Ubuntu-based distro. You&apos;ve probably already noticed everything being upside down, but that can be fixed by navigating to the OS display settings and adjusting rotation config there.&lt;/p&gt;
&lt;p&gt;To be able to adjust screen brightness (by default it&apos;s pinned at 100%), open a terminal and, using your editor of choice, &lt;code&gt;sudo&lt;/code&gt; edit the &lt;code&gt;/etc/default/grub&lt;/code&gt; file. You&apos;ll need to make sure that the &lt;code&gt;GRUB_CMDLINE_LINUX&lt;/code&gt; variable is set to the following:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;GRUB_CMDLINE_LINUX=&quot;i915.enable_dpcd_backlight=1&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You can then apply these changes by running &lt;code&gt;sudo update-grub&lt;/code&gt; and giving your device a restart. Make sure to always press &lt;code&gt;Ctrl&lt;/code&gt; + &lt;code&gt;L&lt;/code&gt; when seeing the scary developer mode screen again. You&apos;ll need to do this each time you reboot.&lt;/p&gt;
&lt;h2&gt;Live, laugh, love&lt;/h2&gt;
&lt;p&gt;And voila! The end-of-life&apos;d Chromebook is now a healthy Linux machine with no end of life in sight. I always get excited when working with outdated tech because there&apos;s so much more that can be done with them than corporations lead us to believe. Installing Linux on these devices gives them so many more years of usage and saves a good deal of money. If you made it to the end of this article, then I commend you for making the choice of Linux-ifying your Chromebook.&lt;/p&gt;
&lt;p&gt;I&apos;m also happy to chat further with you over &lt;a href=&quot;mailto:hello@nshki.com&quot;&gt;email&lt;/a&gt; or the &lt;a href=&quot;https://ruby.social/@nshki&quot;&gt;fediverse&lt;/a&gt;.&lt;/p&gt;
</content:encoded></item><item><title>How to: Selenium with Chrome extensions</title><link>https://nshki.com/how-to-selenium-with-chrome-extensions/</link><guid isPermaLink="true">https://nshki.com/how-to-selenium-with-chrome-extensions/</guid><description>This is an overview of how to use Selenium to automate Chrome extensions with JavaScript.</description><pubDate>Tue, 21 Mar 2023 05:12:00 GMT</pubDate><content:encoded>&lt;p&gt;When developers chat about browser automations, &lt;a href=&quot;https://www.selenium.dev/&quot;&gt;Selenium&lt;/a&gt; is one of those tools that’ll get mentioned in every conversation. It has native language bindings with C#, Ruby, Java, Python, and JavaScript, so it’s super flexible. This is an overview of how to use Selenium to automate Chrome extensions with JavaScript, but the concepts can be generalized to fit your browser and language of choice.&lt;/p&gt;
&lt;h2&gt;Setting up the project&lt;/h2&gt;
&lt;p&gt;First things first. Let’s fetch the &lt;a href=&quot;https://www.npmjs.com/package/selenium-webdriver&quot;&gt;selenium-webdriver&lt;/a&gt; NPM package.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ npm install --save selenium-webdriver
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Make sure you have the following installed on your system as well:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Google Chrome&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://chromedriver.chromium.org/downloads&quot;&gt;Chromedriver&lt;/a&gt; (alternatively, you could also &lt;a href=&quot;https://www.npmjs.com/package/chromedriver&quot;&gt;install the NPM package&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;From there, we can set up a new Node.js script that will fire up a new Selenium Chrome instance.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;// selenium.js (or anything you want to name this file)

(async () =&amp;gt; {
  const { Builder } = require(&apos;selenium-webdriver&apos;);
  const chrome = require(&apos;selenium-webdriver/chrome&apos;);
  const driver = await new Builder().forBrowser(&apos;chrome&apos;).build();
})();
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Starting Selenium Chrome with extensions&lt;/h2&gt;
&lt;p&gt;Now this is where things get interesting. There are multiple ways to add extensions to Selenium Chrome.&lt;/p&gt;
&lt;h3&gt;Method 1: Loading with a local directory&lt;/h3&gt;
&lt;p&gt;This is likely what most people need when automating using extensions. If you already have the local extension build on your machine, you can just use that. Let’s add that into the script.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;(async () =&amp;gt; {
  const { Builder } = require(&apos;selenium-webdriver&apos;);
  const chrome = require(&apos;selenium-webdriver/chrome&apos;);
  const options = new chrome.Options();  // &amp;lt;------
  options.addArguments(&apos;--load-extension=/path/to/extension/build&apos;);  // &amp;lt;------
  const driver = await new Builder().forBrowser(&apos;chrome&apos;).setChromeOptions(options).build();  // &amp;lt;------
})();
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Method 2: Loading with a CRX file&lt;/h3&gt;
&lt;p&gt;CRX files are Chrome extension files. They are extensions bundled up into one file for nice portability. When you want to load in an extension that you aren’t actively developing, this might be the only way to do so. CRX files are obtainable using other extensions like the &lt;a href=&quot;https://chrome.google.com/webstore/detail/crx-extractordownloader/ajkhmmldknmfjnmeedkbkkojgobmljda&quot;&gt;CRX Extractor/Downloader&lt;/a&gt; or online tools.&lt;/p&gt;
&lt;p&gt;Once you have a CRX file, you can load it in like so:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;(async () =&amp;gt; {
  const { Builder } = require(&apos;selenium-webdriver&apos;);
  const chrome = require(&apos;selenium-webdriver/chrome&apos;);
  const options = new chrome.Options();
  options.addExtensions(&apos;/path/to/extension.crx&apos;); // &amp;lt;------
  const driver = await new Builder().forBrowser(&apos;chrome&apos;).setChromeOptions(options).build();  // &amp;lt;------
})();
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Interacting with extensions&lt;/h2&gt;
&lt;p&gt;If your extension has its own page that opens in a browser tab or as a pop-up, chances are your automation needs to interact with the associated DOM elements. The way to do this is to change Selenium’s currently active window to the extension’s. &lt;a href=&quot;https://www.selenium.dev/documentation/webdriver/interactions/windows/&quot;&gt;Selenium’s windows API&lt;/a&gt; lets us accomplish this.&lt;/p&gt;
&lt;p&gt;Each window has an associated window handle, which is a string of randomized alphanumeric characters. You can get the current handle or the full list of handles in the browser session with a method call.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;const currentWindowHandle = await driver.getWindowHandle(); // =&amp;gt; This returns a `string`
const windowHandles = await driver.getAllWindowHandles(); // =&amp;gt; This returns a `string[]`
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;And you can switch the currently active window like this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;await driver.switchTo().window(windowHandles[0]);
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;All driver calls are for the currently active window. That means we could write something like this, too:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;// Store all window handles in a variable.
const windowHandles = await driver.getAllWindowHandles();
let windowHandleIndex = 0;

// Keep switching to the next window until we open one with a title value of
// &quot;My Extension&quot;.
while (await driver.getTitle() !== &apos;My Extension&apos;) {
  windowHandleIndex++;
  const nextWindow = windowHandles[windowHandleIndex];
  await driver.switchTo().window(nextWindow);
}

// Make Selenium driver calls per usual.
await driver.findElement(By.xpath(&apos;//button[contains(text(), &quot;Submit&quot;)]&apos;)).click();
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Closing thoughts&lt;/h2&gt;
&lt;p&gt;As mentioned earlier in this post, these concepts can be applied to different browsers and languages. Loading local extensions for Firefox or Safari can be done in similar ways, and CRX files can be used for all Chromium-based browsers. (Looking at you, Microsoft Edge.)&lt;/p&gt;
&lt;p&gt;Browser automations can really take us a long way as developers, and it’s nice knowing how to throw browser extensions into that mix. While we commonly use them for end-to-end testing, the use cases really can span anything we want.&lt;/p&gt;
&lt;p&gt;I hope that this article proves useful to others out there, and keep building awesome software!&lt;/p&gt;
</content:encoded></item><item><title>Our little universes</title><link>https://nshki.com/our-little-universes/</link><guid isPermaLink="true">https://nshki.com/our-little-universes/</guid><description>Some fleeting thoughts while I rest in a creaky, small-town Japanese hotel.</description><pubDate>Sun, 29 Jan 2023 06:15:00 GMT</pubDate><content:encoded>&lt;p&gt;I’m currently writing this from the third floor of a creaky hotel in a small Japanese town where I spent a good chunk of my formative years. In the distance, I see some rolling hills peppered with trees that look like they could’ve been drawn by grade schoolers with dark green crayons. Thick, angry clouds are slowly migrating from north to south and a breathtakingly blue sky is staring down between them. Way back when, this used to be my entire universe.&lt;/p&gt;
&lt;p&gt;Having bounced back and forth a lot between the US and Japan as a kid, I felt a lot of magic every time I visited Japan. There were revolving sushi bars, anime characters used in store ads, and extended family who I love very much. I took some time to catch a glimpse of my old elementary school where I made lifelong friends and solidified my grasp of the language. I’m reminiscing over how we used to run around outside in the playground and play baseball in sandy gravel as kids.&lt;/p&gt;
&lt;p&gt;Spending time back in my second home town as a slightly jaded adult, I noticed that things seem so much &lt;em&gt;less magical&lt;/em&gt;. This is probably because I understand much more now compared to when I was a kid, but it got me thinking about how everyone has this process of their universes expanding over time and that changes their lives forever. We’re just chugging along on this train of life collecting new experiences and those new experiences shape how we view everything.&lt;/p&gt;
&lt;p&gt;There’s something kind of magical about this idea though. My grandpa is 90 this year. He isn’t in the best health at the moment, but is well enough to crack some jokes at the dinner table. My cousin has two kids. Those kids are my grandpa’s great-grandkids.&lt;/p&gt;
&lt;p&gt;Great-grandkids. Bonkers.&lt;/p&gt;
&lt;p&gt;On this trip to Japan, we had four generations of family members all having dinner together, all with different experiences, different world views, and different universes. All of our universes are still expanding. Colliding. It’s like a cosmic event.&lt;/p&gt;
&lt;p&gt;I know I’m running this metaphor into the ground but it’s really making me appreciate time together, both in a familial and general sense. It helps me frame things in a way that makes judgment evaporate and empathy flood in.&lt;/p&gt;
&lt;p&gt;We all have our little universes. That’s magical. Let’s make them collide beautifully.&lt;/p&gt;
</content:encoded></item><item><title>Rubber bands</title><link>https://nshki.com/rubber-bands/</link><guid isPermaLink="true">https://nshki.com/rubber-bands/</guid><description>Back in middle school, my friend and I entered a science fair competition. We ended up winning this competition.</description><pubDate>Mon, 02 Jan 2023 12:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Back in middle school, my friend and I entered a science fair competition. The objective was for teams of 2 students to design, build, and use a catapult to launch a given object as far as possible. (I’m pretty sure it was a ping pong ball, but I don’t quite remember that detail.)&lt;/p&gt;
&lt;p&gt;Every team got &lt;em&gt;really&lt;/em&gt; fancy with it. They’d be making all these calculations with mathematical formulas, considering things like air resistance, how to maximize the amount of force produced to launch the object, and who knows what else. They consulted the science teachers, wrote things down on paper, and were being really productive.&lt;/p&gt;
&lt;p&gt;Meanwhile, my friend and I just loved to launch stuff. The only reason we wanted to enter this thing was to make a propulsion device because we were middle schoolers and that sounded awesome. Looking around and seeing all these teams get so serious with this was honestly kind of anxiety-inducing. Were we not smart enough for this?&lt;/p&gt;
&lt;p&gt;We ended up winning this competition.&lt;/p&gt;
&lt;p&gt;Our solution? Three blocks of wood, some shoddily put together contraption that would hold a ball steady, and a &lt;em&gt;ton&lt;/em&gt; of rubber bands.&lt;/p&gt;
&lt;p&gt;I thought about this today because it was just so ridiculous. We didn’t think for one second about any numbers or consult any teachers. We just went, “Rubber bands should work because they launch things. Let’s just use as many as we can.” The school paper interviewed us after the competition and I felt bad we didn’t really have anything super science-y to say. Our big, beaming smiles were in our local paper later that month.&lt;/p&gt;
&lt;p&gt;Sometimes, development can be like this. Not always, but sometimes. Most of the time, we should be thinking about things like:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Is it easy for others to understand this?&lt;/li&gt;
&lt;li&gt;Is it an efficient use of resources?&lt;/li&gt;
&lt;li&gt;How easy is it to make changes to this in the future?&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;But sometimes, just use rubber bands, which are already easy to understand, super accessible, and pretty easy to manipulate later.&lt;/p&gt;
&lt;p&gt;Don’t over-engineer.&lt;/p&gt;
</content:encoded></item><item><title>The Web pendulum</title><link>https://nshki.com/the-web-pendulum/</link><guid isPermaLink="true">https://nshki.com/the-web-pendulum/</guid><description>There&apos;s a renaissance happening on the Web right now as some are saying, and I&apos;m loving it.</description><pubDate>Fri, 16 Dec 2022 12:00:00 GMT</pubDate><content:encoded>&lt;p&gt;I think over the years, we’ve been noticing Web trends swinging back and forth in really broad strokes. Back in the 2000s, the rise and fall of skeuomorphism was really apparent. We ended up landing in the land of flat design by the 2010s and dare I say things are beginning to look a little less flat nowadays. Server-rendered applications became overshadowed by single page applications and back-of-the-front-end technologies. Now, we’re starting to see an influx of love for the server again (but on the edge).&lt;/p&gt;
&lt;p&gt;At the moment, I’m particularly interested in two trends that seem to be taking hold: privacy and decentralization. Twitter got bought out by Elon Musk and millions of people are signing up for Mastodon accounts across the fediverse. There are no sponsored ads or tracking in sight. People are also publishing more content on their RSS feeds again. 💛&lt;/p&gt;
&lt;p&gt;As some are saying, there&apos;s a renaissance happening on the Web right now, and I&apos;m loving it. It’s the 2020s version of phpBB forums and decentralized protocols.&lt;/p&gt;
&lt;p&gt;I’ve been categorizing these sorts of things as part of the Web pendulum in my head. I know the Web is still relatively young and it’s hard to say if these sorts of polarizing trends will continue to happen, but I hope they do. I think through the messiness of it all, we get to learn &lt;em&gt;a lot.&lt;/em&gt; This leads to growth for this insane piece of technology called the Internet and it also lets us get a little better at being kind to each other.&lt;/p&gt;
&lt;p&gt;Just a little.&lt;/p&gt;
</content:encoded></item><item><title>Open source chats with Kasper Timm Hansen</title><link>https://nshki.com/open-source-chats-with-kasper-timm-hansen/</link><guid isPermaLink="true">https://nshki.com/open-source-chats-with-kasper-timm-hansen/</guid><description>Some nuggets of wisdom that Kasper Timm Hansen touched on while we chatted on a Hollywood rooftop.</description><pubDate>Sun, 16 Oct 2022 12:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Last week I attended the &lt;a href=&quot;https://railssaas.com/&quot;&gt;Rails SaaS Conference&lt;/a&gt;, which is a brand new conference revolving around Ruby on Rails and entrepreneurship. Despite writing Rails apps for a while now, I haven’t been very involved in the community, so I wanted to branch out a bit and meet the wider Rails community.&lt;/p&gt;
&lt;p&gt;The conference was an incredible experience, but that’s not what I wanted to write about today. (Read &lt;a href=&quot;https://berry.sh/the-rails-saas-conference/&quot;&gt;Eric Berry’s wonderful write-up of the experience&lt;/a&gt; if you’re curious on that front!) I wanted to capture some nuggets of wisdom that &lt;a href=&quot;https://twitter.com/kaspth&quot;&gt;Kasper Timm Hansen&lt;/a&gt; touched on while we chatted on a Hollywood rooftop.&lt;/p&gt;
&lt;p&gt;For those who aren’t familiar, Kasper is a former Rails core member who has made major contributions to the framework. I ran into him while scanning around the second floor lobby for a bottle of water, where we greeted each other for the first time. He told me about his 11-hour flight from Denmark and I told him about my 30-minute Uber ride from home. We shared jokes about how I should craft an intricate travel story because I&apos;m a local. (I didn’t end up doing this.) He’s super personable and extremely humble.&lt;/p&gt;
&lt;p&gt;Later during the day, I asked him what he’d recommend to someone like me who has aspirations to eventually contribute to Rails. What he said really stuck with me.&lt;/p&gt;
&lt;h2&gt;Read bits of the codebase at a time&lt;/h2&gt;
&lt;p&gt;This one was a very new idea to me. I used to have a preconception that in order to contribute to an open source project, a good place to start would be to browse its GitHub issues or something similar. Kasper suggested that reading little bits of the codebase at a time would be a good first step.&lt;/p&gt;
&lt;p&gt;No one understands Rails 100%. There are tens of thousands of lines of code with new contributions all the time. Reading bits over time, whether through &lt;a href=&quot;https://api.rubyonrails.org&quot;&gt;api.rubyonrails.org&lt;/a&gt; or source code, with the intent to understand will help someone like me get a foot in the door by accumulating some domain knowledge.&lt;/p&gt;
&lt;p&gt;In Kasper&apos;s own words:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&quot;I like to think of reading the layers of code as drilling into the ground, viewing the different sediments until you hit bedrock (and then you know this piece). Then picking a different spot to drill into, so you learn bit by bit, but enough that you have a solid understanding for one piece.&quot;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;Reading leads to learning&lt;/h2&gt;
&lt;p&gt;Library code is very different from application code. Code that is kosher in a consumer-facing application might not fly very well in a library. There are many more nuanced considerations like:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;What could the downstream impacts of this seemingly tiny change be?&lt;/li&gt;
&lt;li&gt;How difficult might it be to maintain this change over time across different people?&lt;/li&gt;
&lt;li&gt;Libraries have deprecation cycles, so you can&apos;t just yank things out of them. Is this change as &quot;correct&quot; as possible, so we mitigate the risk of having to deprecate it in the future?&lt;/li&gt;
&lt;li&gt;Is this change targeting too specific a use case to be included in a library?&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Reading and learning small bits of the codebase over time will shed light on these things and many more. Kasper brought up a great point that this will probably expose features of Ruby that aren’t quite as common in the application world. Learning all of these things will eventually make you a better programmer in general.&lt;/p&gt;
&lt;h2&gt;Make learning the goal&lt;/h2&gt;
&lt;p&gt;Finally, this was a really great mindset shift that he proposed: dive into Rails with the goal of learning. If a contribution eventually results from it, great, but what’s more important is that community members are leveling up from delving into Rails.&lt;/p&gt;
&lt;p&gt;I love this community mindset. Rails should be about enabling developers to quickly build maintainable software and its community is aiming to grow everyone into better programmers.&lt;/p&gt;
&lt;p&gt;Learnings from Rails would absolutely be applicable in any other project, whether that’s another gem, another framework, or even another programming language. Not everything you learn in private app contexts is applicable to open source, but learnings from open source will almost always be applicable to private apps. In the long run, the value of that outweighs any number of submitted PRs, whether accepted or rejected.&lt;/p&gt;
&lt;h2&gt;Closing thoughts&lt;/h2&gt;
&lt;p&gt;To sum it up: read little bits of code at a time, slowly gain understanding of a specific domain, and do it all with the intent of getting better. There are a lot of higher-level themes in these ideas like curiosity, empathy, and community, and I’m sure I’ll still be noodling on them for the foreseeable future.&lt;/p&gt;
&lt;p&gt;I feel like my chats with Kasper have already helped change how I think about open source, and I hope they can do the same for you.&lt;/p&gt;
</content:encoded></item><item><title>Thoughts on data</title><link>https://nshki.com/thoughts-on-data/</link><guid isPermaLink="true">https://nshki.com/thoughts-on-data/</guid><description>This is, in no particular order, a collection of my thoughts around data and how we might better approach it when writing software.</description><pubDate>Sat, 10 Sep 2022 12:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Over the past several years, I’ve been doing quite a bit of work with data. This was through setting up systems to collect data from various APIs or webhooks, transforming the data so it’s more usable, documenting changes, handling errors, etc. And I’ve noticed that this particular area has been the wild west, particularly in the context of teams. There aren’t a lot of “data best practices” floating around in popular tech reading.&lt;/p&gt;
&lt;p&gt;This is, in no particular order, a collection of my thoughts around data and how we might better approach it when writing software.&lt;/p&gt;
&lt;h3&gt;Keep raw data&lt;/h3&gt;
&lt;p&gt;When you reach out to an API or process webhook payloads for chunks of data, keep everything in the exact format you received it in. This could be in a Postgres &lt;code&gt;jsonb&lt;/code&gt; column, a separate table, or a separate database entirely. The point is that you’re more likely than not going to need to revisit that raw data at some point, whether it’s to correct discrepancies or to help debug something. It’s much better to have records of the raw data than to have to collect all of it again.&lt;/p&gt;
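&lt;p&gt;As a rough sketch of the idea (all names here are illustrative, and an array stands in for a &lt;code&gt;jsonb&lt;/code&gt; column or raw-events table), the key move is persisting the payload byte-for-byte as received, with metadata wrapped around it rather than mixed into it:&lt;/p&gt;

```ruby
require 'json'
require 'time'

# Persist a payload exactly as received. Metadata (source, received_at)
# wraps *around* the payload; the body itself is never touched, so it
# can be re-parsed, audited, or backfilled later.
def archive_raw_payload(store, source:, body:)
  store << { source: source, received_at: Time.now.utc.iso8601, body: body }
  store.last
end

raw_store = [] # stand-in for a jsonb column or raw-events table
record = archive_raw_payload(raw_store,
                             source: 'billing_webhook',
                             body: '{"id":"evt_1","amount":1000}')
```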
&lt;h3&gt;Normalize that data&lt;/h3&gt;
&lt;p&gt;Having the raw data is great, but don’t use it directly in your app. If you’re using a relational database, create tables with explicit columns that parse out the relevant bits. Make it easy to understand. Set good indexes. Thinking through this is important if you want a performant and maintainable application.&lt;/p&gt;
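&lt;p&gt;To make that concrete, here’s a small Ruby sketch of normalizing a raw payload into explicit, typed fields. The payload shape and the &lt;code&gt;Payment&lt;/code&gt; fields are invented for illustration, and the &lt;code&gt;Struct&lt;/code&gt; stands in for a table with explicit columns:&lt;/p&gt;

```ruby
require 'json'

# The explicit, typed fields the app actually uses, parsed out of a raw
# payload. In a relational database these would be indexed columns; the
# Struct stands in for that row shape.
Payment = Struct.new(:external_id, :amount_cents, :currency, keyword_init: true)

def normalize_payment(raw_json)
  data = JSON.parse(raw_json)
  Payment.new(
    external_id:  data.fetch('id'),
    amount_cents: Integer(data.fetch('amount')), # fail loudly on bad data
    currency:     data.fetch('currency', 'usd').downcase
  )
end

payment = normalize_payment('{"id":"evt_1","amount":"1000","currency":"USD"}')
```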
&lt;h3&gt;Use background jobs&lt;/h3&gt;
&lt;p&gt;Data processing generally takes time, especially when dealing with large data sets. Run that processing in background jobs. Stagger the jobs to respect any API rate limits. Have retry mechanisms in place in case anything goes wrong.&lt;/p&gt;
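&lt;p&gt;The staggering bit is just arithmetic. In a Rails app, each delay computed below would feed something like &lt;code&gt;SyncJob.set(wait: delay).perform_later(id)&lt;/code&gt; (where &lt;code&gt;SyncJob&lt;/code&gt; is a hypothetical ActiveJob class), but the calculation itself is framework-agnostic:&lt;/p&gt;

```ruby
# Space out queued jobs so API calls stay under a rate limit. Returns
# one wait time (in seconds) per job; enqueueing each job with its delay
# keeps the overall request rate at or below the limit.
def staggered_delays(job_count, requests_per_minute:)
  interval = 60.0 / requests_per_minute # seconds between requests
  (0...job_count).map { |i| (i * interval).round(2) }
end

delays = staggered_delays(5, requests_per_minute: 120) # one request per 0.5s
```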
&lt;h3&gt;Batch things wherever possible&lt;/h3&gt;
&lt;p&gt;Whenever you’re inserting/updating/upserting rows or documents, batch them. It makes a world of difference when you can do it in one query vs N+1.&lt;/p&gt;
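&lt;p&gt;Rails 6+ exposes this as &lt;code&gt;Model.insert_all(rows)&lt;/code&gt;. As a sketch of why it matters, here’s the shape of the single statement a batched insert sends instead of N separate ones. (Illustrative only: real code should rely on the adapter’s quoting or placeholders, not string interpolation.)&lt;/p&gt;

```ruby
# Build one multi-row INSERT instead of N single-row statements: one
# round trip to the database for the whole batch.
def batched_insert_sql(table, rows)
  columns = rows.first.keys
  tuples = rows.map do |row|
    '(' + columns.map { |c| "'#{row[c]}'" }.join(', ') + ')'
  end
  "INSERT INTO #{table} (#{columns.join(', ')}) VALUES #{tuples.join(', ')}"
end

rows = [{ name: 'Ada', email: 'ada@example.com' },
        { name: 'Lin', email: 'lin@example.com' }]
sql = batched_insert_sql('users', rows)
```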
&lt;h3&gt;Encrypt sensitive bits&lt;/h3&gt;
&lt;p&gt;Personally identifiable information (PII), credit card numbers, social security numbers, and things of that nature should never be stored unencrypted. If they’re present in the raw data, encrypt that too.&lt;/p&gt;
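&lt;p&gt;Rails 7’s Active Record Encryption (&lt;code&gt;encrypts :field&lt;/code&gt;) handles this for model attributes. As an illustration of the underlying idea, here’s a stdlib-only sketch using AES-256-GCM. (Key handling is deliberately simplified; a real app loads keys from a secret store, never generates them inline.)&lt;/p&gt;

```ruby
require 'openssl'
require 'base64'

# Illustrative only: a real app loads this from a secret store.
KEY = OpenSSL::Cipher.new('aes-256-gcm').random_key

# Encrypt a sensitive value before it is written anywhere.
def encrypt_pii(plaintext)
  cipher = OpenSSL::Cipher.new('aes-256-gcm').encrypt
  cipher.key = KEY
  iv = cipher.random_iv
  ciphertext = cipher.update(plaintext) + cipher.final
  # The IV and auth tag aren't secret but are required for decryption,
  # so store them alongside the ciphertext.
  Base64.strict_encode64(iv + cipher.auth_tag + ciphertext)
end

def decrypt_pii(encoded)
  raw = Base64.strict_decode64(encoded)
  iv, tag, ciphertext = raw[0, 12], raw[12, 16], raw[28..-1]
  cipher = OpenSSL::Cipher.new('aes-256-gcm').decrypt
  cipher.key = KEY
  cipher.iv = iv
  cipher.auth_tag = tag
  cipher.update(ciphertext) + cipher.final
end
```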
&lt;h3&gt;Get agreement on and document events&lt;/h3&gt;
&lt;p&gt;I use the term “events” loosely here, but a lot of teams like to have analytics data in a platform like Mixpanel, Segment, etc. The data that goes into these platforms needs to be well-documented and, most importantly, have stakeholder alignment. It’s going to be a bad time if everyone is free-handing all sorts of event names with inconsistent payloads that fire at unexpected times.&lt;/p&gt;
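&lt;p&gt;One lightweight way to enforce that agreement is a code-reviewed registry that maps each event name to its required payload keys. The names and keys below are invented for illustration; the point is a single documented source of truth that stakeholders have signed off on:&lt;/p&gt;

```ruby
# The agreed-upon events and the payload keys each one must carry.
EVENT_SCHEMAS = {
  'signup_completed' => %w[user_id plan referrer],
  'invoice_paid'     => %w[user_id invoice_id amount_cents]
}.freeze

# Reject events that aren't in the registry or are missing agreed keys,
# ideally before they ever reach the analytics platform.
def valid_event?(name, payload)
  required = EVENT_SCHEMAS[name]
  return false unless required
  (required - payload.keys.map(&:to_s)).empty?
end
```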
&lt;h3&gt;Bring in your data team&lt;/h3&gt;
&lt;p&gt;If you have a data division at your org, bring someone in from their team. They’re going to have to access this data at some point too, and their role is entirely around data. Chances are they’re going to have really great suggestions on how to collect, structure, and analyze the data that you didn’t think about.&lt;/p&gt;
</content:encoded></item><item><title>+918 -24,519</title><link>https://nshki.com/918-24519/</link><guid isPermaLink="true">https://nshki.com/918-24519/</guid><description>I migrated my site back to Jekyll which resulted in 918 lines of code added and 24,519 removed.</description><pubDate>Fri, 02 Sep 2022 12:00:00 GMT</pubDate><content:encoded>&lt;p&gt;It’s been several months since I’ve last written a blog post so I was revisiting my site this week. I noticed there were some security alerts in GitHub so I decided to upgrade various dependencies to resolve them. To my surprise, this actually broke &lt;a href=&quot;https://remix.run/&quot;&gt;Remix&lt;/a&gt;’s router and resulted in exception explosions whenever I wanted to navigate to any page.&lt;/p&gt;
&lt;p&gt;Normally, I’d be more than happy to start debugging what the heck was going on, but I just wasn’t having it that day. I wanted my site to be a joyful little codebase where I push some Markdown and pages get published. This was making it turn into a headache instead.&lt;/p&gt;
&lt;p&gt;So I migrated my site back to &lt;a href=&quot;https://jekyllrb.com/&quot;&gt;Jekyll&lt;/a&gt; which resulted in 918 lines of code added and &lt;em&gt;24,519 removed&lt;/em&gt;. Now, just to clarify, this is not a post bashing Remix—Remix is a lovely framework. It was just staggering how much more code there was to look at and reason about.&lt;/p&gt;
&lt;p&gt;For me at this moment in time, this migration sparked joy. The site is much easier to maintain, the &lt;a href=&quot;https://validator.w3.org/nu/?doc=https%3A%2F%2Fnshki.com%2F&quot;&gt;HTML&lt;/a&gt; and &lt;a href=&quot;https://jigsaw.w3.org/css-validator/validator?uri=https%3A%2F%2Fnshki.com%2Fassets%2Fstyles.css&amp;amp;profile=css3svg&amp;amp;usermedium=all&amp;amp;warning=1&amp;amp;vextwarning=&amp;amp;lang=en&quot;&gt;CSS&lt;/a&gt; are valid, and there’s not a line of JavaScript in sight.&lt;/p&gt;
</content:encoded></item><item><title>Accessing a previous session in NextAuth.js callbacks</title><link>https://nshki.com/accessing-a-previous-session-in-nextauthjs-callbacks/</link><guid isPermaLink="true">https://nshki.com/accessing-a-previous-session-in-nextauthjs-callbacks/</guid><description>A how-to on accessing previous session tokens to enable account merging or something similar using NextAuth.js.</description><pubDate>Mon, 21 Feb 2022 12:00:00 GMT</pubDate><content:encoded>&lt;p&gt;&lt;strong&gt;Edit: February 22nd, 2022&lt;/strong&gt;: Balázs Orbán, the lead maintainer of NextAuth, &lt;a href=&quot;https://twitter.com/balazsorban44/status/1496322391059873796?s=20&amp;amp;t=k4AGaG_eh7gcPVJM4LhaDQ&quot;&gt;was kind enough to point out on Twitter&lt;/a&gt; that achieving this is much simpler by using the built-in &lt;a href=&quot;https://next-auth.js.org/tutorials/securing-pages-and-api-routes#using-gettoken&quot;&gt;getToken()&lt;/a&gt; function. Cheers for that! I&apos;ve left the original article below.&lt;/p&gt;
&lt;p&gt;--&lt;/p&gt;
&lt;p&gt;I recently was working on a project that used &lt;a href=&quot;https://next-auth.js.org/&quot;&gt;NextAuth.js&lt;/a&gt; for its auth mechanism and needed to support account merging. e.g. If I’m already logged in via an email and password combination, I need to be able to “attach” an account from an OAuth service like Twitter or Discord to that main account.&lt;/p&gt;
&lt;p&gt;For folks who use NextAuth, this should already be possible if you’re using the &lt;a href=&quot;https://next-auth.js.org/getting-started/upgrade-v4#session-strategy&quot;&gt;database session strategy&lt;/a&gt;, but it’s not quite as obvious if you’re using the JWT strategy. The objective is to have access to the previous session with an identifier you can use to reference a main account/user record. There is a &lt;a href=&quot;https://github.com/nextauthjs/next-auth/discussions/3946&quot;&gt;GitHub discussion&lt;/a&gt; open to address this very thing, but this article outlines a way you can access previous session tokens in your currently installed version of NextAuth.&lt;/p&gt;
&lt;h2&gt;General strategy&lt;/h2&gt;
&lt;p&gt;In Next.js, we’re already able to access request cookies by referencing the &lt;code&gt;req.cookies&lt;/code&gt; object provided by the &lt;code&gt;NextApiRequest&lt;/code&gt; object in each API route. We can utilize this to reference the session cookie provided by NextAuth. We can configure NextAuth to use a custom session cookie name so we can always reference it without fear of the default name changing in future releases.&lt;/p&gt;
&lt;p&gt;The next challenge here is that the session cookie is encoded for security reasons, so we need a way to reliably decode it on the back end. Luckily, we can do that by defining custom JWT encode and decode functions for NextAuth.&lt;/p&gt;
&lt;h3&gt;1. Get access to the &lt;code&gt;NextApiRequest&lt;/code&gt; object&lt;/h3&gt;
&lt;p&gt;As a first step, let’s make sure our NextAuth API route is set up to have access to the request object. Most examples from NextAuth’s docs don’t include the request, so here’s one way to do it:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;// pages/api/auth/[...nextauth].js
import NextAuth from &apos;next-auth&apos;

export default (req, res) =&amp;gt; {
  return NextAuth(req, res, {
    // Your NextAuth config
  })
}
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code&gt;// For TypeScript folks, it&apos;d look like this.
//
// pages/api/auth/[...nextauth].ts
import type { NextApiRequest, NextApiResponse } from &apos;next&apos;
import NextAuth from &apos;next-auth&apos;

export default (req: NextApiRequest, res: NextApiResponse) =&amp;gt; {
  return NextAuth(req, res, {
    // Your NextAuth config
  })
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;2. Use a custom session cookie name&lt;/h3&gt;
&lt;p&gt;Next, let’s use a custom session cookie name so we can future-proof ourselves from referencing a default session cookie name in case it changes in future NextAuth releases. You can see &lt;a href=&quot;https://next-auth.js.org/configuration/options#cookies&quot;&gt;specific option documentation here&lt;/a&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;// pages/api/auth/[...nextauth].js
import NextAuth from &apos;next-auth&apos;

export default (req, res) =&amp;gt; {
  let sessionTokenName = &apos;&amp;lt;your session token name&amp;gt;&apos;

  return NextAuth(req, res, {
    cookies: {
      sessionToken: {
        name: sessionTokenName,
        options: {
          httpOnly: true,
          sameSite: &apos;lax&apos;,
          path: &apos;/&apos;,
          secure: true
        }
      }
    },

    // Rest of your NextAuth config
  })
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;3. Add custom JWT encode and decode functions&lt;/h3&gt;
&lt;p&gt;Now, let’s add custom JWT encode and decode functions so that we can prepare to properly decode the session token. I ended up using the &lt;a href=&quot;https://www.npmjs.com/package/jsonwebtoken&quot;&gt;jsonwebtoken&lt;/a&gt; package &lt;a href=&quot;https://next-auth.js.org/adapters/dgraph#working-with-jwt-session-and-auth-directive&quot;&gt;as suggested by NextAuth’s docs&lt;/a&gt;, so let’s install that.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# For Yarn users
yarn add jsonwebtoken
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code&gt;# For NPM users
npm install jsonwebtoken
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;We’ll define custom functions in a separate module.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;// lib/jwt.js
import * as jwt from &apos;jsonwebtoken&apos;

export function jwtEncode({ token, secret }) {
  return jwt.sign({ ...token }, secret, { algorithm: &apos;HS256&apos; })
}

export function jwtDecode({ token, secret }) {
  return jwt.verify(token, secret, { algorithms: [&apos;HS256&apos;] })
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;And use them in the NextAuth route. Generate a &lt;code&gt;NEXTAUTH_SECRET&lt;/code&gt; environment variable if you haven’t done so already in your setup.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;// pages/api/auth/[...nextauth].js
import NextAuth from &apos;next-auth&apos;
import { jwtEncode, jwtDecode } from &apos;../../../lib/jwt&apos;

export default (req, res) =&amp;gt; {
  let sessionTokenName = &apos;&amp;lt;your session token name&amp;gt;&apos;

  return NextAuth(req, res, {
    jwt: {
      secret: process.env.NEXTAUTH_SECRET,
      encode: jwtEncode,
      decode: jwtDecode
    },

    // Rest of your NextAuth config
  })
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;4. Decode the session token&lt;/h3&gt;
&lt;p&gt;Now we have all the piping in place to reference the decoded session token! This will obviously depend on your particular use case, but here’s an example of how to do it in the &lt;code&gt;jwt&lt;/code&gt; callback.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;// pages/api/auth/[...nextauth].js
import NextAuth from &apos;next-auth&apos;
import { jwtEncode, jwtDecode } from &apos;../../../lib/jwt&apos;

export default (req, res) =&amp;gt; {
  let sessionTokenName = &apos;&amp;lt;your session token name&amp;gt;&apos;

  return NextAuth(req, res, {
    callbacks: {
      jwt: ({ token }) =&amp;gt; {
        let secret = process.env.NEXTAUTH_SECRET
        let sessionToken = req.cookies[sessionTokenName]
        let decodedSession = jwtDecode({ token: sessionToken, secret })

        // Use `decodedSession` here! Look up a user or account
        // record, persist current session data, etc.

        // The jwt callback must return the token for the session
        // to keep working.
        return token
      }
    },

    // Rest of your NextAuth config
  })
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Wrapping up&lt;/h2&gt;
&lt;p&gt;Keep tabs on the &lt;a href=&quot;https://github.com/nextauthjs/next-auth/discussions/3946&quot;&gt;relevant GitHub discussion&lt;/a&gt; to follow any official releases that might make this article obsolete, but I hope this serves as a helpful reference in the meantime for anyone looking to accomplish account merging or something similar using NextAuth.&lt;/p&gt;
</content:encoded></item><item><title>Replace a relational database with DynamoDB in Rails</title><link>https://nshki.com/replace-a-relational-database-with-dynamodb-in-rails/</link><guid isPermaLink="true">https://nshki.com/replace-a-relational-database-with-dynamodb-in-rails/</guid><description>Steps I took to completely remove Active Record and replace it with DynamoDB.</description><pubDate>Sun, 24 Oct 2021 12:00:00 GMT</pubDate><content:encoded>&lt;p&gt;I recently was working on an already existing Rails project where using &lt;a href=&quot;https://aws.amazon.com/dynamodb/&quot;&gt;Amazon DynamoDB&lt;/a&gt; made more sense than a relational database. While I won&apos;t get into the nitty-gritty of DynamoDB, what&apos;s relevant for this blog post is that it&apos;s a NoSQL database without an out-of-the-box Active Record adapter.&lt;/p&gt;
&lt;p&gt;This meant a couple things:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;We needed a Ruby ORM that supported DynamoDB.&lt;/li&gt;
&lt;li&gt;Active Record must be removed from the project to avoid having to maintain a relational database in the project&apos;s cloud infrastructure.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;This was a bit more involved than I first thought, so here are the steps I went through to successfully replace a Rails project&apos;s relational database with DynamoDB.&lt;/p&gt;
&lt;h2&gt;1. Install Dynamoid&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://www.honeybadger.io/blog/aws-dynamo-db-rails/&quot;&gt;Julie Kent&apos;s article on using DynamoDB in Rails&lt;/a&gt; has a great overview of an ORM called &lt;a href=&quot;https://github.com/Dynamoid/dynamoid&quot;&gt;Dynamoid&lt;/a&gt; which feels really close to Active Record. This is an excellent choice for interfacing with DynamoDB, and adding it to a bundle is as easy as:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Gemfile

# ...
gem &apos;aws-sdk&apos;
gem &apos;dynamoid&apos;
# ...
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now, configuring Dynamoid can be done in a few ways. The way I opted for was to maintain as much of the &quot;Rails way&quot; as possible, which involves having local database tables as well as a test suite. Running DynamoDB locally can be done by &lt;a href=&quot;https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DynamoDBLocal.DownloadingAndRunning.html&quot;&gt;running an executable &lt;code&gt;.jar&lt;/code&gt; file or via Docker&lt;/a&gt;. Since I wanted to avoid the overhead of sorting out all Java runtime dependencies, I picked Docker.&lt;/p&gt;
&lt;h2&gt;2. Dockerize the project&lt;/h2&gt;
&lt;p&gt;I ended up Dockerizing this project by adding a &lt;code&gt;Dockerfile&lt;/code&gt; and &lt;code&gt;docker-compose.yml&lt;/code&gt;—the first to define an image and the second to orchestrate multiple containers.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Dockerfile.
#
# Swap Ruby version to applicable one for your project.
FROM ruby:2.7.4

# Install Yarn.
RUN apt-get update &amp;amp;&amp;amp; apt-get install -y npm &amp;amp;&amp;amp; npm install -g yarn

# Setup working directory.
RUN mkdir -p /var/my-project-name
WORKDIR /var/my-project-name

# Setup dependencies. Split into separate step to utilize Docker cache.
COPY Gemfile* /var/my-project-name/
RUN bundle install
COPY yarn.lock /var/my-project-name/
RUN bin/yarn install

# Copy project files.
COPY . /var/my-project-name

# Command to boot server.
#
# 1. Prevent &quot;Rails server already running&quot; errors on runs.
# 2. Ensure proper DynamoDB tables exist.
# 3. Start Rails server on 0.0.0.0.
CMD rm -rf tmp/pids/server.pid &amp;amp;&amp;amp; bin/rake dynamoid:create_tables &amp;amp;&amp;amp; bin/rails server -b &apos;0.0.0.0&apos;
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code&gt;# docker-compose.yml
#
# Ref: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DynamoDBLocal.DownloadingAndRunning.html

version: &apos;3.8&apos;

services:
  dynamodb-local:
    image: &apos;amazon/dynamodb-local:latest&apos;
    working_dir: &apos;/home/dynamodblocal&apos;
    volumes:
      - &apos;./docker/dynamodb:/home/dynamodblocal/data&apos;
    ports:
      - &apos;8000:8000&apos;
    command: &apos;-jar DynamoDBLocal.jar -sharedDb -dbPath ./data&apos;

  rails-app:
    build: .
    container_name: rails-app
    depends_on:
      - &apos;dynamodb-local&apos;
    links:
      - &apos;dynamodb-local&apos;
    volumes:
      - &apos;.:/var/my-project-name&apos;   # Mirror working directory in Dockerfile
    ports:
      - &apos;3000:3000&apos;

    # These environment variables can be left as-is. The DynamoDB container
    # doesn&apos;t need valid keys to work, it just needs them to exist.
    environment:
      AWS_ACCESS_KEY_ID: &apos;DUMMYIDEXAMPLE&apos;
      AWS_SECRET_ACCESS_KEY: &apos;DUMMYEXAMPLEKEY&apos;
      REGION: &apos;us-west-2&apos;
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;3. Configure Dynamoid&lt;/h2&gt;
&lt;p&gt;First thing is to make sure our environments are referencing the proper DynamoDB tables. We&apos;ll modify our environment config.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# config/environments/development.rb

# ...
Dynamoid.configure do |config|
  # Point to local DynamoDB server.
  config.endpoint = &apos;http://dynamodb-local:8000&apos;

  # Use passed REGION from docker-compose.yml.
  config.region = ENV[&apos;REGION&apos;]
end
# ...
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code&gt;# config/environments/production.rb

# ...
Dynamoid.configure do |config|
  # If you have a more advanced AWS infrastructure setup that uses assumed
  # roles, etc., then setting the access key and secret key aren&apos;t
  # necessary here.
  config.access_key = ENV[&apos;AWS_ACCESS_KEY_ID&apos;]
  config.secret_key = ENV[&apos;AWS_SECRET_ACCESS_KEY&apos;]
  config.region = ENV[&apos;REGION&apos;]
end
# ...
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code&gt;# config/environments/test.rb

# ...
Dynamoid.configure do |config|
  # Essentially the same as the development environment except with an
  # explicit namespace so we don&apos;t have development and test DynamoDB table
  # names clashing with each other.
  config.namespace = &quot;#{Rails.application.railtie_name}_#{Rails.env}&quot;
  config.endpoint = &apos;http://dynamodb-local:8000&apos;
  config.region = ENV[&apos;REGION&apos;]
end
# ...
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;4. Migrate models&lt;/h2&gt;
&lt;p&gt;Luckily, Dynamoid provides a very easy way to migrate our existing Active Record models to it. We just need to break inheritance chains so we stop using &lt;code&gt;ApplicationRecord&lt;/code&gt; and/or &lt;code&gt;ActiveRecord::Base&lt;/code&gt; and then &lt;code&gt;include Dynamoid::Document&lt;/code&gt; in our model classes.&lt;/p&gt;
&lt;p&gt;An example model might look like:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# app/models/user.rb

class User
  include Dynamoid::Document

  field :name,  :string
  field :email, :string
  field :phone, :string
  field :age,   :integer
end
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Since DynamoDB is a NoSQL database, migrations won&apos;t be applicable in our Rails project anymore, and model schemas are defined directly in model definitions.&lt;/p&gt;
&lt;h2&gt;5. Remove Active Record&lt;/h2&gt;
&lt;p&gt;Depending on your situation, this step may not be necessary, but in my case it was. The first thing to do is to modify your &lt;code&gt;config/application.rb&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# config/application.rb

# ...

# Replace your `require &quot;rails/all&quot;` line with this.
#
# Pick the frameworks you want:
require &quot;active_model/railtie&quot;
require &quot;active_job/railtie&quot;
# require &quot;active_record/railtie&quot;
# require &quot;active_storage/engine&quot;
require &quot;action_controller/railtie&quot;
require &quot;action_mailer/railtie&quot;
# require &quot;action_mailbox/engine&quot;
# require &quot;action_text/engine&quot;
require &quot;action_view/railtie&quot;
require &quot;action_cable/engine&quot;
require &quot;sprockets/railtie&quot;
require &quot;rails/test_unit/railtie&quot;

# ...
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Active Record, Active Storage, Action Mailbox, and Action Text are all commented out from the &lt;code&gt;require&lt;/code&gt; statements because they all use Active Record. As a follow-up, we need to remove references to all of these libraries.&lt;/p&gt;
&lt;p&gt;First, comment out any line that references any of the removed libraries in your environment config files (&lt;code&gt;config/environments/*&lt;/code&gt;). For instance:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# config.active_storage.service = :local
# config.active_record.migration_error = :page_load
# config.active_record.verbose_query_logs = true
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Next, we won&apos;t need Active Storage in our front end bundle anymore, so let&apos;s remove that:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;yarn remove @rails/activestorage
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;And we&apos;ll also remove references to that front end package. In &lt;code&gt;app/javascript/packs/application.js&lt;/code&gt;, comment out any lines that reference &lt;code&gt;ActiveStorage&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;// import * as ActiveStorage from &quot;@rails/activestorage&quot;
// ActiveStorage.start()
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;And finally, remove the following files:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;app/models/application_record.rb&lt;/code&gt;: Since we&apos;ve already migrated our models to Dynamoid.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;config/database.yml&lt;/code&gt;: Since we&apos;re not using Active Record adapters.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;db/*&lt;/code&gt;: The whole directory. We won&apos;t need the schema, migrations, or seeds anymore since those were all Active Record.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;test/fixtures/*&lt;/code&gt;: This one&apos;s only really applicable if you use Minitest and fixtures, but those operate using Active Record, so we don&apos;t need those anymore.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;6. Configure test suite&lt;/h2&gt;
&lt;p&gt;The primary thing to configure here is getting the test suite to properly clean database tables for test runs. Whether you&apos;re using Minitest, RSpec, or something else, you&apos;ll want to modify the method that runs before every test run. In my case, it was Minitest, so it looked like this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# test/test_helper.rb

# ...
class ActiveSupport::TestCase
  # Run some procedures before every test case in the suite.
  #
  # @return [void]
  def before_setup
    super

    # Reset DynamoDB tables.
    #
    # Ref: https://github.com/Dynamoid/dynamoid#test-environment
    Dynamoid.adapter.list_tables.each do |table|
      Dynamoid.adapter.delete_table(table) if table =~ /^#{Dynamoid::Config.namespace}/
    end
    Dynamoid.adapter.tables.clear
    Dynamoid.included_models.each { |m| m.create_table(sync: true) }
  end
end
# ...
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;...profit! 🎉&lt;/h2&gt;
&lt;p&gt;Now we should be able to simply run &lt;code&gt;docker-compose up&lt;/code&gt; and our local application should be using DynamoDB tables without any trace of Active Record left!&lt;/p&gt;
&lt;p&gt;I would like to note that for most Rails projects, using DynamoDB is probably not advisable. However for cases where it actually makes sense, I hope this post can serve as a good reference.&lt;/p&gt;
</content:encoded></item><item><title>Adding a Cypress test suite to your full stack Next.js app</title><link>https://nshki.com/adding-a-cypress-test-suite-to-your-full-stack-nextjs-app/</link><guid isPermaLink="true">https://nshki.com/adding-a-cypress-test-suite-to-your-full-stack-nextjs-app/</guid><description>A guide to add Cypress and CI to a Next.js app configured with a database.</description><pubDate>Wed, 18 Aug 2021 12:00:00 GMT</pubDate><content:encoded>&lt;p&gt;&lt;a href=&quot;https://nextjs.org/&quot;&gt;Next.js&lt;/a&gt; is undoubtedly a powerful way to build React apps. With its recent addition of API routes, it can actually serve as a full stack toolkit to build web applications.&lt;/p&gt;
&lt;p&gt;Recently, I started helping out with a full stack Next.js app (database and all) and wanted to add end-to-end integration tests. &lt;a href=&quot;https://www.cypress.io/&quot;&gt;Cypress&lt;/a&gt; was the natural framework to reach for, and I was surprised to find that there weren&apos;t very many guides out there for adding it to Next.js apps. Specifically, there weren&apos;t very many guides for adding Cypress to Next.js apps that interact with databases.&lt;/p&gt;
&lt;p&gt;This post is my stab at one of those guides.&lt;/p&gt;
&lt;h2&gt;Installing the packages&lt;/h2&gt;
&lt;p&gt;First off, we need to add the necessary package to run Cypress:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;yarn add cypress --dev
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;We&apos;ll also add a handy package called &lt;a href=&quot;https://www.npmjs.com/package/start-server-and-test&quot;&gt;&lt;code&gt;start-server-and-test&lt;/code&gt;&lt;/a&gt; which will be used to automate booting up and tearing down the app when the test suite runs:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;yarn add start-server-and-test --dev
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;And finally, we&apos;ll add a package called &lt;a href=&quot;https://www.npmjs.com/package/dotenv-flow&quot;&gt;&lt;code&gt;dotenv-flow&lt;/code&gt;&lt;/a&gt; which lets us easily manage environment variables in development and test environments:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;yarn add dotenv-flow --dev
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Creating the test database&lt;/h2&gt;
&lt;p&gt;We&apos;ll want to create a separate test database so we don&apos;t clobber development databases with test data. This will heavily vary depending on your use case, but the general steps should be more or less the same:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Create a new local test database and user.&lt;/li&gt;
&lt;li&gt;Keep its database connection string in &lt;code&gt;.env.test.local&lt;/code&gt; which will automatically be used in test environments thanks to &lt;code&gt;dotenv-flow&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Use the connection string to connect to and interact with your database in your app.&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;Example for PostgreSQL&lt;/h3&gt;
&lt;pre&gt;&lt;code&gt;$ createdb my_app_test
$ psql my_app_test
my_app_test=# create user YOUR_USERNAME with password &apos;YOUR_PASSWORD&apos;;
my_app_test=# grant all privileges on database my_app_test to YOUR_USERNAME;
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code&gt;# .env.test.local
DATABASE_URL=&quot;postgresql://YOUR_USERNAME:YOUR_PASSWORD@localhost:5432/my_app_test&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Adding the scripts&lt;/h2&gt;
&lt;p&gt;Now let&apos;s add some scripts to &lt;code&gt;package.json&lt;/code&gt; to automate running Cypress in a test environment.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;{
  &quot;scripts&quot;: {
    &quot;dev:test&quot;: &quot;next dev -p 3001&quot;,
    &quot;cy:open-only&quot;: &quot;cypress open&quot;,
    &quot;cy:run-only&quot;: &quot;cypress run&quot;,
    &quot;cy:open&quot;: &quot;NODE_ENV=test dotenv-flow -- start-server-and-test dev:test 3001 cy:open-only&quot;,
    &quot;cy:run&quot;: &quot;NODE_ENV=test dotenv-flow -- start-server-and-test dev:test 3001 cy:run-only&quot;
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;A couple points of interest here:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;We add a &lt;code&gt;dev:test&lt;/code&gt; command that starts our Next.js app on port 3001. This is to avoid any port conflicts when, say, we want to run the test suite at the same time we have a development environment running.&lt;/li&gt;
&lt;li&gt;We specifically set &lt;code&gt;NODE_ENV=test&lt;/code&gt; when running the main Cypress commands to have &lt;code&gt;dotenv-flow&lt;/code&gt; automatically read our &lt;code&gt;.env.test.local&lt;/code&gt; file to grab the database connection string.&lt;/li&gt;
&lt;li&gt;We have &lt;code&gt;cy:open&lt;/code&gt; and &lt;code&gt;cy:run&lt;/code&gt; to let us either open the Cypress Electron app or run the suite headlessly from the command line.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Running Cypress for the first time&lt;/h2&gt;
&lt;p&gt;Now let&apos;s finally run Cypress. The first run will generate files in your project that will serve as the foundation of your suite.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;yarn cy:open
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If you haven&apos;t used Cypress before, this will open an Electron app that gives you a nice UI to configure and run your tests. At this point, you should see a new &lt;code&gt;cypress/&lt;/code&gt; directory in your project as well as a &lt;code&gt;cypress.json&lt;/code&gt;. Let&apos;s tweak that &lt;code&gt;cypress.json&lt;/code&gt; file to look like this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;{
  &quot;baseUrl&quot;: &quot;http://localhost:3001&quot;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This way, Cypress automatically prepends that URL to any path we pass to &lt;code&gt;cy.visit()&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;If you&apos;ve made it this far, you&apos;re now ready to start writing your integration tests and running them locally, congrats! 🎉&lt;/p&gt;
&lt;h2&gt;Configuring CI&lt;/h2&gt;
&lt;p&gt;This is another step that will vary depending on your use case, but for the sake of example, let&apos;s set up a new workflow in GitHub Actions that will run our Cypress suite on every pull request.&lt;/p&gt;
&lt;p&gt;Add a new workflow file at &lt;code&gt;.github/workflows/e2e_tests.yml&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;name: E2E tests (Cypress suite)
on: [pull_request]
jobs:
  e2e-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: &quot;Setup Postgres database&quot;
        uses: harmon758/postgresql-action@v1
        with:
          postgresql version: &quot;13&quot;
          postgresql db: ${{ secrets.POSTGRES_DB }}
          postgresql user: ${{ secrets.POSTGRES_USER }}
          postgresql password: ${{ secrets.POSTGRES_PASSWORD }}
      - name: &quot;Setup Yarn dependencies&quot;
        uses: bahmutov/npm-install@v1
        with:
          install-command: yarn --frozen-lockfile
      - name: &quot;Run migrations&quot;
        run: yarn db:migrate
        env:
          DATABASE_URL: ${{ secrets.DATABASE_URL }}
      - name: &quot;Run Cypress&quot;
        uses: cypress-io/github-action@v2
        env:
          DATABASE_URL: ${{ secrets.DATABASE_URL }}
        with:
          command: yarn cy:run
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This example workflow makes a couple assumptions:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;You have &lt;code&gt;POSTGRES_DB&lt;/code&gt;, &lt;code&gt;POSTGRES_USER&lt;/code&gt;, &lt;code&gt;POSTGRES_PASSWORD&lt;/code&gt;, and &lt;code&gt;DATABASE_URL&lt;/code&gt; added to your GitHub repo&apos;s secrets.&lt;/li&gt;
&lt;li&gt;You have a &lt;code&gt;db:migrate&lt;/code&gt; script set up in &lt;code&gt;package.json&lt;/code&gt; that migrates your database and sets up the correct schema for your project.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Now try opening a new PR and celebrate the fact that your test suite now runs on each PR. 🎉&lt;/p&gt;
&lt;h2&gt;Wrapping up&lt;/h2&gt;
&lt;p&gt;It took me a minute to piece together this particular configuration when I first went through it, so I hope this post can serve as a solid reference for anyone looking to set up end-to-end tests for their Next.js app. Special thanks to &lt;a href=&quot;https://dev.to/ashconnolly/how-to-quickly-add-cypress-to-your-next-js-app-2oc6&quot;&gt;Ash Connolly&apos;s post on setting up Cypress in Next.js&lt;/a&gt;. I used his guide as a stepping stone to get to where this ended up.&lt;/p&gt;
&lt;p&gt;And finally, feedback is welcome! Feel free to reach out for any comments, questions, or suggestions.&lt;/p&gt;
&lt;p&gt;Happy coding.&lt;/p&gt;
&lt;/content:encoded&gt;&lt;/item&gt;&lt;item&gt;&lt;title&gt;Giving Ruby objects access to Rails view methods&lt;/title&gt;&lt;link&gt;https://nshki.com/giving-ruby-objects-access-to-rails-view-methods/&lt;/link&gt;&lt;guid isPermaLink=&quot;true&quot;&gt;https://nshki.com/giving-ruby-objects-access-to-rails-view-methods/&lt;/guid&gt;&lt;description&gt;A look at `view_context` and how to use it in POROs.&lt;/description&gt;&lt;pubDate&gt;Wed, 27 Jan 2021 12:00:00 GMT&lt;/pubDate&gt;&lt;content:encoded&gt;&lt;p&gt;I think most Rubyists can agree that POROs (Plain Old Ruby Objects) are incredibly useful. When done right, they&apos;re easy to understand, build, and use in applications.&lt;/p&gt;
&lt;p&gt;In the Rails ecosystem, there are many classifications of POROs: services, presenters, query objects, value objects, and so on and so forth. Sometimes we want to give these POROs access to things our application can access at the view level—maybe we want a PORO to manage generating a set of asset or route paths for us.&lt;/p&gt;
&lt;p&gt;Luckily, it&apos;s easy! We can use something called &lt;a href=&quot;https://api.rubyonrails.org/classes/ActionView/Rendering.html#method-i-view_context&quot;&gt;&lt;code&gt;view_context&lt;/code&gt;&lt;/a&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;class MyController &amp;lt; ApplicationController
  def index
    # Pass a `view_context` from the controller level into a PORO.
    poro = MyPoro.new(view_context)

    # ...
  end
end

class MyPoro
  def initialize(context)
    @context = context
  end

  # Now we have access to things like...
  def example
    # Image paths.
    @context.image_path(&quot;/some/path/here&quot;)
    @context.image_url(&quot;/some/path/here&quot;)

    # Routes.
    @context.root_path
    @context.users_path

    # Other things you have access to in a view.
    @context.controller_name
    @context.action_name
    # etc...
  end
end
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Having the flexibility to access view-level methods in a PORO is powerful for obvious reasons and can help prevent unnecessary complexity inside of templates and partials.&lt;/p&gt;
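&lt;p&gt;A nice side effect of injecting the context is that POROs like this stay easy to unit test: anything that responds to the view methods you call will do. Here&apos;s a minimal, framework-free sketch; the &lt;code&gt;AvatarPresenter&lt;/code&gt; and &lt;code&gt;FakeContext&lt;/code&gt; names are made up for illustration.&lt;/p&gt;

```ruby
# A hand-rolled stub standing in for Rails' view_context. It only needs
# to respond to the one view method the PORO actually calls.
class FakeContext
  # Mimics ActionView's image_path just enough for a unit test.
  def image_path(source)
    "/assets/#{source}"
  end
end

# A hypothetical PORO that builds avatar paths through whatever context
# it was handed, with no knowledge of where that context came from.
class AvatarPresenter
  def initialize(context)
    @context = context
  end

  def path_for(user_slug)
    @context.image_path("avatars/#{user_slug}.png")
  end
end

presenter = AvatarPresenter.new(FakeContext.new)
puts presenter.path_for("luna") # prints /assets/avatars/luna.png
```

&lt;p&gt;In the app you&apos;d pass &lt;code&gt;view_context&lt;/code&gt; in from the controller as shown above; in a test you pass a stub, and the PORO never knows the difference.&lt;/p&gt;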
</content:encoded></item><item><title>2020 year in review</title><link>https://nshki.com/2020-year-in-review/</link><guid isPermaLink="true">https://nshki.com/2020-year-in-review/</guid><description>It goes without saying that this year was absolutely insane, but existential threats aside, this year did give me a decent number of things to be thankful about.</description><pubDate>Tue, 29 Dec 2020 12:00:00 GMT</pubDate><content:encoded>&lt;p&gt;It goes without saying that this year was absolutely insane, but existential threats aside, this year did give me a decent number of things to be thankful about.&lt;/p&gt;
&lt;h2&gt;Professionally&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;I passed my 1 year mark at &lt;a href=&quot;https://www.litmus.com/&quot;&gt;Litmus&lt;/a&gt;, a company whose product I&apos;ve used in just about every previous role I&apos;ve had. My coworkers are great human beings and I feel like I can continue to grow here. Without divulging a lot of details, I helped enhance some internal company software and served as tech lead for two newly formed teams. I also helped lead the committee that recently launched the company&apos;s &lt;a href=&quot;https://litmus.engineering/&quot;&gt;engineering blog&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;I&apos;m now in year 4 of being fully remote. Most people in the industry have gone fully remote with the pandemic, but I&apos;m thankful to have had a bit of a head start, since I know switching cold turkey can be rough. Companies being remote &lt;em&gt;first&lt;/em&gt; really does make a difference as opposed to remote friendly, or as I like to call it, &quot;tolerant.&quot;&lt;/li&gt;
&lt;li&gt;I&apos;m still rocking Ruby on Rails as my daily driver with sprinkles of Vue here and there—technologies that I enjoy working with.&lt;/li&gt;
&lt;li&gt;I started flirting with Rust in my off time.&lt;/li&gt;
&lt;li&gt;I went through &lt;a href=&quot;http://rebuilding-rails.com/&quot;&gt;Rebuilding Rails&lt;/a&gt; by Noah Gibbs, which covers the major components of Rails internals.&lt;/li&gt;
&lt;li&gt;I&apos;ve also started mentoring a couple people who were looking to break into the industry in engineering roles, and I&apos;m elated that they both landed jobs.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Open source...ly&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/nshki/chusaku&quot;&gt;Chusaku&lt;/a&gt;, a Ruby gem I started in 2019, passed 10,000 downloads and received some lovely contributions from people much smarter than me. I&apos;m encouraged to continue maintaining it not just for my own use, but for the folks who have adopted it into their workflows as well.&lt;/li&gt;
&lt;li&gt;I helped review PRs for &lt;a href=&quot;https://www.if-me.org/&quot;&gt;if me&lt;/a&gt;, a mental health app that &lt;a href=&quot;https://nshki.com/getting-into-open-source/&quot;&gt;I started contributing to back in 2018&lt;/a&gt;. Definitely intending to continue contributing to this project, since it&apos;s especially relevant today.&lt;/li&gt;
&lt;li&gt;I got a couple PRs (&lt;a href=&quot;https://github.com/freeCodeCamp/devdocs/pull/1418&quot;&gt;#1418&lt;/a&gt;, &lt;a href=&quot;https://github.com/freeCodeCamp/devdocs/pull/1431&quot;&gt;#1431&lt;/a&gt;) merged for &lt;a href=&quot;https://devdocs.io/&quot;&gt;DevDocs&lt;/a&gt;, a developer documentation tool I use for just about everything on a daily basis. I actually wasn&apos;t aware it was open source until recently, so I was happy to contribute.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Personally&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;I moved to Los Angeles in the latter half of 2019 to help support family, so I&apos;m in my second year of living here. I&apos;ve always wanted to live in California at &lt;em&gt;some&lt;/em&gt; point in my life, and it&apos;s been an adventure so far. Thankful for the friends I&apos;ve made here. Thankful that my family is doing alright. Hoping the COVID situation gets under control soon.&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://twitter.com/nshki_/status/1326687355084787715?s=21&quot;&gt;I brought home the goodest doggo named Luna&lt;/a&gt; and she&apos;s helped raise my bar for base happiness. Being remote and raising a puppy should really be a package deal for all dog lovers.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Finally&lt;/h2&gt;
&lt;p&gt;I think it&apos;s going to be important going into 2021 with the mindset that &lt;em&gt;we&lt;/em&gt; need to make progress happen. Whether it&apos;s something at our jobs or things like COVID-19, we can&apos;t just sit around hoping problems are going to magically go away.&lt;/p&gt;
&lt;p&gt;Be the magic. Wear a mask. Fight racism. Write semantic HTML.&lt;/p&gt;
&lt;p&gt;See you all next year.&lt;/p&gt;
</content:encoded></item><item><title>Fixing overflow padding in Firefox</title><link>https://nshki.com/fixing-overflow-padding-in-firefox/</link><guid isPermaLink="true">https://nshki.com/fixing-overflow-padding-in-firefox/</guid><description>A look at a long-standing CSS bug in Firefox and some approaches to remedy it in your projects.</description><pubDate>Tue, 15 Sep 2020 12:00:00 GMT</pubDate><content:encoded>&lt;p&gt;&lt;em&gt;Hat tip to my coworker &lt;a href=&quot;https://dylanatsmith.com/&quot;&gt;Dylan&lt;/a&gt; for deep-diving into this issue with me and digging up the logged bug.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Elements that scroll as their contents go beyond their given dimensions—this is a pretty common pattern in sites and apps. Unfortunately, it comes with a &lt;a href=&quot;https://bugzilla.mozilla.org/show_bug.cgi?id=748518&quot;&gt;9-year-old bug in Firefox&lt;/a&gt;. It&apos;s reasonable to want padding within these scrolling elements to give your content some breathing room as you reach the end of the scroll. This bug causes the end padding to be completely ignored. &lt;a href=&quot;https://codepen.io/nshki_/pen/MWyBprL&quot;&gt;Here is a pen to demonstrate&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Unfortunately, until this bug is fixed, we can&apos;t really solve this in a &quot;correct&quot; way. So here are some cross-browser methods to fix this issue.&lt;/p&gt;
&lt;h2&gt;Pseudo-element that provides the padding&lt;/h2&gt;
&lt;p&gt;This is an approach that inserts an &lt;code&gt;::after&lt;/code&gt; pseudo-element in the element that has &lt;code&gt;overflow&lt;/code&gt; applied. You&apos;d remove the end padding explicitly and move it into the pseudo-element.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;&amp;lt;div class=&quot;box&quot;&amp;gt;
  &amp;lt;!-- Your content here. --&amp;gt;
&amp;lt;/div&amp;gt;&amp;lt;!-- .box --&amp;gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code&gt;.box {
  /* These properties are dependent on the project design. */
  overflow: auto;
  max-height: 100px;
  padding: 1rem;

  /* Explicitly removing the end padding. */
  padding-bottom: 0;
}

.box::after {
  content: &apos;&apos;;
  display: block;
  padding-bottom: 1rem; /* Should match `padding` value in parent. */
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This works nicely: Firefox renders the end padding just fine, and other browsers render it the same. It&apos;s also fine to use something like &lt;code&gt;height&lt;/code&gt; instead of &lt;code&gt;padding-bottom&lt;/code&gt; in this example.&lt;/p&gt;
&lt;h2&gt;Adding a child element in markup with the padding&lt;/h2&gt;
&lt;p&gt;This is similar to the first method except that we are explicitly including an element in the markup with the necessary padding rather than using a pseudo-element.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;&amp;lt;div class=&quot;box&quot;&amp;gt;
  &amp;lt;!-- Your content here. --&amp;gt;

  &amp;lt;div class=&quot;box__spacing&quot;&amp;gt;&amp;lt;/div&amp;gt;
&amp;lt;/div&amp;gt;&amp;lt;!-- .box --&amp;gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code&gt;.box {
  /* These properties are dependent on the project design. */
  overflow: auto;
  max-height: 100px;
  padding: 1rem;

  /* Explicitly removing the end padding. */
  padding-bottom: 0;
}

.box__spacing {
  display: block;
  padding-bottom: 1rem; /* Should match `padding` value in parent. */
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Adding a child container element with the padding&lt;/h2&gt;
&lt;p&gt;Instead of having the parent element declare a &lt;code&gt;padding&lt;/code&gt;, you can move it into a child container element.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;&amp;lt;div class=&quot;box&quot;&amp;gt;
  &amp;lt;div class=&quot;box__child&quot;&amp;gt;
    &amp;lt;!-- Your content here. --&amp;gt;
  &amp;lt;/div&amp;gt;&amp;lt;!-- .box__child --&amp;gt;
&amp;lt;/div&amp;gt;&amp;lt;!-- .box --&amp;gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code&gt;.box {
  /* These properties are dependent on the project design. */
  overflow: auto;
  max-height: 100px;

  /* Notice that this element has no padding anymore. */
}

.box__child {
  padding: 1rem; /* ...and that the child now has the padding. */
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Closing thoughts&lt;/h2&gt;
&lt;p&gt;The general strategy here is to divert the spacing to another element other than the original. Depending on your project, there are very likely other ways of tackling this problem, but I hope these general strategies help point you in the right direction.&lt;/p&gt;
&lt;p&gt;Also...&lt;/p&gt;
&lt;p&gt;CSS is hard. It seems deceptively simple to pick up, but the real world bugs and issues that developers run into are unlike anything you&apos;ll see in other languages.&lt;/p&gt;
&lt;p&gt;Lots of respect to the people who work on browsers and browser engines, and lots of respect to people who write CSS.&lt;/p&gt;
</content:encoded></item><item><title>Write code for people, not machines</title><link>https://nshki.com/write-code-for-people-not-machines/</link><guid isPermaLink="true">https://nshki.com/write-code-for-people-not-machines/</guid><description>If I could go back in time and talk to my past self, I&apos;d encourage this particular mindset earlier.</description><pubDate>Thu, 13 Aug 2020 12:00:00 GMT</pubDate><content:encoded>&lt;p&gt;I was asked recently in the context of my career, &quot;what would you have done differently if you could do things over again?&quot;&lt;/p&gt;
&lt;p&gt;I&apos;m a believer that mistakes are the best opportunities to learn and grow from, so I wouldn&apos;t avoid making mistakes. But if I could go back in time and talk to my past self, I think I&apos;d encourage a particular mindset earlier—write code for people, not machines.&lt;/p&gt;
&lt;p&gt;At a certain point in my programming journey, I used to &lt;em&gt;love&lt;/em&gt; being clever. &quot;Wow, look how &lt;em&gt;compact&lt;/em&gt; I can make my code,&quot; &quot;I can encode this list of possible options in &lt;em&gt;binary&lt;/em&gt; to save space,&quot; &quot;I&apos;m going to write it this way to just &lt;em&gt;feel&lt;/em&gt; cool,&quot; etc.&lt;/p&gt;
&lt;p&gt;Spoiler alert, that sucked because guess who that always comes back to bite in the end? &lt;strong&gt;Me&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;Obviously, this makes things difficult for your team as well. People who&apos;ve never seen my code before will scratch their heads, find that the original author was me, then ping me with questions. That&apos;s fine for a short period of time while the ideas were still relatively fresh in my mind, but fast forward a handful of years. &lt;em&gt;Shit&lt;/em&gt;, I forgot.&lt;/p&gt;
&lt;p&gt;There&apos;s really no room for ego in this industry. There&apos;s really no room for ego anywhere, for that matter. At the end of the day, yes, writing code is laying out instructions for computers, but who are you really writing for? Yourself and other human beings.&lt;/p&gt;
&lt;p&gt;My teachers always taught me to write so that a 4th grader could understand it. The same thing applies to code. Does something prompt a double take? Rewrite it and supply a thoughtful comment. Is something just hard to look at since it&apos;s really clumped together? Make it neat and tidy and easy to process.&lt;/p&gt;
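&lt;p&gt;To make this concrete, here&apos;s a contrived Ruby sketch of the same check written twice; both method names are invented for this example.&lt;/p&gt;

```ruby
# The "clever" version packs permission flags into the bits of an
# integer. Bit 0 is read, bit 1 is write, bit 2 is admin... which you
# only know by reading this comment.
def permissions_clever(flags)
  %w[read write admin].each_with_index.select { |_, i| flags[i] == 1 }.map { |name, _| name }
end

# The boring version says what it means.
def permissions_clear(can_read:, can_write:, can_admin:)
  permissions = []
  permissions.push("read") if can_read
  permissions.push("write") if can_write
  permissions.push("admin") if can_admin
  permissions
end

puts permissions_clever(0b011).inspect
puts permissions_clear(can_read: true, can_write: true, can_admin: false).inspect
# Both print ["read", "write"]
```

&lt;p&gt;Both return the same thing. Only one of them explains itself when you come back to it years later.&lt;/p&gt;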
&lt;p&gt;Write the code for people. We&apos;re all in this together.&lt;/p&gt;
</content:encoded></item><item><title>HTML isn&apos;t an assembly language</title><link>https://nshki.com/html-isnt-an-assembly-language/</link><guid isPermaLink="true">https://nshki.com/html-isnt-an-assembly-language/</guid><description>Thoughts on how HTML isn&apos;t the means to an end, but an end product.</description><pubDate>Sun, 12 Jul 2020 12:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Back in 2005–2006, validators were all the rage. Every website had a W3C HTML valid badge to let visitors know the authors of the site really knew their stuff. It was almost ludicrous to see a site without a validator badge, and my first instinct was to run their markup through the &lt;a href=&quot;https://validator.w3.org/&quot;&gt;W3C validator&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;As the Web matured and newer techniques to build things came out, the attitude towards the &quot;correctness&quot; of HTML shifted. HTML became the vessel to ship user experiences on the Web. Whether it was valid or not mattered less as long as the interactions worked.&lt;/p&gt;
&lt;p&gt;I&apos;m not trying to claim that I&apos;m a Web historian or anything because I&apos;m not, but I would be incredibly surprised if this shift in attitude &lt;em&gt;didn&apos;t&lt;/em&gt; contribute to things like progressive enhancement and accessibility not getting the attention they really deserve.&lt;/p&gt;
&lt;p&gt;I have friends who are getting into web development today, and the material that they&apos;re taught skews heavily on the shiny things—React, GraphQL, serverless functions, and the like. While there&apos;s nothing wrong with that, I&apos;d love to see the fundamentals become cool again. I&apos;d love to see an emphasis on semantics, structure, and readability become cool again.&lt;/p&gt;
&lt;p&gt;HTML is the primary medium of the Web, yes, and that means it is &lt;em&gt;the&lt;/em&gt; common denominator when it comes to things like progressive enhancement and accessibility. HTML isn&apos;t an assembly language. Make your site/app make sense and work without CSS and JavaScript first, then begin to layer everything else in.&lt;/p&gt;
&lt;p&gt;Now &lt;em&gt;that&apos;s&lt;/em&gt; cool. Let&apos;s normalize that.&lt;/p&gt;
</content:encoded></item><item><title>The software spectrum</title><link>https://nshki.com/the-software-spectrum/</link><guid isPermaLink="true">https://nshki.com/the-software-spectrum/</guid><description>Thoughts around how roles and responsibilities are getting blurrier and blurrier in the Web industry.</description><pubDate>Thu, 30 Apr 2020 12:00:00 GMT</pubDate><content:encoded>&lt;p&gt;&lt;em&gt;Kudos to my coworker &lt;a href=&quot;https://dylanatsmith.com/&quot;&gt;Dylan Smith&lt;/a&gt; for the chats recently that got me thinking about this.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;There was a fantastic article written by Chris Coyier over at CSS-Tricks in 2019 called &lt;a href=&quot;https://css-tricks.com/the-great-divide/&quot;&gt;The Great Divide&lt;/a&gt;. It describes the problem of the term &quot;front end developer&quot; encompassing very distinct roles. This isn&apos;t just a semantic difference—it&apos;s a mindset difference as well.&lt;/p&gt;
&lt;p&gt;In a product team for the Web, you&apos;re likely to have people who cover the responsibilities of what we&apos;ve traditionally known as designers, front end developers, and back end developers. Will roles and titles fit snugly? Almost certainly not.&lt;/p&gt;
&lt;p&gt;Those traditional role titles are even a bit misleading, since you and I would probably have slightly differing definitions of each. Just by starting to think through the various responsibilities and laying them out in a flat list, I get something like:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;User research/data analysis&lt;/li&gt;
&lt;li&gt;User flows/storyboarding&lt;/li&gt;
&lt;li&gt;Wireframes/low-fidelity prototypes&lt;/li&gt;
&lt;li&gt;High-fidelity prototypes&lt;/li&gt;
&lt;li&gt;HTML/CSS implementation of prototypes&lt;/li&gt;
&lt;li&gt;Accessibility, cross-browser support, animation, etc.&lt;/li&gt;
&lt;li&gt;CSS methodologies/structure&lt;/li&gt;
&lt;li&gt;JavaScript sprinkles/enhancements&lt;/li&gt;
&lt;li&gt;Heavy JavaScript utilization&lt;/li&gt;
&lt;li&gt;Front end frameworks&lt;/li&gt;
&lt;li&gt;CSS-in-JS, code splitting, state management, etc.&lt;/li&gt;
&lt;li&gt;Single-page applications/hybrid apps&lt;/li&gt;
&lt;li&gt;API design&lt;/li&gt;
&lt;li&gt;Serverless functions&lt;/li&gt;
&lt;li&gt;Server side development&lt;/li&gt;
&lt;li&gt;Server side frameworks&lt;/li&gt;
&lt;li&gt;Database design/management&lt;/li&gt;
&lt;li&gt;Infrastructure/DevOps&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;...and I&apos;m still probably missing a million things in between.&lt;/p&gt;
&lt;p&gt;The point is, there&apos;s &lt;em&gt;a lot&lt;/em&gt; of stuff, and expectations for roles and corresponding responsibilities are getting blurrier and blurrier each day. People&apos;s skill sets may clump somewhere in this spectrum, but it&apos;s also important to recognize that skill sets could be scattered as well.&lt;/p&gt;
&lt;p&gt;We&apos;ve traditionally separated out these responsibilities in clear-cut places, but what happens when someone&apos;s core skills sit right in the middle of a cut? What happens when someone is excellent at both ends of the spectrum? What happens when someone &lt;em&gt;wants&lt;/em&gt; to pursue an unorthodox skill set at work?&lt;/p&gt;
&lt;p&gt;How can companies start supporting employees that don&apos;t fit the traditional mold? Should roles and titles be more ambiguous?&lt;/p&gt;
&lt;p&gt;I don&apos;t have an answer to any of these, but I think it&apos;s certainly worth mulling over.&lt;/p&gt;
</content:encoded></item><item><title>Delayed::Job for outbound emails in Rails</title><link>https://nshki.com/delayed-job-for-outbound-emails-in-rails/</link><guid isPermaLink="true">https://nshki.com/delayed-job-for-outbound-emails-in-rails/</guid><description>A situational but noteworthy use case of Delayed::Job for outbound emails in Ruby on Rails.</description><pubDate>Fri, 20 Mar 2020 12:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Let&apos;s say you run an online e-commerce platform. You want to send out a reminder email to new merchants to fill out their business profile because shoppers care about who they&apos;re buying from. In Rails, that&apos;s pretty straightforward:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;MerchantMailer.profile_reminder(merchant.id).deliver_later \
  wait_until: 3.hours.from_now
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;But wait, we should make sure that the merchant doesn&apos;t already have their profile filled out when the reminder email is about to go out. Otherwise we&apos;d be sending an annoying, inapplicable email.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;class MerchantMailer &amp;lt; ActionMailer::Base
  # Reminder to fill out business profile.
  #
  # @param {Integer} merchant_id - Merchant record ID
  # @return {ActionMailer::MessageDelivery} - Email to deliver
  def profile_reminder(merchant_id)
    return unless (merchant = Merchant.find_by(id: merchant_id))

    return if merchant.filled_out_business_profile?

    # ...
  end
end
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Perfect! Or is it? This test fails:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;RSpec.describe MerchantMailer do
  describe &apos;#profile_reminder&apos; do
    it &quot;shouldn&apos;t run the method code until send time&quot; do
      expect(Merchant).to_not receive(:find_by)

      MerchantMailer.profile_reminder(1).deliver_later \
        wait_until: 3.hours.from_now
    end
  end
end
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Unfortunately this doesn&apos;t work because &lt;em&gt;mailer methods are run at the time we queue the job, not at the time of job processing&lt;/em&gt;. Our reminder emails will still always get sent even if our merchants filled out their profiles.&lt;/p&gt;
&lt;h2&gt;Enter Delayed::Job&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://github.com/collectiveidea/delayed_job/&quot;&gt;Delayed::Job&lt;/a&gt; is an excellent (and popular) gem that provides a bunch of tools for processing jobs. &lt;code&gt;#delay&lt;/code&gt; is a method added to all objects by Delayed::Job that we can use here.&lt;/p&gt;
&lt;p&gt;Instead of queueing up the reminder email, we can queue up a method that determines if the email should be sent. So in our controller or service object, we can add:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Queues a reminder to fill out the given merchant&apos;s business
# profile if they haven&apos;t already.
#
# @param {Integer} merchant_id - Merchant record ID
# @return {void}
def send_profile_reminder(merchant_id)
  return unless (merchant = Merchant.find_by(id: merchant_id))

  return if merchant.filled_out_business_profile?

  MerchantMailer.profile_reminder(merchant.id).deliver_later
end
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;And replace our original line where we queued the mailer with:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;delay(run_at: 3.hours.from_now).send_profile_reminder(merchant.id)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now what&apos;s queued isn&apos;t the email itself, but &lt;code&gt;send_profile_reminder&lt;/code&gt;, which will determine if an email should get queued at the time it&apos;s scheduled for.&lt;/p&gt;
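&lt;p&gt;The timing difference is the whole trick, and we can sketch it without Rails at all. In this toy version the &quot;queue&quot; is just an array of lambdas that gets drained later, standing in for Delayed::Job&apos;s real worker:&lt;/p&gt;

```ruby
# A toy simulation of queueing the *decision* rather than the email.
# Delayed::Job's real queue is a database table, but the timing
# behaviour is the same idea: nothing runs until the job is processed.
queue = []
profile_filled_out = false

# Queue the check. The lambda body is not evaluated yet.
queue.push(lambda do
  if profile_filled_out
    :skipped
  else
    :email_sent
  end
end)

# ...three hours pass, and the merchant fills out their profile...
profile_filled_out = true

# At processing time, the condition sees the *current* state.
results = queue.map { |job| job.call }
puts results.inspect # prints [:skipped]
```

&lt;p&gt;If we had queued the email itself, the decision would have been baked in three hours earlier, which is exactly the bug the failing spec above was pointing at.&lt;/p&gt;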
&lt;h2&gt;Closing thoughts&lt;/h2&gt;
&lt;p&gt;This is a very situational use of Delayed::Job. I used e-commerce just as an example, but this could apply to anything that warrants conditionals at the time of a queued job. Perhaps you want to email trial users about something, or maybe you want to check in with a patient about a medical condition. Whatever the use case, using &lt;code&gt;#delay&lt;/code&gt; in this manner is something that can help prevent an otherwise disgruntling bug.&lt;/p&gt;
</content:encoded></item><item><title>Concepts I needed to learn to setup a home server</title><link>https://nshki.com/concepts-i-needed-to-learn-to-setup-a-home-server/</link><guid isPermaLink="true">https://nshki.com/concepts-i-needed-to-learn-to-setup-a-home-server/</guid><description>(As a networking newbie.)</description><pubDate>Wed, 11 Mar 2020 12:00:00 GMT</pubDate><content:encoded>&lt;p&gt;I&apos;ve always wanted to set up a home server, but my networking knowledge has always held me back. I&apos;ve always been &lt;em&gt;aware&lt;/em&gt; of things like DHCP and port forwarding but have never configured anything meaningful with them, so they remained somewhat of a mystery to me.&lt;/p&gt;
&lt;p&gt;My goal here is pretty straightforward: configure a machine in my home so I can SSH into it over the Internet. I finally hunkered down and learned enough to accomplish this and was pleased to find this is actually very easy to do.&lt;/p&gt;
&lt;p&gt;Here are the concepts that helped me get there.&lt;/p&gt;
&lt;h2&gt;LAN, WAN, and static IP addresses&lt;/h2&gt;
&lt;p&gt;LAN stands for local area network, and it&apos;s the network of devices connected to my home router. WAN stands for wide area network, and it&apos;s the network of households being serviced by my Internet service provider.&lt;/p&gt;
&lt;p&gt;When I connect my server box to my home network, my router assigns it an IP address that looks something like 192.168.1.12. When that server box reconnects to the router from a restart or some kind of connectivity issue, the router assigns it an IP address that isn&apos;t necessarily the one it assigned earlier. This is called a dynamic IP address. DHCP (dynamic host configuration protocol) is what handles this process, but it&apos;s not crucial to know anything more about it for setting up a basic home server.&lt;/p&gt;
&lt;p&gt;A static IP address is, surprise surprise, one that the router will &lt;em&gt;always&lt;/em&gt; assign a particular device whenever it connects to the network.&lt;/p&gt;
&lt;p&gt;All I had to do to set up a static IP address for my machine was use my router&apos;s software. This differs from router to router, but it was very straightforward in my case. Get into my router&apos;s admin screens, find a menu option for IP addresses, find the relevant device, and assign it a static IP.&lt;/p&gt;
&lt;h2&gt;Port forwarding&lt;/h2&gt;
&lt;p&gt;This was more or less the gatekeeper concept that had previously kept me from setting up a home server. What the heck &lt;em&gt;is&lt;/em&gt; port forwarding?&lt;/p&gt;
&lt;p&gt;When SSHing into my device, I&apos;m essentially doing two things:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Pinging the IP address of my home network (this is my public IP via my WAN).&lt;/li&gt;
&lt;li&gt;Pinging the static IP address of my device (via my LAN) with a specific port number (in the case of SSH, it&apos;s 22).&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;For step 2 to be successful, I have to configure my router to know which local IP address to &quot;forward&quot; the request to when an SSH login is attempted. Also, for security reasons, I don&apos;t want to just forward port 22 from my public IP to my device IP since bad peeps might want to spam SSH login attempts if they get ahold of my public IP.&lt;/p&gt;
&lt;p&gt;This is, at its essence, what port forwarding does. It is a configuration at the router level that will forward requests from a public IP + port combination to a local IP + port combination.&lt;/p&gt;
&lt;p&gt;Again, the setup for this was pretty straightforward. I jumped back into my router&apos;s admin screens, found a menu option for port forwarding, picked an obscure port number and pointed it to my device&apos;s static IP address at port 22.&lt;/p&gt;
&lt;h2&gt;SSH software&lt;/h2&gt;
&lt;p&gt;Finally, the last piece here was to set up software on my device to properly accept SSH login requests at port 22. This will vary depending on which OS is being used, but in my case, I was using Ubuntu Linux, so I installed the &lt;code&gt;openssh-server&lt;/code&gt; package with:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;sudo apt update &amp;amp;&amp;amp; sudo apt install openssh-server
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;With that, this computer was ready to accept SSH logins using the username/password combinations of existing users. Most SSH server packages can also be configured to allow connections with SSH keys and disable passwords entirely.&lt;/p&gt;
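&lt;p&gt;With OpenSSH, for example, key-only logins come down to two standard options in &lt;code&gt;/etc/ssh/sshd_config&lt;/code&gt; (only flip these after confirming your key actually works, or you can lock yourself out):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# /etc/ssh/sshd_config
PubkeyAuthentication yes
PasswordAuthentication no
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;followed by &lt;code&gt;sudo systemctl restart ssh&lt;/code&gt; on Ubuntu to pick up the change.&lt;/p&gt;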
&lt;h2&gt;...magic&lt;/h2&gt;
&lt;p&gt;That was...it! Now I&apos;m able to bring around an iPad and write blog posts and code, which gives me a mild sorcerer complex, and it&apos;s wonderful. For extra kicks, you can add an A record with your DNS provider to point mahbox.mahdomain.com to your public IP.&lt;/p&gt;
</content:encoded></item><item><title>Test-driven reviews</title><link>https://nshki.com/test-driven-reviews/</link><guid isPermaLink="true">https://nshki.com/test-driven-reviews/</guid><description>My thoughts around using tests as the guideline for code reviews.</description><pubDate>Sat, 09 Nov 2019 12:00:00 GMT</pubDate><content:encoded>&lt;p&gt;You&apos;ve likely heard of test-driven development. It&apos;s a practice that involves writing automated tests before writing code that accomplishes the feature or objective you&apos;re working on. You want to write tests that fail at first and only pass when the implementation can be considered &quot;complete&quot;. It&apos;s a very powerful way of approaching software development.&lt;/p&gt;
&lt;p&gt;Over the past year or so, I&apos;ve been more heavily involved in code reviews at my job. I&apos;ve reviewed PRs ranging from 3-line code changes to 70+ changed files. Code reviews can be done in a lot of different ways, and each reviewer approaches them differently.&lt;/p&gt;
&lt;p&gt;In this post, I want to document a language-agnostic approach to code reviews that has helped me give meaningful feedback.&lt;/p&gt;
&lt;h2&gt;Review the Tests First&lt;/h2&gt;
&lt;p&gt;By reviewing the tests first, you can get solid context for what to expect in the application code. If you&apos;re still not sure what to look for after reviewing the tests, that&apos;s a great indication that the tests need to be rewritten. This is like the &quot;red&quot; step of TDD, except the failing signal is your own understanding of the tests.&lt;/p&gt;
&lt;p&gt;This is especially important for larger PRs (your team could try enforcing smaller PRs, but at some point, large PRs are somewhat unavoidable). Also, this is important regardless of test-driven reviews because it will be applicable whenever anybody reviews any test in your codebase.&lt;/p&gt;
&lt;p&gt;Some mindsets I&apos;ve found that really support this are:&lt;/p&gt;
&lt;h3&gt;1. Test the &quot;What&quot; and Not the &quot;How&quot;&lt;/h3&gt;
&lt;p&gt;This means writing tests that read, &quot;in this particular scenario, this input should result in this output&quot; rather than &quot;in this particular scenario, some particular function or method should be called&quot;. The more tightly coupled your tests are to the application code, the less naturally they read, and the worse they serve as a safeguard when refactoring.&lt;/p&gt;
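&lt;p&gt;A minimal sketch of the difference, using Minitest and a made-up &lt;code&gt;Cart&lt;/code&gt; class (both the class and the numbers are invented for illustration, not from any real codebase):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;require &quot;minitest/autorun&quot;

# Hypothetical class under test.
class Cart
  def initialize(prices)
    @prices = prices
  end

  def total
    @prices.sum
  end
end

class CartTest &amp;lt; Minitest::Test
  # Tests the &quot;what&quot;: a known input yields a known output.
  def test_total_sums_prices
    assert_equal 600, Cart.new([100, 200, 300]).total
  end

  # Avoids the &quot;how&quot;: no assertions that some internal method was called.
end
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If &lt;code&gt;Cart#total&lt;/code&gt; were later refactored internally, this test would keep passing -- which is exactly the safety net you want.&lt;/p&gt;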
&lt;h3&gt;2. Clever Tests Are Not Better Tests&lt;/h3&gt;
&lt;p&gt;It&apos;s tempting to write control structures and various helpers into tests to make them easier to write, but 9 times out of 10, I recommend against it. This is because the &quot;naive&quot; way of writing a test is generally easiest to understand whenever anybody revisits the test.&lt;/p&gt;
&lt;p&gt;In other words, write tests that are as dead simple as possible. Hardcode expectations instead of generating them. Repeating an operation with slight variations? Write each case out by hand instead of reaching for a loop.&lt;/p&gt;
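&lt;p&gt;As a sketch (the &lt;code&gt;slugify&lt;/code&gt; helper is invented for illustration), the hardcoded version reads like a table of facts rather than a program to be decoded:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;require &quot;minitest/autorun&quot;

# Hypothetical helper under test.
def slugify(title)
  title.strip.downcase.gsub(/\s+/, &quot;-&quot;)
end

class SlugifyTest &amp;lt; Minitest::Test
  # Every expectation is spelled out; no loop generates them.
  def test_known_titles
    assert_equal &quot;hello-world&quot;, slugify(&quot;Hello World&quot;)
    assert_equal &quot;a-b-c&quot;, slugify(&quot;  a b c  &quot;)
    assert_equal &quot;ruby&quot;, slugify(&quot;Ruby&quot;)
  end
end
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;When one of these fails, the broken case is right there in the diff -- no mental loop-unrolling required.&lt;/p&gt;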
&lt;h3&gt;3. Localize Tests As Much As Possible&lt;/h3&gt;
&lt;p&gt;By &quot;localize,&quot; I mean keep all crucial context within the test as much as possible. If a different part of the file needs to be looked at to fully understand a test, that&apos;s a hindrance. If an entirely different file needs to be loaded to fully understand a test, that&apos;s &lt;em&gt;definitely&lt;/em&gt; a hindrance.&lt;/p&gt;
&lt;p&gt;The goal of localizing test context is to reduce the time from &quot;open test file&quot; to &quot;understand test file&quot; as much as possible. While DRY principles are important in the application code, they can become harmful in the test suite.&lt;/p&gt;
&lt;h2&gt;Closing Thoughts&lt;/h2&gt;
&lt;p&gt;Well-written, easy-to-understand tests are just as crucial to an application as the application code itself. They enable not just your team, but also your future self, to iterate on the project more effectively.&lt;/p&gt;
&lt;p&gt;Code reviews are a great place to help evolve this sort of mindset in your team.&lt;/p&gt;
</content:encoded></item><item><title>Chusaku, a controller annotation gem</title><link>https://nshki.com/chusaku-a-controller-annotation-gem/</link><guid isPermaLink="true">https://nshki.com/chusaku-a-controller-annotation-gem/</guid><description>A write-up of a Rails controller annotation gem.</description><pubDate>Sat, 13 Jul 2019 12:00:00 GMT</pubDate><content:encoded>&lt;p&gt;I&apos;m an avid user of &lt;a href=&quot;https://github.com/ctran/annotate_models&quot;&gt;Annotate&lt;/a&gt;, a lovely gem written by &lt;a href=&quot;https://github.com/ctran&quot;&gt;Cuong Tran&lt;/a&gt; that annotates various Rails project files with summaries of the database schema. Without flipping back and forth between models, factories, and &lt;code&gt;schema.rb&lt;/code&gt;, the annotations allow me to see what columns are present in relevant tables at a quick glance.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://github.com/nshki/chusaku&quot;&gt;Chusaku&lt;/a&gt;, which is a play on the Japanese word 註釈, is a Rails controller annotation gem that I wrote a couple of months ago to scratch my own itch of wanting to see route info directly in controller files.&lt;/p&gt;
&lt;p&gt;Take for example the following:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;class TacosController &amp;lt; ApplicationController
  def create
    # ...
  end

  def destroy
    # ...
  end
end
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;We have a very simple controller that handles the creation and destruction (&quot;nomming&quot;) of delicious tacos. However, just by looking at this file, I have no idea if these actions correspond to any routes in my Rails project.&lt;/p&gt;
&lt;p&gt;With Chusaku, we can simply run:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;bundle exec chusaku
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;And our file would then look something like:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;class TacosController &amp;lt; ApplicationController
  # @route POST /make-myself-a-taco (create_taco)
  def create
    # ...
  end

  # @route DELETE /put-the-taco-in-my-tummy (eat_taco)
  def destroy
    # ...
  end
end
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;We can see the HTTP verb, path, and route name that corresponds to each action.&lt;/p&gt;
&lt;h2&gt;Strategy&lt;/h2&gt;
&lt;p&gt;As a programmer, I thought it would be fun to tackle this problem from scratch. How would I parse my controller files and annotate only the actions that have corresponding entries in my routes?&lt;/p&gt;
&lt;h3&gt;Step 1: Gather routes info from Rails&lt;/h3&gt;
&lt;p&gt;First, I needed to somehow parse a project&apos;s routes. The only way I knew how to quickly see a project&apos;s routes was with:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;bin/rake routes
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;But parsing CLI output for the purpose of this project seemed awful. After a bit of digging, I found that Rails &lt;em&gt;does&lt;/em&gt; expose its routes info in Ruby. This led to the creation of &lt;a href=&quot;https://github.com/nshki/chusaku/blob/master/lib/chusaku/routes.rb&quot;&gt;Chusaku::Routes&lt;/a&gt;.&lt;/p&gt;
&lt;h3&gt;Step 2: Parse controller files&lt;/h3&gt;
&lt;p&gt;While this step is fairly obvious, the details of it took me a second. I should open a file for reading, yes, but I also need to be able to tell if what I&apos;m looking at corresponds with a route in Rails.&lt;/p&gt;
&lt;p&gt;Using Chusaku::Routes, I know what routes are available to me, so how can I find relevant actions? Regular expressions. By extracting the name of an action, I can check to see if any route matches it.&lt;/p&gt;
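&lt;p&gt;The core of that check can be sketched in a few lines of plain Ruby (the route list and the file line below are stand-ins, not Chusaku&apos;s actual code):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Action names pulled from the routes (illustrative).
route_actions = [&quot;create&quot;, &quot;destroy&quot;]

# A line read from a controller file.
line = &quot;  def create&quot;

# Extract the method name, then see if any route matches it.
match = line.match(/^\s*def\s+(\w+)/)
annotate = match &amp;amp;&amp;amp; route_actions.include?(match[1])
&lt;/code&gt;&lt;/pre&gt;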
&lt;p&gt;Great.&lt;/p&gt;
&lt;p&gt;But now what? I want to insert an annotation above the action, but what if there is already a comment that exists? What if an annotation already exists?&lt;/p&gt;
&lt;p&gt;At this point, I decided that being able to categorize lines of a file into three buckets was necessary. The buckets are:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;em&gt;Action&lt;/em&gt;: a line that defines an action&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Comment&lt;/em&gt;: line(s) that are purely comments&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Code&lt;/em&gt;: anything else&lt;/li&gt;
&lt;/ul&gt;
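&lt;p&gt;A rough sketch of that bucketing (not Chusaku&apos;s actual implementation) could look like:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Categorize a single line into one of the three buckets.
def categorize(line)
  case line
  when /^\s*def\s+\w+/ then :action
  when /^\s*#/ then :comment
  else :code
  end
end

lines = [&quot;class TacosController&quot;, &quot;  # Makes a taco.&quot;, &quot;  def create&quot;]
lines.map { |l| categorize(l) } # =&amp;gt; [:code, :comment, :action]
&lt;/code&gt;&lt;/pre&gt;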
&lt;p&gt;This is how I pictured it in my head:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://nshki.com/assets/posts/chusaku-a-controller-annotation-gem/visualization.png&quot; alt=&quot;Visualization&quot; /&gt;&lt;/p&gt;
&lt;p&gt;This led to the creation of &lt;a href=&quot;https://github.com/nshki/chusaku/blob/master/lib/chusaku/parser.rb&quot;&gt;Chusaku::Parser&lt;/a&gt;.&lt;/p&gt;
&lt;h3&gt;Step 3: Annotate&lt;/h3&gt;
&lt;p&gt;Finally, I just want to write annotations above actions in each controller file where applicable. Using both Chusaku::Routes and Chusaku::Parser, I was able to insert annotations where necessary and handle edge cases where annotations already existed or were no longer needed.&lt;/p&gt;
&lt;p&gt;This leads us to the main module, &lt;a href=&quot;https://github.com/nshki/chusaku/blob/master/lib/chusaku.rb&quot;&gt;Chusaku&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;Closing Thoughts&lt;/h2&gt;
&lt;p&gt;Writing Chusaku was incredibly fun and I intend to continue using it for my Rails projects. You can see the repo and associated test suite &lt;a href=&quot;https://github.com/nshki/chusaku&quot;&gt;here&lt;/a&gt;. If you like the project, please consider starring it on GitHub, and if you see any problems, please file an issue or open a pull request!&lt;/p&gt;
</content:encoded></item><item><title>Testing is for all of us</title><link>https://nshki.com/testing-is-for-all-of-us/</link><guid isPermaLink="true">https://nshki.com/testing-is-for-all-of-us/</guid><description>My thoughts on why I think automated testing is a crucial part of any software team.</description><pubDate>Sat, 20 Apr 2019 12:00:00 GMT</pubDate><content:encoded>&lt;p&gt;If you&apos;re a software engineer, regardless of whether you studied CS in college, learned programming through the Internet, or went through a bootcamp, you&apos;ve heard it time and time again.&lt;/p&gt;
&lt;p&gt;Testing is important.&lt;/p&gt;
&lt;p&gt;But why? It&apos;s easy to hear those words over and over, but not quite grasp the significance of them without being in the real world. This is my take on why I think testing, and more specifically automated testing, is so important.&lt;/p&gt;
&lt;h3&gt;Quality Assurance (QA)&lt;/h3&gt;
&lt;p&gt;Quality assurance is a multi-faceted discipline whose primary objective is to ensure that product(s) are meeting high standards. In software, some of the large buckets of QA are:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Functional QA&lt;/strong&gt;: Does everything work properly?&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Design QA&lt;/strong&gt;: Does everything look right?&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Stress Testing&lt;/strong&gt;: Does it still run with high traffic?&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Security&lt;/strong&gt;: Is it difficult to compromise?&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Automation can help in every one of these facets. Integration tests can be put in place to catch regressions, CI platforms can be configured to output screenshots in different screen sizes, and scripts can greatly increase the server load and/or look for common security exploits in an application.&lt;/p&gt;
&lt;h3&gt;Time and Confidence&lt;/h3&gt;
&lt;p&gt;In the interest of time, there may be situations in which teams don&apos;t prioritize automated testing in order to get features out the door. This inevitably puts a lot of pressure on the team performing manual QA. If this trend continues, the company may hire more and more manual QA staff, at which point it becomes increasingly difficult to achieve satisfactory automated test coverage.&lt;/p&gt;
&lt;p&gt;The problem here is that there is a compounding opportunity cost.&lt;/p&gt;
&lt;p&gt;As a real-world example, the QA team that I work with can perform a full, manual regression checklist for an application in a full day. That checklist covers things like UI interactions, SMS and email notifications being sent out, etc. That same checklist, after it was all automated, &lt;em&gt;takes less than 5 minutes&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;Prior to automation, the QA team performed a full regression check before every release. Any defects were caught, logged, and assigned back to the engineering team. After automation, with continuous integration (CI) and continuous deployment (CD), any time the engineering team pushes code and something breaks, the team is notified and a deployment does not occur.&lt;/p&gt;
&lt;p&gt;A well-maintained automated test suite enables the team to find bugs faster, gives back an incredible amount of time to help them focus on things they couldn&apos;t have before, and increases the confidence of the entire team. No matter how incredible someone is at manual QA, at the end of the day, they are human, and humans can miss things. Machines can be more consistent at finding defects.&lt;/p&gt;
&lt;h3&gt;A Function of Engineering&lt;/h3&gt;
&lt;p&gt;I think that it&apos;s crucial to have engineering teams write automated tests for the features they develop. They have the context necessary to fully automate test cases since they are in the code on a day-to-day basis. Things like unit tests and API tests would be incredibly difficult for an external team to write.&lt;/p&gt;
&lt;p&gt;Furthermore, regardless of whether a team follows test-driven development or not, automated tests will help engineers catch defects in their code before it&apos;s released into the wild, and perhaps just as importantly, enables refactoring in confidence.&lt;/p&gt;
&lt;p&gt;This leads me to believe that QA should be a function of engineering. At the end of the day, QA engineers are exactly that, &lt;em&gt;engineers&lt;/em&gt;. Automated test suites need maintenance, just like any other piece of software. QA engineers are necessary to ensure the suite is well-oiled by optimizing reliability (fixing flaky tests), improving runtimes, and refactoring.&lt;/p&gt;
&lt;h3&gt;What About Manual QA?&lt;/h3&gt;
&lt;p&gt;Automated testing does not rule out manual QA by any means. It massively complements it. Because of the time it gives back to the team, QA staff can focus more on things like design QA* and discovering bugs through new edge cases rather than having users find them in the wild (and angrily reporting them).&lt;/p&gt;
&lt;p&gt;*Some teams prefer to have their design teams perform design QA, and that makes perfect sense.&lt;/p&gt;
&lt;h3&gt;Closing Thoughts&lt;/h3&gt;
&lt;p&gt;Automated testing should absolutely be a priority for any software team. It increases developer productivity, saves incredible amounts of time for QA staff, and instills confidence in the entire team that the software works correctly and looks right, every time.&lt;/p&gt;
</content:encoded></item><item><title>Reflections on a custom &quot;Component&quot;</title><link>https://nshki.com/reflections-on-a-custom-component/</link><guid isPermaLink="true">https://nshki.com/reflections-on-a-custom-component/</guid><description>My thoughts on defining a JavaScript pattern that I called &quot;component&quot; from pre-React days.</description><pubDate>Sun, 10 Mar 2019 12:00:00 GMT</pubDate><content:encoded>&lt;p&gt;&lt;em&gt;This post is a collection of my thoughts, both good and bad, on defining a JavaScript pattern that I called &quot;Component&quot; prior to adopting React.&lt;/em&gt;&lt;/p&gt;
&lt;h3&gt;Background&lt;/h3&gt;
&lt;p&gt;In 2016, my interest in front end frameworks like React started growing, but I hadn&apos;t committed to learning one yet. My day-to-day was heavily in Ruby on Rails, and I was a big fan of CoffeeScript for its likeness to Ruby and its ES6-like features before tooling like Babel and Webpacker really took off.&lt;/p&gt;
&lt;p&gt;I loved the idea of componentizing an application but was still married to the more traditional idea of separation of concerns, where markup, styles, and behavior should be separate. The spark to writing my &quot;Component&quot; was the question, &quot;How can I maximize the reusability of JavaScript while isolating functionality like lego pieces?&quot;&lt;/p&gt;
&lt;h3&gt;The Component Class&lt;/h3&gt;
&lt;p&gt;Before moving on, here&apos;s a slightly stripped down version of what the &lt;code&gt;Component&lt;/code&gt; class looked like in a project that used jQuery:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# component.coffee

class Component
  # @param {String} name
  # @param {Array&amp;lt;String&amp;gt;} targets
  # @param {Function} functionality
  # @return {void}
  constructor: (name, targets = [], functionality) -&amp;gt;
    selector = &quot;[data-component=&apos;#{name}&apos;]&quot;

    $(selector).each -&amp;gt;
      config =
        element: this
        selector: selector

      # Register target selectors.
      targets.forEach (t) -&amp;gt;
        targetAttr = &quot;data-#{name}-#{t}&quot;
        targetName = $(config.element).attr(targetAttr)
        config[t] = &quot;[data-#{name}-target=&apos;#{targetName}&apos;]&quot;

      # Run passed function with config object.
      functionality(config)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;At its core, it really isn&apos;t a lot of code, but it allows for a way to define reusable, isolated functionality in a way that is incorporated into markup purely through &lt;code&gt;data&lt;/code&gt; attributes. In practice, it was very similar to using something like &lt;a href=&quot;https://acss.io/&quot;&gt;Atomic CSS&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;An instance of &lt;code&gt;Component&lt;/code&gt; takes three arguments:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;name&lt;/code&gt;: A string that gets used to identify base elements that use this functionality.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;targets&lt;/code&gt;: An array of strings that define other elements that are affected by this functionality.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;functionality&lt;/code&gt;: A function that runs for every base element that uses this functionality.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Component in Action&lt;/h3&gt;
&lt;p&gt;An example usage of &lt;code&gt;Component&lt;/code&gt; to be able to easily show/hide elements via an &lt;code&gt;.is-active&lt;/code&gt; CSS class would be something like:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# toggler.coffee

new Component &apos;toggler&apos;, [&apos;toggleable&apos;], (c) -&amp;gt;
  $(c.element).on &apos;click&apos;, () -&amp;gt;
    $(c.toggleable).toggleClass(&apos;is-active&apos;)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This instance defines a new &lt;code&gt;Component&lt;/code&gt; called &lt;code&gt;toggler&lt;/code&gt; that toggles an &lt;code&gt;.is-active&lt;/code&gt; CSS class on its &lt;code&gt;toggleable&lt;/code&gt; target on click.&lt;/p&gt;
&lt;p&gt;The function that we pass as an argument to the constructor can reference each instance of a &lt;code&gt;toggler&lt;/code&gt; via &lt;code&gt;c.element&lt;/code&gt; and can also reference each &lt;code&gt;toggler&lt;/code&gt;&apos;s unique target via &lt;code&gt;c.toggleable&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;That&apos;s it. That&apos;s all the JS we need for this functionality. Now for the markup:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;&amp;lt;button
  data-component=&quot;toggler&quot;
  data-toggler-toggleable=&quot;toggleable-a&quot;
&amp;gt;
  Show/Hide A
&amp;lt;/button&amp;gt;

&amp;lt;button
  data-component=&quot;toggler&quot;
  data-toggler-toggleable=&quot;toggleable-b&quot;
&amp;gt;
  Show/Hide B
&amp;lt;/button&amp;gt;

&amp;lt;div data-toggler-target=&quot;toggleable-a&quot;&amp;gt;
  &amp;lt;p&amp;gt;A: I can be shown or hidden!&amp;lt;/p&amp;gt;
&amp;lt;/div&amp;gt;

&amp;lt;div data-toggler-target=&quot;toggleable-b&quot;&amp;gt;
  &amp;lt;p&amp;gt;B: I can be shown or hidden!&amp;lt;/p&amp;gt;
&amp;lt;/div&amp;gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;We create two &lt;code&gt;&amp;lt;button&amp;gt;&lt;/code&gt; elements that will toggle different elements on click by giving them &lt;code&gt;data-component=&quot;toggler&quot;&lt;/code&gt; attributes. We know which elements will be toggled by each button via their &lt;code&gt;data-toggler-toggleable&lt;/code&gt; attributes, which have unique values. Finally, we create two &lt;code&gt;&amp;lt;div&amp;gt;&lt;/code&gt;s that are targets of each button, denoted by the &lt;code&gt;data-toggler-target&lt;/code&gt; attribute and a unique value that matches with the &lt;code&gt;data-toggler-toggleable&lt;/code&gt; attribute on the base element.&lt;/p&gt;
&lt;p&gt;Obviously, the CSS also needs to be written, but assuming it is, we&apos;ve successfully defined what a &lt;code&gt;toggler&lt;/code&gt; is and we can sprinkle it throughout the application by just defining some &lt;code&gt;data&lt;/code&gt; attributes.&lt;/p&gt;
&lt;p&gt;Here&apos;s an accompanying pen:&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://codepen.io/nshki_/embed/vPJGym/?height=265&amp;amp;theme-id=0&amp;amp;default-tab=js,result&quot;&gt;Component on CodePen&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;The Good&lt;/h3&gt;
&lt;p&gt;This pattern allowed the team I was with at the time to write very modular pieces of functionality that were usable across multiple projects. We dropped in CoffeeScript files, attached some &lt;code&gt;data&lt;/code&gt; attributes, and voila, things worked.&lt;/p&gt;
&lt;p&gt;This was a huge win.&lt;/p&gt;
&lt;p&gt;Not only did it reduce development time on individual projects, it created an opportunity for the team to define and maintain a library of functionality that could be tested in isolation and baked into a company project starter.&lt;/p&gt;
&lt;p&gt;This also made it so it was possible to identify all DOM elements on a page that were being used for particular functionalities, which seemed like one step toward the right direction in a traditional separation of concerns mindset.&lt;/p&gt;
&lt;h3&gt;The Bad&lt;/h3&gt;
&lt;p&gt;The number one biggest problem with this pattern was that I neglected to write documentation.&lt;/p&gt;
&lt;p&gt;One of the projects that &lt;code&gt;Component&lt;/code&gt; was used in got passed off entirely to the client, who was building its own engineering team at the time. Unfortunately, the hand-off timeframe was criminally short, and I hadn&apos;t written any decent documentation or had enough time to bring the client team up-to-speed on the pattern.&lt;/p&gt;
&lt;p&gt;Needless to say, in a fast-paced environment, the pattern just created a swamp of uncertainty and introduced bugs on bugs on bugs.&lt;/p&gt;
&lt;p&gt;I felt an insidious amount of guilt.&lt;/p&gt;
&lt;p&gt;Additionally, while the pattern worked as intended internally, it &lt;em&gt;did&lt;/em&gt; make the markup gain some serious weight. Parsing through mountains of &lt;code&gt;data&lt;/code&gt; attributes is a strenuous process, and a new breed of bugs was created where attribute value typos had serious consequences.&lt;/p&gt;
&lt;h3&gt;Closing Thoughts&lt;/h3&gt;
&lt;p&gt;Overall, I&apos;m glad I went through the motions of writing this pattern. Not only did it create real value for my team at the time, but it gave me some perspective and great empathy for the developers in the world that create tooling for other developers.&lt;/p&gt;
&lt;p&gt;Moving forward, I want to be more considerate of how easily other developers can use something I write. An intuitive API as well as well-maintained documentation is just as important as the tool itself.&lt;/p&gt;
&lt;p&gt;As a side note, when &lt;a href=&quot;https://stimulusjs.org/&quot;&gt;Stimulus&lt;/a&gt; was released by Basecamp, I felt much better about some of the design decisions I made when writing &lt;code&gt;Component&lt;/code&gt;, since the concept of targets was extremely similar.&lt;/p&gt;
</content:encoded></item><item><title>Getting into open source</title><link>https://nshki.com/getting-into-open-source/</link><guid isPermaLink="true">https://nshki.com/getting-into-open-source/</guid><description>My thoughts and experiences around getting involved in open source software through a project called if me.</description><pubDate>Sat, 09 Feb 2019 12:00:00 GMT</pubDate><content:encoded>&lt;p&gt;&lt;em&gt;These are my thoughts and experiences around getting involved in open source software through a project called &lt;a href=&quot;https://www.if-me.org/&quot;&gt;if me&lt;/a&gt;. This is for anyone who has wanted to get into open source but hasn&apos;t yet.&lt;/em&gt;&lt;/p&gt;
&lt;h3&gt;Why Open Source?&lt;/h3&gt;
&lt;p&gt;I&apos;ve been intrigued by open source for years. Some in the developer community attribute a lot of their learnings and successes to open source, and I use open source software on the daily. I&apos;ve also always wondered what a software-building environment that isn&apos;t purely driven by profits looked like.&lt;/p&gt;
&lt;p&gt;Curiosity was my main drive. I just never knew where to start.&lt;/p&gt;
&lt;h3&gt;Finding My First Open Source Project&lt;/h3&gt;
&lt;p&gt;I first determined a couple key factors I was looking for in a potential project to contribute to:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;em&gt;It must be software for good&lt;/em&gt;. In other words, I&apos;m only interested if it&apos;s trying to better people&apos;s lives in an obvious way.&lt;/li&gt;
&lt;li&gt;&lt;em&gt;It must use the technologies I love&lt;/em&gt;. I could&apos;ve looked for a project in a stack I&apos;m not familiar with to learn it, but I was interested in developing a deeper knowledge for what I already knew and loved (Ruby on Rails).&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Obviously, these are different for everybody, but knowing my priorities made finding a great fit easier.&lt;/p&gt;
&lt;p&gt;I searched for various lists of open source projects with this criteria and ended up just browsing GitHub repositories by topic. &lt;a href=&quot;https://www.if-me.org/&quot;&gt;if me&lt;/a&gt; was a project that was featured in GitHub&apos;s social impact collection that caught my eye immediately. Not only was it a Rails project, but it was tackling a meaningful issue: mental health.&lt;/p&gt;
&lt;p&gt;At the time, the project&apos;s documentation encouraged interested contributors to reach out via email, so I sent one explaining my interest and willingness to contribute. The founder of the project, &lt;a href=&quot;https://mobile.twitter.com/fleurchild&quot;&gt;Julia Nguyen&lt;/a&gt;, responded promptly and added me to the organization&apos;s Slack. While having a Slack group wasn&apos;t 100% necessary, it has been an amazing tool to get to know the other contributors better outside of GitHub discussions.&lt;/p&gt;
&lt;h3&gt;First Contribution&lt;/h3&gt;
&lt;p&gt;My first contribution ended up actually not being code at all. It was design.&lt;/p&gt;
&lt;p&gt;Julia laid out various areas I could start tackling as a first-time contributor. These were all documented as GitHub issues in the project repo as well. One of them was assisting with an app redesign, which in my mind affected all other contribution opportunities.&lt;/p&gt;
&lt;p&gt;I expressed interest in starting the redesign efforts, got a thumbs up, and spent spare afternoons and evenings working on mockups. Once a rough direction started taking shape, &lt;a href=&quot;https://github.com/ifmeorg/ifme/issues/691&quot;&gt;I opened a new GitHub issue&lt;/a&gt; to share.&lt;/p&gt;
&lt;p&gt;It was great collecting feedback from other contributors via Slack and GitHub to iterate on the design. Because open source software brings people of diverse backgrounds together, I was exposed to a lot of insights I wouldn&apos;t have thought of by myself in a silo or even in a company.&lt;/p&gt;
&lt;p&gt;The design was eventually implemented by the community and is continually being iterated on.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://nshki.com/assets/posts/getting-into-open-source/ifme.png&quot; alt=&quot;if me redesign&quot; /&gt;&lt;/p&gt;
&lt;h3&gt;The Community Is the Most Important Part&lt;/h3&gt;
&lt;p&gt;As I engaged more with the project, I realized that the community of contributors really defined the open source experience for me.&lt;/p&gt;
&lt;p&gt;Julia is extremely welcoming as are all the other contributors. Discussions can include everything from helping people determine their first GitHub issue to chatting about how Neopets kick-started our careers.&lt;/p&gt;
&lt;p&gt;if me has a culture of empowering contributors to lead initiatives. The implementation of my design corresponded with the adoption of React in the codebase, so I was encouraged to create GitHub issues for individual components and review pull requests for them. This taught me that contributing code isn&apos;t the only way to grow as an engineer -- reviewing contributions and communicating effectively are just as, if not more, impactful.&lt;/p&gt;
&lt;p&gt;I eventually got around to contributing code to the project as well. Refactoring efforts were a large part of what I worked on so far, and it gave me insights into how large teams effectively architect and maintain codebases.&lt;/p&gt;
&lt;p&gt;During this time, a team from &lt;a href=&quot;https://railsgirlssummerofcode.org/&quot;&gt;Rails Girls Summer of Code&lt;/a&gt; got involved with contributing to the project. &lt;a href=&quot;https://mobile.twitter.com/ifmeorg/status/1045722350291832832&quot;&gt;I had the chance&lt;/a&gt; to work with two incredibly talented individuals, Atibhi and Prateksha, from Bangalore, India through various pull requests and discussions. Helping them in their journey to gain more engineering experience alongside other mentors was an invaluable experience for me as well.&lt;/p&gt;
&lt;p&gt;My most recent contributions involve reviewing Japanese translations, which is something that I&apos;ve never had the opportunity to help with at a day job.&lt;/p&gt;
&lt;h3&gt;Closing Thoughts&lt;/h3&gt;
&lt;p&gt;Getting involved with open source was one of the best decisions I&apos;ve made for my career. I&apos;ve learned that open source is not just about contributing code -- filing issues, creating designs, reviewing contributions, mentoring others, being mentored, having discussions with incredible people, and being part of a community can all be a part of the open source experience as well.&lt;/p&gt;
&lt;p&gt;There&apos;s really nothing stopping you from starting your open source journey if you&apos;re interested. Give it a shot sometime, and I bet it&apos;ll be full of pleasant surprises.&lt;/p&gt;
</content:encoded></item><item><title>ES6 in Gulp projects</title><link>https://nshki.com/es6-in-gulp-projects/</link><guid isPermaLink="true">https://nshki.com/es6-in-gulp-projects/</guid><description>How to setup a Gulp project to use ES6 and still use libraries such as jQuery.</description><pubDate>Tue, 29 Jan 2019 12:00:00 GMT</pubDate><content:encoded>&lt;p&gt;&lt;em&gt;This post covers how to setup a Gulp project to use ES6 and also accounts for usage of libraries such as jQuery.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Update as of July 19th, 2019&lt;/strong&gt;: Thanks to &lt;a href=&quot;https://twitter.com/regiscamimura&quot;&gt;@regiscamimura&lt;/a&gt; for pointing out that the Rollup NPM package is now required for this process to work. The article has been updated to reflect that!&lt;/p&gt;
&lt;h3&gt;Preface&lt;/h3&gt;
&lt;p&gt;The front end development landscape is changing extremely quickly. React, Redux, Vue, Vuex, Webpack, and SPAs are just a few of the hot phrases you&apos;ll see today. However, many teams and individuals still use &lt;a href=&quot;https://gulpjs.com/&quot;&gt;Gulp&lt;/a&gt; -- it&apos;s a tried and true build tool, and sometimes it makes operational sense to keep using it.&lt;/p&gt;
&lt;p&gt;Even if adopting React, Vue, or some other front end framework is not a viable option for some, adopting ES6 certainly is. ES6 (ECMAScript 2015) is a major iteration of JavaScript that introduces many new features, and while not all browsers have implemented it fully, they are certainly on their way.&lt;/p&gt;
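&lt;p&gt;As a quick taste, here are a few of those features side by side (a minimal sketch that runs in any modern browser console or in Node):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;// Arrow functions and template literals.
const greet = (name) =&amp;gt; `Hello, ${name}!`;

// Destructuring with rest elements.
const [first, ...rest] = [1, 2, 3];

// Shorter callbacks for array methods.
const squares = [1, 2, 3].map((n) =&amp;gt; n * n);

console.log(greet(&apos;ES6&apos;)); // Hello, ES6!
&lt;/code&gt;&lt;/pre&gt;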
&lt;p&gt;I won&apos;t make any assumptions about your current setup, so this post will set up a Gulp configuration from scratch so that you can cherry-pick missing pieces and start writing ES6 in your projects today.&lt;/p&gt;
&lt;h3&gt;Set Up NPM and Dependencies&lt;/h3&gt;
&lt;p&gt;If you haven&apos;t already &lt;a href=&quot;https://nodejs.org/en/&quot;&gt;installed Node and NPM&lt;/a&gt; on your system, do so before proceeding.&lt;/p&gt;
&lt;p&gt;Let&apos;s set up &lt;a href=&quot;https://www.npmjs.com/&quot;&gt;NPM&lt;/a&gt; so you have access to all the packages we&apos;ll be using.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;npm init
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Follow the prompts, and you&apos;ll have a &lt;code&gt;package.json&lt;/code&gt; in your directory. Now let&apos;s install some development dependencies.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;npm install --save-dev @babel/core gulp gulp-better-rollup rollup rollup-plugin-babel rollup-plugin-node-resolve rollup-plugin-commonjs
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;That&apos;s a mouthful. What did we just install?&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://www.npmjs.com/package/@babel/core&quot;&gt;@babel/core&lt;/a&gt; -- A tool that transpiles ES6 code into JavaScript that any browser will understand.&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.npmjs.com/package/gulp&quot;&gt;gulp&lt;/a&gt; -- Our build tool. :)&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.npmjs.com/package/gulp-better-rollup&quot;&gt;gulp-better-rollup&lt;/a&gt; -- A Gulp plugin that allows us to use &lt;a href=&quot;https://rollupjs.org/guide/en&quot;&gt;Rollup&lt;/a&gt;, a module bundler that allows us to use ES6 imports and exports in our code.&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.npmjs.com/package/rollup&quot;&gt;rollup&lt;/a&gt; -- The module bundler referenced above.&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.npmjs.com/package/rollup-plugin-babel&quot;&gt;rollup-plugin-babel&lt;/a&gt; -- A Rollup plugin that integrates Babel into the bundling process.&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.npmjs.com/package/rollup-plugin-node-resolve&quot;&gt;rollup-plugin-node-resolve&lt;/a&gt; -- A Rollup plugin that allows us to use third party modules in &lt;code&gt;node_modules/&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.npmjs.com/package/rollup-plugin-commonjs&quot;&gt;rollup-plugin-commonjs&lt;/a&gt; -- A Rollup plugin that converts &lt;a href=&quot;https://en.wikipedia.org/wiki/CommonJS&quot;&gt;CommonJS&lt;/a&gt; modules to ES6 so we can import them without issues.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Configure Gulp&lt;/h3&gt;
&lt;p&gt;Install the Gulp command line tool if you haven&apos;t done so already:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;npm install --global gulp-cli
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now let&apos;s create and configure our &lt;code&gt;gulpfile.js&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;const gulp = require(&apos;gulp&apos;);
const rollup = require(&apos;gulp-better-rollup&apos;);
const babel = require(&apos;rollup-plugin-babel&apos;);
const resolve = require(&apos;rollup-plugin-node-resolve&apos;);
const commonjs = require(&apos;rollup-plugin-commonjs&apos;);

gulp.task(&apos;scripts&apos;, () =&amp;gt; {
  return gulp.src(&apos;js/*.js&apos;)
    .pipe(rollup({ plugins: [babel(), resolve(), commonjs()] }, &apos;umd&apos;))
    .pipe(gulp.dest(&apos;dist/&apos;));
});
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This configuration defines a new Gulp task called &lt;code&gt;scripts&lt;/code&gt;, which will run with &lt;code&gt;gulp scripts&lt;/code&gt;. It looks in a directory called &lt;code&gt;js/&lt;/code&gt; for any &lt;code&gt;.js&lt;/code&gt; files, runs Rollup against them, and outputs the result in a directory called &lt;code&gt;dist/&lt;/code&gt;. You can change the &lt;code&gt;gulp.src&lt;/code&gt; and &lt;code&gt;gulp.dest&lt;/code&gt; to what&apos;s appropriate for your project.&lt;/p&gt;
&lt;p&gt;We configure Rollup to use Babel, allow for importing third party modules from &lt;code&gt;node_modules/&lt;/code&gt;, and allow importing any modules written in CommonJS. We specify an output format of &lt;code&gt;umd&lt;/code&gt;, which is compatible with the most environments (browser, Node, etc.).&lt;/p&gt;
&lt;p&gt;Go ahead and drop a JavaScript file in &lt;code&gt;js/&lt;/code&gt; and run &lt;code&gt;gulp scripts&lt;/code&gt;. It&apos;ll output a transpiled file in &lt;code&gt;dist/&lt;/code&gt; -- a good first step! Of course, you&apos;ll probably want just one JavaScript file in the root of the &lt;code&gt;js/&lt;/code&gt; directory and import the rest with ES6 syntax.&lt;/p&gt;
&lt;h3&gt;What If I Still Need To Use jQuery?&lt;/h3&gt;
&lt;p&gt;&lt;a href=&quot;https://jquery.com/&quot;&gt;jQuery&lt;/a&gt; is still widely used, and it can be too much effort to phase it out of an existing system, especially if a million jQuery plugins are in play. If you can&apos;t abandon jQuery and/or want to keep all front end dependencies managed through NPM (as opposed to CDNs or keeping them in the project repo), then read on.&lt;/p&gt;
&lt;p&gt;Let&apos;s add jQuery as a project dependency.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;npm install jquery
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now in a JavaScript file, we can use it like so:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;import $ from &apos;jquery&apos;;

$(document).ready(() =&amp;gt; {
  console.log(&apos;Look ma, no CDNs!&apos;);
});
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;That&apos;s great. However, when there are jQuery plugins that depend on a globally available jQuery object, things get weird. As an example, I&apos;ll use a plugin called &lt;a href=&quot;https://www.npmjs.com/package/@fancyapps/fancybox&quot;&gt;fancyBox&lt;/a&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;import $ from &apos;jquery&apos;;

// This looks in the `node_modules/` directory thanks to
// rollup-plugin-node-resolve. We can find the path to the minified
// JavaScript by manually looking in the `node_modules/` directories
// ourselves.
//
// We don&apos;t assign the import to any variable, since most jQuery
// plugins simply extend jQuery by adding new methods developers
// can use in their code. e.g. $(&apos;.my-element&apos;).myPluginMethod();
//
// Also, per ES6 imports, we can omit the `.js` at the end.
import &apos;@fancyapps/fancybox/dist/jquery.fancybox.min&apos;;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If we run this through Gulp and load it on a web page, we get a JavaScript error:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;ReferenceError: jQuery is not defined
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Boo. Why?&lt;/p&gt;
&lt;p&gt;A good first guess might be because we simply imported jQuery as a variable &lt;code&gt;$&lt;/code&gt;, not as &lt;code&gt;jQuery&lt;/code&gt;. We could try rewriting the first line, perhaps?&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;// This won&apos;t fix the error.
import jQuery from &apos;jquery&apos;;
import &apos;@fancyapps/fancybox/dist/jquery.fancybox.min&apos;;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Unfortunately, this results in the same error. This is because jQuery plugins look for a globally defined &lt;code&gt;jQuery&lt;/code&gt; object at runtime, and our transpiled files will always be protected from polluting the global scope (this is a good thing).&lt;/p&gt;
&lt;p&gt;As a second stab, we could try this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;// This also won&apos;t fix the error.
import $ from &apos;jquery&apos;;
window.jQuery = $;
import &apos;@fancyapps/fancybox/dist/jquery.fancybox.min&apos;;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Alas, same error. Why would assigning jQuery to the global scope not work either? Because Rollup processes all imports first, which means that in our transpiled file, all of our imported modules are output &lt;em&gt;before&lt;/em&gt; any of the JavaScript we write.&lt;/p&gt;
&lt;p&gt;In this case, that means our &lt;code&gt;window.jQuery = $;&lt;/code&gt; appears &lt;em&gt;after&lt;/em&gt; our jQuery and fancyBox code.&lt;/p&gt;
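&lt;p&gt;To make that ordering concrete, the bundled file is laid out roughly like this (a simplified sketch, not Rollup&apos;s actual output):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;// 1. jQuery&apos;s module code (hoisted import).
// 2. fancyBox&apos;s plugin code (hoisted import) -- it looks for
//    window.jQuery here and throws, because...
// 3. ...our own code, including `window.jQuery = $;`, only runs now.
&lt;/code&gt;&lt;/pre&gt;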
&lt;p&gt;With this knowledge, I propose this final solution:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;// js/config/jqueryLoad.js
import $ from &apos;jquery&apos;;
window.$ = $;
window.jQuery = $;

// js/index.js
import &apos;./config/jqueryLoad&apos;;
import &apos;@fancyapps/fancybox/dist/jquery.fancybox.min&apos;;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;With this multifile approach, we&apos;re doing a couple things:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;We delegate the importing of jQuery to a separate file &lt;code&gt;jqueryLoad.js&lt;/code&gt; that also assigns &lt;code&gt;$&lt;/code&gt; and &lt;code&gt;jQuery&lt;/code&gt; in the global scope.&lt;/li&gt;
&lt;li&gt;We import our newly created config file before any plugins.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Now when Rollup transpiles our root JavaScript files, it will output our imported code first, meaning that the contents of &lt;code&gt;jqueryLoad.js&lt;/code&gt; will appear before any of the jQuery plugin code.&lt;/p&gt;
&lt;p&gt;You may now open your bottle of champagne.&lt;/p&gt;
&lt;h3&gt;Closing Thoughts&lt;/h3&gt;
&lt;p&gt;There&apos;s nothing wrong with using tools that aren&apos;t the latest and greatest, especially if there are organizational and operational reasons for it. At the same time, it&apos;s nice to have a little taste of the future as well.&lt;/p&gt;
&lt;p&gt;I hope this can come in handy for some of you. Code on.&lt;/p&gt;
</content:encoded></item><item><title>Vim, one year in</title><link>https://nshki.com/vim-one-year-in/</link><guid isPermaLink="true">https://nshki.com/vim-one-year-in/</guid><description>Reflecting on my decision to exclusively use Vim one year ago.</description><pubDate>Sat, 26 Jan 2019 12:00:00 GMT</pubDate><content:encoded>&lt;p&gt;I started exclusively using &lt;a href=&quot;https://www.vim.org/&quot;&gt;Vim&lt;/a&gt; as my editor a little over a year ago.&lt;/p&gt;
&lt;p&gt;This wasn&apos;t a decision I made lightly. I was heavily invested in the &lt;a href=&quot;https://atom.io/&quot;&gt;Atom&lt;/a&gt; editor ecosystem at the time, but was also transitioning into using a Chromebook as my primary machine. Linux apps support was not yet released, so I experimented with multiple ways of viably programming on Chrome OS.&lt;/p&gt;
&lt;p&gt;I eventually committed to using Vim since it&apos;s an editor supported in virtually any environment. At the time, I figured I could program on a VPS from my Chromebook to stay true to having literally everything in the cloud. I&apos;d also always been intrigued by Vim since I&apos;d read that it&apos;s a skill you can develop over time, with a huge productivity boost as a result.&lt;/p&gt;
&lt;p&gt;As a fallback, if I hated it, I could just run Ubuntu on my Chromebook and go back to my old ways.&lt;/p&gt;
&lt;p&gt;Turns out I didn&apos;t hate one bit of it.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://nshki.com/assets/posts/vim-one-year-in/setup.png&quot; alt=&quot;My Vim setup&quot; /&gt;&lt;/p&gt;
&lt;p&gt;Shown here is what my current Vim setup looks like. It&apos;s technically &lt;a href=&quot;https://neovim.io/&quot;&gt;Neovim&lt;/a&gt;, but I still count it as Vim, and I have &lt;a href=&quot;https://github.com/tmux/tmux/wiki&quot;&gt;Tmux&lt;/a&gt; in the mix as well.&lt;/p&gt;
&lt;p&gt;I type around 120 WPM. When I have to stop and move a mouse, it really breaks my flow. It took me a couple days to get used to awkwardly navigating around with only my keyboard, but once I got somewhat of a grasp on that, I could already tell I&apos;d be able to work faster than before.&lt;/p&gt;
&lt;p&gt;What &lt;em&gt;really&lt;/em&gt; got me, though, was when I discovered how to change everything within parentheses. With the help of a few keystrokes -- &lt;code&gt;ci(&lt;/code&gt; -- I was able to change something like this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;best_flavor(ice_cream: &apos;chocolate chip cookie dough ice cream&apos;, bubble_tea: &apos;taro lychee fancy schmancy stuff&apos;)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;...to this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;best_flavor(everything: &apos;vanilla&apos;)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Like everyone else, I was intimidated by what I thought were mountains of keyboard shortcuts to start using Vim even half effectively. To my surprise though, they really weren&apos;t that hard to remember at all.&lt;/p&gt;
&lt;p&gt;Most key combinations in Vim are mnemonic. Take &lt;code&gt;ci(&lt;/code&gt; from earlier as an example: it literally reads as &quot;&lt;strong&gt;c&lt;/strong&gt;hange &lt;strong&gt;i&lt;/strong&gt;n &lt;strong&gt;(&lt;/strong&gt;&quot;. Even the keys used to navigate correspond to English words, such as &lt;code&gt;w&lt;/code&gt; for &lt;strong&gt;w&lt;/strong&gt;ord and &lt;code&gt;b&lt;/code&gt; for &lt;strong&gt;b&lt;/strong&gt;ack. Vim commands really feel like you&apos;re talking to your editor, and that makes it fun to use.&lt;/p&gt;
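&lt;p&gt;A few more combinations in the same spirit (all standard Vim, no plugins needed):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;dw   &quot; delete from the cursor to the next word
ci&quot;  &quot; change in &quot;double quotes&quot;
yap  &quot; yank (copy) around a paragraph
&lt;/code&gt;&lt;/pre&gt;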
&lt;p&gt;As far as functionality goes, I just had to spend a little time finessing my config and I had everything that I had in Atom and more. Vim has an incredibly rich plugin ecosystem, and the minor tweaks needed to make it look more like a modern editor were very easy. My &lt;code&gt;.vimrc&lt;/code&gt; is &lt;a href=&quot;https://github.com/nshki/dotfiles&quot;&gt;open sourced on GitHub&lt;/a&gt; if you&apos;re curious!&lt;/p&gt;
&lt;p&gt;The one bit that took me a while to fully comprehend was the concept of buffers. I&apos;ve always seen Vim screenshots with multiple files open so I assumed it had tabs just like any other program.&lt;/p&gt;
&lt;p&gt;Buffers live in memory. A buffer can hold the contents of a file or start as a blank slate, and each Vim session has one buffer per file you have open. This means you can have multiple panes in Vim looking at the same buffer (useful for referencing one section of a file while you write in another), and you can cycle through buffers in any given pane.&lt;/p&gt;
&lt;p&gt;Vim tabs, on the other hand, are just ways to organize panes. If I want a two column vertical split as one tab and a three row horizontal split in another, I can do that, and I can flip between them whenever.&lt;/p&gt;
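&lt;p&gt;For reference, these are the built-in commands I mean when juggling buffers, panes, and tabs:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;:ls        &quot; list all open buffers
:b 2       &quot; show buffer 2 in the current pane
:bnext     &quot; cycle to the next buffer
:vsplit    &quot; split into two panes showing the same buffer
:tabnew    &quot; create a new tab with its own pane layout
gt         &quot; jump to the next tab
&lt;/code&gt;&lt;/pre&gt;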
&lt;p&gt;Once I got that, I was &lt;em&gt;much&lt;/em&gt; more comfortable in Vim land. Thanks to my friend &lt;a href=&quot;https://mobile.twitter.com/_jmshaw&quot;&gt;Justin&lt;/a&gt; for constantly hounding me about using buffers.&lt;/p&gt;
&lt;p&gt;Overall, I don&apos;t regret my decision to switch to Vim one bit. It feels liberating, and it&apos;s amazing to invest time in a tried-and-true piece of software that&apos;s just as old as I am.&lt;/p&gt;
&lt;p&gt;Nowadays, I only have three apps open on my computer: a browser, Slack, and a terminal. If I ever need to SSH into servers to edit anything, I usually have my favorite editor available to me right on the server. To top it off, I work faster and more efficiently than before without a mouse slowing me down.&lt;/p&gt;
&lt;p&gt;If anything changes, I&apos;m sure I&apos;ll write about it, but I don&apos;t see that happening anytime soon.&lt;/p&gt;
</content:encoded></item><item><title>Tidying my digital life</title><link>https://nshki.com/tidying-my-digital-life/</link><guid isPermaLink="true">https://nshki.com/tidying-my-digital-life/</guid><description>Tidying up and Marie Kondo-fying my digital life.</description><pubDate>Thu, 24 Jan 2019 12:00:00 GMT</pubDate><content:encoded>&lt;p&gt;&lt;em&gt;This post is about tidying up and &lt;a href=&quot;https://konmari.com/&quot;&gt;Marie Kondo&lt;/a&gt;-fying my digital life. Because decluttering my mental space is just as important as decluttering my physical space.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://nshki.com/assets/posts/tidying-my-digital-life/tidying.png&quot; alt=&quot;Marie Kondo&quot; /&gt;&lt;/p&gt;
&lt;h3&gt;Preface&lt;/h3&gt;
&lt;p&gt;I should start off by saying that I am and have always been a neat freak. My parents have told me that even as a kid, I used to obsess over keeping my toys neatly organized. This probably plays a huge role in my fondness for refactoring entire codebases.&lt;/p&gt;
&lt;p&gt;Keeping my digital life tidy has been something I&apos;ve been working on over the past several years, and as someone who spends a majority of their days online, it has reduced unfathomable amounts of stress. No panicking over where certain files are, no digging through endless emails, and no sleepless nights wondering what services have my account information.&lt;/p&gt;
&lt;p&gt;This is how I approach tidying my digital life.&lt;/p&gt;
&lt;h3&gt;My Cloud&lt;/h3&gt;
&lt;p&gt;I used to own external hard drives. I&apos;d put all my files on them and keep them organized in directory structures that made sense at the time. It was convenient since I could hook it up to any computer and have access, no matter where I was.&lt;/p&gt;
&lt;p&gt;Today, with services like Dropbox and Google Drive, I don&apos;t see a need for them anymore. Sure, there are security implications, but that doesn&apos;t outweigh knowing that even if all my devices broke, I still have access to my files. This is the same reason I use Authy over Google Authenticator.&lt;/p&gt;
&lt;p&gt;I revisit my directory structures constantly, asking myself, &quot;Will this reduce stress?&quot; My general rules of thumb are:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Don&apos;t have too many files visible in a folder.&lt;/li&gt;
&lt;li&gt;Don&apos;t have too many folders in one nesting level.&lt;/li&gt;
&lt;li&gt;Keep everything as easy to understand as possible.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If I ever have trouble finding anything, that&apos;s a cue for me to reorganize.&lt;/p&gt;
&lt;p&gt;I stream all my music, movies, and TV shows. I store all my photos in Google Photos. My cloud takes up less than 5GB of space, and it includes things like all my notes from college, the first site I ever built, and every PSD I created from way back in the day.&lt;/p&gt;
&lt;h3&gt;My Email&lt;/h3&gt;
&lt;p&gt;I use Gmail. I always archive my emails; I never delete them. That means I have a &lt;em&gt;lot&lt;/em&gt; of emails, but at least if there&apos;s important information I need to find in the future, it won&apos;t be deleted. If an email is actionable but I want to address it later, I snooze it (thanks &lt;a href=&quot;https://www.google.com/inbox/&quot;&gt;Inbox&lt;/a&gt;, you will be missed).&lt;/p&gt;
&lt;p&gt;I generally maintain a zero inbox with no unread mail.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://nshki.com/assets/posts/tidying-my-digital-life/nomail.png&quot; alt=&quot;My zero inbox&quot; /&gt;&lt;/p&gt;
&lt;p&gt;I used to manually categorize my emails, but I stopped -- it was far more effort than it was worth. Companies like Google build sophisticated search algorithms for a reason. I use them pretty liberally.&lt;/p&gt;
&lt;p&gt;I unsubscribe from &lt;em&gt;everything&lt;/em&gt;. If I ever get a promotional email, I&apos;ll archive it, but you best believe I&apos;m taking myself off their mailing list. This means that whenever I get an email notification, it&apos;s something I &lt;em&gt;actually&lt;/em&gt; care about. It also means I don&apos;t hate email anymore.&lt;/p&gt;
&lt;h3&gt;My Digital Footprint&lt;/h3&gt;
&lt;p&gt;I keep track of all my online accounts. This is so I have a good overview of who has my data and where I can cut back to decrease that amount.&lt;/p&gt;
&lt;p&gt;At first, it was a process to gather all this information. I jotted down all the services that I used on a regular basis, but the forgotten accounts took a bit of digging. Luckily, I keep all my account creation emails, so I was able to leverage that to bullet everything out. I&apos;d imagine if I didn&apos;t have the emails, I&apos;d have to use a service that checks a million apps for an account under my email address.&lt;/p&gt;
&lt;p&gt;To my surprise I had &lt;em&gt;over 150 accounts&lt;/em&gt; scattered around the Internet, which seemed bonkers to me. This is the equivalent of Marie Kondo&apos;s method of dumping all your clothes on your bed so you can wrap your head around just how much you own.&lt;/p&gt;
&lt;p&gt;I immediately add to my list when I open a new account so that it&apos;s always up-to-date. I also constantly look for accounts to close. When I close an account, I make sure it&apos;s an explicit &quot;delete account&quot;. If such an option is unavailable online, I call customer service and request it. If they&apos;re extra stingy about it, that should be a red flag, but generally if I&apos;m courteous, they are too.&lt;/p&gt;
&lt;p&gt;I track each service and list its status (open, closed, closing). I also tag services by category (e.g. communication, development, social network) and flag whether 2FA is enabled (I also revisit my list often to see where else I can enable 2FA). If something is marked as &quot;closing&quot; for too long and I know it&apos;s because their support team is slow to respond, I severely question that service provider.&lt;/p&gt;
&lt;h3&gt;It&apos;s a Never-ending Journey&lt;/h3&gt;
&lt;p&gt;Getting to this point was a &lt;strong&gt;process&lt;/strong&gt; for me, and it&apos;s a never-ending one at that. I&apos;m always thinking about how to declutter and tidy whenever I&apos;m using my devices. There&apos;s always a way to improve, and in turn, a way to destress and ultimately live a happier life.&lt;/p&gt;
&lt;p&gt;Comments, questions, or suggestions? Always happy to discuss further via Twitter!&lt;/p&gt;
</content:encoded></item><item><title>Managing WordPress with Composer</title><link>https://nshki.com/managing-wordpress-with-composer/</link><guid isPermaLink="true">https://nshki.com/managing-wordpress-with-composer/</guid><description>How to use Composer to offload WordPress core and plugin management to reduce your project&apos;s repo size.</description><pubDate>Wed, 23 Jan 2019 12:00:00 GMT</pubDate><content:encoded>&lt;p&gt;&lt;em&gt;This post covers how to use &lt;a href=&quot;https://getcomposer.org/&quot;&gt;Composer&lt;/a&gt; to offload WordPress core and plugin management to reduce your project&apos;s repo size.&lt;/em&gt;&lt;/p&gt;
&lt;h3&gt;Why Composer?&lt;/h3&gt;
&lt;p&gt;If you&apos;ve ever built a website, chances are that you&apos;ve built a WordPress site before. It&apos;s used by 32.9% of all websites as of January 2019 &lt;a href=&quot;https://w3techs.com/technologies/history_overview/content_management/all&quot;&gt;according to W3Techs&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;And if you&apos;ve ever built a WordPress site before, you&apos;ll know that the codebase is fairly, well, messy.&lt;/p&gt;
&lt;p&gt;Composer is a dependency manager for PHP that will remedy that a bit. No need to push commits that read &quot;Upgrade all plugins&quot; or &quot;Upgrade WordPress&quot; anymore. No need to check in directories and directories of files you won&apos;t ever touch if you&apos;re just building a theme. Just have a config and lock file and you&apos;re ready to rock and roll.&lt;/p&gt;
&lt;h3&gt;Install Composer&lt;/h3&gt;
&lt;p&gt;First things first. Let&apos;s install Composer.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Linux users&lt;/em&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;sudo apt install composer
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;em&gt;macOS users&lt;/em&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;brew install composer
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;From the root of your project, you can then run the following to set up your initial config file:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;composer init
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Answer &quot;no&quot; to the prompts that ask you to define dependencies interactively. We&apos;ll skip that step since a majority of WordPress-related packages aren&apos;t available through the &lt;a href=&quot;https://packagist.org/&quot;&gt;official Composer registry&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;You should end up with a &lt;code&gt;composer.json&lt;/code&gt; that looks something like this (you can also create this file manually; &lt;code&gt;composer init&lt;/code&gt; just walks you through the process):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;{
  &quot;name&quot;: &quot;&amp;lt;name&amp;gt;/&amp;lt;project name&amp;gt;&quot;,
  &quot;authors&quot;: [
    {
      &quot;name&quot;: &quot;&amp;lt;your name&amp;gt;&quot;,
      &quot;email&quot;: &quot;&amp;lt;your email&amp;gt;&quot;
    }
  ],
  &quot;require&quot;: {}
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Adding WordPress Packagist as a Repository&lt;/h3&gt;
&lt;p&gt;As mentioned a little earlier, the official Composer registry doesn&apos;t have most WordPress-related packages, so we&apos;ll have to use an alternate one. &lt;a href=&quot;https://wpackagist.org/&quot;&gt;WordPress Packagist&lt;/a&gt; is the most widely used registry for WordPress, so let&apos;s use that.&lt;/p&gt;
&lt;p&gt;To tell Composer to look there for packages, we need to add the following section to &lt;code&gt;composer.json&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;&quot;repositories&quot;: [
  {
    &quot;type&quot;: &quot;composer&quot;,
    &quot;url&quot;: &quot;https://wpackagist.org&quot;
  }
]
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now we can install any package from Packagist or WordPress Packagist!&lt;/p&gt;
&lt;h3&gt;Configuring Install Paths&lt;/h3&gt;
&lt;p&gt;By default, Composer will install packages in a &lt;code&gt;vendor/&lt;/code&gt; directory. While that may work for other PHP projects, that&apos;s obviously not where we want WordPress plugins, so we need to manually configure the install path ourselves.&lt;/p&gt;
&lt;p&gt;WordPress plugins have a type of &lt;code&gt;wordpress-plugin&lt;/code&gt; via WordPress Packagist, so we can tell Composer to install all &lt;code&gt;wordpress-plugin&lt;/code&gt; package types to &lt;code&gt;wp-content/plugins/&lt;/code&gt; by adding this section:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;&quot;extra&quot;: {
  &quot;installer-paths&quot;: {
    &quot;wp-content/plugins/{$name}&quot;: [&quot;type:wordpress-plugin&quot;]
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Adding Public Plugins&lt;/h3&gt;
&lt;p&gt;WordPress Packagist works by scanning the official WordPress Subversion repository every hour, so anything that you can find through the admin UI, you&apos;ll be able to find in WordPress Packagist.&lt;/p&gt;
&lt;p&gt;To add a plugin, find the plugin on &lt;a href=&quot;https://wpackagist.org&quot;&gt;wpackagist.org&lt;/a&gt; and add a line like the following in the &lt;code&gt;&quot;require&quot;&lt;/code&gt; section of &lt;code&gt;composer.json&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;&quot;wpackagist-plugin/&amp;lt;plugin name&amp;gt;&quot;: &quot;*&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You can replace the &lt;code&gt;*&lt;/code&gt; with a specific version number of the plugin as well -- using &lt;code&gt;*&lt;/code&gt; will just tell Composer to fetch the latest version.&lt;/p&gt;
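&lt;p&gt;For example, a &lt;code&gt;&quot;require&quot;&lt;/code&gt; section pulling in two real WordPress Packagist plugins might look like this (the version constraint is illustrative):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;&quot;require&quot;: {
  &quot;wpackagist-plugin/akismet&quot;: &quot;*&quot;,
  &quot;wpackagist-plugin/wordpress-seo&quot;: &quot;^9.0&quot;
}
&lt;/code&gt;&lt;/pre&gt;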
&lt;p&gt;Once you&apos;ve added all the plugins you want, run the install command and rejoice!&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;composer install
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Adding Premium Plugins&lt;/h3&gt;
&lt;p&gt;Premium plugins are generally not available via the official WordPress plugin registry. Depending on the plugin creator, there may be specific documentation to add the plugin via Composer to your project.&lt;/p&gt;
&lt;p&gt;If no such documentation exists though, there are still ways to have Composer manage your premium plugins.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;The first is to register a plugin download link as a package.&lt;/strong&gt; If you have a license for a plugin, chances are you have an account on their website, and you&apos;ll be able to locate a download link for the plugin.&lt;/p&gt;
&lt;p&gt;In that case, you can add the following to &lt;code&gt;composer.json&lt;/code&gt; under &lt;code&gt;&quot;repositories&quot;&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;{
  &quot;type&quot;: &quot;package&quot;,
  &quot;package&quot;: {
    &quot;name&quot;: &quot;&amp;lt;org name&amp;gt;/&amp;lt;plugin name&amp;gt;&quot;,
    &quot;version&quot;: &quot;dev-master&quot;,
    &quot;type&quot;: &quot;wordpress-plugin&quot;,
    &quot;dist&quot;: {
      &quot;type&quot;: &quot;zip&quot;,
      &quot;url&quot;: &quot;&amp;lt;download link here&amp;gt;&quot;
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Then add this line to the &lt;code&gt;&quot;require&quot;&lt;/code&gt; section:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;&quot;&amp;lt;org name&amp;gt;/&amp;lt;plugin name&amp;gt;&quot;: &quot;dev-master&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now when Composer looks through all the required packages and can&apos;t find &lt;code&gt;&amp;lt;org name&amp;gt;/&amp;lt;plugin name&amp;gt;&lt;/code&gt; in Packagist and WordPress Packagist, it&apos;ll fall back to the package you manually registered.&lt;/p&gt;
&lt;p&gt;We&apos;re defining the version as &lt;code&gt;&quot;dev-master&quot;&lt;/code&gt; since most download links will give you the latest version of the plugin. If that&apos;s not the case, then feel free to swap it out for the appropriate version number. What&apos;s important is that the versions match in &lt;code&gt;&quot;require&quot;&lt;/code&gt; and &lt;code&gt;&quot;repositories&quot;&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;The second method is to self-host the premium plugin as a private repository.&lt;/strong&gt; This should be more or less a last resort since you&apos;ll be responsible for updating the private repository with new versions, but it works nicely, especially for teams that use starters across multiple projects.&lt;/p&gt;
&lt;p&gt;Create a new private repository with your service of choice and add the plugin files to it. We want to make this private so we don&apos;t expose the premium plugin files to the world -- it&apos;s premium for a reason.&lt;/p&gt;
&lt;p&gt;Now create a new file called &lt;code&gt;composer.json&lt;/code&gt; &lt;em&gt;inside this repo&lt;/em&gt;. It should be in the following format:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;{
  &quot;name&quot;: &quot;&amp;lt;your name&amp;gt;/&amp;lt;plugin name&amp;gt;&quot;,
  &quot;description&quot;: &quot;Self-hosted version of &amp;lt;plugin name&amp;gt;.&quot;,
  &quot;license&quot;: &quot;proprietary&quot;,
  &quot;type&quot;: &quot;wordpress-plugin&quot;,
  &quot;require&quot;: {
    &quot;composer/installers&quot;: &quot;~1.0&quot;
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;code&gt;&amp;lt;your name&amp;gt;&lt;/code&gt; should correspond to your account name on your service of choice. e.g. My username on GitHub is &lt;code&gt;nshki&lt;/code&gt; so that would be what I use.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;composer/installers&lt;/code&gt; is required here so that configs that install packages to custom paths can operate as expected.&lt;/p&gt;
&lt;p&gt;Now in your WordPress repository, add the following under &lt;code&gt;&quot;repositories&quot;&lt;/code&gt; in &lt;code&gt;composer.json&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;{
  &quot;type&quot;: &quot;vcs&quot;,
  &quot;url&quot;: &quot;git@&amp;lt;your service&amp;gt;:&amp;lt;your name&amp;gt;/&amp;lt;repo name&amp;gt;.git&quot;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;code&gt;vcs&lt;/code&gt; stands for version control system. The &lt;code&gt;url&lt;/code&gt; can just be the SSH string your service provides for your repo.&lt;/p&gt;
&lt;p&gt;Now you can require that plugin in the familiar format (under &lt;code&gt;&quot;require&quot;&lt;/code&gt;):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;&quot;&amp;lt;your name&amp;gt;/&amp;lt;repo name&amp;gt;&quot;: &quot;dev-master&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now run:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;composer install
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Composer should now pull in your newly created private repo as a plugin in your project!&lt;/p&gt;
&lt;h3&gt;Have Composer Install WordPress&lt;/h3&gt;
&lt;p&gt;If you&apos;re like me and would want &lt;em&gt;all&lt;/em&gt; dependencies to be managed by Composer for PHP projects, then read on.&lt;/p&gt;
&lt;p&gt;On the homepage of WordPress Packagist, there&apos;s a small tidbit that refers to a couple of resources for installing WordPress core itself using Composer. I&apos;ll be providing instructions for using &lt;a href=&quot;https://github.com/johnpbloch/wordpress&quot;&gt;&lt;code&gt;johnpbloch/wordpress&lt;/code&gt;&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;First, just add the following dependency under &lt;code&gt;&quot;require&quot;&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;&quot;johnpbloch/wordpress&quot;: &quot;*&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You could run the install command right after, but you&apos;ll notice that WordPress gets downloaded into a directory called &lt;code&gt;wordpress/&lt;/code&gt; in the root project directory. That&apos;s not what we want, assuming you want the project root to correspond to the root of the WordPress site.&lt;/p&gt;
&lt;p&gt;Now, it&apos;s important to note that by design, &lt;em&gt;Composer will wipe out directories in favor of downloaded dependencies&lt;/em&gt;. This means that if packages weren&apos;t installed under &lt;code&gt;vendor/&lt;/code&gt; or WordPress under &lt;code&gt;wordpress/&lt;/code&gt;, our entire project could be nuked.&lt;/p&gt;
&lt;p&gt;So we need to write some custom scripts to reconcile this.&lt;/p&gt;
&lt;p&gt;Luckily, Composer allows us to do just that all from within &lt;code&gt;composer.json&lt;/code&gt;. Add the following to your config:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;&quot;scripts&quot;: {
  &quot;post-install-cmd&quot;: [
    &quot;if [ ! -f wp-load.php ]; then rm wordpress/composer.*; fi&quot;,
    &quot;if [ ! -f wp-load.php ]; then rm -rf wordpress/wp-content/; fi&quot;,
    &quot;if [ ! -f wp-load.php ]; then rm -rf wp-admin/; fi&quot;,
    &quot;if [ ! -f wp-load.php ]; then rm -rf wp-includes/; fi&quot;,
    &quot;if [ ! -f wp-load.php ]; then mv wp-content/ wordpress/; fi&quot;,
    &quot;if [ -d wordpress ] &amp;amp;&amp;amp; [ ! -f wp-load.php ]; then mv wordpress/* .; fi&quot;,
    &quot;if [ -d wordpress ]; then rm -rf wordpress/; fi&quot;
  ],
  &quot;post-update-cmd&quot;: [
    &quot;rm wordpress/composer.*&quot;,
    &quot;rm -rf wordpress/wp-content/&quot;,
    &quot;rm -rf wp-admin/&quot;,
    &quot;rm -rf wp-includes/&quot;,
    &quot;mv wp-content/ wordpress/&quot;,
    &quot;mv wordpress/* .&quot;,
    &quot;rm -rf wordpress/&quot;
  ]
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;em&gt;Ooooookay, slow down&lt;/em&gt;. Let&apos;s break this down.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;post-install-cmd&lt;/code&gt; and &lt;code&gt;post-update-cmd&lt;/code&gt; are hooks that are triggered after running &lt;code&gt;composer install&lt;/code&gt; and &lt;code&gt;composer update&lt;/code&gt;. You can &lt;a href=&quot;https://getcomposer.org/doc/articles/scripts.md&quot;&gt;read up&lt;/a&gt; on all the available hooks if you&apos;re interested.&lt;/p&gt;
&lt;p&gt;Composer scripts allow us to run shell commands, and that&apos;s what we&apos;re doing here.&lt;/p&gt;
&lt;p&gt;The general steps in English are as follows:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Remove Composer files from within the downloaded WordPress. This prevents our own &lt;code&gt;composer.json&lt;/code&gt; and &lt;code&gt;composer.lock&lt;/code&gt; from being overwritten.&lt;/li&gt;
&lt;li&gt;Remove &lt;code&gt;wp-content/&lt;/code&gt; from within the downloaded WordPress since we want to use our own (and so we preserve things like uploads, etc.).&lt;/li&gt;
&lt;li&gt;Remove &lt;code&gt;wp-admin/&lt;/code&gt; and &lt;code&gt;wp-includes/&lt;/code&gt; from our root directory to prevent &quot;directory already exists&quot; errors when we move files.&lt;/li&gt;
&lt;li&gt;Move everything from &lt;code&gt;wordpress/&lt;/code&gt; into our root directory. Files get overwritten and updated.&lt;/li&gt;
&lt;li&gt;Remove the &lt;code&gt;wordpress/&lt;/code&gt; directory.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;You&apos;ll notice that the install hook has a bunch of conditionals that check for a file called &lt;code&gt;wp-load.php&lt;/code&gt;. This prevents the install command from nuking an existing copy of WordPress -- perhaps you updated it through the admin UI, for example. The choice of file is arbitrary; the main objective is to look for something that indicates WordPress already exists.&lt;/p&gt;
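&lt;p&gt;Here&apos;s a minimal sketch of how that guard behaves, runnable in a scratch directory (the directory name is hypothetical; the file names mirror the hook above):&lt;/p&gt;

```shell
# Simulate a project that already has WordPress installed.
mkdir -p guard-demo/wordpress
cd guard-demo
touch wp-load.php                # marker: WordPress already exists
touch wordpress/composer.json    # freshly downloaded core files

# The guard: only clean up when the marker file is absent.
if [ ! -f wp-load.php ]; then rm wordpress/composer.json; fi

ls wordpress/composer.json       # still present, so nothing was nuked
```

&lt;p&gt;Because &lt;code&gt;wp-load.php&lt;/code&gt; exists, the &lt;code&gt;rm&lt;/code&gt; never runs; delete the marker and the same command cleans up the downloaded files.&lt;/p&gt;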
&lt;p&gt;The update hook doesn&apos;t care about overwriting your existing WordPress core files since that&apos;s what it &lt;em&gt;should&lt;/em&gt; do.&lt;/p&gt;
&lt;p&gt;Now go ahead and install or update and profit!&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;composer install
composer update
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Ignoring WordPress Core and Plugins in Git&lt;/h3&gt;
&lt;p&gt;One of the biggest points of using Composer was to remove all this extra fluff from our project repository, no? Let&apos;s do that.&lt;/p&gt;
&lt;p&gt;If this is an existing project, you&apos;ll need to use &lt;code&gt;git rm&lt;/code&gt; to remove all WordPress core files and plugins from the repository. Use &lt;code&gt;git rm --cached&lt;/code&gt; to remove them from Git while keeping the files on disk.&lt;/p&gt;
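&lt;p&gt;A quick sketch of that workflow in a throwaway repository (directory and file names are hypothetical):&lt;/p&gt;

```shell
# Set up a scratch repo that tracks a fake core file.
mkdir rm-cached-demo
cd rm-cached-demo
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"
mkdir wp-includes
touch wp-includes/version.php
git add .
git commit -q -m "Track core files"

# Untrack the directory but keep the files on disk.
git rm -r -q --cached wp-includes/

ls wp-includes/version.php   # the file itself survives
git status --short           # the index now records the file as deleted
```

&lt;p&gt;Commit the result along with your updated &lt;code&gt;.gitignore&lt;/code&gt; and the files stay out of the repository from then on.&lt;/p&gt;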
&lt;p&gt;Add the following to your &lt;code&gt;.gitignore&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Ignoring WordPress core files (but making sure to keep themes).
/wp-content/plugins/
/wp-content/uploads/
/wp-admin/
/wp-includes/
/index.php
/license.txt
/readme.html
/wp-activate.php
/wp-blog-header.php
/wp-comments-post.php
/wp-config-sample.php
/wp-cron.php
/wp-links-opml.php
/wp-load.php
/wp-login.php
/wp-mail.php
/wp-settings.php
/wp-signup.php
/wp-trackback.php
/xmlrpc.php

# Ignoring Composer directories.
/vendor

# Allowing these explicitly.
!/wp-config.php
!/wp-content/index.php
!/wp-content/themes/
!/wp-content/themes/**
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Your repository should now be boiled down to just the essentials.&lt;/p&gt;
&lt;h3&gt;Closing Thoughts&lt;/h3&gt;
&lt;p&gt;Using Composer for WordPress is by no means necessary, but for someone like me who is used to using package managers like Bundler or Yarn, it&apos;s a huge boon to productivity and makes automation much easier down the line.&lt;/p&gt;
&lt;p&gt;Deploying with Composer is quite easy as well. If you&apos;re managing your own servers, install Composer there, and a deploy is just cloning the repository and running &lt;code&gt;composer install&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;If this setup works well for you or you have any questions, &lt;a href=&quot;https://mobile.twitter.com/nshki_&quot;&gt;please reach out over Twitter&lt;/a&gt;!&lt;/p&gt;
</content:encoded></item><item><title>Reviving the blog</title><link>https://nshki.com/reviving-the-blog/</link><guid isPermaLink="true">https://nshki.com/reviving-the-blog/</guid><description>Motivations and thoughts around reviving my personal blog.</description><pubDate>Mon, 21 Jan 2019 12:00:00 GMT</pubDate><content:encoded>&lt;p&gt;For the past several years, I&apos;ve been treating my website primarily as an online résumé of sorts -- I focused on making it flashy, telling my professional story, and selling myself.&lt;/p&gt;
&lt;p&gt;Not to say those things aren&apos;t important, but I think my focus has been off the mark. I&apos;m hoping that returning to blogging will help me organize my thoughts better and give me a better way to reflect on learnings. Publishing resourceful posts (hopefully) and giving employers a better picture of how I think and work are positive side effects.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&quot;The single best thing I ever did for my career was start a blog on my own website.&quot;
-- Brad Frost (@brad_frost)&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;a href=&quot;https://twitter.com/brad_frost/status/1086328236764614657?s=20&quot;&gt;Link to original tweet&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;I spent a little time recycling illustrations from my previous site for this blog, and was able to write it in Gatsby fairly quickly. If you haven&apos;t tried &lt;a href=&quot;https://www.gatsbyjs.org/&quot;&gt;Gatsby&lt;/a&gt; yet, I highly recommend it!&lt;/p&gt;
&lt;p&gt;I &lt;em&gt;was&lt;/em&gt; fairly proud of my previous site, however, so I decided to &lt;a href=&quot;https://nshki.github.io/nshki.com-2017/&quot;&gt;keep it alive on GitHub Pages&lt;/a&gt;. Would like to write about the process I went through to build that sometime.&lt;/p&gt;
&lt;p&gt;Looking forward to writing more regularly as 2019 trucks forward.&lt;/p&gt;
</content:encoded></item><item><title>Switching from a Mac to a Chromebook (as a web developer)</title><link>https://nshki.com/mac-to-chromebook/</link><guid isPermaLink="true">https://nshki.com/mac-to-chromebook/</guid><description>This is my experience switching from a 2014 Retina Macbook Pro to a Google Pixelbook.</description><pubDate>Sun, 26 Nov 2017 12:00:00 GMT</pubDate><content:encoded>&lt;p&gt;&lt;em&gt;This is my experience switching from a 2014 Retina Macbook Pro to a Google Pixelbook. Doing this is by no means for everyone, but I wanted to share my findings for anyone toying with the idea.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Edit as of June 6th, 2018&lt;/strong&gt;: I’ve experimented with two more ways of getting a development workflow on my Pixelbook and have updated the post to reflect them below.&lt;/p&gt;
&lt;h3&gt;Background&lt;/h3&gt;
&lt;p&gt;Ever since Apple announced their unibody aluminum MacBooks in 2008, I’ve been a huge fan of the Mac.&lt;/p&gt;
&lt;p&gt;Their attention to detail and tightly coupled hardware and software immediately wooed me into buying into their ecosystem. It was a blast into the future, especially considering I started off with roots in Windows 2000, Notepad++, and IE6.&lt;/p&gt;
&lt;p&gt;The Mac introduced me to Unix, its sophisticated build tools, and the command line. I got spoiled by software like Homebrew, Sketch, and Pixelmator. As the years went by, I’ve always thought it was obvious that the Mac was the best platform for developers like me.&lt;/p&gt;
&lt;p&gt;Today, that’s not the case.&lt;/p&gt;
&lt;p&gt;To be clear, this post isn’t about bashing Apple. If someone were to ask me what computer they should buy if they wanted to get into development, I’d likely recommend a MacBook Pro.&lt;/p&gt;
&lt;p&gt;This is more about my waning faith in the Apple ecosystem.&lt;/p&gt;
&lt;p&gt;Something has been off since Steve Jobs’ untimely death. With the release of plastic iPhones, iPads with pens, and MacBooks with Touch Bars, I feel Apple has been steering further and further from the original vision of its products.&lt;/p&gt;
&lt;p&gt;Product refreshes have been underwhelming to say the least, and their recent, seemingly poor design decisions on products such as the Magic Mouse — a device you can’t use as it charges — have really made me start looking elsewhere for an experience that just &lt;em&gt;works&lt;/em&gt;.&lt;/p&gt;
&lt;h3&gt;Enter Google&lt;/h3&gt;
&lt;p&gt;Google has been competing with Apple for a while now, with the likes of Android, Google Apps, and Chrome. Their products are by no means perfect, but whether we like to admit it or not, a lot of us have already sold our souls to many of them. e.g. Gmail is the de facto standard for email and Google Docs is a go-to for any type of document collaboration.&lt;/p&gt;
&lt;p&gt;This was the primary reason I switched from iOS to Android 3 years ago, and it’s one of the few reasons why I’m picking Chrome OS over macOS today.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://nshki.com/assets/posts/mac-to-chromebook/pixelbook.jpg&quot; alt=&quot;My Pixelbook&quot; /&gt;&lt;/p&gt;
&lt;p&gt;Just this past October, Google announced the Pixelbook, a gorgeously made machine with top-notch specs, tablet and laptop modes, and a Wacom-powered pen. This was the first computer that got me to the same level of excitement as the unibody MacBook since 2008.&lt;/p&gt;
&lt;p&gt;This got me seriously considering, &lt;em&gt;what would happen if I replaced my MacBook Pro with a Pixelbook&lt;/em&gt;?&lt;/p&gt;
&lt;p&gt;So I ended up ordering one.&lt;/p&gt;
&lt;h3&gt;Workflow&lt;/h3&gt;
&lt;p&gt;Email, calendar, word processing, spreadsheets, presentations, reminders, notes, and 95% of all my non-dev needs have been handled ubiquitously by Google Apps for years, so that was off my radar (not to mention the Pixelbook supports running Android apps).&lt;/p&gt;
&lt;p&gt;I was really only concerned with two things: &lt;strong&gt;design software and development environment&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://nshki.com/assets/posts/mac-to-chromebook/figma.png&quot; alt=&quot;Figma&quot; /&gt;&lt;/p&gt;
&lt;p&gt;Turns out &lt;a href=&quot;https://www.figma.com/&quot;&gt;Figma&lt;/a&gt; is an excellent web-based design tool — its toolset closely mirrors Sketch with a few minor differences. After spending a couple weeks exclusively using Figma, I determined that I could live without Sketch and Pixelmator on the desktop.&lt;/p&gt;
&lt;p&gt;The harder problem was development environment.&lt;/p&gt;
&lt;p&gt;Again, turns out there are several options, though in the end I found only one to be completely satisfactory.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;The first option I explored was using cloud IDEs&lt;/strong&gt;. There are a handful of them out there: Cloud9 IDE, Codeanywhere, Codenvy, CodeTasty, and a few more. After trying all of them, I found Cloud9 to be the most robust and least janky-feeling IDE.&lt;/p&gt;
&lt;p&gt;I tested Cloud9 by using the free tier exclusively for a pull request on a volunteer project using a Rails &amp;amp; React.js stack. My main gripes with Cloud9 were:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;em&gt;I had to implement a lot of workarounds in my development flow&lt;/em&gt;. Servers wouldn’t run without specifying &lt;code&gt;$C9_HOSTNAME&lt;/code&gt; everywhere, PostgreSQL databases on Cloud9 didn’t support unicode by default, and only ports 8080, 8081, and 8082 were open.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;em&gt;It’s not very customizable&lt;/em&gt;. It ships with a classic and flat theme, but both pale in comparison to modern editors like Visual Studio Code or Atom.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;em&gt;Its future seemed unclear&lt;/em&gt;. It recently got acquired by Amazon, its favicon is still not Retina-ready in 2017, and its blog is not very active.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Although I &lt;em&gt;could&lt;/em&gt; live with these issues, I didn’t want to have to sacrifice my development experience just so I could own a new and shiny Pixelbook.&lt;/p&gt;
&lt;p&gt;There were a lot of great things about Cloud9 though, such as: autocompletion support, built-in terminal, robust keyboard shortcuts, and good syntax highlighting.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;The second option was to install Termux and use a local Linux server without putting my computer in developer mode&lt;/strong&gt; (&lt;a href=&quot;https://blog.lessonslearned.org/building-a-more-secure-development-chromebook/&quot;&gt;here’s an excellent guide written by Kenneth White&lt;/a&gt;).&lt;/p&gt;
&lt;p&gt;This had a lot of promise, but I ran into one game-breaking gripe: &lt;em&gt;I didn’t have sudo access&lt;/em&gt;. This is just plain necessary for certain things. I tried installing a couple of packages I regularly use but got blocked by permissions errors. I may revisit this in the future, but I shelved this option for now.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;The third option was to set up a VPS&lt;/strong&gt;, &lt;s&gt;and this is what I&apos;m sticking to as I write this&lt;/s&gt;. This seems like the correct option as a Chromebook user — to put my faith in the cloud and commit to it for all my computing needs.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://nshki.com/assets/posts/mac-to-chromebook/vps.png&quot; alt=&quot;VPS&quot; /&gt;&lt;/p&gt;
&lt;p&gt;Now, I could have put the Pixelbook in developer mode, installed Crouton, and run Linux from the device natively, but I didn’t want to deal with hardware incompatibility, losing Google Assistant, and making my device way less secure.&lt;/p&gt;
&lt;p&gt;Instead, I set up a DigitalOcean droplet with a fresh install of Ubuntu 16.04, and everything worked as it should, with none of the gripes listed above.&lt;/p&gt;
&lt;p&gt;For my editor, I went all-in with Vim and have been very pleased. &lt;a href=&quot;https://robots.thoughtbot.com/tags/vim&quot;&gt;Thoughtbot’s write-ups and videos on Vim&lt;/a&gt; have been incredibly great resources.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fast forward several months. The fourth option is giving in and using Crouton to run a Linux distro alongside Chrome OS.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Some time after publishing this article initially, I decided to finally give Crouton a try. Like I mentioned above, I expected a few cons to this approach: hardware incompatibilities, losing Google Assistant, and having a significantly less secure machine. I was in for a bit of a surprise.&lt;/p&gt;
&lt;p&gt;There were reports of a faulty trackpad online, but I didn’t experience that at all with a local install of Ubuntu. The top row of the Pixelbook keyboard — with the exception of the escape key — unfortunately did not work, but I was able to make do.&lt;/p&gt;
&lt;p&gt;The installation process itself was quite straightforward. I ended up using &lt;a href=&quot;https://tutorials.ubuntu.com/tutorial/install-ubuntu-on-chromebook#0&quot;&gt;Ubuntu’s guide&lt;/a&gt; on installing Linux on a Chromebook and was able to set the machine up smoothly. Enabling developer mode was a bit scary, since on each boot you face an intimidating screen about OS verification being off, but as long as I didn’t press the space bar, which would completely wipe the machine, I was good.&lt;/p&gt;
&lt;p&gt;Ubuntu works like...Ubuntu. There were no particularly unique snags. I quickly installed all my developer tools and configured an environment just as I would have on a Mac. You can even seamlessly switch between Chrome OS and Ubuntu by using the keyboard shortcut Ctrl+Alt+Shift+Back, which meant I didn’t lose Google Assistant or any other Chrome OS feature after all.&lt;/p&gt;
&lt;p&gt;From a day-to-day standpoint, I believe that Crouton makes Chromebooks a completely viable option for developers. Though I’m not a security expert, many others online warn of having developer mode on as a security risk, so use this option with that in mind.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;The fifth and most recently possible option is to use &lt;a href=&quot;https://blog.google/products/chromebooks/linux-on-chromebooks/&quot;&gt;Chrome OS’s new Linux Apps feature&lt;/a&gt;, also known as &lt;a href=&quot;https://www.reddit.com/r/Crostini/&quot;&gt;Crostini&lt;/a&gt;.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;At Google I/O 2018, Google announced being able to run a Linux VM within Chrome OS. I heard some tidbits about this on the &lt;a href=&quot;https://www.reddit.com/r/Crostini/&quot;&gt;Crostini subreddit&lt;/a&gt; before it was officially announced, and I’ve been completely on board since.&lt;/p&gt;
&lt;p&gt;This feature is still under active development, but the Pixelbook (and a few other machines) can switch to the Chrome OS dev channel to begin using Linux apps now.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;EDIT (September 26, 2018)&lt;/strong&gt;: Linux Apps are now available on the stable channel for Chrome OS, so this can be enabled out of the box!&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://nshki.com/assets/posts/mac-to-chromebook/linux.png&quot; alt=&quot;Linux Apps feature&quot; /&gt;&lt;/p&gt;
&lt;p&gt;This takes the cake for me for my Chrome OS development setup. This is similar to having a Termux setup, except that the Linux environment allows sudo and is natively supported by the Chrome OS team. Furthermore, &lt;em&gt;you don’t need developer mode enabled&lt;/em&gt;, so you get all your security and peace of mind back.&lt;/p&gt;
&lt;p&gt;&lt;s&gt;Since Linux is running in a VM, accessing web apps requires you to use the VM’s IP address or simply just &lt;code&gt;penguin.linux.test&lt;/code&gt;, which is more convenient.&lt;/s&gt; As of stable versions of Chrome OS in early 2019, accessing &lt;code&gt;localhost&lt;/code&gt; now works as expected.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://nshki.com/assets/posts/mac-to-chromebook/crostini.png&quot; alt=&quot;Crostini&quot; /&gt;&lt;/p&gt;
&lt;p&gt;So far, I’ve run into no issues with my development environment or command line tools. I’m quite a happy camper.&lt;/p&gt;
&lt;h3&gt;Closing Thoughts&lt;/h3&gt;
&lt;p&gt;I’ve been really happy with my Pixelbook so far. I was able to consolidate my devices and put my money towards an ecosystem that I believe is only going to get better.&lt;/p&gt;
&lt;p&gt;Some might ask, why not the Surface? To be honest, I seriously considered the Surface Book as well. I used Windows exclusively via Boot Camp for a week to get the hang of the new Linux subsystem, but much like the Cloud9 trial, there were too many hoops to jump through to get an ideal development environment.&lt;/p&gt;
</content:encoded></item><item><title>Routing JavaScript in Rails</title><link>https://nshki.com/routing-javascript-in-rails/</link><guid isPermaLink="true">https://nshki.com/routing-javascript-in-rails/</guid><description>My take on extending Paul Irish&apos;s DOM-ready JS execution technique.</description><pubDate>Tue, 17 Jun 2014 12:00:00 GMT</pubDate><content:encoded>&lt;p&gt;This post is inspired by Paul Irish’s &lt;a href=&quot;https://www.paulirish.com/2009/markup-based-unobtrusive-comprehensive-dom-ready-execution/&quot;&gt;original post on DOM-ready execution&lt;/a&gt; and also by my coworker &lt;a href=&quot;http://www.danielsellergren.com/&quot;&gt;Danny&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;I’ve been working on JavaScript-heavy Rails projects lately, and it quickly became apparent that a sensible JavaScript architecture was needed to keep things in order. As I browsed through the repositories of other projects to see what others have done in the past, I found an interesting object called &lt;code&gt;UTIL&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;UTIL = {
  exec: function(controller, action) {
    var ns     = VAW,
        action = (action === undefined) ? &quot;init&quot; : action;

    if (controller !== &quot;&quot; &amp;amp;&amp;amp; ns[controller] &amp;amp;&amp;amp; typeof ns[controller][action] == &quot;function&quot;) {
      ns[controller][action]();
    }
  },

  init: function() {
    var body       = document.body,
        controller = body.getAttribute(&quot;data-controller&quot;),
        action     = body.getAttribute(&quot;data-action&quot;);

    UTIL.exec(controller, action);
  }
};
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;I was referred to an article written by Paul Irish from a few years ago, and was immediately intrigued. By tacking on data attributes on the body tag, you’re able to neatly “route” the execution of JavaScript using the &lt;code&gt;UTIL&lt;/code&gt; object:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;&amp;lt;body data-controller=&quot;&amp;lt;%= controller_name %&amp;gt;&quot; data-action=&quot;&amp;lt;%= action_name %&amp;gt;&quot;&amp;gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code&gt;var controllerName = {
  actionName: function() {
    // this code will only execute on controllerName#actionName!
  }
};
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Genius.&lt;/p&gt;
&lt;p&gt;I wanted to take it one step further.&lt;/p&gt;
&lt;p&gt;This technique sets up better-organized JavaScript without the bloat of a JavaScript framework, and because I’ve recently been pushing for component-based front-end architectures, I wanted to incorporate modules into this routing technique.&lt;/p&gt;
&lt;p&gt;Modules in JavaScript are essentially a set of namespaced, re-usable functions that define the behavior of app components (a modal, for example), like so:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;var MyModal = {
  init: function() {
    // code...
  },

  show: function() {
    // code...
  },

  hide: function() {
    // code...
  }
};
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;While Paul Irish’s routing technique encourages “modules” to an extent, it does not directly translate into re-usable bundles of code, since it’s possible to still have to repeat yourself across multiple actions.&lt;/p&gt;
&lt;p&gt;So, here’s my take on using modules with routing (in CoffeeScript). The &lt;code&gt;javascripts/&lt;/code&gt; directory should look like this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;javascripts/
|_ config/
|  |_ namespace.coffee
|  |_ router.coffee
|  |_ routes.coffee
|
|_ modules/
|_ vendor/
|_ application.js.coffee
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The &lt;code&gt;config/&lt;/code&gt; directory will contain three files: &lt;code&gt;namespace.coffee&lt;/code&gt;, &lt;code&gt;router.coffee&lt;/code&gt;, and &lt;code&gt;routes.coffee&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;#===============================================================================
# namespace.coffee
#
# Defines a custom namespace for the application.
#===============================================================================

window.NS = {}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The namespace config file will simply define the primary namespace of the app. We’re assigning it to &lt;code&gt;window&lt;/code&gt; so that it’s accessible globally. &lt;code&gt;NS&lt;/code&gt; should be changed to the name of your app.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;#===============================================================================
# router.coffee
#
# Routes execution of scripts based on controller-action pairs.
#===============================================================================

NS.router =

  # Run on document load. Gets controller and action of current page and
  # executes corresponding scripts.
  init: -&amp;gt;
    body       = document.body
    controller = body.getAttribute(&quot;data-controller&quot;)
    action     = body.getAttribute(&quot;data-action&quot;)
    this.exec(controller, action)

  # Executes a function in the application namespace.
  # @param {string} - controller
  #        {string} - action
  exec: (controller, action = &quot;init&quot;) -&amp;gt;
    if NS[controller] &amp;amp;&amp;amp; typeof NS[controller][action] == &quot;function&quot;
      NS[controller][action]()

# Initialize router
document.addEventListener &quot;DOMContentLoaded&quot;, -&amp;gt; NS.router.init()
document.addEventListener &quot;page:load&quot;,        -&amp;gt; NS.router.init()
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This is a direct translation of Paul Irish’s UTIL object into CoffeeScript. I’m opting to use &lt;code&gt;addEventListener&lt;/code&gt; to add DOM-ready events to make this architecture independent of jQuery. The &lt;code&gt;page:load&lt;/code&gt; event is necessary only if you’re using Turbolinks.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;#===============================================================================
# routes.coffee
#
# Defines custom routes for script execution.
#===============================================================================

NS.my_controller_name =
  my_action_name: -&amp;gt;
    NS.my_module_name.init()
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now, the routes file is where different modules are initialized on a per-action basis. This means that we can neatly separate modules into their own files without worrying about which actions they should run on, then connect everything using &lt;code&gt;routes.coffee&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;This has worked fairly well on the projects I’ve implemented this on.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;routes.coffee&lt;/code&gt; can get cluttered rather quickly, however, and that is something to be improved upon in the future.&lt;/p&gt;
</content:encoded></item><item><title>Comment, because people</title><link>https://nshki.com/comment-because-people/</link><guid isPermaLink="true">https://nshki.com/comment-because-people/</guid><description>Good code needs comments. Good code needs comments because people.</description><pubDate>Fri, 27 Dec 2013 12:00:00 GMT</pubDate><content:encoded>&lt;blockquote&gt;
&lt;p&gt;&quot;Good code doesn’t need comments, it should be self-documenting and easy to understand.&quot;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;This is a quote from one of my former computer science professors in college. I’m referring to it here because, frankly, I completely disagree.&lt;/p&gt;
&lt;p&gt;Comments don’t get as much attention as they deserve. They are the key to a clear understanding of what code snippets are supposed to do. While code should be written to be easily understood, people approach different problems in different ways. What’s obvious to one person may not be obvious to another, and a lot of problems could be solved if there were a simple one-liner somewhere explaining the need for some conditional or assignment statement.&lt;/p&gt;
&lt;p&gt;A lot of people, myself included, have often fallen into the trap of leaving comments as afterthoughts. This results in meaningless words that even their original author can have trouble figuring out. The tendency is to think, “I’ll finish figuring out the implementation before writing any comments,” when we should be figuring out the implementation in English before writing any code. Comments serve as a roadmap, both to the developer writing them and to the people reading them.&lt;/p&gt;
&lt;p&gt;We’re only human. We forget, we misunderstand, we make mistakes. Comments can serve as our primary weapon against that. They spare our future selves countless hours of tracing and they let our co-workers flip straight to the same page we’re on. We can save everybody time by just writing comments.&lt;/p&gt;
&lt;p&gt;Good code &lt;em&gt;needs&lt;/em&gt; comments. Good code needs comments &lt;em&gt;because people&lt;/em&gt;.&lt;/p&gt;
</content:encoded></item></channel></rss>