<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
  <title>Ben Howdle</title>
  <subtitle>Blog posts from Ben Howdle - software engineer, writer, speaker</subtitle>
  <link href="https://benhowdle.im/feed.xml" rel="self" />
  <updated>2026-01-22T00:00:00.000Z</updated>
  <id>https://benhowdle.im</id>
  <author>
    <name>Ben Howdle</name>
  </author><entry>
    <title>Meet Somna</title>
    <link href="https://benhowdle.im/meet-somna.html" />
    <updated>2026-01-22T00:00:00.000Z</updated>
    <id>https://benhowdle.im/meet-somna.html</id>
<content type="text">The Pattern Underneath the Pattern I&#39;ve spent fifteen years building software. In that time, I&#39;ve learned to pay attention to systems - how they fail, how they recover, how small repeated interactions compound into something much larger than any single event. At some point I started noticing the same dynamics in myself (it took a hot minute). Not in a dramatic way. More like: the same worry showing up in different clothes. The same tightness in my chest before certain conversations. The same impulse to check my phone when a feeling I don&#39;t like arrives. Loops, running quietly in the background. I&#39;ve tried a lot of things over the years. Journaling. Therapy. Meditation apps. Reading. All useful, in their way. But there was often a gap between understanding what was happening and actually feeling different when it happened again. I built Somna to explore and capture that gap. What actually repeats Here&#39;s what I noticed: when I felt stressed or reactive, it was rarely about the thing in front of me. There was usually a deeper structure underneath. Something happens (the trigger). A thought appears. Feelings rise. The body reacts - tight chest, clenched jaw, that familiar sinking. A behaviour follows - I scroll, I snap, I avoid. And underneath all of it, some belief gets quietly reinforced. I&#39;m not good enough. I&#39;ll be found out. I can&#39;t handle this. The belief is the hinge. When something feels true at that level, your whole system organises around proving it. The thoughts, the feelings, the reactions - they&#39;re all downstream. This isn&#39;t a flaw. These patterns made sense once. They were adaptations to environments that required them. The problem is they keep running long after the original situation has passed. Why noticing matters There&#39;s a concept in software called observability - the ability to understand what&#39;s happening inside a system by looking at its outputs. 
You can&#39;t fix what you can&#39;t see. The same thing applies here. Most of the time, we&#39;re inside the loop. The trigger arrives, the belief activates, the reaction happens, and we only realise later - if at all - that we were caught again. The pattern runs faster than our awareness of it. But something shifts when you slow it down. When you can name the trigger, notice the thought, feel what&#39;s happening in your body, see the behaviour you&#39;re reaching for, and identify the belief underneath - the loop loses some of its power. Not immediately. Not completely. But gradually. You&#39;re no longer just in the pattern. You&#39;re observing it. And observation changes things. What Somna actually does Somna is simple. When something&#39;s bothering you, you log it. The app walks you through each layer: What triggered this? What thought showed up? What are you feeling? What&#39;s happening in your body? What behaviour are you reaching for? What belief might be underneath? Over time, you start to see what keeps repeating. Not because the app tells you - because you notice it yourself. There&#39;s also a support mode for when things flare up. Simple breathing exercises, grounding techniques. Nothing complicated. Just ways to help your nervous system settle so you can think clearly again. That&#39;s it. No streaks. No gamification. No guilt if you don&#39;t use it for a week. What it isn&#39;t Somna isn&#39;t therapy. It&#39;s not a replacement for professional support if you need it. It&#39;s also not a productivity tool. There&#39;s no goal-setting, no habit tracking, no optimisation. And it&#39;s not about fixing yourself. The framing matters here. You&#39;re not broken. Your nervous system learned to protect you. These patterns exist for a reason. The point isn&#39;t to eliminate them - it&#39;s to meet them with a bit more awareness, so they don&#39;t run the show quite so automatically. Privacy as a feature Everything you type stays on your device. 
I never see it. This was a deliberate choice. The things you&#39;re logging are personal. They should stay that way. And honestly, you&#39;re more likely to be honest with yourself if you know no one else will ever read it. Why I built this The cynical answer is that I built it for myself. I wanted a tool that matched how I actually think about this stuff - patterns, systems, loops - without the aesthetic of most wellness apps. The less cynical answer is that I think a lot of people understand their patterns intellectually but still feel hijacked by them emotionally. That gap is frustrating. You know the thought isn&#39;t true. You know you don&#39;t need to react this way. But knowing doesn&#39;t seem to help in the moment. Somna is my attempt to bridge that gap. Not by adding more knowledge, but by building the habit of noticing - right when it&#39;s happening. The long game Loops don&#39;t break. They soften. Each time you notice a pattern, name it, and meet it without judgement, it loses a little charge. Not overnight. Not permanently. But gradually, like a well-worn path slowly growing over because you stopped walking it so automatically. That&#39;s the bet Somna makes. Not that you&#39;ll be fixed, but that you&#39;ll be a little more aware. And that awareness, compounded over time, changes something. Somna is free to download on iOS. If you try it, I&#39;d genuinely love to hear what you think. meetsomna.com</content>
  </entry><entry>
    <title>So You&#39;re About To Do Your First Conference Talk</title>
    <link href="https://benhowdle.im/first-conference-talk.html" />
    <updated>2026-01-05T00:00:00.000Z</updated>
    <id>https://benhowdle.im/first-conference-talk.html</id>
<content type="text">A Letter to My Younger Self Before Their First Talk Dear 2013 me, You&#39;re about to step on stage, or...stand up in front of a bunch of web developers in London. Your heart&#39;s racing, you&#39;ve had a gnawing feeling of dread in your stomach for the past week. Your mouth&#39;s a bit dry. That last cortado certainly did not help. You&#39;re wondering why you ever agreed to this. Let me tell you a few things I wish you&#39;d known before your first talk. 1. Your way of speaking is the point You don&#39;t need to become a different person on stage. You already have a mostly-measured, reasonably-thoughtful way of communicating. That is your style. Let it show. Audiences don&#39;t need another hype machine - that&#39;s not you. They respond to clarity, calm, and sincerity far more than polish. Making jokes is your ice-breaker - do that. Trying to perform someone else&#39;s energy is exhausting. Speaking as yourself is sustainable. 2. The audience is just...you Here&#39;s a secret: the room is full of nerds. Curious, slightly awkward, well-meaning nerds who showed up because they want to learn something or be mildly entertained for half an hour. You&#39;re one of them - you&#39;ve just been given a microphone. How would you keep yourself engaged? They&#39;re not there to catch you out. They&#39;re on your side. 3. The first minute is always wobbly That shaky feeling at the start? Completely normal. Your body is dumping adrenaline into your system because it thinks you&#39;re being chased by a tiger. Ride it out. The fog clears. Your voice steadies. Your brain comes back online. I promise: it passes every single time. 4. 
Change the question in your head When nerves hit, your brain fixates on the worst possible thought: &quot;Oh my god, everyone is looking at me.&quot; Swap it for a better question: &quot;What do I want the audience to leave knowing?&quot; This reframes your role from performer under judgment to messenger with something useful. It pulls focus outward instead of inward - and that can calm your entire nervous system. 5. Time keeps moving (which is comforting) Here&#39;s a sobering but oddly reassuring thought: In 30 minutes - or an hour, or however long the talk is - it will be over. Completely done. Time doesn&#39;t pause to judge your slides. There&#39;s usually a huge sense of relief when it ends. Then a strange little high. And, occasionally, a hint of sadness that it&#39;s finished - because once the fear drops away, it turns out...it&#39;s actually quite fun. A final note You&#39;re going to do absolutely grand. Try - genuinely try - to enjoy it. This is a unique moment where you get uninterrupted attention to share something you care about. That&#39;s a privilege, not a threat. You&#39;ve got this. Stay weird. Ben Talks I’ve Done (So Far) I didn&#39;t start out confident on stage. I started out curious, slightly terrified, and hoping I wouldn&#39;t forget how to speak. London, Aug 2013 - Will the Real DOM Manipulator Please Stand Up FEL Front-end London My first proper meetup talk. A deep dive into modern JavaScript at the time - showing how native APIs were finally catching up with what jQuery had been abstracting for years. It was about demystifying the DOM, lowering barriers, and giving web developers confidence that they didn&#39;t need a library for everything. Looking back, it was less about the DOM and more about finding my voice. Bournemouth, UK, 2014 - iOS for Web Developers re:develop Conference This talk came from curiosity more than authority. 
I was working as a freelance developer - deep in Node.js, JavaScript, and the front-end stack - and starting to get excited about iOS and what it unlocked. The goal wasn&#39;t to turn web developers into iOS experts overnight, but to show that the barrier to entry wasn&#39;t as high as it felt. Different platform, same instinct: reduce fear, build intuition, and make the unfamiliar feel approachable. Austin, TX, Aug 2024 - Letting Go of Perfectionism in Distributed Systems fintech_devcon 2024 · Roadblock Talk By this point, the anxiety hadn&#39;t vanished - but it had changed shape. This talk explored the mental models engineers use when designing large, imperfect systems: anticipating failure, managing tech debt, and resisting the urge to over-optimize for theoretical perfection. Using Jurassic Park (specifically Ian Malcolm’s Chaos Theory) as a narrative lens, I talked about how overlooked details, unexpected actors, and brittle assumptions can bring complex systems down - and why resilience, observability, and graceful failure matter more than pristine architecture diagrams. The committee feedback pushed me to make it more practical. Denver, CO, Aug 2025 - Securing a Banking System’s UI &amp; API fintech_devcon 2025 · Roadblock Talk This was the most grounded talk I&#39;ve given so far. Building on real systems I&#39;d worked on as CTO at Letter and WorkMade - and later on Clerk&#39;s Commerce team - I focused on concrete security patterns that actually held up in production: Masking database IDs so internal structure never leaks Designing backends around auditability and defensive defaults Avoiding foundational security mistakes that are hard to unwind later Baking authorization directly into request handling using RPC + RBAC The feedback this time wasn’t &quot;add more examples&quot; - it was &quot;tell us what happened next.&quot; What broke. What improved. What stuck. That felt like progress.</content>
  </entry><entry>
    <title>Amygdala&#39;s Determinism, Good Will Hunting &amp; Gut Feel: A Playbook for Developers Building With Intelligence</title>
    <link href="https://benhowdle.im/amygdalas-determinism-gut-feel.html" />
    <updated>2025-11-18T00:00:00.000Z</updated>
    <id>https://benhowdle.im/amygdalas-determinism-gut-feel.html</id>
<content type="text">Introduction - The Accidental Era of &quot;AI First&quot; Over the last two years, you&#39;ve probably noticed a strange vibe in the software world: everyone is moving at impossible speed while simultaneously questioning if any of this will matter in five years. We&#39;ve crossed into a world where individuals can do what once required teams. A world where &quot;engineering&quot; sometimes means &quot;describe the system you want and let an agent scaffold 70% of it.&quot; A world where the differentiator is no longer what you can build, but how well you understand and present what you&#39;re building. This post is about that gap - not the gap between good engineers and bad engineers, but the widening gap between what AI can do and what humans still uniquely bring to the table. In 2025, the superpower isn&#39;t AI; it&#39;s AI plus a human who actually understands the problem...and understands humans. What Brains Do vs. What AI Does If you strip the biology away, a brain is three things: A prediction engine - constantly guessing what happens next by blending past experience with the current situation. A context machine - soaking the world in messy, multimodal experience. A fear-driven risk assessor - optimised not for efficiency, but for survival. The brain doesn&#39;t distinguish between &quot;good&quot; and &quot;bad&quot;, or between &quot;positive&quot; and &quot;negative&quot; - only between &quot;safety&quot; and &quot;uncertainty&quot;. It&#39;s why people often opt for staying in a &quot;familiar hell&quot; vs. the risk of a &quot;new heaven&quot;. AI, ironically, is the inverse: A prediction engine - but trained on tokens, not lived moments. A context simulator - but only through text, images, embeddings, and approximations. No fear at all - and therefore no instinctive understanding of consequences. 
It can analyse code paths for execution, but can it remember that time it stayed up until 3am fixing production? Those moments get embedded and baked into your nervous system, propelling you never to repeat the same experience. Humans have groundedness - the smell of the Sistine Chapel, the weight of a friend&#39;s head on your lap, the lived experience that can&#39;t be compressed into embeddings. AI has a kind of alien competence - breadth without personal cost, pattern recognition without pain. Where they meet is where the magic happens, but pretending they are the same thing is where developers go wrong. The Good Will Hunting Principle Why AI Will Never Know What the Sistine Chapel Smells Like There&#39;s a moment in Good Will Hunting that has become, for me, the perfect metaphor for AI: &quot;If I asked you about art you could give me the skinny on every art book ever written… But you couldn&#39;t tell me what it smells like in the Sistine Chapel...&quot; AI is the bench scene. It is Will: dazzlingly capable, encyclopaedic, high-bandwidth, sometimes correct before it&#39;s even finished reading the prompt. Humans are Sean. We carry the texture of life - loss, fear, joy, consequence. AI knows the syllabus. Humans know the stakes. This is why AI alone isn&#39;t the answer, especially in domains that matter. AI is an accelerant. The human is the combustion chamber. If you&#39;re singing off the same hymn sheet with AI, it can decipher the strange glyphs into what it should sound like, but it cannot feel the swelling vibration in your chest that you associate with a choral climax. Determining Determinism Non-Deterministically How to Know When an AI Is &quot;Correct&quot; When It Cannot Be Repeated Exactly Humans secretly want AI to behave like traditional software: given X, produce Y, every time. 
But AI behaves like a human who took three espressos and read the entire internet: given X, produce Y, Z, or occasionally something that should be set on fire, each delivered with an earnest and confident &quot;I&#39;ve done it! Here&#39;s what I can do next!&quot;. So how do you measure correctness in a non-deterministic system? You don&#39;t evaluate the final output; you evaluate the space of outputs: Is it within the acceptable region? Is the variance meaningful? Is the deviation dangerous or merely creative? Would a domain expert disagree? And if so, why? And how do we feed that expert feedback back into the loop, improving the overall process? Correctness becomes a spectrum. Safety becomes a gradient. Evaluation becomes probabilistic. This is why human involvement matters - not as an afterthought, but as a co-author of the output distribution. The Human-in-the-Loop Advantage - Lessons From WorkMade At WorkMade, I worked closely with our in-house tax expert - a man with 20+ years of experience reading IRS forms like other people read cereal boxes. We built systems that: Automatically classified user transactions into tax codes. Prepared IRS forms programmatically. Used AI to infer categories, reduce manual work, and accelerate decision-making. But here&#39;s the important part: we never once assumed the AI was &quot;right.&quot; Instead, we assumed it was fast. We built: Escape hatches. Review checkpoints. Overrides. Confidence thresholds. Explanations. Human re-routing mechanisms. Continuous correction loops guided by the tax expert. AI did the brute-force first pass. Our expert handled the nuance. The system didn&#39;t replace expertise; it scaled it. That&#39;s the pattern teams miss: AI doesn&#39;t eliminate the human. AI amplifies the human - if the system is designed correctly. The human is the core logic; AI is the serverless lambdas you scale horizontally to maximise throughput. 
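That human-in-the-loop shape - confidence thresholds, review checkpoints, re-routing - is easy to sketch. Here is a minimal, hypothetical TypeScript sketch; the names (route, REVIEW_THRESHOLD) and the single-threshold policy are illustrative assumptions, not the actual WorkMade code:

```typescript
// Hypothetical sketch of a confidence-threshold escape hatch:
// high-confidence AI classifications pass through automatically,
// everything else is re-routed to a human reviewer.
type Classification = { taxCode: string; confidence: number };

type Routed =
  | { kind: "auto"; taxCode: string }
  | { kind: "review"; taxCode: string; reason: string };

// Illustrative threshold; in practice it would be tuned over time
// from the expert reviewer corrections.
const REVIEW_THRESHOLD = 0.9;

function route(c: Classification): Routed {
  if (c.confidence >= REVIEW_THRESHOLD) {
    return { kind: "auto", taxCode: c.taxCode };
  }
  // Low confidence: surface the model best guess to the expert
  // instead of trusting it.
  return { kind: "review", taxCode: c.taxCode, reason: "low confidence" };
}
```

The point is not the threshold itself but that the human path exists as a first-class branch: every low-confidence output lands in front of the expert, and their corrections become the feedback loop.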
The Amygdala Architect - How Fear Becomes a Feature People sometimes assume great architecture comes from calm, serene minds. Some of the best work I&#39;ve seen from engineers - and, if I&#39;m honest, my best work too - has come from a kind of wired, hyper-attentive intensity. The kind that doesn&#39;t choose to analyse the edge case; it can&#39;t stop analysing it. My overactive amygdala means I&#39;m constantly scanning, anticipating, modelling. It&#39;s brilliant for building systems. It&#39;s exhausting for being a person. Almost certainly exhausting for my wife too... This intensity is a blessing and a curse. The Blessing It gives you the &quot;architect brain&quot;: You see the failure modes before anyone else even sees the system. You instinctively build in escape valves, retries, and guard rails. You predict downstream consequences with eerie accuracy. You design for resilience because your nervous system doesn&#39;t trust anything that can&#39;t survive an outage. You assume things will break - and then design systems that survive that assumption. This isn&#39;t pessimism. It&#39;s protective pattern-recognition. It&#39;s the same biological wiring that kept your ancestors alive, repurposed for distributed systems. The Curse But that same intensity doesn&#39;t switch off just because the meeting ends or the PR is merged. It means: Your brain keeps simulating failure in everyday life. You obsess over unlikely risks with the same seriousness as likely ones. Your nervous system sits closer to &quot;alert&quot; than &quot;rest.&quot; You carry architectural vigilance into places where it doesn&#39;t belong. It&#39;s the classic engineer&#39;s paradox: The things that make you great at your craft often make life feel heavier than it needs to be. And yet - and this is important - when directed correctly, that same intensity becomes your differentiator in an AI-first world. AI is calm. AI is neutral. 
AI has no instinct for danger and no felt sense of consequence. AI cannot be paranoid - and paranoia, in the right dose, is foresight. Your intensity isn&#39;t an inconvenience. It&#39;s the complementary ingredient AI lacks: Gut-level modelling of risk Emotional simulation of failure Embodied understanding of consequences An almost spiritual commitment to &quot;this must not break&quot; If AI is acceleration, intensity is directional caution. Together, they create systems that are fast and safe - and that combination is becoming rarer, and therefore more valuable. I plan for failure paths before success paths. I ask &quot;what if this breaks?&quot; before &quot;what if this works?&quot; I model worst-case scenarios instinctively. I design with escape valves, backpressure, retries, and dead-letter queues by default. Fear - in the right container - is foresight. AI cannot feel fear. It cannot anticipate catastrophic edge cases from internal anxiety. It cannot emotionally simulate &quot;what happens if this fails and thousands of people rely on it.&quot; But humans can. That is not a bug; it is the defining complement to AI&#39;s strengths. How Developers Should Use AI in 2025 AI is no longer &quot;autocomplete but bigger.&quot; It is now a junior engineer, a research assistant, a prototyper, a debugger, a pair programmer, a brainstorming partner, and a brute-force executor of detail. But the developer remains indispensable for: Problem definition - AI can solve almost anything; it cannot choose which thing is worth solving. Context building - grounding the problem in reality, not tokens. Risk modelling - thinking through failure modes AI cannot intuitively imagine. Domain alignment - translating expertise into constraints. 
Decision ownership - choosing what is &quot;good enough,&quot; what is &quot;dangerous,&quot; and what is &quot;wrong.&quot; Ethical navigation - determining where automation stops, where it&#39;s irresponsible to let AI continue, and where a human is required for care. AI accelerates execution. Humans determine direction. The Future - Hybrid Intelligence as a Competitive Advantage The founders who win in this new era won&#39;t be the ones who automate everything. They&#39;ll be the ones who: Understand the problem viscerally. Use AI as leverage, not outsourcing. Build human fallibility into the system. Build human wisdom into the loop. Recognise where AI ends and where experience begins. Treat the amygdala not as a glitch but as a strategic asset. AI is the engine. Humans are the steering wheel. You need both to get anywhere worth going. Closing - The Bench Scene Revisited In Good Will Hunting, Sean tells Will a truth AI will never understand: Experience is not data. Suffering is not a dataset. Love is not an embedding. Fear is not a token. Wisdom cannot be scraped from Wikipedia. AI is brilliant. But it has never smelled the Sistine Chapel. The future will belong to the builders who remember that.</content>
  </entry><entry>
    <title>Building Devdle: Notes from the Neon Arcade</title>
    <link href="https://benhowdle.im/building-devdle.html" />
    <updated>2025-10-15T00:00:00.000Z</updated>
    <id>https://benhowdle.im/building-devdle.html</id>
<content type="text">Building Devdle: Notes from the Neon Arcade Devdle is a Wordle-inspired daily puzzle app I built for developers to solve coding challenges. It was a super fun project to build, and the brand is obnoxiously over-the-top, so I figured it would also be fun to do a deep-dive on how I built it, how it operates and where it&#39;s hosted. Enjoy. TL;DR Edge-first stack (Cloudflare Worker + Pages) keeps the daily puzzle snappy worldwide. Puzzles are generated on demand with OpenAI, validated locally, and cached in KV/D1. React + Vite powers a CRT-styled UI; all answer checking happens client-side with a public salt. Google Analytics events (app_open, play_click, first_attempt, guess, etc.) make DAU and funnel tracking explicit. Durable Object rate limiting + cron jobs keep the API steady without babysitting servers. What I Wanted One fresh coding puzzle per UTC day. Zero login, zero friction, works on a phone. A restrained scope I can support while everything else is on fire. So Devdle is intentionally small: neon pixels, six guesses, a shareable emoji card. No accounts, no leaderboard. Architecture Snapshot Frontend: React 18, Vite, TypeScript, Tailwind (fast DX, easy animation, modern tooling). API: Cloudflare Worker (Hono), KV, D1, Durable Object (edge latency, no cold starts, free-ish to run). Content: OpenAI chat completions (flexible copy + code snippets with explanations). Hosting: Cloudflare Pages (static deployment, automatic SSL, custom domain ready). Telemetry: Google Analytics (gtag) + Worker logs (simple, keeps us honest). Everything sits under devdle.iamdevloper.io. Worker handles /api/*; Pages serves the SPA. Daily Puzzle Lifecycle Player hits the SPA → initAnalytics() → trackEvent(&#39;app_open&#39;). App calls GET /api/today. Worker tries: KV cache (12h TTL), otherwise D1 lookup, otherwise generate new puzzle. Generation pipeline: Pull recent puzzle metadata for variety. 
Call OpenAI with hard constraints: language ∈ {js, py, go}, type mcq, six choices, one answer. Validate with Zod → enforce output rules (single-line hashing). Run novelty guard (fingerprints, stems). The guard now logs and adjusts difficulty rather than throwing a tantrum. Persist to D1, hydrate KV, return puzzle minus the answer but include a short hash. Client hashes guesses (SHA-256(answer|salt), truncated 16 chars) to compare locally. Salt is public but rotated annually. Wins/losses, streaks, history saved in localStorage; share text built with emoji tiles. Backend: Cloudflare Worker KV – Hot cache for today&#39;s puzzle (keys: puzzle:YYYY-MM-DD). D1 – SQLite schema: puzzles(date TEXT PRIMARY KEY, json TEXT, created_at INTEGER). Durable Object – Token bucket per IP, 60 req/min (120 for the /today read path). Cron – Runs at 00/08/16 UTC. First call warms KV for current day (idempotent). Second call pre-generates +3 days to keep content ready. Secrets – VITE_API_ORIGIN, OPENAI_API_KEY, DAY_PUBLIC_SALT, SECRET_SALT managed via Wrangler. Worker code paths are heavily logged (console.log, console.warn, console.error). observability.enabled = true makes Cloudflare&#39;s dashboards useful when generation misbehaves. Frontend Bits App.tsx – Controls view switching (splash, play, stats). Persists attempts/streak history. first_attempt GA event fires the moment someone submits a guess. PuzzleScreen – Input modes: options → multiple choice, order → newly added drag/select UI with proper reordering controls, free → single-line (multiline for sequences), lines → spot-the-bug. Styling – Tailwind with a custom palette (ink, neonBlue, etc.). BackgroundScene.tsx uses Three.js to render the grid + holograms. Easter eggs – Console injection (window.devdle.coin(), devdle.konami()) plus the usual CRT scanlines. Share – Emoji tiles + UTMed link built via buildShareText. Analytics (GA4) Events and their triggers: app_open – SPA load. play_click – player hits the Play button. 
first_attempt – first guess for the day (captures date/type/language). guess – every guess: result (correct/wrong), attempt number, puzzle metadata. share_click – from play or stats view. stats_click, stats_back – navigation between panels. page_view – Splash/Play/Stats transitions. These keep DAU and funnel numbers straightforward when looking at GA. Content Guardrails Language rotation – always JS, Python, or Go. Hints steer the model away from overusing previous languages. conceptFingerprint – stored on puzzle metadata; helps avoid repeating array-filter-reduce. Single-answer enforcement – enforceOutputPuzzle now converts accidental output puzzles into MCQs on the fly. ensureFingerprint / enforceNovelty – logs when it repeats itself, downgrades trivial puzzles to difficulty 1. Choice normalization – Options padded to six, duplicates trimmed, correct answer guaranteed present. Operations &amp; Testing TypeScript across both app and worker (npm run typecheck). Worker unit tests (node --test) cover output enforcement and novelty utilities. Manual QA for puzzle generation whenever prompt or enforcement logic changes. OG/Twitter card meta tags point to https://devdle.iamdevloper.io/og.png for consistent previews. Rate limiting DO protects from accidental firehoses (or excited players). Cron jobs are idempotent; re-generating a day will just update KV + D1 gracefully. Things Learned the Hard Way Validate everything – Zod schemas catch OpenAI drift. Single-output puzzles now auto-convert; no more failures just because the model picked the wrong type. Limit the surface area – Three languages, one puzzle type. Fancy variety can wait until I&#39;m comfortable with the base game. Telemetry early – GA events for first_attempt + guess live from day one so daily active players are obvious. Edge infra is friendly – KV/D1/Durable Objects cover caching, storage, and rate limiting without pet servers or cron boxes. 
Make puzzles answerable – Order mode got a proper UI, sequence mode gets hints, options are always six clickable tiles. Next Steps? Grow the puzzle backlog, add explainers for multi-choice answers. Explore optional leaderboards or streak sharing once base DAU is steady. Extend analytics with puzzle difficulty/time-to-solve metrics without creeping on the &quot;no login&quot; promise. For now, Devdle does exactly what I wanted: a five-minute daily diversion for the dev brain, low-maintenance to run, and easy to share.</content>
  </entry><entry>
    <title>fintech_devcon 2025 Talk: Securing a Banking Platform&#39;s API + UI</title>
    <link href="https://benhowdle.im/fintech_devcon-2025-securing-a-banking-platforms-api-and-ui.html" />
    <updated>2025-09-16T00:00:00.000Z</updated>
    <id>https://benhowdle.im/fintech_devcon-2025-securing-a-banking-platforms-api-and-ui.html</id>
<content type="text">Securing a Banking System&#39;s UI and API This post is adapted from a talk I delivered at fintech_devcon in August 2025 in Denver, CO. I&#39;ve spent the last several years building fintech products from the ground up. Today, though, I want to talk about something that often gets bolted on far too late: security. Not security as a checklist. This is about building systems that are secure by design - from the UI all the way to the backend - and doing it in a way that doesn&#39;t suck the life out of your product or your team. And let&#39;s be honest: systems are rarely, if ever, 100% secure. Once you accept this sobering fact, you can free yourself to focus on threat prioritisation over straight coverage. What I&#39;ll share here is how we made security an invisible but powerful layer in our systems. Not just for compliance, but to genuinely protect users without compromising developer experience… too much. Who Am I and Why This Talk? I&#39;m a consultant today, but previously I was CTO of two banking startups - Letter Bank and WorkMade. We built full-stack banking platforms, handling everything from onboarding to internal tools to user-facing dashboards. That period, roughly from 2019 to 2023, was a crash course in building secure financial systems. And unlike most 2025 talks, this one features absolutely zero AI. It&#39;s vintage, if you will. Across those startups, I spent five years leading engineering teams, deeply involved in every aspect of the product lifecycle. We weren&#39;t just shipping features - we were handling real money, real risk, and all the constraints that come with that. At the first company, I was engineer #1. I laid the foundations - including the security model - and saw firsthand how those early decisions could either set you up for success… or endless pain. That&#39;s the perspective I want to share: not just what we did, but why it mattered, and how it actually played out. 
What You&#39;ll Hear This is not theory. It&#39;s one giant case study of real production systems with real customers. There aren&#39;t many security talks that get to boast big wins. In this space, positive news is simply the absence of negative incidents. But for context: We passed pen tests. We didn&#39;t lose anyone&#39;s money. We didn&#39;t suffer breaches (no external ones, anyway - more on that later). So, this is a dump of lessons from five years of paranoid architecture and low-level background anxiety. The Three Big Takeaways If there are three things I&#39;d love for you to leave with, they&#39;re these: Mask every backend ID. Leaking internals is mostly downside, and concealing them is mostly upside - though it does come at a cost, which I&#39;ll get into. Declarative authorization. Centralise your rules, and bake them into functionality, network, and system layers. Immutability. For me, security is less about static state and more about narrative. I want to see the why and how events occurred over time, not just the what. Attack Vectors (or: Don&#39;t Build Death Stars with Exhaust Ports) Whenever I think about system security, I start with a simple question: how can this system be attacked? From which angles, in what ways, by whom? If you&#39;re building the Death Star, you don&#39;t want to realise too late that you left an exposed exhaust port. Think about it: this was the Empire&#39;s most ambitious engineering project. Unlimited budget. Top talent. Built under Palpatine&#39;s iron grip. But even with all that control, they made one critical mistake. They underestimated what a single overlooked vulnerability could do. And because of that, the whole system collapsed. The Rebel Alliance didn&#39;t brute-force their way in. They exploited poor system design. That&#39;s what we&#39;re avoiding when we build defensible systems. 
You can have firewalls, encryption, auth - all the compliance boxes ticked - but if you leave one path exposed through bad assumptions or inconsistent patterns, there&#39;s your exhaust port. Your threat surfaces are everywhere: The UI (a hostile environment with random clicks on random devices). Webhooks (easy target unless hardened). The API (usually the most attacked). And staff (even the best-intentioned human can be phished). The point: map your attack vectors early. Zoom out. What&#39;s exposed? Where&#39;s your exhaust port? Trust Is Paramount You don&#39;t get many chances to lose customer trust - just the one. And it&#39;s almost impossible to win back. We weren&#39;t just handling 1s and 0s. We were handling payroll, rent, food. A single bug might mean someone&#39;s mortgage doesn&#39;t go through. So yeah, I&#39;d rather lose a little sleep over a false positive alert than wake up to a real incident. Money can move fast, but trust moves incredibly slow. Tech Stack Choices The stack we chose shaped how we approached security: GraphQL Acted as an intent parser: frontend asked for exactly what it needed, and nothing more. Tighter control of sensitive data and a cleaner audit trail. gRPC Fast, type-safe, schema-first. Perfect for money movement flows where contracts need to be unambiguous. Etcd Our rock-solid, immutable store. Every permission change, method registration, config update recorded immutably. Effectively an always-on audit ledger. TypeScript More than DX. Prevented runtime surprises in fintech, where undefined isn&#39;t just annoying - it could cost someone real money. YAML One source of truth for operations and auth rules. Compiled into TypeScript with RBAC baked in. Events as Communication and Data We made every meaningful action emit an event. That event became the source of truth and the message. It told the rest of the system what just happened. That gave us two for the price of one: comms and data. 
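That dual role can be sketched in a few lines of TypeScript. This is an illustrative sketch only: the event shape, the in-memory log, and the listener wiring are assumptions for the example, not the production schema.

```typescript
// Sketch: one appended event is both the record (audit trail)
// and the message (what subscribers react to).
interface DomainEvent {
  id: string;        // unique, ordered event id
  type: string;      // e.g. "account.address_updated"
  actor: string;     // user or service that triggered the action
  at: string;        // ISO timestamp captured at write time
  payload: object;   // the change itself
}

type Listener = (e: DomainEvent) => void;

const eventLog: DomainEvent[] = [];   // stand-in for an append-only store
const listeners: Listener[] = [];     // stand-in for stream subscribers

function emit(type: string, actor: string, payload: object): DomainEvent {
  const event: DomainEvent = {
    id: String(eventLog.length + 1),
    type,
    actor,
    at: new Date().toISOString(),
    payload,
  };
  eventLog.push(event);                   // data: the immutable audit trail
  for (const fn of listeners) fn(event);  // comms: notify every subscriber
  return event;
}
```

Because the log is append-only and each event carries its own actor and timestamp, replaying it from the start rebuilds state with no extra logging layer.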
Microservices subscribed to the stream, and it also meant we had an audit trail by default. No extra logging layer, no mystery state. It&#39;s how we debugged things, and how we rebuilt state. Write vs. Read Split We split our system into a write path and a read path. The write path employed pure event-sourcing: every change was a new event written into Etcd&#39;s append-only log. That gave us immutability, a single source of truth, and a perfect audit log. The read path? A cache mirror in PostgreSQL, built by replaying those events. Fast, familiar, easy to query - but not the source of truth. This let us optimise for different guarantees: strong consistency on the write side, performance on the read side. Yes, that meant eventual consistency, but for most use cases - back-office dashboards, fraud review tooling - that was a fair trade. Immutability Everywhere We never removed or rewrote events. Every change was appended as a new event. That gave us a perfect audit log by default. By encapsulating user context and timestamps in the schema, we never had to guess later. The event itself was the story. Users weren&#39;t static objects. They were event streams: logins, password changes, address updates, permission grants. Like a Git repo, HEAD is useful, but the real truth is the commit history. Hiding Inner Workings One of our first principles was not leaking backend internals to the outside world. That meant no raw database IDs in URLs, no trusting clients to behave, no assumptions that internal structure should be public. We treated every internal object as something attackers might guess or misuse. Which brings me to one of my earliest painful lessons: an IDOR (insecure direct object reference) bug where a user could access someone else&#39;s account just by pasting their ID into the URL. I was mortified. We&#39;d built security into the system from day one - but missed this. The fix? Mask every ID. 
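As a rough sketch of that fix - with an in-memory object standing in for Redis, and token generation simplified - the shape is roughly this:

```typescript
import { randomBytes } from "crypto";

// Sketch only: in production this lived in Redis with a real TTL,
// and the owner check ran against the caller's verified JWT.
interface MaskEntry {
  realId: string;     // the internal database id
  ownerId: string;    // who the token was minted for
  expiresAt: number;  // epoch ms; Redis would enforce this via TTL
}

const store: { [token: string]: MaskEntry } = {};
const TTL_MS = 60 * 60 * 1000; // one hour

function mask(realId: string, ownerId: string): string {
  const token = randomBytes(16).toString("hex"); // opaque, unguessable
  store[token] = { realId, ownerId, expiresAt: Date.now() + TTL_MS };
  return token;
}

function unmask(token: string, jwtUserId: string): string {
  const entry = store[token];
  if (!entry) throw new Error("unknown token");
  if (Date.now() > entry.expiresAt) throw new Error("token expired");
  if (entry.ownerId !== jwtUserId) throw new Error("forbidden"); // hard stop
  return entry.realId;
}
```

The key property: the token a user sees is worthless to anyone else, and worthless to them after the TTL, so pasting someone else&#39;s value into a URL gets a hard stop instead of their account.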
Masking IDs Instead of exposing IDs, we generated opaque, short-lived tokens. The mask was stored in Redis with a TTL, bundled with metadata like the original ID and who owned it. When reversed, it checked against the user&#39;s JWT. If it didn&#39;t match? Hard stop. Do not pass Go, do not collect $200. It wasn&#39;t free: tokens expired after an hour. That added some friction, but clients refreshed them automatically. Users never noticed, and we gained much tighter boundaries. Hard Shell, Gooey Center We designed with a hard shell. GraphQL had zero business logic. Its only job: parse intent and pass data forward. Execution lived behind the wall in RPC, with schema validation and auth checks baked in. Developers never touched business rules directly - they were enforced centrally, consistently, invisibly. This let us enforce logging, validation, and permissions in one place, while keeping the developer experience sane. Declarative Auth with YAML We defined operations in YAML, with RBAC roles attached. At build time, those were compiled into TypeScript middleware. That gave us: One source of truth. Easy Git diffs and reviews. Automatic scaffolding for handlers. Developers never wrote if (isAdmin) again. Security was declarative, boilerplate vanished, and the system enforced intent consistently. Mistrust as a Principle We treated every input with suspicion. Every request was validated. Even internal-only endpoints got zero-trust treatment. It&#39;s the same reason NASA applies a 1.4 factor of safety to every bolt. They don&#39;t do it because bolts always fail. They do it because the cost of failure is catastrophic. In fintech, trust is oxygen. Squander it, and it&#39;s game over. So we borrowed the principle: not “what&#39;s the minimum that works?” but “what&#39;s the minimum that doesn&#39;t fail?” Closing Principles To close things out, here are the principles that helped us avoid building Death Stars with exhaust ports: Avoid exposing backend internals. 
IDs are implementation details, not API design. Bake in validation everywhere. Assume input is wrong until proven otherwise. Prefer events over state. Logs give you the movie, not just the still frame. Prioritise security over raw speed. Users forgive milliseconds; they don&#39;t forgive breaches. Thank You And that&#39;s everything from me. Hopefully there was something useful here - or at least something you&#39;ll shamelessly copy-paste into your own systems. If you want to chat more about security, systems design, or just swap war stories, you can find me online or at the coffee stand. Thanks again.</content>
  </entry><entry>
    <title>AI Struggles To Help You Think</title>
    <link href="https://benhowdle.im/ai-struggles-to-help-you-think.html" />
    <updated>2025-07-04T00:00:00.000Z</updated>
    <id>https://benhowdle.im/ai-struggles-to-help-you-think.html</id>
    <content type="text">AI Can Help You Write Code - But It Can&#39;t Help You Think The hardest part of software engineering has never been the syntax. It&#39;s never been whether you used map or forEach, GraphQL or REST, Tailwind or BEM. The real difficulty is developing the ability to reason about an entire system - understanding how data moves through it, how changes ripple across it, and how one decision in a helper function might trigger cascading pain three services away. That reasoning muscle is what separates a competent developer from a great one. And we used to build it the hard way: by writing code line by line, by navigating the thorns of state, by realizing, too late, that the thing you built two weeks ago just broke the thing you wrote two months ago. You weren&#39;t just shipping features - you were developing mental models of how the system fit together. Here&#39;s the problem: AI is hampering that learning loop. AI can write you code. Good code. Sometimes great code. But it doesn&#39;t help you build the mental model of the system you&#39;re adding to. It hands you the puzzle piece but doesn&#39;t help you see the entire jigsaw. It can autocomplete your function, scaffold your tests, write your regex, and even explain the commit you barely remember writing - but it can&#39;t tell you how that code affects the behaviour of a 50k-line production app that predates you by six years. What&#39;s worse: the very process that used to build your reasoning - writing code line-by-line while holding the system in your head - is now being outsourced. The production is supercharged. But the understanding is wavering. You&#39;re shipping more...but learning less. Developers today might produce more PRs than ever before. But that doesn&#39;t mean they understand the system better than last year&#39;s slower counterpart. KPI targets &amp;gt; cognitive cycles. In fact, in some cases, they understand it less - because they never had to wrestle with it. 
And you can&#39;t reason about something you&#39;ve never truly had to engage with. Sometimes, the deepest learning and growth comes from spending an uncomfortable amount of time in the discomfort. So what do we do? We can&#39;t just turn the AI off. We shouldn&#39;t* But we do need to design engineering workflows that preserve and protect system-level reasoning: Code reviews that ask &amp;quot;how does this interact with module X?&amp;quot;, not just &amp;quot;are the tests passing?&amp;quot; Internal docs that focus on why something exists, not just what it does Slowing down when debugging - not because the AI can&#39;t guess the fix, but because you need to learn why the bug emerged *one of the things I like about AI is that I don&#39;t feel bad about bugging it with more questions. I can be quite an annoying person, so to have a 24/7 entity around that I can interact with, who doesn&#39;t get annoyed, is quite valuable to me And we need to teach junior devs that the process of production was the process of learning - and that if AI takes the wheel too often, they&#39;ll miss the most important part of the ride. AI is here to stay. But your ability to reason about systems? That&#39;s still the rarest - and most valuable - skill in software engineering.</content>
  </entry><entry>
    <title>The Problem → Solution → Impact Gap</title>
    <link href="https://benhowdle.im/problem-solution-impact-gap.html" />
    <updated>2025-04-17T00:00:00.000Z</updated>
    <id>https://benhowdle.im/problem-solution-impact-gap.html</id>
    <content type="text">There&#39;s a strange delay in engineering leadership no one warns you about. It&#39;s the time between when you fix something and when you experience the tangible benefits of your actions. Fix a bug? You&#39;re a hero before lunch. Rebuild the monitoring system to detect drifts in a payment system? It might be weeks, months before those efforts come into fruition. That&#39;s the Problem → Solution → Impact Gap Everything Good Takes Longer Than You&#39;d Like When I joined a Letter (a US fintech startup) as founding engineer and CTO, I had a rare opportunity: greenfield architecture (hello!). No legacy baggage. No weird constraints. Just a blank canvas and a set of make-or-break goals - high availability, airtight data integrity, and compliance-grade auditability. I designed the platform around immutable event streams, strict type safety, and multi-region resilience. Every customer action-from transfers to tax calculations-became an event in a chain, keyed by ULID, stored immutably, and passed through a hardened RBAC system backed by GraphQL and RPC layers. At the time, most of this wasn&#39;t visible to users. They didn&#39;t see the YAML-driven schemas or the ephemeral Redis-based ID masking for security. But six months later? Faster shipping. Fewer bugs. More confident engineers. We weren&#39;t just unblocking ourselves-we&#39;d built momentum. The payoff came later. But the impact started the moment we planted the seeds. Leadership is full of moments like that: Where you do the right thing, and then...crickets. A lot of people fill that silence with doubt, or scurried reactivity. Don&#39;t. Trust the payoff. You&#39;re a Gardener Now I&#39;ve shipped developer tools that now process 150M+ requests a month. I&#39;ve built tax software that moves actual money to actual governments (hey sleepless nights!) I&#39;ve helped companies from bootstrapped indie outfits to Fortune 500s scale their platforms and their people. 
And in every one of those contexts, the same truth holds: Fast fixes get applause. Long-term bets build legacies. When I joined WorkMade, I started laying foundations: monitoring, deployment, categorisation pipelines, tax systems. Some shipped fast. Others? Still surfacing value today (I hope 🤞), a year later. The job becomes: Plant the seed. Protect it from bad weather. Wait. Time Is a Force Multiplier The difference between good decisions and great ones? Time. Give your team better observability → They&#39;ll ship safer by default. Instil a culture of asking dumb questions → The juniors become seniors in record time. Removing fear as a blocker helps people flourish. Replace the 3-hour weekly standup with async check-ins → Watch productivity crawl back into your dev cycles. Each small improvement is a brick. Stack them consistently and you build something that doesn&#39;t fall over when the wind changes. Micro vs Macro Moves Every day, you get two questions: What&#39;s one thing I can do today to help the team? What&#39;s one thing I can do today that helps the product 6 months from now? Some days that&#39;s &amp;quot;unblock an engineer.&amp;quot; Other days it&#39;s &amp;quot;refactor our infra so we can handle more payments without drowning.&amp;quot; Early in your career, success feels like &amp;quot;look what I built.&amp;quot; Later, it becomes &amp;quot;look how fast and safe everyone else is building.&amp;quot; Lupe Fiasco wrote a banger in 2007 called &amp;quot;Superstar&amp;quot; - which contains the lyric Did you improve on the design? Did you do somethin&#39; new? - I think about it a lot. Not just because it&#39;s a great track, but because I now want to get in, improve the system in some small way, then get out of the way. Compounding Ownership The best engineers don&#39;t just write code. They own outcomes. That&#39;s what I hire for, what I encourage, what I try to model. 
Whether it&#39;s onboarding someone into a new React Native stack at MyTheresa, or helping Cisco scale across teams and timezones, the most impactful work has always been about instilling ownership. Not just &amp;quot;you broke it, you fix it.&amp;quot; More like &amp;quot;you saw it, you shaped it, now make it shine.&amp;quot; TL;DR You don&#39;t lead by doing more. You lead by doing the right things consistently, even when the results are invisible for a while. Fix things fast, but think long. Seed value daily, even if it sprouts next year. Don&#39;t build monuments. Build momentum. Solve problems. Ship solutions. Think in longer time spans than your innate need for immediate gratification is comfortable with. The impact is coming. Trust the gap. Hop in. The water&#39;s warm.</content>
  </entry><entry>
    <title>Principles &amp; Implementation: Lessons in Surviving the Transient Nature of Software Engineering</title>
    <link href="https://benhowdle.im/principles-and-implementation.html" />
    <updated>2025-04-02T00:00:00.000Z</updated>
    <id>https://benhowdle.im/principles-and-implementation.html</id>
    <content type="text">Or, &amp;quot;How I Learned to Stop Worrying and Love the Journey&amp;quot; There&#39;s a running joke in engineering circles that if you sit still long enough, someone will come along and replace your perfectly functional code with the latest framework named after a fruit, bird, or obscure Norse god. I&#39;ve seen the same thing happen at companies and startups when old contractors or staff leave, and new ones join: &amp;quot;oh no, this all needs to go&amp;quot; - 98% of the time, the real issue is: &amp;quot;its not how I would&#39;ve done it&amp;quot; (spoiler alert: it often won&#39;t be, but you weren&#39;t around for the original context, motivations, decisions, meetings, PRs, discussions, etc) In the past 15 years, I&#39;ve gone from writing Backbone.js for a UK accounting software startup to architecting real-time financial infra as a CTO. Along the way, I&#39;ve been a freelancer, a startup co-founder, a team lead, an idiot and a contractor parachuted in to untangle Things That Shouldn&#39;t Be Tangled™️. This post isn&#39;t a framework war cry or &amp;quot;X vs. Y in 2025&amp;quot; nonsense. It&#39;s just what I&#39;ve learned about building things that last-even as the tools and paradigms around them change at a dizzying pace. 1. Nothing Is Sacred. Not Even the Code You Wrote Yesterday. At KashFlow, I led a full rebuild of their accounting app&#39;s UI. We went from jQuery/ASP soup to a Backbone architecture that was clean, modular, and (at the time) &amp;quot;future-proof.&amp;quot; Less than five years later, React was stealing Backbone&#39;s lunch money, and we&#39;d all moved on. Did that mean the rewrite was a waste*? Not at all. I have no ego in this fight. The point wasn&#39;t to immortalise the code. It was to enable the business to move faster at the time. That&#39;s the job. Write code with one foot in today and one eye on tomorrow-but don&#39;t expect a monument. 
*no, KashFlow was acquired not long after (business &amp;gt; tech) 2. &amp;quot;Full-Stack&amp;quot; Isn&#39;t a Badge-It&#39;s a Necessity Sometimes When you&#39;re the only/first engineer at a startup (hi, Letter), &amp;quot;frontend&amp;quot; vs. &amp;quot;backend&amp;quot; becomes a bit of a silly distinction. One day you&#39;re designing schema for a Postgres DB, the next you&#39;re debugging AWS Lambda cold starts, and by Friday you&#39;re shipping a React Component that powers 90% of the user experience. Craig David had an easy 7 days by comparison... I&#39;ve worked on real-time WebSocket platforms (Cisco), built CLI tools to standardise frontend scaffolding (Soho House), and implemented secure tax infrastructure (WorkMade) that sends actual money to actual governments. The common thread? Problem-solving. Not titles. Being full-stack doesn&#39;t mean you&#39;re a generalist. It simply means you&#39;re quite useful. 3. CTO at a Startup Means &amp;quot;Can Still Write Code, Promise&amp;quot; I&#39;ve been CTO at two companies now. At both, I&#39;ve run hiring, managed performance reviews, sketched product roadmaps, and still ended up fixing typo bugs* in PRs. *my own The job isn&#39;t about sitting in strategy meetings all day. It&#39;s about being deeply technical and unblocking others. You&#39;re the router. The glue. The &amp;quot;yes, I know where that service lives&amp;quot; person. 4. The Tools Will Change. The Patterns Stay the Same. Back in the day, I used to write Backbone. Then React. Then React Native. These days it&#39;s TypeScript everywhere, AWS under the hood, and Prisma or raw SQL on the backend. More recently: Go at Clerk. Tomorrow? Who knows. Who cares. But you know what doesn&#39;t change? Protecting data integrity. Building secure boundaries. Designing APIs that make sense. Thinking in flows, not files. Good engineers don&#39;t fall in love with tools. They fall in love with solving problems-and learn whatever they need to do it well. 
5. Hiring? Look for People Who Ship and Own Every time I&#39;ve built or inherited a team, one trait rises above all others: ownership. Do you ship? Do you own it when it breaks? Do you make the system better for the next person? At Letter, my team was async, remote, and often in wildly different timezones. The engineers who thrived weren&#39;t the loudest-they were the clearest communicators and the most reliable deliverers. That&#39;s who you want in your corner. Final Thought: Shipping Is the Job. The Rest Is Just Code. Frameworks change. APIs evolve. The tools we use today will look quaint tomorrow. But one thing won&#39;t change: The goal is to ship something that works, solves a problem, and doesn&#39;t fall over the second you go on holiday. I don&#39;t write Backbone.js anymore. But I&#39;m still solving the same problems. And I wouldn&#39;t have it any other way. Also: your old jQuery plugin still probably works.</content>
  </entry><entry>
    <title>Solving vs. Augmenting: The Developer&#39;s Dilemma in a New Codebase</title>
    <link href="https://benhowdle.im/find-vs-figure-out.html" />
    <updated>2025-03-13T00:00:00.000Z</updated>
    <id>https://benhowdle.im/find-vs-figure-out.html</id>
    <content type="text">Introduction No matter the size of the company or the maturity of the codebase, there&#39;s one question I&#39;ve found myself asking time and time again when tackling a new task: Is there an existing solution I need to augment, or do I need to solve this from scratch? At first glance, this might seem like a simple question-just check the README, right? But in reality, it&#39;s never that straightforward. This decision point isn&#39;t typically documented in an obvious way. It usually surfaces: Too late-during a PR review, when I&#39;ve already built a solution. By accident-when a more tenured engineer sees my approach and tells me about an existing pattern I wasn&#39;t aware of. With luck-if the task creator had the foresight (and technical depth) to flag an existing approach in the ticket. For new developers onboarding into a codebase, this lack of visibility can be frustrating and inefficient. Without deep insight into past decisions, how can we extract and pollinate that knowledge across the team-without relying on trial and error? The Cost of Reinventing the Wheel Every developer wants to build well-structured, maintainable systems. But when we lack visibility into existing solutions, we risk: Fragmentation – Multiple solutions to the same problem make code harder to navigate. Unnecessary Complexity – Reinventing logic that already exists adds cognitive overhead. Inconsistencies – Different implementations lead to unexpected behavior across features. Increased Maintenance Costs – More unique solutions mean more things to document, debug, and refactor later. The best developers don&#39;t just write great code; they write great code that fits the existing architecture. That means knowing when to extend an existing approach versus when to introduce something new. How to Answer the &amp;quot;Solve vs. Augment&amp;quot; Question So, how do we equip ourselves (and new engineers) to make this decision earlier and more effectively? 
Here are some practical steps I&#39;ve found useful: 1. Start by Asking the Right Questions Before writing a single line of code, I ask: Has this problem been solved before in our codebase? What similar features exist, and how were they implemented? Is there an existing pattern I should follow? If I build something new, will it conflict with anything? If you don&#39;t know the answers, that&#39;s a red flag to dig deeper. 2. Use Git to Investigate Past Solutions One of the most underused developer skills is code archeology-digging through history to understand why things exist. Search for related code: Use git grep, ack, or IDE-wide searches to find similar logic. Check commit history: git blame and git log -p show when and why certain decisions were made. Review past PRs: Older pull requests often contain discussions on trade-offs and alternative solutions. Instead of guessing, let the existing code tell you what the best approach might be. 3. Talk to Engineers Who Know the Codebase No matter how good the documentation is, the best knowledge still lives in people&#39;s heads. Ask senior engineers: &amp;quot;How have we handled this in the past?&amp;quot; Post in Slack or Discord: &amp;quot;Is there an existing utility/method for X?&amp;quot; Review code with a team member before implementing: Catch misalignments early. Making this a habit reduces silos and helps pollinate knowledge across the team. 4. Document &amp;quot;Precedent&amp;quot; Solutions One of the biggest challenges for new developers is not knowing which patterns are preferred. As a team, consider: Maintaining an &amp;quot;Architecture Decisions&amp;quot; doc – A simple guide to why certain approaches were taken. Creating a &amp;quot;Pattern Library&amp;quot; in the Wiki – Examples of common solutions (e.g., &amp;quot;How do we handle authentication?&amp;quot;). 
Standardizing PR templates – Include a checklist item: &amp;quot;Does this follow an existing pattern?&amp;quot; Making these implicit expectations explicit saves everyone time. 5. Make it Safe to Push Back on New Solutions In an ideal engineering culture, engineers should feel comfortable saying: &amp;quot;We already have a utility for this-let&#39;s use it instead.&amp;quot; &amp;quot;Can we solve this in a way that aligns with our existing architecture?&amp;quot; &amp;quot;Before merging, let&#39;s validate whether this fits with our conventions.&amp;quot; Encouraging pre-PR discussions and technical design reviews can prevent fragmentation before it happens. The Balance Between Innovation and Consistency Does this mean we should never introduce new solutions? Not at all. New solutions are necessary when: Existing approaches are outdated or unscalable. A fundamentally different requirement emerges. A better, more efficient pattern is discovered. The key is intentionality-new solutions should be introduced deliberately, not by accident. Before creating something new, always ask: Does this add unnecessary complexity? Should we refactor an existing solution instead? Is this the right time to introduce a new pattern? Consistency and innovation should coexist-but they work best when changes are made consciously, not reactively. Conclusion Every time I approach a new task, I ask myself: &amp;quot;Am I augmenting an existing solution, or am I creating something new?&amp;quot; This simple question has saved me from: Writing redundant code. Creating unnecessary complexity. Wasting time reinventing something that already works. For teams looking to scale effectively, surfacing these decisions earlier is crucial. By making existing solutions more discoverable, encouraging proactive discussions, and documenting precedent, we can build more maintainable, cohesive systems-while still leaving room for innovation. 
If you&#39;ve ever struggled with this dilemma, I&#39;d love to hear your approach. How does your team handle the balance between speed, consistency, and innovation?</content>
  </entry><entry>
    <title>How I Led a High-Performing Engineering Team Without Micromanaging</title>
    <link href="https://benhowdle.im/leading-an-engineering-team.html" />
    <updated>2025-03-05T00:00:00.000Z</updated>
    <id>https://benhowdle.im/leading-an-engineering-team.html</id>
    <content type="text">How I Led a High-Performing Engineering Team Without Micromanaging A great engineering team doesn&#39;t need constant oversight. It needs clarity, trust, and the right tools to execute. As an engineering manager, my philosophy was simple: ✅ Push the product forward. ✅ Unblock my team. ✅ Get out of their way. The result? A team that shipped fast, maintained high-quality code, and took full ownership of their work. Here&#39;s how I did it. 🚀 Laying the Groundwork for Speed One of the most effective ways to empower an engineering team is by removing friction from their workflow. For larger features, I often designed the initial architecture-laying the foundation so engineers could focus on implementation rather than figuring out structure from scratch. I built internal tools to eliminate repetitive work, such as a GraphQL schema generator that automatically created schemas from YAML. This meant engineers only had to define data structures once, rather than in multiple locations. The goal? Enable engineers to spend time solving real problems, not wrestling with boilerplate. 🔄 Trust + Autonomy = Ownership Micromanagement kills creativity and slows everything down. Instead, I built a culture of trust: Ownership from Start to Finish Engineers took charge of their own features. We&#39;d align in a design meeting, discuss trade-offs, and document key decisions. From there, they executed autonomously, with the design doc serving as both a reference and final documentation. Bidirectional, Ego-Free Feedback I never had emotional attachment to my code-if feedback improved the end product, I embraced it. This encouraged engineers to do the same, making code reviews more collaborative rather than combative. The result? Better code quality, faster iterations, and a team that truly cared about the product. 📢 Clear Vision = No Confusion A lack of clarity leads to bottlenecks. 
Engineers shouldn&#39;t need to ask, &amp;quot;Why are we doing this?&amp;quot; I made sure everyone knew the &amp;quot;why&amp;quot; behind our work-whether from me or directly from the CEO. Regular communication of product vision meant engineers didn&#39;t just wait for tasks to be assigned-they proactively planned their own roadmaps. When engineers understand the bigger picture, they stop seeing themselves as ticket-solvers and start thinking like product builders. 👀 The Result? ✅ A high-velocity team that delivered without unnecessary bottlenecks. ✅ Better code quality through ego-less feedback and collaboration. ✅ Engineers who felt empowered, not micromanaged. By focusing on clear goals, autonomy, and the right tooling, I helped my team move fast without breaking things-and without needing to micromanage. 💡 Final Thoughts Great engineering leadership isn&#39;t about dictating every move. It&#39;s about: Removing blockers Setting clear expectations Trusting your team to deliver If you get that right, everything else falls into place.</content>
  </entry><entry>
    <title>The Software Engineer Spectrum: Speed vs. Accuracy</title>
    <link href="https://benhowdle.im/software-engineer-spectrum.html" />
    <updated>2025-02-25T00:00:00.000Z</updated>
    <id>https://benhowdle.im/software-engineer-spectrum.html</id>
    <content type="text">Introduction After 15 years in software engineering and engineering leadership-ranging from IC (individual contributor) roles to CTO-I&#39;ve seen countless engineers thrive (or struggle) based on how well their working style aligns with the company&#39;s needs at the time. From the Ciscos and GoDaddys of the world to your favourite seed-stage startup, I&#39;ve seen different engineers suit very different working environments and company stages. Over the years, I&#39;ve spotted a pattern: all engineers exist on a spectrum between speed and accuracy. This spectrum isn&#39;t about skill or seniority-it&#39;s about how engineers naturally approach their work. Some lean towards speed, optimizing for fast iteration and progress, while others prioritize accuracy, ensuring long-term maintainability and scalability. Neither end of the spectrum is &amp;quot;better&amp;quot; than the other, but knowing where you sit-and understanding what kind of engineer your company actually needs-can be the difference between thriving in a role or feeling completely out of sync. The Speed vs. Accuracy Spectrum At one end of the spectrum, we have engineers who move fast, iterate quickly, and prioritize execution. At the other, we have engineers who optimize for correctness, long-term stability, and robustness. 
Here&#39;s how they compare:

Mindset - Speed-oriented: &amp;quot;Ship it now, improve it later.&amp;quot; Accuracy-oriented: &amp;quot;Make sure it&#39;s right before it goes live.&amp;quot;
Strengths - Speed-oriented: quick iteration, adaptability, delivering MVPs fast. Accuracy-oriented: stability, scalability, long-term efficiency.
Challenges - Speed-oriented: tech debt, occasional instability, missing edge cases. Accuracy-oriented: slow delivery, risk of over-engineering.
Best suited for - Speed-oriented: startups, early-stage teams, hackathons. Accuracy-oriented: enterprise software, scaling startups, regulated industries.
Frustrated by - Speed-oriented: bureaucracy, slow decision-making, rigid processes, thorough review cycles. Accuracy-oriented: firefighting, rushed deadlines, unclear requirements.

In reality, no one is purely one or the other-everyone exists somewhere on this scale, shifting slightly depending on experience, team culture, and career growth. As an aside, I&#39;ve figured out I sit somewhat more towards the speed end of the spectrum, but I still endeavour to account for the future where possible (i.e. asking myself questions like, &amp;quot;is this tech-debt I can live with for now, and re-visit at a later date?&amp;quot;). Still, I love to make ideas &amp;quot;real&amp;quot; as quickly as possible - I find features and products that stay too theoretical for too long frustrating and hard to progress, so I&#39;d much rather sketch out a first iteration (a prototype, MVP, etc...) and get it in front of everyone to pick apart. Why This Spectrum Matters Not all engineering roles are the same. The biggest mistake I&#39;ve seen-both from hiring managers and engineers themselves-is mismatching an engineer&#39;s natural working style with the company&#39;s needs. Let&#39;s break this down with real-world examples: 1. The Scrappy Startup (0 → 1 Phase) A newly launched startup needs to move fast, validate ideas, and iterate quickly. Speed-oriented engineers thrive here because: There&#39;s less red tape-no endless meetings or approval chains. 
You tend to wear multiple hats and can move swiftly through decisions, because there&#39;s very little review process in place. The goal is often to ship an MVP, not to perfect every function. Bugs or inefficiencies are acceptable trade-offs for momentum. A product that people love that has quirks is far more valuable than perfect code that no-one wants. A startup filled with accuracy-first engineers can struggle in this phase. If every decision requires a debate about scalability, the company may never get a product in front of users. Dead on arrival. Who excels here? Engineers who are comfortable cutting scope to deliver fast. Those who thrive in ambiguity and don&#39;t need a perfect spec. People who enjoy building and rebuilding as feedback comes in. Staying closely aligned with the founder/C-level people can be hugely beneficial here - make sure you understand the business goals and vision correctly, communicate your intentions, go build. 2. The Scaling Startup (1 → 10 Phase) Once a startup finds product-market fit, things shift. Growth means technical debt starts catching up, and what worked in the early days starts breaking. At this stage, accuracy starts to matter more. The company needs engineers who: Think beyond today&#39;s solution and plan for the next 6-12 months. Introduce better testing, automation, and architecture. Push back against reckless speed when it threatens stability. The engineers who thrived in the early chaos might struggle here. A speed-focused engineer who loved hacking together an MVP may find the new focus on documentation, testing, and code quality frustrating. Who excels here? Engineers who balance pragmatism with structure. People who enjoy building reliable systems rather than chasing constant new features. Those who can see the bigger picture and influence long-term decisions. At this stage, in my CTO roles, I&#39;ve stepped back and zoomed out, whilst other engineers have continued shipping features. 
It gives me the chance to add the stability to the product that was skipped over to &amp;quot;get it out there&amp;quot;. 3. The Enterprise or Regulated Industry (10 → 100 Phase) At enterprise scale, everything slows down. When a single bug could cause millions in losses or legal trouble, accuracy is king. Here, speed-focused engineers often feel handcuffed by bureaucracy. There&#39;s process, governance, and an expectation of predictable, well-tested releases. The best engineers in these environments: Love digging into complex systems and making them robust. Care deeply about consistency, compliance, and security. Accept that things take time and focus on minimizing risk. For engineers who are happiest when shipping fast and breaking things? This can feel like a slow-moving nightmare. Who excels here? Engineers who enjoy optimizing for scale and efficiency. Those with patience for detailed planning and process-heavy work. People who appreciate long-term code stability over quick wins. I personally have to completely re-align my mindset when working with these companies - it doesn&#39;t naturally suit my working style, so I have to consciously slow down and amend my own expectations of how much I&#39;m able to ship at the pace I&#39;m accustomed to. Finding the Right Fit for You If you&#39;ve ever felt out of sync in a role, chances are it wasn&#39;t about skill-it was about fit. Questions to ask yourself: Do I get frustrated by slow decision-making? If yes, you likely lean toward the speed-focused side. Do I feel uncomfortable shipping something I know isn&#39;t perfect? If yes, you lean more towards accuracy. Do I prefer structured, well-defined work over ambiguity? Accuracy-focused engineers thrive on clear problem spaces, while speed-focused ones embrace chaos. What excites me more-shipping a quick prototype or refining a system over time? The first is speed; the second is accuracy. 
Recognizing your default mindset can help you find the right companies, teams, and roles where you&#39;ll thrive. If you can develop an awareness of your instinctive mindset, you can employ methods to consciously alter your working style, for the betterment of your own sanity and the success of the company. Advice for Engineering Leaders If you&#39;re hiring or managing engineers, understanding this spectrum is critical. The best teams blend both types of engineers strategically. Early-stage startups? Hire for speed, but ensure someone can clean up tech debt later. Scaling teams? Introduce structure without crushing momentum. Enterprise teams? Protect stability, but don&#39;t let process stifle innovation. A great engineering culture values both ends of the spectrum-and allows engineers to shift across it as their careers evolve. I&#39;ve managed and led teams where I&#39;ve had engineers at opposite ends of the spectrum - being aware of this polarity can help decide who might be best suited to various tasks and features when it comes to planning the product. Final Thoughts Whether you&#39;re an IC or a CTO, recognizing the speed vs. accuracy spectrum can help you: Find the right roles and companies that match your strengths. Adapt as your career progresses and new challenges arise. Build engineering teams that complement each other instead of clashing. The best engineers don&#39;t just write great code-they understand how to apply their strengths to the right problems, at the right time.</content>
  </entry><entry>
    <title>Migrating 160,000 Lines of Production Banking JavaScript to TypeScript with Zero Downtime</title>
    <link href="https://benhowdle.im/migrating-js-to-ts-zero-downtime.html" />
    <updated>2025-02-19T00:00:00.000Z</updated>
    <id>https://benhowdle.im/migrating-js-to-ts-zero-downtime.html</id>
    <content type="text">TypeScript has become the de facto standard for building robust, maintainable, and scalable JavaScript applications. Yet, migrating a large production codebase to TypeScript can be a daunting task, especially when you&#39;re dealing with 160,000 lines of mission-critical JavaScript and users with real money in your system, and their tax returns to submit. At WorkMade, I decided the plunge would be worth the pain (after founder buy-in!) and benefit us in the long run, so we took on the challenge and migrated our entire codebase to TypeScript over six weeks – without any downtime. Here&#39;s how we did it, the challenges we faced, and the tools that made it possible. Shoutout to Anya Hargil and JD for joining me on this wild journey, and humouring me with my initial motivation and plan to carry this out... The Challenge: Migrating at Scale Without Disruption When working on a large-scale production application, the last thing you want is downtime. Our application was powering crucial financial and banking services, and any interruption would directly impact our customers. Yet, we knew the long-term benefits of TypeScript – better developer experience, increased maintainability, and a safer codebase – were worth the effort. The main challenges we faced were: Zero Downtime Requirement: We needed to continue deploying new features and fixes while the migration was underway. This was a seed-stage startup with the first crop of faithful users - ensuring their (stable, positive) continued experience was paramount; this was their livelihood we were handling. 160,000 Lines of Code: This wasn&#39;t just a weekend project – the sheer volume required a strategic approach. The API was a straight-up Node.js Express app serving a GraphQL API, with a WebSocket server to boot. Multiple Environments: Our code was powering Lambda Workers in AWS and a Node.js GraphQL API running in ECS. A mistake could impact multiple services, and multiple users. 
The Strategy: Maintaining a Separate Branch The key to a smooth migration was maintaining a separate branch. Here&#39;s how we approached it: Create a Migration Branch: We created a dedicated branch where all .js files were renamed to .ts. This branch would be the playground where TypeScript would be gradually introduced. I was in the CTO role at the time, so I sat in the quarterback position, stepping back from feature/bug work, to orchestrate this migration. At seed-stage, it can be hard to hop off the SHIP IT NOW train and invest some time in a longer-term pay-off, but this is one I felt confident about. Periodic Rebasing: To keep up with ongoing feature development on the main branch, we periodically rebased the migration branch. This ensured no conflicts when we eventually merged back. This meant that I only had to convert the new .js files into .ts files each time, because I&#39;d set a different output directory for the TypeScript files to compile into. Incremental Typing: We introduced TypeScript incrementally, starting with the most critical modules, progressively moving towards full typing. One of the more tedious tasks was updating all the existing require() calls to use the more modern import / export syntax. I think one of the first TypeScript compiles I attempted had over 1,000 individual, mostly-unique, errors. In those moments, you just have to put your favourite album on and plough down the list monotonously. Continuous Integration (CI) Validation: Every rebase triggered our CI pipeline, running all tests to validate the stability of the TypeScript version, including the Jest tests and Checkly monitoring tests. Testing and Staging: Ensuring Zero Downtime After TypeScript compiled without errors and all tests passed, we deployed the TypeScript version to our Staging environment. 
Here&#39;s what we did next: 2-3 Days of Solid Testing: We tested the application rigorously in the Staging environment, using Checkly for uptime monitoring to ensure no regression issues. I honestly couldn&#39;t imagine running an app in production without some form of uptime monitoring - and Checkly is a fantastic choice that I&#39;d evangelise at every opportunity. Not only did we have basic uptime monitoring for the API being up and healthy, but I also built checks to assess specific operations in the system (signing up, logging in and money movement predominantly) - with the money movement checks only taking place in Staging, so as not to create too much accounting nonsense and noise. You can use a simple egg-timer model (picture the ones with the sand in, where you tip it upside down to trigger the countdown) - set up two bank accounts, and transfer $1 back and forth between the accounts until one is empty, then start filling it back up from the other account - essentially creating your own money movement heartbeat. Environment Parity: We switched all Lambda Workers in AWS and the Node.js GraphQL API in ECS over to the TypeScript version in Staging. This mimicked the Production environment closely. This was a fun part. Our Lambda workers were pointing to the existing JavaScript source files to run, however, as soon as we deployed, they&#39;d no longer exist, so I set a conditional line in the CDK file to tell Docker how to run the new TypeScript-compiled-JavaScript file if it was there, or the original JavaScript source file as a fallback, to account for the multi-step deployment race condition. This looked something like: cmd: [ &#39;sh&#39;, &#39;-c&#39;, `[ -f src/workers/${props.folderName}/move-money.js ] &amp;amp;&amp;amp; node src/workers/${props.folderName}/move-money.js || node lib/workers/${props.folderName}/move-money.js`, ], This meant the workers would still be available and invokable in some fashion, even if it was still using the original JavaScript. 
The next time the deployment happened, we could safely remove the conditional line. Zero Downtime Deployment: Confident from the Staging tests, we pushed the changes to Production. Everything went smoothly, with no downtime or incidents. Yay for us... Results: Enhanced Developer Experience and Bug Fixes The results were immediately noticeable: Developer Experience (DX) Boost: Developer productivity and happiness skyrocketed thanks to TypeScript&#39;s enhanced editor support and better refactoring capabilities. Bug Detection: The TypeScript compiler caught several bugs that had gone unnoticed in the JavaScript version, increasing overall application stability. Wins from the off... Tools and Extensions That Made It Possible During the migration, several VSCode extensions and tools significantly eased the process: ESLint with TypeScript Support: For consistent code quality and linting. TypeScript Hero and TypeScript Import Sorter: For automatically organizing imports. ts-migrate and jscodeshift: To automate some of the repetitive tasks. Checkly: For end-to-end monitoring and uptime checks during Staging tests. Conclusion: Was It Worth It? Absolutely. Migrating to TypeScript was a challenging but rewarding endeavor. Not only did it improve the developer experience, but it also enhanced the stability and maintainability of our codebase. The transition was seamless, thanks to strategic planning, a robust CI/CD pipeline, and the right set of tools. The feedback loop for catching silly bugs was reduced tenfold. If you&#39;re considering migrating a large-scale production application to TypeScript, take the plunge. The long-term benefits are well worth the initial investment. Give me a shout if you want any more tips!</content>
  </entry><entry>
    <title>Getting Real(about)Time</title>
    <link href="https://benhowdle.im/getting-real-about-time.html" />
    <updated>2025-02-12T00:00:00.000Z</updated>
    <id>https://benhowdle.im/getting-real-about-time.html</id>
    <content type="text">The Illusion of Realtime in Computing In software development, we often talk about &amp;quot;realtime&amp;quot; as if it&#39;s a definitive concept-messages sent instantly, updates reflected immediately, data synchronized across systems without delay. But what does &amp;quot;realtime&amp;quot; actually mean? And does it truly exist? From event ordering to WebSockets, let&#39;s break down what makes realtime computing possible, the illusion behind it, and why time itself is a tricky thing to grasp. No Two Events Happen at the Same Time At the fundamental level of physics, no two events can ever truly occur at the same moment. Everything exists within a sequence, even if that sequence is measured in nanoseconds. When we say two things happened “at the same time,” what we really mean is that they occurred so closely together that the difference is imperceptible to us. But if we zoom in far enough, even the fastest events are still sequential. Computers, which rely on clock cycles, registers, and atomic operations, inherently process instructions in sequence-even in massively parallel systems. There is always an ordering, even when it appears simultaneous. The challenge in &amp;quot;realtime&amp;quot; software is making this ordering invisible to users. Time: A Human Construct vs. a Physical Reality Time as we experience it is a human abstraction. We impose seconds, milliseconds, and timestamps on the world to make sense of sequences of events. Physically, however, time is relative-Einstein&#39;s theories tell us that time moves differently based on gravitational fields and relative velocities. In computing, time is represented in absolute terms (timestamps, system clocks, event logs), but these are just approximations. No two machines perfectly agree on time. Network latency, hardware drift, and CPU scheduling introduce tiny inconsistencies, which we have to work around when designing &amp;quot;realtime&amp;quot; systems. 
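The clock-drift point above can be sketched with a toy comparison function. This is purely illustrative - the 50ms tolerance and the function name are my own assumptions, not a standard - but it shows the idea of refusing to trust an ordering that falls inside the skew window:

```javascript
// Toy sketch: the 50ms window and function name are illustrative assumptions.
const CLOCK_SKEW_TOLERANCE_MS = 50;

// Compares two timestamps taken on different machines; inside the skew
// window the ordering cannot be trusted, so we refuse to pick a winner.
function compareRemoteTimestamps(tsA, tsB) {
  const gap = Math.abs(tsA - tsB);
  if (gap >= CLOCK_SKEW_TOLERANCE_MS) {
    return tsA > tsB ? "b-first" : "a-first";
  }
  return "indeterminate";
}

console.log(compareRemoteTimestamps(1000, 2000)); // "a-first"
console.log(compareRemoteTimestamps(1000, 1010)); // "indeterminate"
```

Anything inside the window is genuinely ambiguous from wall clocks alone - which is exactly why logical ordering schemes exist.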
Realtime as &amp;quot;Fast Enough&amp;quot; In software, realtime is not a literal concept but a perception. What we call &amp;quot;realtime&amp;quot; is just events happening fast enough that the delay is imperceptible. For example, a messaging app feels realtime because: A user sends a message. The server receives it within milliseconds. The recipient sees the message pop up with no noticeable delay. But under the hood, there are network packets, database writes, event queues, and UI updates happening in sequence. The illusion of realtime exists because these steps happen faster than the threshold where humans would notice. Ordering Events in &amp;quot;Realtime&amp;quot; If no two events happen at the same time, how do we determine which event came first when they occur nearly simultaneously? This problem becomes critical in distributed systems, where different nodes operate with slightly different clocks. ULIDs: Unique and Sortable Event Identifiers One elegant solution is the Universally Unique Lexicographically Sortable Identifier (ULID). Unlike traditional UUIDs (which are unique but not ordered), ULIDs encode a timestamp as the first part of the identifier. This allows events to be sorted chronologically without relying on external timestamps, which may be unreliable. We used ULIDs as the foundation of our event system at Letter to establish order within a distributed (banking) system. If an event arrived in the system containing a ULID that sorted before the last stored event&#39;s, we knew something in the system had become misaligned - we then employed an automated &amp;quot;refeed&amp;quot; system that kicked off a retrieval of the last N events, which could then be stored in the correct sequence. Imagine the danger of storing a WITHDRAWAL event and its preceding DEPOSIT event in the wrong order. 
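That refeed trigger can be sketched in a few lines. The names here (isOutOfOrder, handleIncomingEvent, the refeed callback) are hypothetical, not Letter&#39;s real API - the point is only that ULIDs sort lexicographically, so a plain string comparison is enough to spot an event that arrived behind the last stored one:

```javascript
// Hypothetical names throughout - a sketch of the idea, not Letter's real API.
// ULIDs sort lexicographically, so plain string comparison gives event order.
function isOutOfOrder(lastStoredUlid, incomingUlid) {
  return lastStoredUlid > incomingUlid;
}

// If an incoming event sorts before the last stored one, the stores are
// misaligned: trigger a refeed of the last N events instead of storing it.
function handleIncomingEvent(lastStoredUlid, incomingUlid, refeed) {
  if (isOutOfOrder(lastStoredUlid, incomingUlid)) {
    return refeed();
  }
  return "stored";
}
```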
Example: Generating a ULID in Node.js import { monotonicFactory } from &quot;ulid&quot;; const ulid = monotonicFactory(); const event1 = ulid(); const event2 = ulid(); console.log(event1, event2); // Always ordered chronologically A plain ulid() call only guarantees ordering across different milliseconds; the library&#39;s monotonic factory ensures a predictable order even when two events occur within the same millisecond, helping maintain a sense of sequence in distributed systems. Emulating Realtime with WebSockets Most &amp;quot;realtime&amp;quot; applications use WebSockets to push updates instantly between systems. Unlike traditional HTTP requests (which are pull-based), WebSockets keep an open connection, allowing data to flow bi-directionally in realtime. Example: A Simple WebSocket Server in Node.js import { WebSocketServer } from &quot;ws&quot;; const wss = new WebSocketServer({ port: 8080 }); wss.on(&quot;connection&quot;, (ws) =&gt; { ws.send(&quot;Welcome to the realtime world!&quot;); ws.on(&quot;message&quot;, (message) =&gt; console.log(`Received: ${message}`)); }); A client connects, the server sends an initial message, and the client can push data instantly. This enables chat apps, live notifications, and collaborative tools like Google Docs. The &amp;quot;Realtime&amp;quot; Illusion: One System, Many Views Consider a collaborative app where two users edit the same document. One user makes an edit, and the other sees it appear &amp;quot;instantly.&amp;quot; What actually happens? User A modifies the document in their browser. The change is sent via WebSocket to the server. The server processes the update. The server pushes the update to User B&#39;s browser. User B&#39;s UI updates. Each step takes time, but when executed within milliseconds, the illusion of simultaneity is maintained. To enhance this illusion, developers use: Optimistic UI updates: Show changes instantly before confirmation from the server. Rollback if something goes wrong on the server, or the update was actually rejected. Latency compensation: Apply predicted changes before actual events arrive. 
Conflict resolution: Merge simultaneous edits without overwriting data. Conclusion: Realtime Is Just Fast Sequential Events At the core, &amp;quot;realtime&amp;quot; computing is about: Ensuring a clear event order - (ULIDs, logical clocks) Minimizing delay between events - (WebSockets, event-driven architectures) Maintaining the illusion of simultaneity - (Optimistic UI, latency compensation) The next time you use a chat app or a live dashboard, remember: nothing is actually happening &amp;quot;at the same time&amp;quot;-it&#39;s just a well-crafted sequence of events, happening faster than you can notice. &amp;quot;The distinction between past, present, and future is only a stubbornly persistent illusion.&amp;quot; - Albert Einstein</content>
  </entry><entry>
    <title>Does it work, how you want it to, most of the time?</title>
    <link href="https://benhowdle.im/does-it-work-how-you-want-it-to-most-of-the-time.html" />
    <updated>2025-01-29T00:00:00.000Z</updated>
    <id>https://benhowdle.im/does-it-work-how-you-want-it-to-most-of-the-time.html</id>
    <content type="text">A structured approach to writing reliable software by progressing through three distinct levels of quality assurance. A software development quote I love is &amp;quot;Make It Work, Make It Right, Make It Fast&amp;quot; - to build on this pillar, I want to delve into the personal mantra I employ when I&#39;ve come up with a solution to a software problem. I ask myself: &amp;quot;Does it work, how you want it to, most of the time?&amp;quot; Writing quality software isn&#39;t magic-it often follows a clear, linear progression. If you&#39;ve ever felt overwhelmed by testing, debugging, or making your code production-ready, it helps to create a mental model and break the problem down into three fundamental questions: Does it work? (Basic syntax and compilation) Does it work how you want it to? (Integration tests and expected behavior) Does it work most of the time? (Production reliability, resilience, and fault tolerance) Each stage represents a necessary milestone in writing reliable software. Let&#39;s walk through them one by one. 1. Does it work? (Basic syntax and compilation) The first step in any software project is ensuring the code actually runs. At this level, the concerns are rudimentary: Syntax errors: Typos, missing brackets, misplaced commas-basic mistakes that prevent compilation or execution. Runtime errors: Calling undefined variables, passing the wrong arguments, or referencing properties that don&#39;t exist. Static analysis: Linters and type checkers (TypeScript, ESLint) that help catch obvious issues before running the code. Example: Basic Syntax Errors Consider a simple function that fetches user data: const fetchUser = async (id) =&gt; { const response = await fetch(`https://api.example.com/users/${id}`); return response.json(); }; This code looks fine, but if we call fetchUser() without an id, or if fetch fails, it could break. 
A simple syntax check won&#39;t catch logical issues, but at least it ensures the code runs. 2. Does it work how you want it to? (Integration and Expected Behavior) Once the code executes without syntax errors, the next level ensures it behaves correctly. This involves: Unit tests: Does each function return the expected output? Integration tests: Do multiple components work together as expected? Edge cases: Does the code handle unexpected input or errors gracefully? Example: Adding Tests Let&#39;s improve fetchUser by validating input and handling errors: const fetchUser = async (id) =&gt; { if (!id) throw new Error(&quot;User ID is required&quot;); try { const response = await fetch(`https://api.example.com/users/${id}`); if (!response.ok) throw new Error(&quot;Failed to fetch user&quot;); return response.json(); } catch (error) { console.error(&quot;Error fetching user:&quot;, error); return null; } }; Now, we can write tests: test(&quot;fetchUser should return user data&quot;, async () =&gt; { global.fetch = jest.fn(() =&gt; Promise.resolve({ ok: true, json: () =&gt; Promise.resolve({ id: 1, name: &quot;Alice&quot; }), }), ); const user = await fetchUser(1); expect(user).toEqual({ id: 1, name: &quot;Alice&quot; }); }); test(&quot;fetchUser should handle missing ID&quot;, async () =&gt; { await expect(fetchUser()).rejects.toThrow(&quot;User ID is required&quot;); }); This step ensures the function behaves as expected in normal scenarios, but we&#39;re still not testing it under real-world conditions. 3. Does it work most of the time? (Production-Grade Reliability) Even if code passes tests, real-world failures can still occur. Production-ready software requires additional measures: Redundancy: Can we retry failed requests? Resilience: Can it handle network failures, high load, or unexpected inputs? Idempotency: Will it behave consistently if retried? Observability: Are logs and metrics available for debugging? Transactional safety: Does it avoid partial failures? 
These questions formed the basis of my post on Chaos Engineering: Jurassic Park &amp;amp; Distributed Systems, about embracing failure as an inevitability, not a possibility - and building fail-safes in from the start. Example: Making fetchUser More Resilient To improve reliability, we can add retries and logging: const fetchUser = async (id, retries = 3) =&gt; { if (!id) throw new Error(&quot;User ID is required&quot;); for (let attempt = 1; attempt &lt;= retries; attempt++) { try { const response = await fetch(`https://api.example.com/users/${id}`); if (!response.ok) throw new Error(&quot;Failed to fetch user&quot;); return await response.json(); } catch (error) { console.warn(`Attempt ${attempt} failed:`, error); if (attempt === retries) throw error; } } }; In production, we might also: Implement circuit breakers to stop excessive retries. Use transactional guarantees if modifying data. Deploy observability tools like structured logs and tracing. Design for graceful degradation if dependent services go down. Conclusion Quality software isn&#39;t built overnight. It progresses through three key stages: Does it work? (Basic syntax and execution checks) Does it work how you want it to? (Behavioral tests and edge cases) Does it work most of the time? (Resilience, reliability, and production hardening) By approaching software development in this structured way, you can write code that not only works but also withstands real-world challenges. It&#39;s the difference between a script that runs on your machine and software that reliably powers a business. Next time you write code, ask yourself: Does it work, how you want it to, most of the time?</content>
  </entry><entry>
    <title>Securing a Banking UI &amp; API: Lessons From a Fintech CTO</title>
    <link href="https://benhowdle.im/securing-a-banking-system.html" />
    <updated>2025-01-28T00:00:00.000Z</updated>
    <id>https://benhowdle.im/securing-a-banking-system.html</id>
    <content type="text">Securing a Bank UI How I Didn&#39;t Lose Any Customer&#39;s Money And Retained My Sanity Throughout Before I joined Clerk, I was a fintech CTO for nearly 6 years, with 4 years at Letter and 2 years with WorkMade. Aside from the truck load of valuable lessons those experiences taught me, I also had the fun* challenge of securing people&#39;s money on a daily (hourly? MINUTELY??) basis. I pretty much poured my 15 years of overall experience into the following post: Chaos Engineering: Jurassic Park &amp;amp; Distributed Systems, which spun into the following talk: Letting go of perfectionism in distributed systems, given in Austin, Texas at fintech_devcon in 2024. However, I wanted to do a deep-dive on some of the more interesting ways I implemented security in a banking system, from the UI (React) through to the API (GraphQL) and beyond. *induced crippling anxiety Diving Deep in 3...2...1 Securing a user interface (UI) for a banking platform is a multifaceted challenge that demands careful attention to data integrity, system architecture, and user safety. A bank&#39;s UI is not just a visual layer - it is the point where customers interact with complex backend systems, access sensitive data, and initiate actions that require airtight security. Every touchpoint for your system exposes a surface area for weakness and attack and general nefariousness from ill-intentioned people. Designing a secure bank UI involves far more than aesthetics (although, cool animations are nice to look at); it is about creating an experience that instills trust while seamlessly safeguarding user data and system functionality. The Foundation: Event-Driven Architecture and Data Flow At the heart of the secure banking system I built at Letter lay an event-driven architecture. This approach allows actions initiated by users (GraphQL requests) to flow through the system in a controlled and traceable manner. 
For instance, when a customer performs a transaction, such as transferring funds, the action is first routed through a GraphQL layer that validates the request. The use of GraphQL provides a structured data contract, ensuring that only properly formatted and authorized requests make their way into the system. Once validated, the action is handed off to a dedicated business-logic microservice. This microservice is responsible for processing the request while adhering to stringent rules. For example, it might check the customer&#39;s account balance before approving a transfer request. If the transaction passes these checks, metadata describing the action is recorded as a schema-driven JSON payload. This payload was stored in a distributed database (Etcd), and broadcast across the system in a pubsub fashion. This event-driven pattern not only ensures data consistency but also enables ancillary microservices to process events in parallel. These services might update their own data stores, such as PostgreSQL, with normalized data for efficient querying. This division of responsibilities creates a modular, scalable system capable of handling the high demand of banking operations. We had the GraphQL act as a traffic router of sorts; if you wanted to change something in the system, your request was routed to the business logic and validated before Etcd insertion; however, if you simply wanted to read information from the system, your request was routed to a read-only cache service which queried a PostgreSQL database. The business logic microservice was the ruler of system lore; if it allowed the action, the action was written into Etcd and never altered, i.e. UserCreated - the ancillary microservices took this new event as gospel and updated their own stores diligently, ready to be read by the UI. 
Here&#39;s that very same ancillary microservice listening for a new user: events.addEventWatcher(async (event) =&gt; { const { revision } = event.header(); // Ensure the event contains a revision if (!revision) { throw new DataError(DataErrorKind.RevisionNotDefined); } // Check if the event type is UserCreated if (UserCreated.is(event)) { const { timestamp } = event.header(); const payload = event.payload(); // Destructure relevant data from the event payload const { id, firstName, lastName, email, phone } = payload; // Prepare user creation parameters const createParams = { firstName, lastName, email, phone, externalId: id, notifications: { receivePush: false, receiveSms: false, }, createdAt: timestamp, revision, }; try { // Attempt to create the user in the database await this.create(createParams); // Publish a UserCreateCommitted event upon successful creation const evt = UserCreateCommitted.create({ id, email }, event.header()); await events.inTransaction((state) =&gt; state.publishEvent(evt)); } catch (error) { if (error?.code === SYSTEM.PostgresErrorCodes.DUPLICATE) { // Log duplicate record attempts at trace level ctx.logger.trace({ event: &quot;user insert duplicate&quot;, error }); } else { // Log other errors at error level ctx.logger.error({ event: &quot;user insert&quot;, error }); } } } }); We used the event revision as a sequencing marker, to ensure events are stored in order - how do you ensure order in the chaos of a distributed system? The answer: with great difficulty. Immutable Event Storage for Auditability A key principle in securing a bank UI is maintaining a comprehensive audit trail of user actions. To achieve this, every customer action is stored as an immutable JSON payload. Each event is uniquely keyed using ULIDs, which chain the events together in sequence. This immutability ensures that the historical record of actions cannot be altered, providing transparency and trustworthiness in the system&#39;s operation. 
If a customer wishes to undo an action, the system writes a new event to denote the subsequent change rather than modifying the original record. This approach preserves the integrity of the event log while accommodating user-driven changes. Whether for internal auditing or regulatory compliance, this design ensures that every action is traceable and verifiable. Leveraging GraphQL and TypeScript for Data Integrity Data integrity is non-negotiable in a banking system, and the combination of GraphQL and TypeScript plays a critical role in ensuring it. GraphQL&#39;s strict data-contract capabilities enforce consistent schemas, preventing malformed or unauthorized requests from being processed. TypeScript complements this by adding static typing and compile-time checks, reducing the risk of runtime errors. This combination of technologies also streamlines development. Using TypeScript across both client-side and backend implementations ensures a unified codebase, making it easier to maintain and extend the system. Additionally, the widespread adoption of TypeScript improves hiring flexibility, as skilled developers familiar with the language are readily available. Masking Client-Facing IDs for Enhanced Security One of the most significant security risks in any system is the exposure of sensitive data in client-facing environments. To mitigate this, I implemented a mechanism to mask and secure database IDs before they were ever sent to the client. This design prevents unauthorized access and shields backend details from potential attackers. This means there were zero database IDs exposed in any UI or API response - which generally felt a lot safer. Here&#39;s how the process works: When an object is retrieved from the database, all sensitive IDs are replaced with randomly generated, URL-friendly keys. These keys are stored in a Redis database, alongside the ID of the requesting user and the real object ID. 
Each key is assigned a time-to-live (TTL) of 60 minutes, ensuring that client-facing IDs are ephemeral and short-lived. When a user interacts with an object, the system performs a reverse lookup to retrieve the real ID. If the key has expired or the requesting user&#39;s ID does not match the stored record, an authentication error is returned. This ensures that only authorized users can access the data, even if a URL is shared or intercepted. This means you could share the following URL with someone else: https://bankservice.com/accounts/hfyubu1, but because the receiving user&#39;s User ID wasn&#39;t used to create that unique key, the system would reject the whole request (after the deserialization failed) - thus tying URLs to the logged-in user, and only for 60 minutes. As you might&#39;ve already guessed, this does have real-world implications, and some are negative (but they can be mitigated) - if a user had an issue with a particular Account ID, it made it very difficult for them to talk to support about that account, as the ID in the URL was ephemeral and tied to the user&#39;s session. For this, I came up with a way for Letter&#39;s CX to securely log in to an internal portal (protected by VPN and Google OAuth) and paste the unique ID into a decrypter of sorts - which would only allow a reverse lookup to the real account ID using the staff member&#39;s own session JWT. To better illustrate the above, here&#39;s (pretty much) the full implementation of the little library: const config = applyDefaults(ctx, { secureMasking: { ttl: 60, // 60 minutes by default namespace: &quot;secure-mask.&quot;, prefix: &quot;sec-&quot;, }, }); const secureMask = { /** * Masks a sensitive ID by generating a random key and storing it in Redis. 
*/ mask: async (id: string, userId: string): Promise&amp;lt;string&gt; =&gt; { if (!id) { throw new AppError(&quot;ID must be a valid plaintext value.&quot;); } // If already masked, return the key as is if (id.startsWith(config.secureMasking.prefix)) return id; const maskedKey = `${config.secureMasking.prefix}${await generateRandomKey( 10, )}`; await ctx.redis.set( `${config.secureMasking.namespace}${maskedKey}`, JSON.stringify({ userId, id }), &quot;ex&quot;, config.secureMasking.ttl * 60, // Convert minutes to seconds ); return maskedKey; }, /** * Retrieves the original ID from a masked key. */ retrieve: async (maskedKey: string, userId: string): Promise&amp;lt;string&gt; =&gt; { if (!maskedKey.startsWith(config.secureMasking.prefix)) { return maskedKey; // Return plaintext ID if not masked } const data = await ctx.redis.get( `${config.secureMasking.namespace}${maskedKey}`, ); if (!data) { throw new AppError(&quot;Masked key not found or has expired.&quot;); } const { userId: storedUserId, id } = JSON.parse(data); if (storedUserId !== userId) { throw new AppError(&quot;Access denied: User ID mismatch.&quot;); } return id; }, }; And here&#39;s how we were using it within a GraphQL Resolver, in this example, masking an Account ID: id: authenticatedResolver(async ({ id }, _, { ctx, userCtx }) =&gt; { const mask = await ctx.masque.mask( id, userCtx.userId, EphemeralShortCode.Account, ); return mask; }); And to retrieve the same ID after the user requested to perform an action: let id: string; try { id = await ctx.masque.retrieve(accountId, userCtx.userId); } catch (error) { ctx.logger.error({ error, text: &quot;Unmasking account ID failed&quot;, userCtx }); throw new UserInputError(AccountErrors.AccountFetchFail); } Building Trust Through Secure Design A secure bank UI is not just about protecting data; it&#39;s about building trust with users. 
Every design decision-from the event-driven architecture to immutable event storage and masked IDs-works toward creating a system that customers can rely on. By prioritizing data integrity, transparency, and user safety, we created a banking experience that balances security with usability. Securing a bank UI is a continuous process that evolves alongside emerging threats and technological advancements. However, by focusing on foundational principles and leveraging modern technologies like GraphQL, TypeScript, and Redis, we can build systems that are not only secure but also scalable and user-friendly. As I&#39;m now out of those roles, I can safely say that we didn&#39;t face any damaging attacks or loss of user funds during my tenure as CTO for either company, and it only cost me a few years off the end of my life...</content>
  </entry><entry>
    <title>10 Ways I Approach Engineering Leadership That Doesn&#39;t Leave Me Hating Myself</title>
    <link href="https://benhowdle.im/engineering-leadership.html" />
    <updated>2025-01-14T00:00:00.000Z</updated>
    <id>https://benhowdle.im/engineering-leadership.html</id>
    <content type="text">10 Ways I Approach Engineering Leadership That Doesn&#39;t Leave Me Hating Myself Leadership in engineering isn&#39;t just about writing code or managing timelines-it&#39;s about creating an environment where teams thrive, products succeed, and everyone grows along the way. Over my career, I&#39;ve had the privilege of working with incredible teams and companies, and I&#39;ve learned some key lessons on what makes leadership effective (and doesn&#39;t make people sad). Here&#39;s my take: Balance Speed with Stability Building quickly doesn&#39;t mean compromising reliability. At WorkMade, I focused on monitoring and deployment while still delivering new features fast. &amp;quot;Ben has a keen mind for evaluating and making responsible eng tradeoffs to balance speed with stability and reliability which is invaluable in a startup environment.&amp;quot; - Anya Hargil Know Where to Cut Corners (and Where Not To) Chase Croft put it best: “Ben intuitively knows where to cut corners for speed and where shortcuts would come back to haunt us.” Threading this needle can make or break a startup. Lead by Mentoring, Not Micromanaging I believe in mentoring engineers-helping them grow their skills while giving them space to solve problems independently. Regular, but optional 1:1s. Prioritise Communication Whether it&#39;s guiding a team through best practices or aligning on architectural decisions, great communication underpins great leadership. Create Foundations for Long-Term Success Building scalable, maintainable systems isn&#39;t optional. &amp;quot;At Mytheresa Ben was paramount in crafting the architectural foundations of a React Native application, while also training and guiding the other engineers&amp;quot; - Philipp Barthold Be Hands-On When Needed A CTO isn&#39;t just a title. It&#39;s being ready to dive into infrastructure spikes or squash bugs when the team needs you. I don&#39;t believe in leading from a distance. 
Focus on the User, Always Whether it&#39;s launching a React Native app or revamping an aging SaaS platform, keeping the user&#39;s needs front and center drives meaningful results. Teach, Share, Elevate Teaching isn&#39;t just for workshops. &amp;quot;He is an excellent teacher and more than willing to share his knowledge building and scaling web applications&amp;quot; - Svilen Gospodinov Sharing knowledge builds stronger teams and better outcomes. Be Personable, Always Leadership isn&#39;t just technical-it&#39;s human. I want the people I work with to enjoy the process and feel valued. &amp;quot;Unlike your stereotypical uber-techie, Ben is very personable, and everyone enjoyed working with him.&amp;quot; - Duane Jackson Leave Teams Better Than You Found Them Leadership is about legacy. Helping teams adopt best practices and frameworks that endure after I&#39;ve left is something I strive for on every project. Leadership that doesn&#39;t suck is about balance, empathy, and constant learning. These principles have guided me as an engineer, CTO, and advisor. What&#39;s one thing you&#39;ve seen great leaders do?</content>
  </entry><entry>
    <title>Entangled Interfaces: A Quantum Leap in UI Engineering with React</title>
    <link href="https://benhowdle.im/entangled-interfaces.html" />
    <updated>2025-01-10T00:00:00.000Z</updated>
    <id>https://benhowdle.im/entangled-interfaces.html</id>
    <content type="text">&amp;quot;Alright, five years ago, right before Thanos, I was in a place called the Quantum Realm. The Quantum Realm is like its own microscopic universe&amp;quot; - Scott Lang, Avengers: Endgame Every line of code we write is an act of creation-a way of collapsing the infinite potential of an idea into something real. Yet, we rarely stop to consider: what if our UI components themselves could exist in a state of quantum potential? What if the principles of the quantum realm-superposition, entanglement, and collapse-could inspire how we design and build interfaces? This is not a post about buzzwords (although, granted, it&#39;s rife with them). It&#39;s about rethinking the way we approach interface development. By applying quantum mechanics as a lens for UI design, we can unlock new ways to think about state, interaction, and composition. And perhaps-just perhaps-we can glimpse the future of what user interfaces might become. Superposition: Designing for Potential, Not Certainty In the quantum world, superposition describes a system existing in all possible states simultaneously until observed. In development, we often approach UI components as static entities: a button is either hovered or not, clicked or not, active or not. But what if we thought of them as systems of potential-existing in multiple states at once until the user&#39;s interaction resolves them? 
Let&#39;s imagine a quantum-inspired button: // MultiStateButton.tsx import React, { useState } from &quot;react&quot;; type ButtonState = &quot;default&quot; | &quot;hovered&quot; | &quot;clicked&quot; | &quot;disabled&quot;; export const MultiStateButton: React.FC = () =&gt; { const [state, setState] = useState&amp;lt;ButtonState&gt;(&quot;default&quot;); const handleMouseEnter = () =&gt; state === &quot;default&quot; &amp;amp;&amp;amp; setState(&quot;hovered&quot;); const handleMouseLeave = () =&gt; state === &quot;hovered&quot; &amp;amp;&amp;amp; setState(&quot;default&quot;); const handleClick = () =&gt; setState((prev) =&gt; (prev === &quot;disabled&quot; ? prev : &quot;clicked&quot;)); return ( &amp;lt;button className={`multi-state-button ${state}`} onMouseEnter={handleMouseEnter} onMouseLeave={handleMouseLeave} onClick={handleClick} disabled={state === &quot;disabled&quot;} &gt; {state === &quot;clicked&quot; ? &quot;Clicked!&quot; : &quot;Hover or Click Me&quot;} &amp;lt;/button&gt; ); }; This button doesn&#39;t assume its role-it waits for interaction to decide its state. The user is the observer who collapses the possibilities into a rendered interface. While this might feel contrived, it challenges us to consider: What if all components were designed to reflect dynamic, contextual possibilities instead of fixed states? For example: A product card could display multiple prices (discounted, full price, subscription) simultaneously-resolved only when the user&#39;s context (location, preferences, or membership) becomes known. Superposition invites us to design for ambiguity, to embrace potential over certainty. Entanglement: UI Components Beyond Isolation In the quantum realm, entanglement binds particles so deeply that a change to one immediately affects the other, no matter the distance. In UI development, we often view components as modular and isolated. But real-world interactions are rarely that clean. 
How can we embrace a model where components are deeply and intrinsically connected? Consider this: a collaborative text editor. An input field on one user&#39;s screen is entangled with another user&#39;s display. Any change ripples through the system instantaneously, not constricted by time or space or location: // quantumState.tsx import React, { createContext, useContext, useState } from &quot;react&quot;; type QuantumState = { value: string; setValue: (newValue: string) =&gt; void; }; const QuantumContext = createContext&amp;lt;QuantumState | null&gt;(null); export const QuantumProvider: React.FC&amp;lt;{ children: React.ReactNode }&gt; = ({ children, }) =&gt; { const [value, setValue] = useState(&quot;Initial Value&quot;); return ( &amp;lt;QuantumContext.Provider value={{ value, setValue }}&gt; {children} &amp;lt;/QuantumContext.Provider&gt; ); }; export const useQuantumState = () =&gt; { const context = useContext(QuantumContext); if (!context) throw new Error(&quot;useQuantumState must be used within QuantumProvider&quot;); return context; }; Now, two components-EntangledInput and EntangledDisplay-can share this state: // EntangledInput.tsx import React from &quot;react&quot;; import { useQuantumState } from &quot;./quantumState&quot;; export const EntangledInput: React.FC = () =&gt; { const { value, setValue } = useQuantumState(); return ( &amp;lt;input value={value} onChange={(e) =&gt; setValue(e.target.value)} placeholder=&quot;Type here, watch the magic&quot; /&gt; ); }; // EntangledDisplay.tsx import React from &quot;react&quot;; import { useQuantumState } from &quot;./quantumState&quot;; export const EntangledDisplay: React.FC = () =&gt; { const { value } = useQuantumState(); return &amp;lt;p&gt;Entangled Value: {value}&amp;lt;/p&gt;; }; This isn&#39;t just state-sharing-it&#39;s a philosophy. Entangled components move us away from &amp;quot;atomic design&amp;quot; to something more organic, more relational. 
Think of a global notification system: a user dismissing a banner updates not just their view, but every connected context where that banner appears. The connection becomes the system&#39;s lifeblood, agnostic to recipient or sender. Quantum Collapse: The Power of Interaction In quantum mechanics, collapse occurs when a system&#39;s infinite possibilities reduce to a single, observable state. For interfaces, this mirrors how interactions-clicks, taps, scrolls-determine what the user experiences next. But what if we stopped thinking of &amp;quot;conditional rendering&amp;quot; as a utility and started treating it as an art, or....a science? Take a personalized onboarding wizard: // OnboardingCard.tsx import React, { useState } from &quot;react&quot;; export const OnboardingCard: React.FC = () =&gt; { const [step, setStep] = useState(0); const handleNext = () =&gt; setStep((prev) =&gt; prev + 1); return ( &amp;lt;div className=&quot;onboarding-card&quot;&gt; {step === 0 &amp;amp;&amp;amp; &amp;lt;p&gt;Welcome! Let&#39;s get started.&amp;lt;/p&gt;} {step === 1 &amp;amp;&amp;amp; &amp;lt;p&gt;Tell us more about yourself.&amp;lt;/p&gt;} {step === 2 &amp;amp;&amp;amp; &amp;lt;p&gt;You&#39;re all set! Enjoy the journey.&amp;lt;/p&gt;} &amp;lt;button onClick={handleNext}&gt;{step &amp;lt; 2 ? &quot;Next&quot; : &quot;Finish&quot;}&amp;lt;/button&gt; &amp;lt;/div&gt; ); }; Here, each interaction collapses the UI into a new state. But what if this collapse wasn&#39;t linear? What if the UI responded probabilistically, surfacing the most relevant next step based on incomplete data? That&#39;s where machine learning meets quantum-inspired design, and the possibilities explode. Turning Theory into Introspection This isn&#39;t just a thought experiment. By applying quantum principles to React, we can reimagine how we approach state, interaction, and composition: Superposition asks us to design for ambiguity and adaptability. 
Entanglement challenges us to think relationally, connecting components at a deeper level. Collapse reminds us that interaction isn&#39;t just a transition-it&#39;s a moment of transformation. What&#39;s more exciting is what we haven&#39;t explored. How might quantum-first interfaces look in the age of quantum computing? Could UIs leverage probabilistic algorithms to predict user intent? Could they dynamically adapt based on entangled user profiles across systems? These are the questions worth asking-not just for React developers, but for anyone rethinking what a user interface can be. Ultimately, quantum UI design isn&#39;t about the technology-it&#39;s about the mindset. It&#39;s an invitation to create systems that are dynamic, responsive, and deeply human. And the best part? This isn&#39;t theoretical. It&#39;s happening now, in the decisions you make every day. A Quantum leaping-off point Below is a conceptual React library designed to bring quantum-inspired principles into UI development. It introduces abstractions like Superposition, Entanglement, and Collapse to help developers create more dynamic, adaptive, and interconnected components. Superposition allows components to represent multiple states simultaneously, resolving into one based on runtime conditions. This abstraction is perfect for dynamic content that depends on user context or probabilistic logic. Entanglement enables seamless state sharing between components, creating deeply interconnected systems. This is ideal for collaborative features or global state synchronization. Collapse treats conditional rendering as a transformative interaction, enabling UIs to elegantly evolve based on user input. By applying these abstractions, we can simplify complex state transitions, enhance the fluidity of shared data, and empower developers to design UIs that feel alive and responsive to user intent. 
// src/quantum/Superposition.tsx import React, { ReactNode, useState, useEffect } from &quot;react&quot;; /** * Superposition: A wrapper that accepts multiple states and resolves to one based on a condition. */ type SuperpositionProps&amp;lt;T&gt; = { states: T[]; resolver: (states: T[]) =&gt; T; children: (state: T) =&gt; ReactNode; }; export function Superposition&amp;lt;T&gt;({ states, resolver, children, }: SuperpositionProps&amp;lt;T&gt;) { const [resolvedState, setResolvedState] = useState&amp;lt;T&gt;(states[0]); useEffect(() =&gt; { const result = resolver(states); setResolvedState(result); }, [states, resolver]); return &amp;lt;&gt;{children(resolvedState)}&amp;lt;/&gt;; } // src/quantum/Entanglement.tsx import React, { createContext, useContext, useState, ReactNode } from &quot;react&quot;; /** * EntanglementContext for shared state */ type EntanglementContextValue&amp;lt;T&gt; = { value: T; setValue: (newValue: T) =&gt; void; }; const createEntanglement = &amp;lt;T,&gt;(defaultValue: T) =&gt; { const EntanglementContext = createContext&amp;lt;EntanglementContextValue&amp;lt;T&gt; | null&gt;( null, ); const EntanglementProvider: React.FC&amp;lt;{ children: ReactNode }&gt; = ({ children, }) =&gt; { const [value, setValue] = useState(defaultValue); return ( &amp;lt;EntanglementContext.Provider value={{ value, setValue }}&gt; {children} &amp;lt;/EntanglementContext.Provider&gt; ); }; const useEntangledState = (): EntanglementContextValue&amp;lt;T&gt; =&gt; { const context = useContext(EntanglementContext); if (!context) { throw new Error( &quot;useEntangledState must be used within an EntanglementProvider&quot;, ); } return context; }; return { EntanglementProvider, useEntangledState }; }; export const Entanglement = createEntanglement; // src/quantum/Collapse.tsx import React, { ReactNode, useState } from &quot;react&quot;; /** * Collapse: Renders a component based on user interaction or conditions. 
*/ type CollapseProps = { trigger: ReactNode; children: ReactNode; }; export const Collapse: React.FC&amp;lt;CollapseProps&gt; = ({ trigger, children }) =&gt; { const [isOpen, setIsOpen] = useState(false); return ( &amp;lt;div&gt; &amp;lt;div onClick={() =&gt; setIsOpen((prev) =&gt; !prev)}&gt;{trigger}&amp;lt;/div&gt; {isOpen &amp;amp;&amp;amp; &amp;lt;div&gt;{children}&amp;lt;/div&gt;} &amp;lt;/div&gt; ); }; // Example usage in App.tsx import React from &quot;react&quot;; import { Superposition } from &quot;./quantum/Superposition&quot;; import { Collapse } from &quot;./quantum/Collapse&quot;; import { Entanglement } from &quot;./quantum/Entanglement&quot;; const { EntanglementProvider, useEntangledState } = Entanglement&amp;lt;string&gt;(&quot;Default Value&quot;); const InputComponent: React.FC = () =&gt; { const { value, setValue } = useEntangledState(); return &amp;lt;input value={value} onChange={(e) =&gt; setValue(e.target.value)} /&gt;; }; const DisplayComponent: React.FC = () =&gt; { const { value } = useEntangledState(); return &amp;lt;p&gt;Entangled Value: {value}&amp;lt;/p&gt;; }; export const App: React.FC = () =&gt; ( &amp;lt;div&gt; &amp;lt;h1&gt;Superposition Example&amp;lt;/h1&gt; &amp;lt;Superposition states={[&quot;Loading...&quot;, &quot;Welcome!&quot;, &quot;Goodbye!&quot;]} resolver={(states) =&gt; states[Math.floor(Math.random() * states.length)]} &gt; {(state) =&gt; &amp;lt;h2&gt;{state}&amp;lt;/h2&gt;} &amp;lt;/Superposition&gt; &amp;lt;h1&gt;Entanglement Example&amp;lt;/h1&gt; &amp;lt;EntanglementProvider&gt; &amp;lt;InputComponent /&gt; &amp;lt;DisplayComponent /&gt; &amp;lt;/EntanglementProvider&gt; &amp;lt;h1&gt;Collapse Example&amp;lt;/h1&gt; &amp;lt;Collapse trigger={&amp;lt;button&gt;Show More&amp;lt;/button&gt;}&gt; &amp;lt;p&gt;This content is revealed when the collapse is triggered!&amp;lt;/p&gt; &amp;lt;/Collapse&gt; &amp;lt;/div&gt; ); Quantum Leapt Quantum mechanics has long been a source of wonder, mystery, and 
paradigm-shifting insights into the nature of reality. By borrowing its principles to inspire how we build interfaces, we unlock new ways to think about the systems we create. Superposition teaches us to embrace ambiguity and design for potential, entanglement reminds us of the interconnectedness of data and experience, and collapse challenges us to view interaction as transformational rather than transactional. As developers, we are not just building UIs-we are crafting experiences that shape how users interact with technology and the world. The real power of these quantum-inspired ideas isn&#39;t in the code itself, but in the mindset they cultivate: one that values adaptability, connection, and endless possibility. The question now isn&#39;t whether to embrace these concepts, but how far they can take us.</content>
  </entry><entry>
    <title>Understanding Systems: A Journey Through Chaos, Meaning, and Observation</title>
    <link href="https://benhowdle.im/understanding-systems.html" />
    <updated>2024-12-11T00:00:00.000Z</updated>
    <id>https://benhowdle.im/understanding-systems.html</id>
    <content type="text">&amp;quot;We do not ‘come into’ this world; we come out of it, as leaves from a tree.&amp;quot; – Alan Watts The world is built on systems. Some are physical, like the weather or a Kubernetes cluster. Others are abstract, like the algorithms behind a recommendation engine or the thoughts racing through your mind during a late-night coding/anxiety* session. Systems surround us, often unnoticed, until they break. Then, they demand our attention. *same thing in my eyes But systems don’t work the way we expect-they work the way they do. Learning to understand them means stepping back, observing, and letting go of the urge to impose our pre-existing, oft faulty, frameworks on their behavior. Whether it’s a distributed system, a social dynamic, or our own thought patterns, systems reveal their truths to those who pay attention. Observing without interference or judgement is the first step. Watching Systems: The Art of Non-Interference Alan Watts once talked about letting the system show you how it works. This idea is as relevant and important to debugging software as it is to understanding the broader world. When something goes wrong, the temptation and instinct to act immediately is strong. But rushing to fix a system without first observing it often leads to blind spots and unnecessary complexity. We cannot impose fixes on systems we don’t fully or correctly understand. You had one problem before, congratulations; you’ve now created another! Consider a production system throwing errors. Logs are piling up, the team is tense, and solutions are being tossed around like hot potatoes. It’s tempting to jump in and start tweaking, but this knee-jerk reaction can obfuscate the real issue. By letting the system run, watching carefully, and gathering data, you’ll start to spot patterns-specific inputs triggering failures, timing issues, or resource bottlenecks. Observation often reveals what frantic action cannot. 
And here’s the connection: systems evolve over time, often in unexpected ways. To truly understand them, we need to move beyond the urge to fix and toward the patience to observe their states as they unfold. Time and State: The Illusion of Progress What we call &amp;quot;time&amp;quot; is often just the progression of states in a system. A server doesn’t move through time-it processes requests, updates its state, and responds. A database doesn’t age-it stores snapshots of information, and its configuration changes based on the queries it processes. Time, as we perceive it, is nothing more than the reconfiguration of a system over and over again. Imagine debugging a data pipeline. Yesterday, everything was fine, but today, a downstream process is failing. It feels like time has introduced the issue, but it hasn’t. Instead, the system’s state has changed-an upstream dependency was modified, or an input file had unexpected data. The illusion of time can distract from what really matters: understanding how the current state came to be. This same idea applies to life. Rather than seeing ourselves as &amp;quot;moving through time,&amp;quot; it’s more accurate to think of life as a system constantly reconfiguring. And like any system, the patterns of these configurations hold the key to making sense of what appears chaotic. Patterns in Chaos: The Laws of Systems What looks like chaos is often just a lack of pattern recognition. Richard Osman’s observation that &amp;quot;Deal or No Deal&amp;quot; reflects humanity’s discomfort with chaos captures this perfectly. When faced with randomness, we instinctively attempt to impose meaning to create a sense of control (by picking random boxes in some sort of &amp;quot;meaningful&amp;quot; order). But systems don’t need our imposed narratives-they already operate by their own internal rules. Take distributed systems, for example. 
Logs from asynchronous microservices can look like a chaotic mess: out-of-order events, retries piling up, errors surfacing sporadically. But dig deeper, and you’ll see patterns emerge. Maybe it’s a service broadcasting events twice due to a race condition, or a misconfigured load balancer throttling requests. Chaos starts to dissolve when you identify the patterns underpinning and governing the system. Once we see that chaos is just order we don’t yet understand, we can begin to analyze systems more effectively. This sets the stage for the next crucial insight: how the act of observing a system changes it. The Observer Effect: How Perception Shapes Reality In quantum mechanics, the double-slit experiment revealed that observation changes outcomes. The same principle applies to software systems. The moment you attach a debugger to a running application, it alters the behavior-perhaps slowing execution or changing timing-sensitive operations. Bugs that are easy to replicate in production may disappear in a debug environment. Observation itself becomes a factor in the system’s behavior. No two operations have ever played out under the exact same configurations and conditions (good luck debugging!). The same dynamic plays out in broader systems. Monitoring tools, for instance, can inadvertently add latency or consume resources. In social contexts, simply watching someone can alter their actions. Systems respond to being observed, and this response must be factored into our understanding. By recognizing the influence of observation, we can better account for its impact. From here, the focus shifts to how we interpret the events we observe. Neutrality: The Stories We Tell About Events Events in a system are neutral. A server crash, a deployment failure, or an unexpected spike in CPU usage doesn’t carry inherent meaning-it simply is. The meaning arises from the narrative we impose on it. 
In programming, a 500 error isn’t a catastrophe; it’s a breadcrumb pointing toward a misconfigured service or an edge case that wasn’t handled. In systems, events are clues, not judgments; they’re pieces of a larger puzzle we sometimes haven’t stepped back far enough to see the whole of. When we treat events as neutral, they become opportunities to learn rather than failures to avoid. This mindset makes it easier to approach problems with clarity. By detaching meaning, we gain the freedom to explore the system on its own terms, without the baggage of imposed interpretations. This detachment naturally leads to the next layer of understanding: removing resistance to how systems behave. Resistance: The Barrier to Understanding A system has no obligation to align with our expectations. It operates based on its design, its inputs, and the interactions of its components. Resistance to this reality often leads to misunderstanding. The harder we cling to assumptions, the more likely we are to miss the real story. In software, resistance looks like stubbornly believing the bug must be in the service you’re most familiar with, even as logs point elsewhere. In distributed systems, it’s assuming linear causality in a system inherently built for asynchronous operations. Resistance is the mental friction that keeps us from seeing what’s actually happening. By releasing resistance, we open ourselves to the system’s truth. This shift in perspective also applies to the design and evolution of systems, which are often shaped by thought. Systems as Manifested Thought At their core, systems are the manifestation of ideas. Software reflects the logic and creativity of its developers. Society mirrors the values and priorities of its participants. Even our lives can be seen as systems, shaped by the stories we tell ourselves and the beliefs we hold. This connection is a reminder of the power we have to influence systems. Flaws in a system often trace back to the ideas that created it. 
Simplifying a convoluted codebase or reworking a brittle process requires rethinking the foundational logic. The same applies to personal systems-revising a limiting belief can transform how you approach challenges. By understanding systems as expressions of thought, we gain the ability to reshape them with intention. Final Thoughts: Finding Comfort in the Discomfort Systems don’t need to make sense to us to function. Their behavior, even when chaotic, is governed by patterns and rules waiting to be uncovered. Understanding them isn’t about imposing control but about observing, learning, and adapting. Whether debugging a flaky API or navigating the complexities of everyday life, the principles remain the same: let the system teach you how it works. Great systems thrive not on rigid frameworks but on flexibility and resilience. The more we engage with systems on their own terms, the more effectively we can work with them, rather than against them. Closing Notes: Listening to Systems Every system has something to reveal. Logs, workflows, and behaviors all carry insights into how the system operates. By watching closely, without interference or resistance, those insights come into focus. Understanding follows patience, and clarity emerges when we’re willing to let systems speak for themselves. “You are an aperture through which the universe is looking at and exploring itself.” – Alan Watts Every system, every moment, and every interaction is part of that exploration. Systems reveal themselves when we watch, learn, and listen-without interference, without resistance, and with curiosity. The answers are always there, waiting for us to see them.</content>
  </entry><entry>
    <title>Chaos Engineering: Jurassic Park &amp; Distributed Systems</title>
    <link href="https://benhowdle.im/chaos-engineering-applied.html" />
    <updated>2023-03-28T00:00:00.000Z</updated>
    <id>https://benhowdle.im/chaos-engineering-applied.html</id>
<content type="text">Predicting Unpredictable Bugs &amp;amp; Testing Distributed Systems Or, &amp;quot;No-One Expected Dennis Nedry But Maybe Could Have Better Prepared For His Actions&amp;quot; Dr. Ian Malcolm For most people, Jurassic Park was the first time they&#39;d been exposed to the term &amp;quot;Chaos Theory&amp;quot;:
Malcolm: &amp;quot;You see? The tyrannosaur doesn’t obey set patterns or park schedules. The essence of Chaos.&amp;quot;
Ellie: &amp;quot;I’m still not clear on Chaos.&amp;quot;
Malcolm: &amp;quot;It simply deals with unpredictability in complex systems. The shorthand is the Butterfly Effect. A butterfly can flap its wings in Peking and in Central Park you get rain instead of sunshine.&amp;quot;
– Jurassic Park (1993)
Chaos theory states that small changes in complex systems can have big, unpredictable effects. Although not a small change, Dennis Nedry&#39;s actions were unforeseen, and they brought about dire consequences. Chaos theory says that when you deal with very complicated situations, unexpected things are going to happen. When you build software, even the simplest of applications brings a seemingly disproportionate amount of hidden complexity. What chaos theory tells you is that you cannot predict very far ahead successfully. You have to expect things to fail, and you’ve got to be ready to change. This might sound terrifying, but on the other side of the coin, freeing. Trying to control every single unpredictable outcome in a system is unrealistic and fruitless. Rather, aim to relinquish that control and instead concentrate on building awareness and confidence in your system, making flexibility and resiliency your new targets. A better, healthier way to look at it is: complex systems will produce unpredictable and unexpected effects, so accept failure as inevitable but employ practical methods to gracefully handle the fallout. Testing 101: Why? 
The aim of testing is essentially to avoid and reduce bugs, which leads to us having more confidence that our system will behave in the ways we expect. Notice I said &amp;quot;more confidence&amp;quot;, not &amp;quot;total confidence&amp;quot;. Perfection is a myth and unattainable. In practical terms, there are two main objectives for testing: Maintaining data integrity The data within a system is the golden source of truth for how our system will treat a customer. It&#39;s also permanent (to a degree), so allowing contaminated data into our databases is setting ourselves up for a bad time. It&#39;s often subtle and insidious when bad data is stored, and it&#39;s often harder to recover from. Upholding an acceptable user experience We&#39;ll never reach the nirvana of catching every bug before it hits production, but we want to reduce the chances of a customer finding a bug as much as we can. It leads to a poor user experience and might deter a customer from returning. We can look at our system as two distinct parts: data and state. The state of a system can often be easier to recover from without much (if any) lasting damage. As long as bad data wasn&#39;t stored, we can usually restart a process that&#39;s stuck or crashed, and the user can retry an action. While less damaging in the long run, we&#39;d still rather avoid this subpar experience. Knowing your system Proof is in the pudding One devilishly straightforward method of building confidence in your system is to prove its behaviour. Identifying key methods and critical functionality, whether in terms of importance or complexity, is the first part of the puzzle. This logic should ideally be extracted into pure, isolated functions, which can then be unit tested. We want minimal-effort, high-value testing. Identifying the critical flows a user or data will take through our system is paramount. If we&#39;re dealing with a distributed system, that can be mighty tricky to test. 
To keep things extremely practical, identify API endpoints in your system that 1) are critical to the functionality of your system (creating users might be one) and 2) if requested, will touch many services in your system. By doing this, you&#39;re proving the functionality of multiple services, methods and processes in one API request. You can trigger this request programmatically and assert the response is what you expect it to be. Being able to run your system in an isolated, controlled and ephemeral environment is going to be hugely advantageous and make possible the sort of end-to-end test mentioned above. Functionality transparency The more awareness you can build, the better chance you have to foresee and guard against unexpected outcomes. We have to strike a balance between DRY, reusable code and too many layers of abstraction. Too many layers of abstraction tend to make the system hard to reason about, increase cognitive load and therefore make it harder to spot hidden complexity. Any hidden complexity has the possibility of unexpected results. You can hide an awful lot of unwieldy functionality behind a neatly exposed interface. Tribal knowledge In theory, no-one knows the intention, context or inner workings of a piece of code better than its author (in theory). Unless documented sufficiently, this can lead to fragmented, system-wide information and context living inside multiple engineers&#39; brains. I may know Service A like the back of my hand, and you may know Service B extremely well, but unless we coordinate, a subtle behaviour from Service A might have an unforeseen impact on Service B&#39;s functionality. Everyone knows developers love writing docum-hang on, sorry, hate writing documentation. It&#39;s key to foster a flexible, low-friction, easy-to-maintain method for developers to extract and record the subtleties of their own code. 
This might be a &amp;quot;User Manual&amp;quot; section in a Service&#39;s README.md, or it might be an attached walkthrough recording in a PR - whatever suits your team, it just has to be consistent, visible and kept up-to-date. Unexpected side-effects One part of a system directly calling another part of a system is fairly straightforward to reason about. As soon as we introduce asynchronous operations, background queues, events or any other pattern commonly utilised in distributed systems, we increase the cognitive load required to parse a piece of code or functionality just by looking at it. Even more crucial documentation in a distributed system is a map of inter-service communication and its dependent interactions. This might look like:
SERVICE A
Broadcasts: USER|CREATED, USER|REMOVED
Listens for: PASSWORD|GENERATED

SERVICE B
Broadcasts: EMAIL|SENT
Listens for: USER|CREATED, USER|REMOVED
Unknown unknowns &amp;quot;…there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns - the ones we don’t know we don’t know.&amp;quot; – Donald Rumsfeld In a complex distributed system, there are complexities in functionality we are aware of and also understand - so we write tests and can predict the outcomes fairly confidently - this is the top left quadrant, the green, safe zone. The more functionality we can keep in this zone, the better. Anything outside of this quadrant poses a potential risk we need to spend time mitigating (or at least increasing our understanding so we know exactly how much mitigation is necessary). Three of the main issues arising from the more nefarious quadrants can be categorised in the following way: Lack of awareness It&#39;s likely the developer is not aware of a vulnerability and how much of a risk it represents. 
You can&#39;t begin to figure out a fix for something you have no idea exists, let alone whether it might pose future problems. To remedy this, we need to build awareness of the system we&#39;re working in. Three solid ways to begin this process are as follows:
Fix an existing bug - it might not be immediately obvious which part of the system to begin debugging in, so it can be a beneficial adventure down a rabbit-hole which only leads to greater understanding along the way.
Add functionality - Pick a feature which will require end-to-end changes, i.e. from database -&amp;gt; API. Add a new column to a data store and expose it via the API, which (likely) requires touching various files, methods and schemas.
For new codebases - Set the codebase up locally, strictly following the steps in the README. Amend the README with missing or incorrect steps for the next developer. This is a handy way to not only increase understanding, but also keep documentation synced with the actual system. This can also be a good way to tighten up the experience for new hires; periodically delete your local codebase and environment, and re-clone your remote repository, following the README as above.
Faulty assessment The developer is aware of a vulnerability but doesn’t understand its potential impact. Mildly better in that we can see a potential bug, we just don&#39;t understand the blast radius it would have if it blew up. Homework time! Get to know the existing functionality or solution as much as you can, which may mean reading 3rd party, and often internal, documentation. Speak to the original author of the code if possible. If that&#39;s not possible, and you&#39;ve inherited some code you don&#39;t understand, try as best you can to simplify the existing solution. That may be rewriting it, or even reworking that flow within the system. 
A powerful wand is deadly in the wrong hands (see: Voldemort), so if you&#39;re struggling to understand the magic, restrict yourself to using a less powerful wand. Inability to act The developer is aware of a vulnerability but doesn’t have the skill or know-how to address it. In this frustrating scenario, you&#39;re often left with trying to contain the blast radius if it does blow up. This is often achieved by isolating the negative path or impact of the complex functionality as best you can. You may not understand how to address the vulnerability itself, but you may be able to limit the collateral damage. Initial system design Conscientious developers are unlikely to ignore known potential bugs, so we have to presume a baseline of developer morality here. Just by reading this, I will presume you exceed that baseline. Premature optimisation vs future-proofing Scaling issues are a Good Thing, because they mean your system is getting more usage than it was designed for, but there&#39;s little value in spending copious time and effort implementing a highly-scalable system for zero users. Spend more time and effort attracting users to your system, which doesn&#39;t mean ignoring scaling entirely until you&#39;ve hit that capacity. Try designing the system with flexibility and scale in mind, but save the implementation effort until it&#39;s time. Then it&#39;s simply a case of dialling up those knobs, or making minor code changes, to activate the previous future-proofing work you did. Moving fast in the right direction We&#39;d all love to build the Perfect system, but 1) that doesn&#39;t exist and 2) it won&#39;t fit your needs. Instead, establish the high-value needs for your system upfront and aggressively design the system with those needs at the forefront. A shopping list app is going to have vastly different needs than a real-time chat app, which is going to have vastly different needs than a medical platform with highly-sensitive patient information. 
Most applications have a baseline need for performance, availability, security and data integrity, so figure out which of those to skew your system design towards. Quantified risk Building awareness of your system will give you the option of coming back to a potential bug at a later date. For example, it&#39;s a great idea to build a library to DRY up and centralise your data access methods (i.e. database connection, querying, etc). It&#39;s also a good idea to impose an implicit default LIMIT on all SELECT queries, i.e. 100 rows max, which would be implicitly set by the library and exposed as a .find() method. This takes very little time and effort and keeps scaling needs in mind when designing the library. However, it&#39;s extremely likely we&#39;ll have scenarios where we want to return more than 100 rows in the future. Knowing your system inside out, and monitoring its usage, will let you know when it&#39;s time to build pagination into your data access library. This has kept our system future-proofed, but we didn&#39;t fall into the premature optimisation trap. Netflix chaos monkeys &amp;quot;We created Chaos Monkey to randomly choose servers in our production environment and turn them off during business hours&amp;quot; - Netflix Technology Blog Simulating negative scenarios Why would we want to intentionally break our system? Because we&#39;d rather uncover that weakness by stress-testing our system, vs. letting a customer happen upon it themselves. 
Here&#39;s a starting point:
Replicate a DDoS attack with nefarious use of the loop in your favourite language
Set an obnoxiously high iteration count
Hammer your method or interface
Record the point it starts to slow and strain
Record the point it crashes and times out
Use those findings to refactor your code and improve performance for specific scenarios, vs blindly refactoring for unknown performance targets
Use those break points as an opportunity to gracefully handle timeouts and crashes in your system
Guarding against them If a key component in your system dies, make sure bad data can&#39;t make its way into your data stores. We can alleviate this by ensuring we have strict data validation at the data model level (at the point the data would be inserted or updated in a database). Make sure you have readily-accessible, well-documented ways for engineers to recover parts of the system that have fallen into an irrecoverable state, i.e. a runbook containing exact steps to access, for example, the production Kubernetes cluster, which kubectl commands to run, etc... If the system has been designed with flexibility in mind, it should be relatively safe to allow engineers to restart parts of the system with ease. Testing in production audible gasp... There&#39;s a very short list of operations that can truly only be tested in production. Making a payment with real money is on that very short list. However, we want to be sure we&#39;ve covered as much pre-production ground as possible. We should be utilising sandbox environments from 3rd party APIs, testing all the various flows and responses we may receive. Hammer every single negative scenario and sad path for the relevant and surrounding functionality regarding the payment: What happens if the payment fails? Does the user know? Are corresponding account balances incorrectly updated? Can the payment be retried? 
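The hammer-it-in-a-loop idea above can be sketched like this; expensiveOperation and the iteration counts are hypothetical stand-ins for whatever method or interface you want to stress:

```javascript
// Hypothetical stress-test sketch: hammer a stand-in method with increasing
// iteration counts and record how long each run takes. The point where the
// timings balloon is your refactoring target.
function expensiveOperation(items) {
  let total = 0;
  for (const n of items) {
    total += n * n;
  }
  return total;
}

for (const iterations of [1e3, 1e4, 1e5, 1e6]) {
  const input = Array.from({ length: iterations }, function (_, i) { return i; });
  const start = process.hrtime.bigint();
  expensiveOperation(input);
  const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;
  // Record the point it starts to slow and strain.
  console.log(iterations + ' iterations: ' + elapsedMs.toFixed(2) + 'ms');
}
```

Refactoring against these measured break points targets a specific, known scenario rather than blindly chasing unknown performance targets.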
Think Production Design your functionality with multiple instances in mind: If 3 versions of my code were running simultaneously, would 3 payments be made? What mechanism decides which instance processes the payment operation? If the payment succeeds, are 3 events broadcast through the system? Exposing weaknesses and brittleness Rarely does a user use your system 100% in the way it was intended. We create these scenarios in controlled, isolated environments so we can stress-test our functionality in ways we might not have considered when building it. Happy vs sad path Most engineers build functionality to cater for the happy path, i.e. the way the code will operate if it&#39;s executed the way it&#39;s been expected and designed for, and produces the behaviour the engineer expects. However, most engineers (hi) neglect the sad path, i.e. the branches of code that should handle validation errors, timeouts, instance crashes, loss of database connection, loss of network connection, etc. It&#39;s less fun and, quite frankly, it&#39;s boring. Pop quiz, hotshot: sprinkle some random throw new Error()&#39;s into your code and see what happens. Was the error handled gracefully? Did subsequent operations continue to execute when maybe they shouldn&#39;t have? Atomic operations If an error is thrown in operation A, should B continue to run, or should the entire flow bail, or was A&#39;s failure acceptable in the bigger picture? Working scenario: If a user signs up to our system, we want to send them a welcome email and also insert them into the database. We likely don&#39;t want to prevent the database insertion if the email fails to send. Two ways we can improve this:
Isolated error handling - If the email fails to send, contain the failure, log the error, continue to insert the user into the database.
Make it all asynchronous - Set a listener in the email-sending part of the system to send an email whenever it receives a USER|CREATED event. 
Insert the user into the database immediately after signup. Broadcast a USER|CREATED event asynchronously; this allows you to respond back to the user signing up vs. waiting for the email to send. Rollbacks, Migrations &amp;amp; Restarts If your system has found itself in a bad state, you normally have 3 options depending on what bad state it&#39;s in:
A bug has found its way into the latest deployment - Rollback to a previously working deployment. If the bug-ridden deployment changed database schemas, ensure your database migrations are set up correctly to roll back the schema changes. Fix the code bug and re-deploy.
Bad data has made its way into a database - Cleanse the data manually. Plug the code gap that allowed the contamination to occur.
Parts of the system are unresponsive, &amp;quot;stuck&amp;quot;, crashing, or timing out - &amp;quot;Have you turned it off and on again?&amp;quot; Simply restart the stuck process (i.e. if using Kubernetes, kill the naughty pods in question and they&#39;ll be recreated and hopefully behave better).
...Ok, you killed the naughty process, but it&#39;s stuck in a crash loop on startup, what now? A bug was recently introduced? Find it and destroy it. Production configuration recently changed? Audit the current config for changes, i.e. 
a database URL might need updating to a new host. The naughty process itself would be working fine, but it&#39;s dependent on a downstream service being available: Decide if the downstream service is 100% crucial to the operation of the aforementioned naughty process.
If it is: handle the crash/error gracefully, put the system in maintenance mode to avoid bad data leaking in and let customers know what&#39;s happening.
If it&#39;s not: log the error then wrap the downstream service connection code in tighter isolation, one that doesn&#39;t allow for such widespread chaos if it breaks.
Finding comfort in the discomfort Assessing acceptable risk Each system will have different needs; spend acceptable time optimising for the higher-priority needs. Assess which sad paths can be revisited at a later date and which need addressing upfront in a future-proofing fashion. Protect the data stores at all costs from bad data finding its way in. Bad state can be recovered fairly easily; bad data is harder to track down and recover from. Graceful error handling A system shouldn&#39;t completely melt if one of its components dies. Design with resiliency in mind, paying close attention to how services communicate and how each dependent service behaves if a downstream service fails to respond or crashes. Atomic operations Decide which are the mission-critical steps in any given user or data flow in your system. Losing a user&#39;s signup information because a welcome email failed to send is Not Good. Bailing an entire flow because a critical step has failed is a good idea, especially if the flow can be retried idempotently. Hold onto your butts Remember, no-one wants to be sat trying to debug two million lines of code while production is on fire...or dinosaurs are escaping. No-one saw Dennis Nedry coming, but maybe allowing an entire system to live inside one developer&#39;s brain with no documentation in sight could have been avoided. Maybe expense had been spared...</content>
  </entry><entry>
    <title>The 3-Phase Time-off Schedule for Contractors</title>
    <link href="https://benhowdle.im/3-phase-time-off-schedule-for-contractors.html" />
    <updated>2020-11-05T00:00:00.000Z</updated>
    <id>https://benhowdle.im/3-phase-time-off-schedule-for-contractors.html</id>
<content type="text">How I manage time off and schedule in new work AKA &amp;quot;Asynchronous Pipeline Building&amp;quot;, but more on that below... Read this bit first I&#39;ve made this post with contractors in mind for two reasons:
1 - I&#39;ve been one for 7 years now, so I feel a little more confident talking about it
2 - My definition of contracting is basically &amp;quot;staff augmentation for one company at a time&amp;quot;, where you&#39;re just paid differently (via invoices and such) and don&#39;t get company benefits, so it&#39;s easier to talk about time off around one client at a time (vs freelancers who might be working with various clients at once)
A lot of this post will be applicable to freelancers and consultants, but I&#39;ve based most of it on my own experience, and that of other contractors I&#39;ve known. Intra-contract time off As a contractor, depending on the exact terms you sign, you actually get quite a bit of flexibility on how and when you work. To simplify things dramatically, a usual relationship between client and contractor is:
client: hello contractor, here&#39;s what we need doing
contractor: ok, that task/feature will take X time
Things like time on-site, hours, meeting frequency are usually up for grabs and can be written into contracts. If I, as a contractor, inform a client that I&#39;m going to be offline on a particular day, or out-of-the-office, then one of two things will happen:
1 - absolutely nothing, because you didn&#39;t base your fees on &amp;quot;time spent&amp;quot; but &amp;quot;value created&amp;quot; and therefore as long as you&#39;re outputting and delivering as agreed, nothing needs to change
2 - you may lose a billable day, and the amount you invoice for that week/month will just be reduced.
So this post isn&#39;t really about those one-off days off during a contract, but how to manage actual time off (holidays, trips) and how you can schedule in work for after you&#39;re finished sitting in your jim-jams. 
Forcing a gap Confession: I&#39;m terrible at remembering to take actual solid blocks of time off. Because I personally find contracting to be quite flexible in terms of how I spend and plan my work day, I find I&#39;m pretty good with continual work without experiencing burnout. I find I need to actually plan a proper trip and stick it in my calendar to block out some time off. Now, for those proper blocks of time off, I&#39;m talking anything over 2-3 days, how does one go about doing that during a contract? Well, that could honestly be tricky. If I were to have a 1-2 week holiday planned, I would make any new clients fully aware upfront, so it can be planned around. But the real fun comes when you&#39;re coming to the end of a contract; because this is your golden opportunity for a recharge and a break. How much of a gap to put in So you&#39;re coming to the end of a contract with a client, and you&#39;ve been head down for 6 months and need a break. Ok, how much of a break do you want? How much can you afford to take? Deciding on how much of a break you want is really going to depend on you, the individual and your own situation. Do you have kids and are looking at 2 weeks in a sunny climate? Are you looking to go traveling solo around Europe for 6 months? Do you want to go spend 7 days in a cabin in the woods? Or do you not want to go away at all, are you just looking for a &amp;quot;staycation&amp;quot; and enjoy the freedom of not setting an alarm for a month, and not needing to log in for a 9am standup on Zoom? I hear you. Now, for the amount of time you can afford to take off, I&#39;m going to keep things offensively simple and also you&#39;re gonna need to meet me halfway and apply your own financial situation here, I don&#39;t know how you&#39;ve got things set up. 
I&#39;m not an accountant and this isn&#39;t financial advice For a typical contractor who&#39;s the sole director and shareholder of their own LTD company, there are some basic assumptions I&#39;m going to take the liberty of making: it&#39;s likely that you have a business bank account, you pay yourself salary + dividends once a month and you pay corporation tax once a year. Knowing these things, we can do some back-of-envelope calculations to keep in the back of your mind so you can quickly work out how long you can afford to take off. Also I&#39;m going to use nice easy round whole numbers here and they are completely fictional so don&#39;t yell at me:
Your starting business bank account balance: £20,000
Allocated for corporation tax bill: 19% of £50,000 profit from previous year = £9,500 (which is the highest amount it will be, because a good accountant will be able to reduce your taxable profits through the expenses you&#39;ve put through, i.e. new equipment related to your business)
Your own personal monthly income you pay yourself: £1,041.66 for salary + £2,500 in dividends (google these magic and very specific amounts)
So, 20000 - 9500 - ((1041.66+2500) * 2) = 3416.68 or (starting balance) - (corp tax bill) - ((salary + dividends) * 2 months)
What this means is with a business bank balance of £20,000, you can afford to have 2 months off, without needing to bring in an income, still pay yourself, and still end up with a cushion of £3,416.68 in your account. Cashflow at a glance As an FYI - here is the structure I currently have set up in my Monzo business account, which allows me to see at a quick glance how the cash is split in my business account. Oh yeah...payment terms Yay, 2 months off! How ace. Wait... Don&#39;t fall into the trap I have before: that&#39;s 2 months off while needing your next invoice paid the day those 2 months end. 
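The back-of-envelope runway sum above can be wrapped in a small reusable sketch; the figures are the fictional round numbers used in this post, and the function name is illustrative:

```javascript
// Back-of-envelope runway: what cushion is left in the business account
// after paying yourself for `monthsOff` months with no income coming in?
function cushionAfterTimeOff(opts) {
  const monthlyPay = opts.monthlySalary + opts.monthlyDividends;
  return opts.balance - opts.corpTaxReserve - monthlyPay * opts.monthsOff;
}

const cushion = cushionAfterTimeOff({
  balance: 20000,        // starting business bank account balance
  corpTaxReserve: 9500,  // 19% of £50,000 profit from the previous year
  monthlySalary: 1041.66,
  monthlyDividends: 2500,
  monthsOff: 2,
});

console.log('£' + cushion.toFixed(2)); // prints "£3416.68"
```

If the cushion goes negative, shorten the break or renegotiate payment terms before booking anything.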
That&#39;s not, &amp;quot;I&#39;ll take 2 months off and then start looking for more work&amp;quot;. That&#39;s not even &amp;quot;I&#39;ll take 2 months off, but line up a gig for right after&amp;quot;, because with most contracts you&#39;re going to be looking at 30-day payment terms. This means you won&#39;t have your first invoice paid until 60 days after those 2 months off. Ouch. You can of course try and negotiate payment upfront, or 0-day payment terms, and if you can: good for you. I&#39;m proud. So armed with all of the above knowledge, here&#39;s how I would personally structure time off and line up new work so you&#39;re not left short in your business account and unable to pay bills (tried and tested personally), or eating into your allocated corporation tax bill money. The 3-Phase Time-off Schedule for Contractors: This is a system I&#39;ve used to plan, manage and structure my own time off as a contractor, and it takes into account finding new work. The beauty with this system is, well, it&#39;s a system, so I would apply it for 3 weeks or 3 months off; just split the total time-off period into thirds, one per phase. FUN Phase #1. This is it buddy. You did it. You closed your laptop, logged out of Slack and stuck a middle finger up to Zoom. Turn that alarm off and just..... be. This is for that trip you&#39;ve planned, or for sitting in your jimmies at 11am reading that book you&#39;ve not gotten round to. I personally use this time to enjoy not having to sit behind a laptop for extended periods of time. I would not even be thinking about the next gig. Put it out of your head. FEELERS Now, phase #2. We&#39;re still in P-A-R-T-Y? (cus&#39; I gotta) mode, but we&#39;re gonna sprinkle in some Asynchronous Pipeline Building (I talk about this more in my previous article on contracting from 2017, under &amp;quot;Asynchronously looking for work&amp;quot;). 
Asynchronous Pipeline Building is a fancy schmancy way of me saying: now can be the time for you to progress that neglected side project, polish your portfolio site, write those blog posts that have been sat in draft in Notion, contribute to some open source, etc. Essentially, it&#39;s all the soft meta work that&#39;s involved in pushing yourself onto people&#39;s radars as an Expert In Your Field™. Repeat business I would also be using this time to chat to other contractors I&#39;ve worked with before, see where they&#39;re at. Many of my gigs in the past have come from word-of-mouth from a contractor I&#39;ve worked with who&#39;s on the lookout for someone else to join the company they&#39;re currently contracting with, or who&#39;s moving on and needs to replace themselves. Email old clients and see how their project&#39;s going - did it ever get launched? Do they have any problems they&#39;re currently facing? Again, a few of my &amp;quot;new gigs&amp;quot; were just old gigs with new terms. You never really know what a lot of this &amp;quot;soft&amp;quot; communication can lead to, because there&#39;s an enormous amount of value in repeat business. They&#39;re already sold on you; you&#39;ve already laid the groundwork. Whilst you&#39;ve been pushing updates on your side project or publishing blog posts, Joe Schmo has seen your work and passed it on to their boss and suddenly you&#39;ve got a very exciting lead in your inbox. Great! So whilst I wouldn&#39;t be on the hardcore hunt for locking in gigs, this phase is nice because there&#39;s no pressure to take the first thing that comes in just for the sake of locking in some income. You have the freedom to see where fun, interesting and exciting opportunities take you. At this point, I would still be mostly enjoying time away from my desk/laptop if I could help it... TERMS Now, phase #3. It&#39;s getting time to turn the screws on locking in your next invoice payment. 
Hopefully by now, you might have some idea of who or what you&#39;re going to be working on for your next contract. If not, I&#39;d try and get that in mind earlier on in this phase. The earlier you have that lead, the earlier you can be negotiating terms, especially payment terms. Cashflow is king. All of the self-marketing we did in phase #2 should mean that we have a fairly healthy stream of leads or conversations in our inbox. If not, ramp it up: pester your own network, call in favours, spam your Twitter feed, etc. The great thing about contracting is that even if you have a slight gap in income, your rates and fees tend to be a little higher than an employee&#39;s, so you can recover quite quickly. Wrapping up In an ideal world as a contractor, you wouldn&#39;t be waiting until you need work to try and find work; you&#39;ll actually have a few opportunities land in your inbox on a regular basis. If I&#39;m mid-contract, I&#39;m still speaking to prospective new clients, taking calls, etc. Just because I&#39;m not available at that moment in time doesn&#39;t mean I won&#39;t be in 6 months....and guess who&#39;s going to be top of my list to contact during phase #2? The great thing about all of the self-marketing I mentioned above is that you&#39;ll end up with companies coming to you for your expertise, not you pitching them your skills. You&#39;ve already reframed the conversation to put you in the driving seat, which gives you the advantage when negotiating terms and avoids the need to &amp;quot;sell yourself&amp;quot;. Word of mouth can go a long, long way and often propel you past a few of the initial cold steps you have to do when pitching to a prospective new client. I once agreed a 6-month contract with a potential new client, with fees, terms and starting date all set within a 90-second phone call, simply because a contractor there had put in a good word for me and vouched for my technical skills. 
As an employee, if you take 3 weeks off, chances are high that you&#39;re going back to the same job as before, without a worry or a need to find more work. As a contractor, you&#39;re essentially starting a new job with each contract (which could be every 3, 6 or 12 months). Put those pipeline-building efforts in upfront, and they&#39;ll pay dividends when it comes to scheduling in time off. My DMs are open if you want to ask for any advice on the above or chat about something you&#39;d rather not post publicly: @ben_howdle</content>
  </entry><entry>
    <title>On Starting</title>
    <link href="https://benhowdle.im/on-starting.html" />
    <updated>2018-06-15T00:00:00.000Z</updated>
    <id>https://benhowdle.im/on-starting.html</id>
    <content type="text">Thanks to Pete Lacey’s tweet, I decided to turn the inability I was facing to come up with some useful concepts to blog about, into writing a post on the inability most people seem to face on starting something. Pete gave me lemons, I&#39;m going to see how I do making lemonade. The Issues I believe the issue most face with starting something, anything, a blog post, a software project, a book, a painting, can be reduced (for simplicity) to two main causes: won’t start or can’t start. Lets start with the “won’t” camp. Perfectionism My belief is that the “won’t” people are afraid of what they produce not being good enough, either objectively based on peer review, or subjectively based off their own high standard. Ira Glass articulated this “Gap” between our taste vs. our ability beautifully in video form and in transcript form. This hits us all, even Paul McCartney. Just...stuck Sometimes, the issue isn’t “this won’t be good enough”, its more, “how do I possibly begin??”. This could be due to inability to tackle a new problem, because you’ve got no experience in it, or, a plethora of solutions overwhelm you. Paralysis by analysis does exist and it is very real. I wrote Too Scared To Write a Line of Code 5 years ago and it still rings true, for me at least. Remedies Now I’ve identified what I believe to be the main causes, lets switch gears and head towards positivity; the remedies and antidotes for these issues. Plan A really useful technique I’ve employed recently for various completed works (blog posts, project proposals*, etc), and its an obvious one, is to map out the rough final result. An outline. Lets take a blog post. Lets take this blog post. I didn’t start at the top and write continue prose for ~500 words, my brain doesn’t seem to work like that. I mapped out several key points as headings and outlined the rough shape of the post. All I then do is fill in the gaps. 
A captain maps a route to a final destination, they don&#39;t just write down, “sail to America”, but plan several points along the way to guide them. *True story: I sat for a full 40 minutes yesterday, almost continuously, staring at a blank Notes.app file, trying to start a project proposal. Once I finally managed to start, the words fell out of me. The secret to building large apps is never build large apps. Break your applications into small pieces. Then, assemble those testable, bite-sized pieces into your big application -Justin Meyer, author of JavaScriptMVC Don’t think about your final product as LARGE, DAUNTING, SCARY, but as several smaller pieces you can assemble. Iterate A fear I often, and irrationally, have is, “as soon as I [write this first line of code/create this first directory/write the intro for this post], it will define the final form, cemented forever”. That’s codswallop. It’s really hard to build on, or evaluate, an empty page or an empty code file. Write or create something tiny and terrible, and then immediately begin improving it. I rarely view software and writing as creating, but as improving on previous ideas and solutions. I think about the McDonald’s Theory maybe once every 6 months. It’s fascinating to think about but also a hugely useful weapon to have in your arsenal. Especially if you’re suffering from analysis paralysis. Start with something intentionally awful, and you’ll have an immediate (undesirable) base to work from and improve. Repurpose I rarely start a brand new project from truly nothing. I’m fortunate enough to have built up a body of work, so if I am starting something new, it’s likely that at least one of my previous projects, or certainly large parts of them, can easily be re-used as a jumping-off point for new work. Why solve the same problem twice? Take something pre-existing, and modify it. Writing vs. 
editing mode A concept I saw fairly recently was splitting up your work into distinct parts: Writing (creating) vs Editing (analysing). Writers write initial drafts without too many rules in mind. They simply vomit the words onto the paper, continuing to write as the ideas flow. To break the 4th wall slightly, when writing this post, I won’t look back at a single sentence I’ve written until the very end, when I go back through it. If I try to edit as I go (and I have done in the past), I find it a hugely jarring process. It’s like I’m forcing two ideals and motivations into conflict. Never half-ass two things. Whole-ass one thing. -Ron Swanson Judd Apatow and The Big Sick This is going to be purely anecdotal, because I can’t find a link to where I heard it, but when Kumail Nanjiani was writing The Big Sick (it’s great; watch it), Judd Apatow (producer) told Kumail to write the first draft as if no-one was going to read it. Just, dump it all down on paper, every detail, don’t try and confine it to a movie script, or a screenplay. I think the takeaway here is to just get everything out of you, onto paper/a code editor. It can be fixed up later, separately. First do it, then do it right, then do it better - this is my mantra for successfully getting things done. It&#39;s all about the iteration. -Addy Osmani</content>
  </entry><entry>
    <title>Notes on using Vue and React in production</title>
    <link href="https://benhowdle.im/notes-on-using-vue-and-react-in-production.html" />
    <updated>2018-03-23T00:00:00.000Z</updated>
    <id>https://benhowdle.im/notes-on-using-vue-and-react-in-production.html</id>
    <content type="text">Intro This post should serve as general observations and remarks on my own personal experience using both Vue and React for production applications and services, as opposed to simply drawing you in to a VUE VS. REACT clickbait fest. Background of my use-cases I began using React at the beginning of 2015. My first big use of it was on a project for Cisco, building a real-time test automation platform with websockets. Since then, I&#39;ve used it on countless personal projects; Ekko and Top Spot to name a couple. My experience with Vue began a little later, this time in April 2017, on a client project for Soho House; specifically The Ned. Since then, I&#39;ve built Readme App with Vue. What follows is an informal, loose account of the differences, strengths and highlights between the two libraries. Obligatory disclaimer: this really was my own experiences. There&#39;s inevitably multiple solutions to a problem, so my comparisons are based on my own approaches. Stateless functional components One of my favourite things about React: The conciseness of building a UI. Stateless functional components, for me, are the closest manifestation of Guillermo Rauch&#39;s concept of Pure UI. Take for example, a video player component: // Video.jsx export default ({ src }) =&amp;gt; &amp;lt;video src={src} /&amp;gt; Lovely stuff. Intuitiveness This was a pleasant surprise in Vue. If I wasn&#39;t sure how to do something, I found that if I tried to implement it as I saw fit (ie. guessed), 9 times out of 10...that&#39;s exactly how it was supposed to work. I find Vue incredibly intuitive. This goes hand in hand with their excellent documentation. No setState The ceremony and fluff that surrounds React&#39;s setState drives me nuts. Also the fact that state updates are asynchronous. Not DOM updates (that I can understand), but updates to the state object itself, which seems 1) counter-intuitive and 2) doesn&#39;t feel obvious that that&#39;s the case. 
Add these two things together and you can have some painful debugging experiences. // Counter.jsx this.setState({ counter: this.state.counter + 1 }) // Counter.vue ++this.counter Pug This is going to be an acquired taste, but honestly, the brevity of Pug has won me over. HTML (JSX) is a hecking pain to write. I know Emmet exists for autocomplete... // Nav-bar.jsx render() { return ( &amp;lt;div className=&amp;quot;nav-bar&amp;quot;&amp;gt; &amp;lt;div className=&amp;quot;flex&amp;quot;&amp;gt; &amp;lt;div className=&amp;quot;logo&amp;quot; loading={this.state.tweetsLoading} onClick={this.scrollToTop} /&amp;gt; {(!!user) &amp;amp;&amp;amp; &amp;lt;ThemeSwitcher /&amp;gt;} {(!!user) &amp;amp;&amp;amp; &amp;lt;User /&amp;gt;} &amp;lt;/div&amp;gt; &amp;lt;/div&amp;gt; ) } // Nav-bar.vue &amp;lt;template lang=&amp;quot;pug&amp;quot;&amp;gt; .nav-bar .flex .logo(v-loading=&amp;quot;tweetsLoading&amp;quot;, @click=&amp;quot;scrollToTop&amp;quot;) theme-switcher(v-if=&amp;quot;user&amp;quot;) user(v-if=&amp;quot;user&amp;quot;) &amp;lt;/template&amp;gt; You get the picture. Scoped CSS Going back to React styling after working in Vue&#39;s scoped CSS world was jarring. Pop this at the bottom of your Vue component: &amp;lt;style lang=&amp;quot;sass&amp;quot; scoped&amp;gt; p color: red &amp;lt;/style&amp;gt; And only the &amp;lt;p /&amp;gt; tags inside your component are styled red, thanks to the scoped attribute. From the Vue docs: One important thing to note is that separation of concerns is not equal to separation of file types. Having your CSS right there next to your Pug (/HTML/JSX) is great. Switching to a different file to style something seems really disconnected now. Implicit props A really small thing, but in Vue, the props are automatically added to the instance. So you don&#39;t have this.state.foo and this.props.bar, you just have this.foo and this.bar. Brevity. 
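To make the earlier setState complaint concrete, here is a plain-JavaScript simulation (this is not React&#39;s actual internals; MockComponent and flushUpdates are illustrative names) of why reading this.state immediately after calling setState bites you, and how the functional-updater form that React also accepts avoids the problem:

```javascript
// A plain-JS sketch (not React itself) of why reading this.state right
// after setState is a trap: updates are queued, not applied immediately.
class MockComponent {
  constructor() {
    this.state = { counter: 0 };
    this.pending = [];
  }
  setState(update) {
    // Updates are queued; the object form captures stale state at call time,
    // while the functional form receives the latest state when the queue flushes.
    this.pending.push(update);
  }
  flushUpdates() {
    for (const update of this.pending) {
      const patch = typeof update === "function" ? update(this.state) : update;
      this.state = Object.assign({}, this.state, patch);
    }
    this.pending = [];
  }
}

const a = new MockComponent();
// Object form: both calls read counter as 0, so the second overwrites the first.
a.setState({ counter: a.state.counter + 1 });
a.setState({ counter: a.state.counter + 1 });
a.flushUpdates(); // a.state.counter === 1, not 2

const b = new MockComponent();
// Functional form: each updater sees the state left by the previous one.
b.setState((s) => ({ counter: s.counter + 1 }));
b.setState((s) => ({ counter: s.counter + 1 }));
b.flushUpdates(); // b.state.counter === 2
```

It&#39;s exactly this queue-then-flush behaviour that makes the counter example above feel so un-obvious next to Vue&#39;s direct ++this.counter.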
Ecosystem I would say that React&#39;s ecosystem is only marginally better; it just feels like more people are using it. But to be honest, I can&#39;t really fault Vue&#39;s ecosystem at all. I just think React&#39;s is slightly more vast and lively. Everything I searched for with Vue, I found. Every plugin I wanted to install, it was there. State management React has Redux, Vue has Vuex. I like both. Both are good. That&#39;s all she wrote. If all you have is a hammer, everything looks like a nail Inevitably, React and Vue are just tools, they&#39;re not the singular solution. It&#39;s going to come down to personal preference. For me, neither of them stands out enough to convince me never to use the other. If you try really hard, most things can become your hammer.</content>
  </entry><entry>
    <title>Making Jekyll workflow a bit better</title>
    <link href="https://benhowdle.im/making-jekyll-workflow-a-bit-better.html" />
    <updated>2017-08-17T00:00:00.000Z</updated>
    <id>https://benhowdle.im/making-jekyll-workflow-a-bit-better.html</id>
    <content type="text">I recently re-designed my personal site after not touching it since December 2015. When I sat down to re-design it, I felt like Jekyll&#39;s workflow and development experience suddenly felt...outdated. Mostly because a lot of the development I tend to do is JavaScript-based and (all fragility jokes aside), JS development in 2017 is pretty slick. This post is essentially going to be a collection of resources and links, because I&#39;ve not created anything new, I just found a few things along the way to make Jekyll-based development a bit slicker. Git branches If you want to embark on this journey, you&#39;re going to want to use Jekyll plugins, and some of them are likely to be custom ones you&#39;ve installed. Great! However If you&#39;re like me, and like a lot of other people, you&#39;re going to be pushing your Jekyll site to GitHub for free hosting, using GitHub Pages. However, I hope you&#39;re not like me in forgetting that GitHub won&#39;t run custom plugins on the Jeyll sites they host on GitHub Pages. I remembered this after finishing my re-design, deploying, and staring at a blank site for ~10 minutes. One option is to set up two branches in your GitHub repository; master and source, for example. Your source branch contains your Jekyll source files, where you build your site, and then master contains the Jekyll output; essentially static HTML files. This is what GitHub serves when people visit your site. For full instructions and a better explanation, I followed this guide: Custom Jekyll plugins with GitHub Pages Note, these are the plugins I&#39;m using: plugins: - jekyll-pug - jekyll-gist - jekyll-babel - jekyll-autoprefixer Babel for JavaScript Next up, JavaScript. This one is super straightforward. Just use jekyll-babel. This means you can write ES(whatever) code in, for example, js/main.js and Babel will transform it, as part of the Jekyll compilation, into _site/js/main.js. Perfect. 
Sass for CSS Jekyll provides built-in support for Sass and can work with CoffeeScript via a Ruby gem. Well, this is actually in the docs so there&#39;s very little to say about it, so just in case you didn&#39;t realise! Autoprefixer for less verbose CSS Again, like Babel, this gem jekyll-autoprefixer Just Works™. Pug for HTML jekyll-pug requires a little more setup, like using npm, but it&#39;s fairly straightforward. Browsersync for live reloading This one was the trickiest to set up. After failing to get Live Reload working, I found Blissful Static Site Development with Jekyll, BrowserSync, and tmux, and figured Browsersync might be the way to go. This person&#39;s approach didn&#39;t quite work for me, and then I found Using browser-sync with Jekyll for fun and profit and, between the two links, came up with a solution that worked for me. Note, my current solution is: $ jekyll serve --watch $ browser-sync start --files &amp;quot;_site/css/style.css&amp;quot; --files &amp;quot;_site/index.html&amp;quot; --proxy &amp;quot;localhost:4000&amp;quot; --reloadDelay &amp;quot;2000&amp;quot; Thanks for reading Not the most original post, but I hope collecting a few resources together in one place helps you make Jekyll development a little better.</content>
  </entry><entry>
    <title>Creating a real-time test automation platform for Cisco with React &amp; WebSockets…and React Native</title>
    <link href="https://benhowdle.im/creating-a-real-time-test-automation-platform-for-cisco-with-react-websocketsand-react-native.html" />
    <updated>2016-07-28T00:00:00.000Z</updated>
    <id>https://benhowdle.im/creating-a-real-time-test-automation-platform-for-cisco-with-react-websocketsand-react-native.html</id>
    <content type="text">Creating a real-time test automation platform for Cisco with React &amp;amp; WebSockets…and React Native Amongst other things, highlights include: Building a command line interface in the browser for test engineers to remotely debug scripts they’re running Fine-tuning the performance of the UI to handle hundreds of messages a second over the WebSocket connection, whilst keeping the interface responsive Porting a section of the web application to an iOS app using React Native Purpose of the application I was brought onto the project in May 2015 to build a web application from a set of pre-existing wireframes. I was given complete freedom to pick a tech stack/libraries to use on the front-end as long as: I could justify their use and my choice in them My choices in them contributed to what the application needed to do I was going to be building a client-side application that would connect over a single WebSocket connection to a Python backend. The application (including the Python backend) would be deployed onto machines in over 6000 plants and manufacturing factories around the world, including companies like Foxconn; the manufacturing company that builds electronic devices for a range of companies including Apple, Samsung and Microsoft. The application was to be used by test engineers to remotely debug scripts they’d written to test physical devices, and by operators who would be using the application to monitor the actual test results and to initiate/abort/pause the tests themselves. The test engineers could place questions in their script, which would pop up in the application for the operators to interact with, making the test execution dynamic, which the user interface would respond and update accordingly to. More features of the application are explained below. 
Client-side tech stack Having already built large-scale client-side applications for companies like KashFlow and Kayako, I was already experienced in architecting this kind of setup, but this Cisco application needed the added fun of being real-time. So performance was going to be of paramount importance: as I said, there were going to be tests constantly running and updating the user interface, but it also needed to respond immediately to operator/test engineer intervention. With KashFlow and Kayako, I’d used Backbone.js to create both applications. However, with the importance of performance, I looked towards React to handle both the rendering of the user interface and responding to the user interaction. Here’s the email I sent on the 6th May 2015, justifying and explaining the main components in the stack: I’d say most of that still holds up. WebSocket The actual WebSocket connection is set as the property connection in an ES2015 JavaScript class, the class being exported as a singleton. The WebSocket class is largely spec compliant, with the only library used providing automatic reconnection attempts when/if the connection drops. The messages sent to and from the backend are encoded and decoded into JSON format, with an example response looking like this: { &amp;quot;action&amp;quot;: &amp;quot;update&amp;quot;, &amp;quot;container&amp;quot;: &amp;quot;Foo|Bar|Baz&amp;quot;, &amp;quot;update&amp;quot;: &amp;quot;status&amp;quot;, &amp;quot;value&amp;quot;: &amp;quot;RUNNING&amp;quot; } This tells me on the client-side that a test has started and a container is now running this test. I would then transmit an event to the rest of the application, informing it that there was new data from the backend. 
This event action would look like this (for the update above): Dispatcher.emit(&#39;api:data:update&#39;, { &amp;quot;action&amp;quot;: &amp;quot;update&amp;quot;, &amp;quot;container&amp;quot;: &amp;quot;Foo|Bar|Baz&amp;quot;, &amp;quot;update&amp;quot;: &amp;quot;status&amp;quot;, &amp;quot;value&amp;quot;: &amp;quot;RUNNING&amp;quot; }) More on the Dispatcher object below. Sending messages to the backend follows the same structure, but the object gets run through JSON.stringify and sent as a string. Event Emitter I decided that communication throughout the application would largely take place through a central event emitter. For this, I used this excellent library. New data from the backend is broadcast through the Dispatcher (the exported singleton of the Event Emitter library) and whoever is interested in this specific information will listen out for it. Backbone Backbone is used as the data layer in the application. Its job is to listen to events from the WebSocket class (through the Dispatcher), parse and store that data. After the Collections and Models have stored this data, they themselves emit an event informing the React Components of this new data. React Once the React Components are informed there is new data available in the Backbone layer, they call methods on the Collections/Models to fetch this data, ie. this.props.containersCollection.getContainers() which returns an array of objects, which we can just store in the Component State object and trigger a re-render of the Component. Performance Identifying, analysing, debugging and improving performance in this application was the cause of many a headache and long days of trial and error and staring at Chrome DevTools timeline results and profile outputs. There were two main areas in the application which required a lot of attention, one being a view of several, even hundreds, of “containers” (a container will run a sequence of tests) that could be started simultaneously. 
Running them all at once would cause the backend to send multiple updates for each “container” at once, each requiring a visual change in the display of the interface. Starting 50 “containers” simultaneously. Updates would roll in from the backend multiple times a second. The second area was a Debug screen. This screen allowed test engineers to remotely debug their scripts that were running in these containers. The interface allowed them to drag ‘n drop several “console windows” onto an area of the screen which, when dropped, would open up a connection to the container through the backend. Some console windows were for logging, so would merely display data coming from the backend; some were more visual and would display a visual representation of the sequence of tests that were running, along with SVG arrows to indicate the flow of execution of each test. Utilising SVG to draw arrows between the different test steps in a sequence The most complex console window type allowed the engineer to interact with the container over an SSH connection (through the backend) which needed to be completely emulated in the browser. Lots and lots and lots of data. This console was fully interactive, with keyboard shortcuts and paging Yep, this meant re-writing a large proportion of an actual terminal, but in the browser… Fun. times. Each keystroke typed by the engineer would be sent to the backend, processed, and a response sent back to display on screen in the console window. ie. If I typed “l” then “s” then hit enter, the backend would send back the output of running ls on the command line. This terminal emulator involved lots of formatting (for sent vs. received characters), showing/hiding of special characters (ie. &#92;n for newline - engineers may or may not want to have these characters displayed), live search with highlighting (for searching back through connection responses). 
Paging worked very much in the same way as standard pagination works on a blog: engineers could PAGE UP and PAGE DOWN through data as and when they pleased. Humour me for a moment, but head over to your terminal/command line application and run ls -lRt and hit enter. The volume and frequency of that data pouring down your screen gives you a rough idea of the rate and volume of data that was being sent from the backend to the client-side application. This data all needed to be parsed, stored, paged, formatted and displayed in the browser as close to real-time as possible. My tactic was to identify the maximum performance of the browser and platform the application was running on, and adjust the rate at which the WebSocket class sent new data to the Backbone Collections and Models, thus throttling the rate at which the React Component needed to touch the DOM. I created a buffer array in the WebSocket class which I pushed new data into, and then separately ran a requestAnimationFrame method to process the updates from the backend. I could then manually control the rate at which these updates are processed. Chrome on a Mac could handle a re-render once every ~100ms, whereas Firefox on a low-powered VM (unfortunately, this is the environment it was to be deployed to) would only handle refreshes once every ~500ms. This performance tuning was only in effect on these two intensive screens; the rest of the application processed updates as soon as they were received. React Native Near the end of the project, I was tasked with porting a certain screen (the screen which allowed multiple tests to be run at once) to a native application, specifically, an iOS application. Even though I built Plot in Objective-C, it made more business sense for the native application to be built using React Native, which meant the JavaScript knowledge could be transferred within the team taking over this project, as opposed to having to hire a specific iOS developer. 
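Before going further with React Native, the buffer-plus-requestAnimationFrame tactic from the Performance section above can be sketched roughly like this. This is a simplified, illustrative version (BufferedSocket, receive and tick are hypothetical names; the real class also decoded JSON and handled reconnects), with the frame callback modelled as an explicit tick so the throttling logic stands alone:

```javascript
// Illustrative sketch of throttling UI updates: messages are buffered as they
// arrive, and flushed to a subscriber at a bounded rate rather than per-message.
class BufferedSocket {
  constructor(flushIntervalMs, onFlush) {
    // e.g. ~100ms for Chrome on a Mac, ~500ms for Firefox on a low-powered VM
    this.flushIntervalMs = flushIntervalMs;
    this.onFlush = onFlush; // called with a batch of buffered messages
    this.buffer = [];
    this.lastFlush = 0;
  }
  receive(message) {
    // Called for every WebSocket message - cheap, no DOM work here.
    this.buffer.push(message);
  }
  tick(now) {
    // Called on each animation frame (or timer) with the current timestamp;
    // drains the buffer at most once per flush interval, so downstream
    // rendering only happens at a rate the platform can sustain.
    if (this.buffer.length === 0) return;
    if (now - this.lastFlush >= this.flushIntervalMs) {
      const batch = this.buffer.splice(0, this.buffer.length);
      this.onFlush(batch);
      this.lastFlush = now;
    }
  }
}

// Example: two messages arrive; the first frame is too early, the second flushes both.
const batches = [];
const socket = new BufferedSocket(100, (batch) => batches.push(batch.length));
socket.receive({ value: "RUNNING" });
socket.receive({ value: "PASSED" });
socket.tick(16);  // too soon - nothing flushed
socket.tick(116); // one flush containing both buffered messages
```

The key property is that receive stays cheap no matter how fast data arrives, while the flush interval is the single knob you turn per platform.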
The syntax of React Native is pleasantly easy to pick up if you already know React, but the real issues in getting going came from installing/running/configuring the React Native project. Once these settled down after much Googling/StackOverflow-ing/GitHub Issue-ing, I picked up some momentum and managed to re-create the whole screen (I’d say ~95% of the functionality) in 10% of the time it took me the first time round to create the screen using React in the web application. My tactic was to have the React Component in one pane on the right, and the new React Native Component in the left pane, and I’d go line by line and translate the majority of the render method into React Native syntax. Not so tricky. The same functionality, running as an iOS app, using native controls Next was to wire up the user interaction event callbacks to the existing Backbone Collections and Models and try and keep everything above the React Components (ie. above in the architecture, namely the data layer) as untouched as possible, ie. everything was already set up in the Collections and Models to be sent/received from the backend, I just needed to send along the right data in the right format, and all was good. I extracted a lot of logic from the React web app Components into agnostic helper classes, which didn’t touch the DOM or use React or Backbone, but were purely for business logic, ie. formatting, filtering, etc…I then adapted both the React web app Components and the React Native Components to share these modules and therefore create common shared logic between the two parts of the codebase. 
I placed all the React Native code in a directory within the existing web application codebase, and sym-linked the Backbone Collections/Models and shared logic into this React Native directory; this allowed me to import modules in the React Native Components without re-structuring the web application codebase too much (in React Native, you can’t require modules above the root of the project directory - go figure). Hindsight I’m fairly happy with the choices I made for this application, in terms of libraries and architecture, but if I could start the project fresh today, I’d almost certainly use Redux in place of Backbone, and do away with using an Event Emitter. In removing Backbone, I’d also have to find a routing solution, for which I’d go with React Router. Unfortunately for me, as you saw from the email, I chose Backbone on the 6th May 2015, and Redux got its first commit on the 29th May 2015. Timing. Next I worked on this application for 15 months, from its inception as a set of static HTML/CSS files, to it being a fully-fledged robust real-time application that’s currently being put through intensive QA/load-testing and approaching general availability in the software release cycle. Unfortunately for me, the rollout/deployment for this product will be integrated into the aforementioned manufacturing factories over the next few years, way beyond my time on the project! If you want to know more details about what you’ve read above, I’d be more than happy to delve deeper on specifics. Just holla. I’m going to be available for projects from September, so if any aspects of the above would be of use and you’d like to hire me, get in touch via email (on my site) or Twitter. 👋 Cisco, it’s been emotional Cisco, in San Jose, California, where I spent a total of 11 weeks over 4 separate trips working with their in-house team. The rest of the time on the project was spent developing remotely from the UK.</content>
  </entry><entry>
    <title>Hello Speculo - easy colour palette visualisation</title>
    <link href="https://benhowdle.im/hello-speculo-easy-colour-palette-visualisation.html" />
    <updated>2016-07-24T00:00:00.000Z</updated>
    <id>https://benhowdle.im/hello-speculo-easy-colour-palette-visualisation.html</id>
    <content type="text">Speculo Speculo is an easy way to visualise colour palettes. Every time you change a palette colour, it updates a set of commonly used website layouts. This allows you to visualise your colour palette in a variety of design scenarios. Visit Speculo &amp;rarr; Features Automatically updating preset layouts Click any layout to zoom in and focus on only that layout Colours can be selected via the colour picker, or you can simply enter a hex value Export your palette in a variety of mediums at any time, with one click Thinking behind Speculo I tend to lean more over to the developer end of the spectrum and attempt to do my own designs at any given opportunity (mostly on side projects). However, I feel like this leads me to feel more comfortable designing in the browser, as this gives me a sense of a more realistic environment (ie. it&#39;s going to be viewed in a browser, so lets design it in one). However CSS isn&#39;t the most visual tool, so I wanted to create something that would use real HTML and CSS, but would alleviate my responsibility to tediously write lines and lines of CSS to achieve relatively simple layouts, etc... So I made Speculo. I created, what I think (amongst others), are 6 of the more common website layouts I see whilst browsing around the web. Ranging from more popular layouts, to less so. I then wanted to be able to change colours in a preset palette and see the effect of those changes immediately in a layout that I would potentially use in my end designs. No silver bullet Having low confidence in my own design ability as it is, I&#39;m also going right ahead and picking faults with Speculo as I&#39;m launching it... 👍 Ben. Speculo&#39;s drawbacks:##### The layouts don&#39;t feel very convincing. I am in true awe of designers that can create and present dummy content in sample layouts that feel convincing. 
My thinking behind the style and content of the layouts was for them to feel extremely low-level, very minimal, and not opinionated at all. I would say that they have almost zero visual design, as I wanted the colour palette that you&#39;re creating to speak more than the visual aesthetic of the layout. Not a game-changer. I mean this in the sense that I don&#39;t expect this to replace anything in your current design workflow, but more to serve as a disposable playground to experiment with different colours. I would imagine you would come to Speculo with a colour palette in mind, or at least a base colour, and just want something already set up to test your aesthetic thoughts. If you are a designer, I might imagine that you probably have something set up in Sketch/Photoshop that allows you to rapidly iterate on colour palettes, but, if you&#39;re like me, then this forethought does not exist. I&#39;m hoping Speculo can fill a void for these people and provide a headstart without the &amp;quot;design boilerplate&amp;quot; that&#39;s required for this colour experimentation. Not small-screen-friendly. I just don&#39;t think this tool would work very well on certain form-factors, so I omitted the extra work needed to accommodate them from the final result. Inspiration I would say that Jon Gold&#39;s Rene was a big inspiration for creating this kind of design tool in the browser. His tool supports more of an immutable flow, whereas mine...does not. But his concept of seeing results immediately and in the browser and with very minimal layouts, was hugely conducive to pushing me towards what I wanted from Speculo. The actual inspiration for Speculo was entirely selfish; I needed to create a colour palette for another side-project and I just didn&#39;t want to open up Sketch. 
Best case, you open a Pull Request/Issue and we make Speculo better, or worst case, you learn something from the React/Redux structure. Thanks for reading. Hit me up on Twitter with any comments, feedback, etc...</content>
  </entry><entry>
    <title>Everything Changes But You</title>
    <link href="https://benhowdle.im/everything-changes-but-you.html" />
    <updated>2015-03-24T00:00:00.000Z</updated>
    <id>https://benhowdle.im/everything-changes-but-you.html</id>
    <content type="text">I really didn&#39;t want to bum people out with this post, so that&#39;s why I&#39;ve named it after a Take That song, as opposed to a Smiths song (was going with &amp;quot;Heaven Knows I&#39;m Miserable Now...with Web Development&amp;quot;). Blame my recent viewing for the teen angst tone of this post. Essentially: everything is the same, nothing is new, stop worrying about stuff. This post isn&#39;t a cry for help; I&#39;m actually hoping it acts as some reassurance to new developers, old developers, developers with Imposter Syndrome™, basically anyone who&#39;s ever felt left behind or out of touch with modern development. (I&#39;ll keep it to web development specifically, for brevity&#39;s sake) Overload Everyone is probably aware of the flurry of new stuff and the fragmentation of old stuff that seems to be happening lately. Open up Hacker News and see 20 different React.js architecture patterns, open up Designer News and see &amp;quot;101 Different Flat UI Kits You Should Be Using&amp;quot;, or ask around on Twitter for responsive image strategies (and watch your Mentions tab light up like Vegas). Questions I&#39;ve been asked (or seen being asked): What does Node.js even do? I can&#39;t see any use-cases for it... What does Webpack do, over Browserify? Which framework should I learn? I&#39;m finding it increasingly hard to answer these questions. Maybe because I&#39;m jaded. Maybe because I&#39;ve recently got into iOS and see web development (read: the web community) as kinda petty. Maybe because, given the opportunity, I&#39;ve spent the last 4 years joining in endless discussions about which tool ~ library ~ framework ~ plugin to use. Joined in voraciously. Joined in for argument&#39;s sake. A pissy little hot-head. Blindly defending my choice of Thing™ because...I...wanted...to prove the other person wrong? Bah. I&#39;m tired of that crap, and I&#39;m halfway through my first watch of The Wire. Life priorities. 
Fast forward My go-to answer nowadays: &amp;quot;It depends&amp;quot;. My go-to answer in my head: &amp;quot;I really don&#39;t care&amp;quot;. A not-so-great answer to give someone who&#39;s cared enough to ask for your advice is &amp;quot;I really don&#39;t care&amp;quot;. It sounds apathetic. And apathy doesn&#39;t get you clients, or make you any friends. Luckily, I&#39;ve got like 4 friends, so I don&#39;t need any more, but I still like working with new clients (so hire me, honestly I&#39;ll be more chirpy when we chat in Slack). Feedback Giving feedback becomes difficult as well. My general philosophy (now, not before) is that personal preference trumps Best Practices™. I&#39;d much rather see someone produce functional (functional, not Functional Programming...gah!) code than attempt to conform to ever-changing, entirely subjective, community-driven patterns. Also, without sounding like a total turd, you can spot people who actually know their stuff vs. people who just reel off &amp;quot;that blog post I just read said this&amp;quot; from a mile off. Don&#39;t be that person. That person is dirt. And good god, please remember: your code still works even if Hacker News suddenly tells you you&#39;re doing it wrong. Or Facebook; Immutability guys, amirite? 4 minutes in and the author of this post is still yet to arrive at a tangible point No fear, dedicated reader, I probably won&#39;t reach one, but you&#39;ve made it this far? Er...sure, I&#39;ve used that. panics I used to be afraid of telling people that I hadn&#39;t used That Hawt New Thang™ on GitHub, or that I didn&#39;t design mobile-first (@beep is a swell guy). Now I don&#39;t care. I don&#39;t mean that in a Bart Simpson/John Bender eat-my-shorts kinda way. 
Like, I still give a damn, but now I&#39;m fine with my knowledge, I&#39;m confident in my skills and I can now look at the development landscape and quickly analyse whether something new is going to change the way I want to develop, or whether it&#39;s not worth my time (remember, there&#39;s The Wire to watch). This saves me a whole bunch of squirming and stalling when developing. I just don&#39;t get choice paralysis now, but I used to. Many projects ago, I would have reassured people that I knew what I was doing, subsequently scurrying off to Google and Stack Overflow and developing and praying through blind faith that I could deliver what I promised *. I feel different nowadays. It&#39;s probably because I&#39;ve got my core skills nailed, and if you can find the right client or team to work with, they&#39;ll be able to see that. I don&#39;t bother embellishing now, it&#39;s too much stress. *This can be a great way to earn money, as you can work on a variety of projects, and you learn rapidly by baptism of fire. But my word, be prepared to have a knot in your stomach the whole time because you might get found out So hotshot, what do you use? My &amp;quot;stack&amp;quot; that I use for web projects: MongoDB Node.js Backbone.js Handlebars Sass As if this guy uses MongoDB? Doesn&#39;t he know It Isn&#39;t Webscale™? For the projects I do, and the scale they demand, it&#39;s a-ok. I find it nice to work with, it hasn&#39;t caused me issues, it&#39;s got good support and I know it well. In fact, apply that justification to all the above. &amp;quot;Why don&#39;t you use Rails? It&#39;s better than Node.js for MVC...&amp;quot; &amp;quot;Cool?&amp;quot; I had someone recently comment &amp;quot;how is Backbone still relevant in 2015??&amp;quot;. Because dear internet friend, there&#39;s not been anything that&#39;s come out since 2010 (Backbone&#39;s release date) that deals with client-side application architecture better. 
Of course, why believe me over someone on Hacker News? You have no real reason to. You really do have more reason to believe the people at Facebook than me, they&#39;re way smarter (I&#39;ve seen the film). They know what they&#39;re talking about and React is really well put together. However, my opinion on React after I built a project with it: it&#39;s neat. And with that, I merrily skipped back to using Backbone. Rarely nowadays does something come along that&#39;s actually game-changing. People &amp;gt; product To amend my earlier go-to of &amp;quot;It depends&amp;quot;, I think what&#39;s more apt is &amp;quot;It depends on you&amp;quot;. If you have to ask the question, &amp;quot;Which framework should I learn?&amp;quot;, then, without being too blunt, the answer is: it doesn&#39;t matter. The very fact that you&#39;re asking that question means that you&#39;re not really going to gain much from one framework vs. the next. You&#39;ll get a boat load of experience from learning any of them. If you do value my opinion at this point, my advice is still to go learn Backbone. The phrase &amp;quot;learn Backbone&amp;quot; is a weird one, because there&#39;s not really much syntax to learn with Backbone, you&#39;re actually learning to &amp;quot;give structure to web applications&amp;quot; (from the site). So you&#39;re elevating your knowledge up a level by being able to build a client-side application. yay 4 u. Become good at one framework (Backbone, right?) and you&#39;ll find that all the others are trying to solve the same problems. However, with your newly found knowledge, you&#39;ll now know what the problems actually are. There&#39;s no worse feeling than not knowing what you don&#39;t know. Once you know you don&#39;t know something, you can act on it. To answer the question, &amp;quot;What does Node.js even do? 
I can’t see any use-cases for it&amp;quot; - the incredibly boring answer is; it does everything PHP does, it does everything Ruby does, and Python and so on...everything is the same, nothing is new, stop worrying about stuff yada yada, you get the picture. Node.js just does stuff a bit differently. It&#39;s a bit faster in places, it&#39;s a bit better in places, it&#39;s also a bit worse in places. And remember, in a roundabout way; we&#39;re all just working on CRUD apps. Now replace &amp;quot;Node.js&amp;quot; with &amp;quot;Go&amp;quot; in that last paragraph. Rinse and repeat. I actually don&#39;t know Sometimes, the best thing you can say is &amp;quot;I don&#39;t know&amp;quot;. I can say, from recent personal experience, that it&#39;s quite freeing to say that to someone and the world not come crashing down. It feels reassuring to put yourself (and your knowledge) out there, and for it to be ok that you don&#39;t know everything. Ben, wait, do you actually not write tests?? Essentially, this whole post was buttering you up to still like me after I confess that I&#39;ve never actually written any unit tests in my whole life as a developer. Shit that felt good. Next up... Look out for my next post on &amp;quot;Moving to Value-based Billing&amp;quot; and watch as I lose a whole load more future paid work... Also, if you manage to decipher any discernible points in this post, or if you cried, or if you put my face on a doll and punched it, please let me know.</content>
  </entry><entry>
    <title>Creating a dynamic search page for your Jekyll blog using JavaScript</title>
    <link href="https://benhowdle.im/creating-a-dynamic-search-page-for-your-jekyll-blog.html" />
    <updated>2014-11-11T00:00:00.000Z</updated>
    <id>https://benhowdle.im/creating-a-dynamic-search-page-for-your-jekyll-blog.html</id>
    <content type="text">Aim Provide a sane way for a user to search our static Jekyll blog and view a bookmarkable page containing the relevant results. For the eager-beavers among you, take a look at the Installation details at the bottom of the post. Intro &amp;quot;Transform your plain text into static websites and blogs.&amp;quot; Simply put, that is Jekyll. This post isn&#39;t an intro to Jekyll or even a sales pitch for Jekyll. The Jekyll site does a good enough job of that by itself. This post will, hopefully, provide a needed solution for people (power users?) trying to get more out of Jekyll. Well, more than it was intended for... Static with a side of dynamic please Gone are the days when a monolithic heap of PHP was needed to run a simple blog or website. Jekyll came along with a monolithic amount of Ruby but left it on your machine, and not clogging up your webserver. Templating logic, Markdown and HTML go in...a static site comes out. That static site (by &amp;quot;static site&amp;quot;, we simply mean HTML, CSS &amp;amp; JavaScript) gets dumped onto your webserver and you&#39;re away! All the folder structure is created for you, all the correct links to HTML files, etc...are generated by Jekyll. Dreamy. But wait. We kinda liked that little bit of the dynamic-ness that a nifty bit of WordPress PHP could provide. Oh hi... &amp;lt;?php $query = new WP_Query(); ?&amp;gt; Searching Searching. A common need for a blog. However, the very concept of &amp;quot;searching&amp;quot; means dynamic results. Query terms that we can not possibly foresee the user providing. Once our site is compiled and sent to the webserver, that&#39;s it. No backend logic to handle a search query. So we have to be smart about this. We&#39;ll have to utilise JavaScript, as that&#39;s the only real power we have in terms of client-side logic. We need to get input from the user. 
We can do this easily enough by outputting an HTML form element in our template, like so: &amp;lt;form action=&amp;quot;/search&amp;quot; method=&amp;quot;GET&amp;quot;&amp;gt; &amp;lt;label for=&amp;quot;query&amp;quot;&amp;gt;Search:&amp;lt;/label&amp;gt; &amp;lt;input type=&amp;quot;text&amp;quot; name=&amp;quot;query&amp;quot; placeholder=&amp;quot;Enter your search term&amp;quot;&amp;gt; &amp;lt;/form&amp;gt; This is just a standard HTML form which, when submitted, sends the value of the &amp;quot;query&amp;quot; input, in the URL, to the &amp;quot;/search&amp;quot; page. You can place this anywhere, maybe the header? Next up, our search.html page. We need a place to output our search results, which we&#39;ll place in our search.html page: &amp;lt;div id=&amp;quot;results&amp;quot;&amp;gt; &amp;lt;/div&amp;gt; JavaScript You might be wondering why we&#39;re outputting an empty div in the HTML. Well, like I said earlier, we can&#39;t know what the user will be searching for, so we have to work with what we&#39;re given. Which unfortunately, isn&#39;t much. So the idea is, we&#39;ll take the search query term from the URL, grab relevant results from a JavaScript array I&#39;m about to show you, and populate our div above with links to the correct posts. As Jekyll compiles our site for us, it allows us to place special Liquid templating logic in any file, and it&#39;ll automatically process them when it parses &amp;amp; compiles your site. Handy. So we can quite easily create a JavaScript array containing all our site&#39;s posts. Again, handy. 
Place this in your search.html, right below the #results div: &amp;lt;script&amp;gt; var JEKYLL_POSTS = []; {% for post in site.posts %} JEKYLL_POSTS.push({ title: &amp;quot;{{ post.title }}&amp;quot;, link: &amp;quot;{{ post.url | prepend: site.baseurl }}&amp;quot;, content: &amp;quot;{{ post.content | strip_newlines | strip_html }}&amp;quot; }); {% endfor %} &amp;lt;/script&amp;gt; Apart from the fact that we&#39;re generating JavaScript from Liquid templating through Ruby (gross), it allows us to create quite a sane structure to work with. We now have a JavaScript array which contains an object for each of our site&#39;s posts. Next up, we need to instantiate our JavaScript to actually perform the search on the above array. Place this below the script tag above: &amp;lt;script src=&amp;quot;/js/search.js&amp;quot;&amp;gt;&amp;lt;/script&amp;gt; &amp;lt;script&amp;gt; new jekyllSearch({ selector: &amp;quot;#results&amp;quot;, properties: [&amp;quot;title&amp;quot;, &amp;quot;content&amp;quot;], noResultsMessage: &amp;quot;Sorry, no results were found&amp;quot; }); &amp;lt;/script&amp;gt; Configuration The file search.js is JavaScript code you&#39;ll be able to drop in to your Jekyll project and not really have to amend at all (unless you&#39;re confident you know what you&#39;re doing of course!). We tell the search plugin where to put our results with the selector option. We can then also tell it which parts of the post to search in, ie. you may want to limit searching to the post titles or, let the user search post content as well. If you don&#39;t provide this properties option, the plugin will only search the &amp;quot;title&amp;quot; property of each post. We can also pass in a custom message for when there were no results found from the user&#39;s search. Voilà Now when the user submits the search form, they are taken to a search page, which will list the relevant results and let them click-through to the posts! 
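Under the hood, the matching that search.js performs could be sketched like this (a hypothetical simplification with illustrative helper names - the real plugin on GitHub differs in detail):

```javascript
// Given the JEKYLL_POSTS array built by the Liquid loop, a search term
// and the properties to look in, return only the posts that match.
function findMatches(posts, term, properties) {
  term = term.toLowerCase();
  return posts.filter(function (post) {
    return properties.some(function (prop) {
      return (post[prop] || '').toLowerCase().indexOf(term) !== -1;
    });
  });
}

// The term itself comes from the URL; with a single "?query=..."
// parameter, this is enough:
function termFromUrl(search) {
  return decodeURIComponent((search.split('=')[1] || '').replace(/\+/g, ' '));
}
```

The plugin would then build a link for each match and append it to the configured results container.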
A splash of dynamic-ness for your static Jekyll site. The output from the plugin (for each search result) looks like this: &amp;lt;div class=&amp;quot;search-result&amp;quot;&amp;gt; &amp;lt;h2&amp;gt; &amp;lt;a href=&amp;quot;/jekyll-search-demo/jekyll/update/2014/11/11/php-for-winners.html&amp;quot;&amp;gt;PHP for winners&amp;lt;/a&amp;gt; &amp;lt;/h2&amp;gt; &amp;lt;/div&amp;gt; Installation details I&#39;ve set up a sample demo of this functionality, which can be viewed online. The code for the demo (and the code you&#39;ll need for your site) can be found on GitHub. The important files _includes/header.html - This is where we output the search form search.html - This is where we output the posts and include the JavaScript js/search.js - This is the search JavaScript plugin itself. Notes The JavaScript plugin will work out of the box. However... You may want to customise the HTML output. You can do this by amending the JavaScript code, in particular the &amp;quot;outputResults&amp;quot; function which creates the HTML from the found results. Good things We now have a bookmarkable, dynamic search page and users can search our static content. Yay JavaScript! Undesirable things It relies on JavaScript being enabled. I&#39;ll let your conscience deal with that one. The page also isn&#39;t SEO-able, as the results are generated by JavaScript on page load. With a small blog, the amount of data we&#39;ll be embedding into search.html will be minor and nothing to worry about. However Once your blog starts to grow, you may be concerned with the amount of data that&#39;s compiled. My advice would be to drop the content key from the compilation. 
If you wanted to do that, your new loop would look like this: &amp;lt;script&amp;gt; var JEKYLL_POSTS = []; {% for post in site.posts %} JEKYLL_POSTS.push({ title: &amp;quot;{{ post.title }}&amp;quot;, link: &amp;quot;{{ post.url | prepend: site.baseurl }}&amp;quot; }); {% endfor %} &amp;lt;/script&amp;gt; You&#39;d then end up with a lightweight data structure for searching. Obviously this means that we then can&#39;t use the content for searching purposes, so if you do pass in content in the properties argument for the plugin, it&#39;ll simply be ignored. To clarify, I&#39;d really only resort to doing this if you noticed the perceived page load speed decrease significantly. This would be a premature optimisation if employed too early.</content>
  </entry><entry>
    <title>iOS for Web Developers: Building Permeate</title>
    <link href="https://benhowdle.im/ios-for-web-developers-building-permeate.html" />
    <updated>2014-04-25T00:00:00.000Z</updated>
    <id>https://benhowdle.im/ios-for-web-developers-building-permeate.html</id>
    <content type="text">Permeate iOS app → Intro I&#39;ve been a web developer for 4 and a bit years now, primarily using JavaScript &amp;amp; Node.js to build web apps. This was my first foray into the native world. I thought I&#39;d share my experiences of approaching iOS from a web background. I certainly wouldn&#39;t class this as a tutorial of sorts, as my knowledge is going to be minimal after just one app, but if you do learn something as a result of reading this, then that&#39;s swell. I&#39;ve been in the web world for a little while now, so I know the people to talk to, the sites to visit and the blogs to read. However, I feel like I&#39;m starting all over again with iOS, so if any iOS people do read this, your wisdom and advice would be greatly received. Resources I always see posts where the resources are at the end of the post, but I thought I&#39;d save you the trouble of listening to me waffle on and just put them first. So, without further ado (some are quite specific to Permeate&#39;s functionality, some are more generic iOS ones): Tutorials Loading and Caching Images Asynchronously on UITableView iOS 6 and the Social Framework: Twitter Requests Blocks and Table View Cells on iOS API requests with SLRequest Design Then Code: Building iOS Apps From Scratch Sites/posts with great resources Treehouse (ignore the dodgy marketing, the content is ace) Code Tutsplus On learning Ruby Motion versus Objective C &amp;amp; Xcode Ray Wenderlich Tools/cheatsheets/code RGB Syntax Conversion The iOS 7 Design Cheat Sheet App Icon Template Standard iPhone Element Sizes (Width x Height) iOS Code Examples Objective-C cheatsheet SSToolkit I tend to keep my Kippt iOS list up to date with anything I find around the web, so feel free to keep an eye there. Why Permeate? Permeate, apart from iHi.im, is the simplest product I&#39;ve made. It has two views. A &amp;quot;log in/connect to Twitter&amp;quot; view and a timeline view. 
A lot of people commented on the web app and said it would work better as an app, which I do admit, it does. A lot of people want the &amp;quot;web to win&amp;quot; in the colossal and endless battle between web and native. What are we trying to win? I&#39;m primarily a web developer and enjoy developing for the web a lot. The ecosystem is vast and things move very fast. The community is thriving and information is everywhere. However, it&#39;s fragmented. There are 3 or 4 different ways to do stuff on the web, depending on which forum/blog post you read. It seems that the web community would rather build a new tool or language to achieve a goal or fix a problem than massage current tools and nurture existing technologies for the better. The native world seems to move at a slower, more thorough rate. The quality is higher, as the barrier to entry is higher. It seems that the web community scatters itself into cliques and wages wars in GitHub issue threads and Hacker News comments. I noticed a lot more singularity and unison in iOS. Maybe that&#39;s because Apple control everything in their &amp;quot;walled-garden&amp;quot;, but hey, it&#39;s a nice garden. Some things work better as apps that people can download. I like that. There&#39;s no war. No battle. No-one has to subscribe to the One True Way™. Native was personally a breath of fresh air after being in the messy landscape of the web for a while. So, onwards. Tooling I&#39;ve never been a massive fan of IDEs for development; I think that stems from bad memories of Visual Studio for Visual Basic and Netbeans for Java; huge, bloated and slow. Overkill for web development if you ask me. Hence I stick with glorified text editors (Sublime Text 2) for stuff like Node and JavaScript. However, this is Apple. And whilst iTunes is one of the most hormonal programs I&#39;ve ever used, Xcode really is a coding delight. 
People often joke about Objective-C having verbose syntax, which it does (I&#39;ll delve into this further down), but using Xcode&#39;s autocomplete feature, I don&#39;t think I typed out one method name in the entire app. Storyboards let you visually lay out the different screens in your app and how the user might get between them. Using Storyboards was interesting, but frustrating at times. I think as a web developer, you subconsciously steer away from &amp;quot;visual&amp;quot; tools, maybe harking back to WYSIWYG editors of years gone by. You don&#39;t have to use Storyboards, they&#39;re completely optional. You can do everything programmatically. For example, in Permeate, you can click a link in a tweet and it will open that URL in an in-app browser (WebView). I chose to create this new WebView in the code and not in the Storyboard, as it was used temporarily and discarded fairly quickly by the user. Also, its main content was the website the user was opening, so it was less important for me to design any UI for it. Overall, Xcode stood up strong and never crashed once. I still wouldn&#39;t use an IDE for web stuff, but for native, it was a delight. Code Wow wow wow Objective-C. You wrecked my tiny little JavaScript mind. I tried 4 times to learn Objective-C and only now am I able to code in it like a capable human. It&#39;s so different to anything I&#39;ve ever used, but I kind of enjoyed (in a masochistic way) the challenge and brain-hurt of its verbosity and strange ass-backwards syntax. But y&#39;know, if you wanna make iPhone apps, please don&#39;t shortcut the learning and use RubyMotion or Phonegap, it&#39;s not worth it. Take some time to learn Objective-C. It&#39;ll pay off eventually. Until that moment, you can embarrass yourself on StackOverflow with me. The one thing I would say, if you&#39;re struggling to grasp it, translate another app in another language into Objective-C and re-create something already done. 
I know we all learn differently, but personally, I find it so hard to relate to something new if I&#39;m only looking at theory and abstract concepts and not putting something into practice as soon as I can. Using Permeate&#39;s web app algorithm as my base, I could just convert this line-by-line into Objective-C and really draw parallels between what I was most comfortable with and what I was learning. As a quick comparison: Objective-C _tweets = [sortedArray subarrayWithRange:NSMakeRange(0, 10)]; Node.js tweets = tweets.slice(0, 10); So you can see, even though the Objective-C is a bit more verbose, it&#39;s pretty clear what it&#39;s doing when you have the JavaScript right next to it. If anyone wants to see the full Objective-C translation of the above Node.js code, you can... here. Plugins Package management for the web is not solved. However, very recently I&#39;ve started using Browserify for dependency management and npm as my package management. It&#39;s a dream. For native, save yourself the headache of manually downloading files from GitHub repos and use Cocoapods. Not really much else to say on that, but it worked really nicely and let me include a library into my application in ~30 seconds. Design A mistake I made with Permeate was not designing it in PS/Sketch first. Nothing really to do with the &amp;quot;designing in the browser&amp;quot; flame-war that happens every 3 months in the web community, I just found myself using Interface Builder (visual/behavioural manipulation in Xcode; fonts, colour, etc...) very clumsily and added unneeded friction to my design thinking and process. The next app I&#39;ll make, I&#39;ll design as much of the UI as I can in Sketch and then move onto code. For the same reason I like to design web UI elements in Photoshop/Sketch before I use CSS, is that when I design, I don&#39;t want to be thinking about CSS vendor prefixes or the box model, that crap clouds one&#39;s thinking. 
Same thing happened when I was trying to design Permeate at the same time as working out how to change a font size in Interface Builder. Not conducive to progress. I found designing for iOS, in general, a lot easier than designing for the web (not to say iOS design is inherently easier). There are a few things that you can achieve in one line of CSS that take quite a bit of code to replicate with Objective-C. However, everyone knows that responsive web design is hard. Not having to deal with that on iOS was great. Assets become a lot easier to manage. Whereas on the web, people seem to want to tattoo #perfmatters across their belly, in iOS that kinda falls by the wayside. It&#39;s nice. Loading times are obviously still important for your app, but they&#39;re not so crucial as on the web, where people scorn you for making a web page &amp;gt; 1MB or taking &amp;gt; 1 second to render your page. Apple encourage you to provide a launch image which is displayed to the user as your app loads up (this image is usually a screenshot of your app in its initial, blank state - a concept which the web community have only just started to employ). In future, I need to spend more time researching iOS UI patterns, as a lot of Permeate&#39;s design was driven by my familiarity with the web and its design conventions. Deployment Holy hell Apple. Provisioning profiles. Code signing. Certificates. A really naive thing to say is that it was just way too complex, but as I&#39;m naive, I&#39;m gonna say it. It really could do with being a lot more seamless. There was a lot of work to do with assets as well. Apple need screenshots and icons at different sizes and resolutions for different things. I guess if I was a bit more design savvy, then this might not have taken me so long, but oh boy, it did. Updating The long wait to update my app for people is gonna hurt...a lot. The App Store isn&#39;t Heroku (duh). What I mean is, the web is instant. 
One line deploys via the command line and every person from that point forward sees your brand new code. Entering a new world of OTA updates is going to be painful, but I guess that sending an update to someone&#39;s phone via the App Store feels a lot more personal. The updates become more meaningful and substantial. Final notes I started learning iOS properly in January on Treehouse and by reading various tutorials, but I took a long while to build up enough confidence to dive into Xcode and start building something. I was very aware that I&#39;d make mistakes and knock my confidence before really even starting. However, once I did start, I&#39;d say Permeate took me collectively around 24 hours. This was spread over a few evenings and weekends as I was still doing my usual freelance contracts in the day. The app is totally free. Which goes against my ethos of product building, ie. I&#39;d really prefer to work on chargeable products. But I figured, this is my first proper app, it was a learning experience, and I don&#39;t have to provide any real support for it. So I didn&#39;t want the added pressure of making it &amp;quot;premium&amp;quot; enough for people to want to buy it. It has bugs, there are things I want to add or improve about Permeate, but I haven&#39;t got paying customers complaining to me about them. Just a few jokes asking for an Android version(?). So I hope this mind-dump was in some way useful for people. However, if you just Pocket™ it and never read it, that&#39;s also ok, I understand. If you do iOS and spot any errors in my process, or things that could be improved then please do holler at me on Twitter: @benhowdle. I&#39;ll keep this post updated as other useful things come to mind...</content>
  </entry><entry>
    <title>Building a Backendless Web App in 2 Hours</title>
    <link href="https://benhowdle.im/building-a-backendless-web-app-in-two-hours.html" />
    <updated>2014-03-02T00:00:00.000Z</updated>
    <id>https://benhowdle.im/building-a-backendless-web-app-in-two-hours.html</id>
    <content type="text">iHi.im → Preamble Jack Smith and I made a web app a couple of years ago called iHi. It ran on a PHP/MySQL backend and some Google Maps driven JavaScript on the client. The user would hit the site, the PHP would allocate them a unique URL (stored in the database) and they could share the URL with friends, essentially sharing their location. The hosting expired on that site, so I pulled it offline and only just sat down last week to rewrite it with no database or backend to speak of. I hope to show you a couple of tactics for writing backend-less apps below. Here goes. Templating The app itself is only 3 &amp;quot;views&amp;quot;. A start view, a share view and a viewing view. Templating is a really fast, usable way to bring HTML rendering and control into your application. No more building up HTML strings directly in the JavaScript, but maintaining a separation of concerns so we can work on our HTML separately from our JavaScript logic/app code. Stateful URLs JavaScript apps which simply replace content on the page when the user switches context are disappointing. The web is built on URLs, so let&#39;s keep that going in our application. A nice way to manage state in a single page JavaScript app is by the URL. Because we have no server-side logic, HTML5 PushState is out unfortunately, because as soon as the user refreshes the page, they&#39;re going to hit the server, and the server won&#39;t have any knowledge about the URLs we&#39;re going to set up. So our only option for routing is hash-based navigation, and we can listen for hash changes in JavaScript like so: window.addEventListener(&#39;hashchange&#39;, router.init); Every time we (programmatically) or the user (manually) changes the URL with a different hash value, we can handle that event and act upon it. The action, in this app&#39;s case, would be to process a new Google Maps instance and render the appropriate template into our main content area. 
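Pieced together, that hash-based routing could look something like this (a minimal sketch - the route names and handlers are illustrative, not iHi&#39;s actual source):

```javascript
// Map of route names to handler functions
var routes = {};

// "#location,data={...}" - the route name is the part before the comma;
// an empty hash falls back to the start view
function routeFromHash(hash) {
  return hash.slice(1).split(',')[0] || 'start';
}

var router = {
  add: function (name, handler) {
    routes[name] = handler;
  },
  init: function () {
    var handler = routes[routeFromHash(window.location.hash)];
    if (handler) {
      handler(window.location.hash);
    }
  }
};

router.add('start', function () { /* render the start view */ });
router.add('location', function (hash) { /* parse the data, render the map */ });

// Re-route on every hash change, and once on initial load
if (typeof window !== 'undefined') {
  window.addEventListener('hashchange', router.init);
  window.addEventListener('load', router.init);
}
```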
Storing data...in the URL!? So, without a database to hold our data, what can we do? Well, we can encode it and store it in the URL, then when another user hits the link, we can parse the data and show them the relevant map location. Because we have a &amp;quot;#&amp;quot; symbol in our URLs, I used the Bit.ly API to shorten our link for easy sharing purposes (I&#39;ll get onto that in a minute). We can take the first user&#39;s latitude and longitude from Google Maps/HTML5 Geolocation and pass it to Bit.ly like so: var linkVal = &#39;http://&#39; + window.location.host + &#39;/%23location,data=&#39; + JSON.stringify({ longitude: data.longitude, latitude: data.latitude }); This will be the link that the user will come into the app on (maybe from Twitter, or an email), which might look like: http://ihi.im/#location,data={&amp;quot;longitude&amp;quot;:-1.9388282,&amp;quot;latitude&amp;quot;:52.079217} Which we can easily parse to work out where they are (to plot them on the map from Google), like so: var path = window.location.hash.slice(1), data = JSON.parse(decodeURIComponent(path.split(&#39;=&#39;)[1])); A small gotcha with this method: without the decodeURIComponent method, iOS was converting the object in the URL to entities which weren&#39;t parse-able by JSON.parse(). Desktop browsers didn&#39;t seem to encode it, so we were fine, but the above code was required to make it work on iOS. From this point, we have a lovely JavaScript object, ready to do what the hell we want with, and we haven&#39;t spoken to any databases! Bit.ly shortening API Like I mentioned earlier, I used Bit.ly for easy URL sharing. 
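That encode-and-parse scheme amounts to a round trip, which can be sketched with a couple of helpers (hypothetical names, not from iHi&#39;s source):

```javascript
// Build the shareable link; "%23" is an encoded "#", so the hash
// survives being passed through a shortener like Bit.ly
function encodeLocation(host, data) {
  return 'http://' + host + '/%23location,data=' + JSON.stringify(data);
}

// On arrival, take everything after the "=" in the hash and parse it
// back into a plain object
function parseLocationHash(hash) {
  var path = hash.slice(1);
  return JSON.parse(decodeURIComponent(path.split('=')[1]));
}
```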
The whole code to shorten a link through Bit.ly looks something like this: var linkVal = &#39;http://&#39; + window.location.host + &#39;/%23location,data=&#39; + JSON.stringify({ longitude: data.longitude, latitude: data.latitude }); var url = &amp;quot;http://api.bit.ly/v3/shorten?login=benhowdle89&amp;amp;apiKey=R_01f556645116f8620103c31e48d7f2a2&amp;amp;longUrl=&amp;quot; + linkVal + &amp;quot;&amp;amp;format=json&amp;quot;, xhr = new XMLHttpRequest(); xhr.open(&#39;GET&#39;, url); xhr.addEventListener(&#39;readystatechange&#39;, function() { if (xhr.readyState == 4 &amp;amp;&amp;amp; xhr.status == 200) { var json = JSON.parse(xhr.responseText), bitlyLink = json.data.url; } }); xhr.send(); So we can quite easily use this new URL to populate a tweet intent link or a mailto link. Misc I hope you&#39;ve seen a few tricks and tips to get you going building backend-less apps. They&#39;re a fun challenge and they make you think in totally new ways, ie. the techniques aren&#39;t quite as mature and proven as those for server-side rendered applications. A big ol&#39; disclaimer (you&#39;ve probably been wondering all the way down to here): of course iHi does run on a server (it&#39;s hosted on GitHub Pages), but by &amp;quot;backendless&amp;quot;, I mean that none of the logic is on the server and we don&#39;t use any databases; client-side techniques only around here. Quite a few techniques above were inspired by Sacha Greif from his Side Project slides, so shout out to that. Go play with the source on GitHub and play with the app itself - iHi.im. Oh, and the &amp;quot;2 hours&amp;quot; part, although slightly link-baiting, is completely doable...just use some/all of the techniques above!</content>
  </entry><entry>
    <title>Easy Peasy Equal Heights</title>
    <link href="https://benhowdle.im/easy-peasy-equal-heights.html" />
    <updated>2014-01-29T00:00:00.000Z</updated>
    <id>https://benhowdle.im/easy-peasy-equal-heights.html</id>
    <content type="text">Making elements the same height Sometimes you might have a need to grab a few elements, and then make them the same height. In an ideal world, you&#39;d use CSS, but this isn&#39;t an ideal world, so we must use JavaScript! So drop this code in and you can achieve equal heights, really easily: &amp;lt;script type=&amp;quot;text/javascript&amp;quot;&amp;gt; function sameHeights(selector) { var selector = selector || &#39;[data-key=&amp;quot;sameHeights&amp;quot;]&#39;, query = document.querySelectorAll(selector), elements = query.length, max = 0; if (elements) { while (elements--) { var element = query[elements]; if (element.clientHeight &amp;gt; max) { max = element.clientHeight; } } elements = query.length; while (elements--) { var element = query[elements]; element.style.height = max + &#39;px&#39;; } } } &amp;lt;/script&amp;gt; Stick this code anywhere on the page (or in a JavaScript file) to include it. To actually run it you can use the following markup: &amp;lt;div class=&amp;quot;sameHeights&amp;quot;&amp;gt; &amp;lt;div data-key=&amp;quot;sameHeights&amp;quot;&amp;gt;&amp;lt;/div&amp;gt; &amp;lt;div data-key=&amp;quot;sameHeights&amp;quot;&amp;gt;&amp;lt;/div&amp;gt; &amp;lt;div data-key=&amp;quot;sameHeights&amp;quot;&amp;gt;&amp;lt;/div&amp;gt; &amp;lt;div data-key=&amp;quot;sameHeights&amp;quot;&amp;gt;&amp;lt;/div&amp;gt; &amp;lt;div data-key=&amp;quot;sameHeights&amp;quot;&amp;gt;&amp;lt;/div&amp;gt; &amp;lt;/div&amp;gt; (the only important part is actually putting data-key=&amp;quot;sameHeights&amp;quot; on the elements you want the same height) Then to initialise the code, drop this in below where you included the above JavaScript: &amp;lt;script type=&amp;quot;text/javascript&amp;quot;&amp;gt; if (&#39;addEventListener&#39; in window) { window.addEventListener(&#39;resize&#39;, function(){ sameHeights(); }); window.addEventListener(&#39;load&#39;, function(){ sameHeights(); }); } &amp;lt;/script&amp;gt; There we have it, the 
JavaScript will make all your elements equal heights! You may have noticed we also initialise the plugin on window resizing as well, which means this&#39;ll slot nicely into your responsive designs... What if I want multiple sets on a page? Say you have the following HTML: &amp;lt;div class=&amp;quot;sameHeights&amp;quot;&amp;gt; &amp;lt;div data-key=&amp;quot;sameHeights&amp;quot;&amp;gt;&amp;lt;/div&amp;gt; &amp;lt;div data-key=&amp;quot;sameHeights&amp;quot;&amp;gt;&amp;lt;/div&amp;gt; &amp;lt;div data-key=&amp;quot;sameHeights&amp;quot;&amp;gt;&amp;lt;/div&amp;gt; &amp;lt;div data-key=&amp;quot;sameHeights&amp;quot;&amp;gt;&amp;lt;/div&amp;gt; &amp;lt;div data-key=&amp;quot;sameHeights&amp;quot;&amp;gt;&amp;lt;/div&amp;gt; &amp;lt;/div&amp;gt; &amp;lt;div class=&amp;quot;otherSameHeights&amp;quot;&amp;gt; &amp;lt;div data-key=&amp;quot;otherSameHeights&amp;quot;&amp;gt;&amp;lt;/div&amp;gt; &amp;lt;div data-key=&amp;quot;otherSameHeights&amp;quot;&amp;gt;&amp;lt;/div&amp;gt; &amp;lt;div data-key=&amp;quot;otherSameHeights&amp;quot;&amp;gt;&amp;lt;/div&amp;gt; &amp;lt;div data-key=&amp;quot;otherSameHeights&amp;quot;&amp;gt;&amp;lt;/div&amp;gt; &amp;lt;div data-key=&amp;quot;otherSameHeights&amp;quot;&amp;gt;&amp;lt;/div&amp;gt; &amp;lt;/div&amp;gt; to make the JavaScript work on both these groups independently, simply drop in this JS: &amp;lt;script type=&amp;quot;text/javascript&amp;quot;&amp;gt; if (&#39;addEventListener&#39; in window) { // first group window.addEventListener(&#39;resize&#39;, function(){ sameHeights(&#39;[data-key=&amp;quot;sameHeights&amp;quot;]&#39;); }); window.addEventListener(&#39;load&#39;, function(){ sameHeights(&#39;[data-key=&amp;quot;sameHeights&amp;quot;]&#39;); }); // second group window.addEventListener(&#39;resize&#39;, function(){ sameHeights(&#39;[data-key=&amp;quot;otherSameHeights&amp;quot;]&#39;); }); window.addEventListener(&#39;load&#39;, function(){ 
sameHeights(&#39;[data-key=&amp;quot;otherSameHeights&amp;quot;]&#39;); }); } &amp;lt;/script&amp;gt; and there we have it, multiple groups acted on and equalised. What about the browsers, will somebody please think of the browsers? This works in IE9 and above. If it&#39;s below IE9, then the plugin just doesn&#39;t run; it won&#39;t error or anything awful like that.</content>
  </entry><entry>
    <title>News vs. Information Seeking</title>
    <link href="https://benhowdle.im/news-vs-information-seeking.html" />
    <updated>2014-01-03T00:00:00.000Z</updated>
    <id>https://benhowdle.im/news-vs-information-seeking.html</id>
    <content type="text">A constant barrage of &amp;quot;new things&amp;quot; When I started back in 2009, I remember having a handful of bookmarks that I used to scour for all the information I could lay my hands on... Sites like Nettuts and Line25 were big enough buckets of information on development and design; it felt like those were all a beginner really needed. Then I joined Twitter and started following Smashing Mag, HackerNews Popular and CSS-Tricks and many more, each account/site/service being amazing at what it did; their purpose was to share information and thus help people. However, like many developers and designers, I&#39;m a little like a magpie in the way that I consume information; if it&#39;s new and shiny, I feel compelled to check it out. And boy, does the information come in droves. If I were to check the last 200 tweets in my Twitter timeline, I imagine I&#39;d find 5 JavaScript libraries I&#39;m not using or 3 CSS naming conventions I haven&#39;t adopted. Jeez. Selective learning I&#39;ve never been one for learning things I don&#39;t need to know. I&#39;m not saying that that is what University is like, but I&#39;ve spoken to a few people who&#39;ve gone to University and they&#39;ve said that all the relevant stuff they&#39;ve learned has been through their own information seeking. If someone asked me to do something or if I needed to implement something, I&#39;d go learn it! Some people learn better by absorbing a huge amount of information upfront and then applying the relevant parts to the problem they&#39;re trying to solve. I&#39;m the opposite: I&#39;d rather be presented with the problem and pick out the relevant bits of information and ignore the rest. Works for me, but it might not be for everyone! Do I even have a problem? 
This is where I arrive at the point and &amp;quot;skip to the end&amp;quot;, but I see a lot of people (including myself) tweeting/facebooking/myspacing &amp;quot;should I be using Grunt?&amp;quot;, &amp;quot;Do I need to learn Less or Sass?&amp;quot; or &amp;quot;Hang on, do I need to host my site on GitHub now with Jekyll??&amp;quot;. If you&#39;re like me, when you see new libraries and techniques being tweeted out by influential outfits, there&#39;s a slight feeling of guilt at having a) not heard of said library/technique or b) not used it yet. My worry is that people might be encouraged and pushed to believe that they have a problem, when they very well may not. I sometimes long for the days when I knew very little and had to go and seek each piece of information out specifically to solve the problem I was having. I&#39;m not saying I don&#39;t still do that, but I find it immensely distracting when I scan Twitter, or worse, the hell-hole that is HackerNews, because I&#39;m genuinely not sure if these &amp;quot;new things&amp;quot; are just offering a solution to a problem I don&#39;t have or whether they can actually help me. This isn&#39;t some grand New Years resolution post that I&#39;m going to cut out the noise and go all Amish on the Internet, but hopefully just a reminder to be selective in what you feel you need to learn...</content>
  </entry><entry>
    <title>Hosting your Blog or Portfolio on GitHub</title>
    <link href="https://benhowdle.im/hosting-your-blog-or-portfolio-on-github.html" />
    <updated>2013-11-21T00:00:00.000Z</updated>
    <id>https://benhowdle.im/hosting-your-blog-or-portfolio-on-github.html</id>
    <content type="text">Everyone and their mother is now telling you to &amp;quot;stick your site on GitHub&amp;quot;, but if you don&#39;t know what a GitHub is or the idea of forking, pushing and committing freaks the hell out of you, then I hope this short guide will at least help you overcome those barriers and come to realise that it&#39;s a dream workflow to have your site hosted on GitHub and all the benefits that come packaged with it. Why the sudden increase in popularity? Well, as GitHub&#39;s own popularity has grown and it has found itself more deeply ingrained in developers&#39; workflows, more and more projects are going up on GitHub and therefore more project/demo pages are being hosted on GitHub. Naturally, as more of your stuff is on GitHub and your workflow is more finely tuned, the next step is hosting your blog or portfolio on GitHub using the same workflow. Benefits: It&#39;s free A lot of people use it, so there&#39;s lots of help You can optionally use Markdown, which is ace You can edit and re-upload your files directly on the site (GitHub.com) It&#39;s free Enough talk, let&#39;s get going… Create your repository If you&#39;re not already signed up to GitHub, head over to GitHub.com and sign up. Otherwise, head over and create a repository. Make the name the following: &amp;quot;[yourGitHubUsername].github.com&amp;quot;, so mine would be &amp;quot;benhowdle89.github.com&amp;quot;. Mark it as &amp;quot;Public&amp;quot; (it is by default). Then just hit &amp;quot;Create Repository&amp;quot;. Editing and uploading files The next step is to put files up onto GitHub from your local machine. If you want to/are comfortable with using Git on the command line, then by all means, go for it. But if you want to use a GUI, then GitHub provide an excellent Mac App for you to download. 
Once downloaded, you should be able to sign in and see a similar pane to the one below… You should then see all of your GitHub repositories on the right-hand side… Once you hit the &amp;quot;Clone to Computer&amp;quot; button, it should ask you where you want to save the folder on your machine and then take you to a similar view to the one below… Any file that you change in the saved folder is going to show up here, ready to be uploaded to GitHub. Pressing &amp;quot;Commit &amp;amp; Sync&amp;quot; should do just that, upload to GitHub (and therefore, put the changes live on your portfolio). To make it easy for you, I created a really simple, one page, HTML portfolio on GitHub, which can be Viewed or Downloaded. Feel free to copy all the downloaded files into your local folder and upload them to GitHub as a quick start exercise. You should now be able to visit [yourGitHubUsername].github.com and see your shiny new portfolio! You can see mine at http://benhowdle89.github.com, it will redirect to http://benhowdle.im but I&#39;ll explain why and how you can do that in a few moments… You mentioned a blog? I created a second repository for you to View and Download. This blog is powered by Jekyll, a Ruby-backed static site generator, which is really great for smaller sites and blogs. However, using it is a bit beyond the scope of this tutorial, but the docs on the site are brilliant and I wrote a Jekyll In 15 Minutes post which gets you going with it really quickly! Custom domains You&#39;ll notice that the blog you&#39;re reading is located at http://benhowdle.im, which is hosted on GitHub. There are two things you need to do to set this up. Firstly, you need to create a file in your local site folder called &amp;quot;CNAME&amp;quot; (very important it&#39;s uppercase, and note the lack of file extension). Place this in the root of your folder and put your custom domain name in the file, ie. like mine, then upload as you normally would. 
Secondly (and this might vary depending on who you registered your domain name with), you need to create an A record pointing to GitHub&#39;s IP address (204.232.175.78), like so: Once that&#39;s done and DNS propagation is complete (can take years…not years, but it feels like years), you should be able to visit your domain and see your GitHub hosted portfolio! Did I miss anything? I hope this guide will help you out with hosting your portfolio on GitHub. Remember, you&#39;re just creating a set of HTML and CSS files, like you&#39;re used to, and uploading them to the internet. For free. You just have to get a bit comfortable with some of the lingo used, ie. repositories and committing, etc… If I have missed a step somewhere along the way or you get stuck, please do drop a comment below or tweet me.</content>
  </entry><entry>
    <title>Rewriting History with Backbone.js</title>
    <link href="https://benhowdle.im/rewriting-history-with-backbone-js.html" />
    <updated>2013-11-13T00:00:00.000Z</updated>
    <id>https://benhowdle.im/rewriting-history-with-backbone-js.html</id>
    <content type="text">Controlling navigation in a large client-side application As more and more applications are moving to the browser, we&#39;re now finding ourselves having to re-solve some already-solved paradigms, ie. navigation. Sending the user to a different URL is no problem; whether you&#39;re using hash-based navigation or HTML5 PushState, it&#39;s probably going to be handled by your MV* framework of choice. However… The real fun happens when you want to start doing stuff like &amp;quot;in-app back buttons&amp;quot;. We began this venture by simply using window.history.back() as a blanket solution. How naive we were. For the most part, it works. But like all blanket solutions, you&#39;ll come unstuck at some point, because with large apps, you end up with a lot of custom requirements. We found that our main issue with a blanket window.history.back() solution was that we didn&#39;t always want the user to be navigated back historically, but rather hierarchically. So going from &amp;quot;items/123&amp;quot; back to &amp;quot;items&amp;quot; is a hierarchical manoeuvre, but it could also be a historical one. So we needed more granular control of sending the user back to previous URLs. And, as is often the case, some wheels haven&#39;t been invented yet. So, what I&#39;m showing you here isn&#39;t revolutionary, but is a strategy we&#39;ve implemented at KashFlow and I wanted to share it with you all. I should point out that, because we&#39;re using Backbone.js here, this solution will be largely applicable to Backbone.js apps. 
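Stripped of the Backbone specifics for a moment, the core idea is a history stack that we push visited routes onto and pop from when the user presses an in-app back button. A framework-free sketch (route names illustrative):

```javascript
// Framework-free sketch of an in-app history stack.
function RouteHistory() {
  this.stack = [];
}

// Record a visited route. On a direct hit (empty history),
// seed the stack hierarchically: 'quotes/123' becomes
// ['quotes', 'quotes/123'].
RouteHistory.prototype.store = function (fragment) {
  if (this.stack.length === 0) {
    var stack = this.stack;
    var parts = fragment.split('/');
    parts.forEach(function (part, i) {
      stack.push(parts.slice(0, i + 1).join('/'));
    });
  } else {
    this.stack.push(fragment);
  }
};

// Pop the current route and return the one before it,
// or null when there is nowhere left to go back to.
RouteHistory.prototype.previous = function () {
  if (this.stack.length > 1) {
    this.stack.pop();
    return this.stack[this.stack.length - 1];
  }
  return null;
};
```

The Backbone version below is the same shape, with the stack living on the Router and the pop wired to navigate().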
Tracking the user&#39;s movements We start with extending and enhancing Backbone&#39;s standard Router, like so: (function() { _.extend(Backbone.Router.prototype, Backbone.Events, { before: function() {}, after: function() {}, route: function(route, name, callback) { Backbone.history || (Backbone.history = new Backbone.History); if (!_.isRegExp(route)) route = this._routeToRegExp(route); if (!callback) callback = this[name]; Backbone.history.route(route, _.bind(function(fragment) { var that = this; var args = this._extractParameters(route, fragment); if (_.isFunction(this.before)) { this.before.apply(this, args); } if (callback) callback.apply(that, args); if (_.isFunction(this.after)) { this.after.apply(this, args); } }, this)); } }); }).call(this); What this allows us to do is define a before() and after() function on our instantiated Router class. These functions will run before and after the user has been routed to a new URL, respectively. With this in place we can define a storeRoute() method on our Router: appRouter.js: initialize: function(){ this.history = []; this.ignore = false; }, after: function(){ this.storeRoute(); } So, after every route the user takes, we&#39;re going to call our storeRoute() function. We&#39;ve also declared a history array on initialization of our Router and an ignore property. You&#39;ll see why in a second. Here is our storeRoute() function which is also in appRouter.js: storeRoute: function(){ if (!app.router.ignore) { var re = /[a-zA-Z]+&#92;/(&#92;d+|new)/; // matches, eg. &amp;quot;quotes/5&amp;quot; or &amp;quot;quotes/new&amp;quot; if (!this.history.length &amp;amp;&amp;amp; re.test(Backbone.history.fragment)) { var parts = Backbone.history.fragment.split(&#39;/&#39;), len = parts.length, i = 0; while (i &amp;lt; len) { this.history.push(parts.slice(0, i + 1).join(&#39;/&#39;)); i++; } } else { this.history.push(Backbone.history.fragment); } } else { app.router.ignore = false; } } So, quite a bit going on here. 
Firstly, we check to see whether we should ignore the user&#39;s route or not. There may be certain situations where you might not want to log where the user has been, for example if you&#39;ve sent them to a temporary state URL. The next part deals with rewriting history or faking the user&#39;s journey to a URL. This is for the occasions where the user has either come directly to a URL or they&#39;ve refreshed the page on a particular URL, so their in-memory history would be empty. Now, we&#39;ve decided, in our application, that if a user goes directly to a URL, like &amp;quot;/quotes/123&amp;quot;, we need to tell the Router, and in particular the history array, how they&#39;ve got there. So we match these kinds of URLs with a regex and loop over the URL, passing each part of the URL into the history array. For example, if the user goes to &amp;quot;/quotes/123&amp;quot;, we process this and get left with a history array containing two elements: quotes and quotes/123. You&#39;ll see why this is important in a second. Sending them back to where they came from We then move on to our next Router function, the previous(), again, in appRouter.js: previous: function(options) { options = options || {}; app.router.ignore = options.ignore || false; if (this.history.length &amp;gt; 1) { this.navigate(this.history[this.history.length - 2], { trigger: true }); this.history = this.history.slice(0, -1); } } So, we can hook up an event listener to our &amp;quot;in-app back button&amp;quot; and set the handler to be app.router.previous(). When called, we can safely, as we&#39;ve checked the length property of the history array, use Backbone&#39;s navigate() function to send the user to the correct URL. We then remove the last element from the history array, ie. the place they&#39;ve clicked &amp;quot;Back&amp;quot; from. 
This is because we don&#39;t really want to be storing their &amp;quot;previous URL before clicking back&amp;quot; URL as they&#39;ll be stuck in an infinite loop between the two URLs! Reinventing the wheel With Backbone.js you tend to have to create a few concepts on your own as it really is a base library that you end up building an application framework on top of. So I&#39;m really keen to hear of your similar issues, like the one above, that you&#39;ve solved with different methods or whether the newer frameworks, ie. Angular or Ember, manage to alleviate these problems.</content>
  </entry><entry>
    <title>Options are the devil</title>
    <link href="https://benhowdle.im/options-are-the-devil.html" />
    <updated>2013-10-29T00:00:00.000Z</updated>
    <id>https://benhowdle.im/options-are-the-devil.html</id>
    <content type="text">They&#39;re also decisions and choices in disguise A while ago, I saw this Dribbble shot from Morgan Allan Knutson and I have to say, other than thinking the design of the poster was visually pleasing, the message on the poster didn&#39;t really mean that much to me. But… Recently, I&#39;ve been working on the rewrite of KashFlow&#39;s software alongside a dedicated UI/UX guy and noticing how much thought goes into the tiny details which can swing the mood of a user to polar opposites of the spectrum; a big no no. What I&#39;ve noticed, more in life than in UIs, having gone through these UX processes, is that people genuinely hate or get flustered being presented with options...or decisions in disguise. I was sat in a restaurant the other evening (ok, it was a chip shop, but hey, who&#39;s keeping count?) and was half listening to a man order his meal: Him: &amp;quot;fish and chips, please&amp;quot;, Her: &amp;quot;large chips or small chips?&amp;quot;, Him: &amp;quot;ummm, large&amp;quot;, Her: &amp;quot;large fish or small fish?&amp;quot;, Him: &amp;quot;oh, er, ummm, small please.&amp;quot; The initial request from the man would have sufficed. I&#39;m guessing (and I haven&#39;t looked into Chip Shop menu conversion rates) but if they just had: Fish - £X Chips - £X then I really don&#39;t think people would stop coming to the establishment to eat. Four steps removed and friction eliminated in an ordering process. Easy. People hate being presented with choices, it requires them to think. People don&#39;t generally like being forced to think (generous helping of generalising aside), they really don&#39;t. An instance that springs to mind is at the barber shop a couple of years ago, at the end of the cut I was asked whether I want it &amp;quot;off the neck?&amp;quot;. I had no clue what that really meant and was generally cautious with my response and which route I committed to. One wrong decision and I could have ended up with a bowl cut. 
In other words, they should have decided. When you go to the garage to get your car fixed, you&#39;re not asked if you want the carburettor fitted way [x] or way [y]. I like it when I pay for a service or product and they&#39;ve made these choices for me. If I don&#39;t like the choices they&#39;ve made, then I&#39;m also happy for them not to allow me to change it. If I change [x], then I&#39;ve changed the way they intended me to use their product or service. I&#39;ll go use a different product, but that&#39;s ok. Apply this to your product and you might not get as many users, but the users who truly like the product for what it is will stick around. When was the last time Apple let you change the core values of their products? Never. You use Apple products because they have an enormous amount of confidence in the decisions they&#39;ve made for you and your usage of their products. Getting Real &amp;quot;Customers shouldn&#39;t have to think about every nitty gritty detail - don&#39;t put that burden on them when it should be your responsibility.&amp;quot; This quote from 37 Signals&#39; &amp;quot;Getting Real&amp;quot; book really stuck out for me (in fact, most of that chapter did really). It&#39;s kind of like going, &amp;quot;Well, erm, I dunno, this is a tricky UI/UX decision, oh hell, we&#39;ll let the user decide.&amp;quot; Passing the buck to the user to make the UX decision for you? Sounds worse when it&#39;s put like that, huh? Development Not to steal the words right out of Jason Fried&#39;s mouth, but options, preferences, settings and configurations genuinely do add a fair bit of development time and you&#39;re going to have to find a nice way of displaying this UI to your users so they can change these preferences. Settings pages aren&#39;t sexy. Maintaining these options in your code also isn&#39;t sexy. Have confidence in your UX and UI decisions and put a concrete set of values in your product. Don&#39;t pass the buck to your users!</content>
  </entry><entry>
    <title>Too Scared To Write A Line Of Code</title>
    <link href="https://benhowdle.im/too-scared-to-write-a-line-of-code.html" />
    <updated>2013-10-23T00:00:00.000Z</updated>
    <id>https://benhowdle.im/too-scared-to-write-a-line-of-code.html</id>
    <content type="text">Kill your output with premature optimisation &amp;quot;Design Patterns&amp;quot;, &amp;quot;Code architecture&amp;quot;, &amp;quot;Scalability&amp;quot;, &amp;quot;OOP&amp;quot;, &amp;quot;Maintainability&amp;quot;, &amp;quot;The code you write now, is the legacy code of the future&amp;quot;, &amp;quot;Be kind to your future self&amp;quot; &amp;amp; &amp;quot;Code smells&amp;quot;. Just like Bruce Almighty trying to block out the voices in his head, my pangs of guilt and angst come from the paradigms above; like an unwavering, continuous stream of distraction overwhelming my thinking as I&#39;m trying to write one single line of code. One single line of code. That&#39;s it. Nothing special. No one&#39;s going to live or die if it&#39;s not the most optimised, architected and scalable line in the world. We are under a constant barrage of posts, tutorials and articles about these paradigms. I often feel guilty if these paradigms aren&#39;t at the forefront of my mind whilst developing. It often kills my output. I&#39;m trying to adopt a new workflow where I won&#39;t try and solve a problem until it becomes a problem; until I see it in the &#39;wild&#39;. Just like Adii Pienaar wrote, why do we worry about scalability on day 1? This is precisely the approach I&#39;m trying to apply to my development. However, as with any new approach, I&#39;m not blindly following it. It doesn&#39;t give you an excuse to write shit code, but forces you to complete a task more quickly and get that feature out there. Your users care about precisely two things: &amp;quot;Does it work?&amp;quot; and &amp;quot;Is it fast?&amp;quot;. (I&#39;m talking specifically development here, they obviously care about design and all that jazz.) So, I&#39;m trying to stick to the following mantra: Build it, release it, analyse it and only then decide if it needs optimising.</content>
  </entry><entry>
    <title>The Principle of Least Power</title>
    <link href="https://benhowdle.im/the-principal-of-least-power.html" />
    <updated>2013-09-26T00:00:00.000Z</updated>
    <id>https://benhowdle.im/the-principal-of-least-power.html</id>
    <content type="text">Tim Berners-Lee on the Principle of Least Power: &amp;quot;Computer Science spent the last forty years making languages which were as powerful as possible. Nowadays we have to appreciate the reasons for picking not the most powerful solution but the least powerful. The less powerful the language, the more you can do with the data stored in that language. If you write it in a simple declarative form, anyone can write a program to analyze it. If, for example, a web page with weather data has RDF describing that data, a user can retrieve it as a table, perhaps average it, plot it, deduce things from it in combination with other information. At the other end of the scale is the weather information portrayed by the cunning Java applet. While this might allow a very cool user interface, it cannot be analyzed at all. The search engine finding the page will have no idea of what the data is or what it is about. The only way to find out what a Java applet means is to set it running in front of a person.&amp;quot; The challenge Consider the following: I have a &amp;lt;ul&amp;gt;&amp;lt;/ul&amp;gt; and an unknown number of &amp;lt;li&amp;gt; as children. If there is one child, I want it to be red; if there is more than one child, I want all the children to be green. Given the choice of any combination of tech, solve this. Our solutions First up, we might go for some server-side rendering with logic wrapped around the output of the &amp;lt;li&amp;gt;. PHP The following code would suffice: ul.multiple li { color: green; } ul.single li { color: red; } $arr = array(1, 2, 3, 4); $klass = (count($arr) == 1) ? &#39;single&#39; : &#39;multiple&#39;; echo &amp;quot;&amp;lt;ul class=&#39;$klass&#39;&amp;gt;&amp;quot;; for($i = 0; $i &amp;lt; count($arr); $i++){ echo &amp;quot;&amp;lt;li&amp;gt;&amp;quot; . $arr[$i] . &amp;quot;&amp;lt;/li&amp;gt;&amp;quot;; } echo &amp;quot;&amp;lt;/ul&amp;gt;&amp;quot;; Messy, no separation of concerns, mixed logic and probably over-engineered. 
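For comparison, a middle rung on the same power ladder (not from the original post) is plain JavaScript: compute the class from the child count and leave the colouring to the same single/multiple CSS rules.

```javascript
// Decide the list class from the child count; the colouring
// itself stays in CSS (ul.single / ul.multiple).
function listClass(childCount) {
  return childCount === 1 ? 'single' : 'multiple';
}

// In the browser you would apply it like:
// var ul = document.querySelector('ul');
// ul.className = listClass(ul.children.length);
```

Still more power than the problem needs, as the CSS-only solutions that follow show.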
Template helpers So, we could do something with Handlebars for example and register ourselves a little helper: Handlebars.registerHelper(&#39;oneOrMany&#39;, function (value) { return (value.length == 1) ? &#39;single&#39; : &#39;multiple&#39;; }); Some simple CSS: ul.multiple li { color: green; } ul.single li { color: red; } and the template (in a gist, because for the life of me, I could not work out how to escape hashes in Markdown): &amp;lt;ul class={{#oneOrMany posts}}{{/oneOrMany}}&amp;gt; {{#each posts}} &amp;lt;li&amp;gt;{{content}}&amp;lt;/li&amp;gt; {{/each}} &amp;lt;/ul&amp;gt; A little bit more elegant, and it can be pre-compiled so it&#39;s fairly performant. It&#39;s also only using JavaScript for purely presentation logic, so it satisfies our separation of concerns issue. Convinced there must be a simpler way to do this, using far less powerful languages (because, as Berners-Lee says, &amp;quot;picking not the most powerful solution but the least powerful&amp;quot;), I consulted some people who were far better, and far more up to date, on modern CSS selectors than me, to see if it could be achieved using CSS only. Crazy, right?? Turns out, it could. CSS Harry Roberts&#39; solution: ul &amp;gt; li { color: green; } ul &amp;gt; li:only-child { color: red; } Both Harry and my colleague Daniel Grant came to this secondary solution independently. This solution seems to offer greater browser support: ul &amp;gt; li { color: green; } ul &amp;gt; li:first-child:last-child { color: red; } Two pure CSS solutions, boiled down from server-side rendering and intertwined business logic. Future for CSS It&#39;s an exciting time for CSS; we&#39;re starting to see a whole heap of stuff being included, and people are pushing the boundaries of CSS every day. To quote this answer on Performance of CSS Transitions vs. 
JS animation packages from Stack Overflow: &amp;quot;JavaScript is an interpreted language and the JS engine of the browser has to parse and execute every instruction during run-time (I know there exist JS compilers, like V8 (used in Chrome), but the principle remains the same). On the other hand, browsers can implement CSS transitions natively, e.g. in C/C++ or something. This code will be compiled to machine language.&amp;quot; So, remember that the most powerful language or tech may not offer the best solution. If anyone has any other exotic solutions to the above challenge, I&#39;d love to see them.</content>
  </entry><entry>
    <title>Make it Personal</title>
    <link href="https://benhowdle.im/make-it-personal.html" />
    <updated>2013-09-19T00:00:00.000Z</updated>
    <id>https://benhowdle.im/make-it-personal.html</id>
    <content type="text">This post is a re-publish of an article I wrote for .Net Magazine back in September 2012. Since they&#39;ve gone and deleted it from their site after redirecting everything to CreativeBloq, I thought it&#39;d be a shame to see it trashed. The internet’s mark is forever: a saying that has plagued the minds of copywriters and web designers for the duration of the modern web. A flippant scrawl on a notepad or piece of paper is disposable; the internet’s ink is permanent. This isn’t a bad thing though. This unequivocal notion forces and encourages us to be more thoughtful and precise about how we portray our services and messages across the web. We create websites for people, because websites are read by people: people don’t like to be treated like Google bots or spiders crawling content. A bot can’t understand a subtle joke or catch a movie reference; a person can. This is important as you get the opportunity to think about who you want your intended audience to be. Human touch Recently, I created a ‘hire me’ page for my personal freelance site. I started listing my services and writing placeholder text for form inputs and had a slight eureka moment. I realised that I only wanted to work with certain types of people, people who would catch my reference to Mad Men and Back to the Future. I’m happily employed as a full-time web developer, but when I do freelance, I want to work with people who are like me and understand my humour. So I ditched my current copy, which sounded like I was listing skills on LinkedIn, and created flowing prose. When you are looking to get hired in the field you work in, your site is likely to be read by people who do the same thing as you: generally a developer’s portfolio will be analysed by the head of development or a senior developer. It’s not going to be read by ‘Joe the Plumber’. If you want to slip in that you can complete a Rubik’s Cube one-handed then fine. You’ve created targeted content. 
A recruiter will read your skills from your CV and match you up to a job based on keywords/programming languages, but a developer or designer will hire you. You’re not trying to impress a computer or search engine; you have to converse with a potential future employer without verbally communicating with him or her. One thing that has impressed me recently is the user experience at LinkedIn rival Zerply. Especially the way that they communicate with the user, through a variety of mediums. For instance, I love that the email address they use for notifications, reminders, and so on is doreply@zerply.com. When I first saw it, my brain instinctively read donotreply@zerply.com or noreply@zerply.com. When I got the joke, I didn’t take any important action or immediately respond congratulating them on their email address choice, but what I did do was come away with a warm, fuzzy feeling about Zerply and knew that it was the kind of service that I wanted to use. Humour Another good example of lovely human copy is Stellar.io, an organisation tool for your favourites on Twitter, Flickr, Vimeo and YouTube. The first thing you read when hitting their homepage is: “What is Stellar? Good question! We’re still figuring that out” – again, not a revolutionary piece of content but straight away I feel like a man or woman has typed that to me. I feel a hastened sense of trust: the service has flaws and imperfections and they’re fine with telling me this. Their small print is carefully crafted. The company even injects some small pearls of humour into it, saying: “Stellar will never post any tweets to your account or delete your photos or anything like that. Never ever ever. Ever”. On some websites this sort of humour just isn’t appropriate. On Stellar, made for internet power users, it is, and that’s why they stand out. Working on the web is important. Communicating well on the web is even more important. 
Conveying your human side through copy and interactions is key to conquering this transparent medium.</content>
  </entry><entry>
    <title>Open Source is kinda weird</title>
    <link href="https://benhowdle.im/open-source-is-kinda-weird.html" />
    <updated>2013-08-25T00:00:00.000Z</updated>
    <id>https://benhowdle.im/open-source-is-kinda-weird.html</id>
    <content type="text">&amp;quot;Side projects, yo!&amp;quot;, &amp;quot;Are you going to Open Source it?&amp;quot;, &amp;quot;Is it on GitHub yet?&amp;quot;, &amp;quot;Please don&#39;t make another one; contribute to an existing OS project&amp;quot;, &amp;quot;Have a full time job and you create Open Source projects? Congratulations, now you have two full time jobs.&amp;quot; Everyone talks about the amazing sides to being involved with Open Source; as we all know, there are many. You can be a complete beginner on the web, have no client experience to speak of any yet, you could contribute to the Bootstrap project on GitHub and affect a ton of sites out there. High impact, high exposure. Another MV* framework...really? James Padolsey put this way better than I&#39;m going to attempt to, in his post, &amp;quot;There is no directive&amp;quot;. A tl;dr if you don&#39;t want to read the whole thing is roughly: &amp;quot;We’re all running around solving problems that we care about and discovering new things to improve our understanding of the world, and that’s wonderful. We’re not automatons working in unison towards an absolute.&amp;quot; The interpretation I had of this, was that if you have an idea for something, it probably already exists in some form, so instead of creating a new [x], you should contribute that idea to &amp;quot;already-existing-solution&amp;quot; [y]. That just doesn&#39;t sit right with me. This paradigm of thinking echoes throughout the development world; not so much in other industries. Apple built the iPod as a competitor to other music players. Audi built their cars to knock down the long standing reign of other manufacturers. Are we not supposed to challenge each other, come up with our own, better solutions? Think Backbone.js is crap, go build a better one! Think Bootstrap is balls, match it! Guilt Another thing is guilt, goddamnit. 
Jacob &amp;quot;Fat&amp;quot; Thornton outlines this in his talk, entitled, &amp;quot;What Is Open Source &amp;amp; Why Do I Feel So Guilty?&amp;quot;. Once something is put on GitHub and people start using this library/software, people will send support requests, feature ideas and file issues. Maintaining this is a real commitment. Dan Eden explains this on his Animate.css issue list. A measure of a developer Once a service picks up popularity and becomes the &amp;quot;de facto&amp;quot; standard, it&#39;s really hard not to judge a developer based on their activity on this service. It seems that the new way to judge a potential employee (for a development position) is to check their GitHub activity and Open Source contributions. Is this right? It&#39;s true that it can seem a good way to judge someone: if they contribute to Open Source, they clearly care about the web industry. However, the two aren&#39;t synonymous; someone doesn&#39;t have to be on GitHub to be a good developer or to care about the web industry. That said, the bonus of having a full GitHub profile is that you have a free development portfolio and your code is available for people to see. Win-win. I just think we need to be careful about judging developers too harshly if they choose not to contribute to Open Source and put all their code on GitHub.</content>
  </entry><entry>
    <title>A Backbone View to a Kill</title>
    <link href="https://benhowdle.im/a-backbone-view-to-a-kill.html" />
    <updated>2013-06-09T00:00:00.000Z</updated>
    <id>https://benhowdle.im/a-backbone-view-to-a-kill.html</id>
    <content type="text">Backbone is awesome. It allows you to write large scale apps but still keep control of the code you&#39;re writing (read: it doesn&#39;t do all that much for you). However, a lot of people struggle (people being me, for ages) to level up from the &#39;Todo&#39; App in the early stages. Hopefully this post and the related GitHub repo will assist you nicely. We&#39;re going to be using Dan Harper&#39;s sweet ViewManager for our post&#39;s code. It&#39;s nowhere near as sophisticated as something like Marionette.js; however, it does give you a really nice method of controlling/destroying the Views within your application. The one thing that confused me when starting out with Backbone.js was actually creating a real world application, ie. an app with lots of different states, lots of different &amp;quot;areas&amp;quot; or &amp;quot;regions&amp;quot; - sidebar, header, content, etc... ViewManager So I thought, let&#39;s use Dan Harper&#39;s ViewManager to create a small application. The nice thing about Backbone.js is that we can create a whole app with just Views (DOM manipulation, events, etc...). So, excuse the hot design, but here is our lovely demo site. We introduce the concept of regions. A region&#39;s definition, in this context (stolen straight from Marionette.js): Regions provide a consistent way to manage your views and show / close them in your application. They use a jQuery selector to show your views in the correct place. We have two regions in our app: header and content. Our sidebar is simply HTML so no need to attach any functionality to it; however, our header contains our menu links and our content needs to swap different Views in and out of itself. In our Router&#39;s app.js file: app.utils.viewManager.addRegion(&#39;content&#39;, &#39;#jsRegion-content&#39;); ... 
app.utils.viewManager.swap(&#39;header&#39;, new app.views.header()); The site uses HTML5&#39;s History API for pushState navigation and, as you can see from the code, we use the app&#39;s Router file to swap the relevant View instance into the content div. In our Router&#39;s app.js file (we navigate the user to /about and then load up the &amp;quot;about&amp;quot; view): about: function(){ app.utils.viewManager.swap(&#39;content&#39;, new app.views.about()); } Views are appropriately shut down and destroyed by our ViewManager. You&#39;ll also note that all of our Views extend a Base View. This holds all the methods that we might want to be globally available to our Views. As you can see, because our app is just static HTML held in templates, we can let the Base View&#39;s render method take care of populating the View&#39;s element with the relevant View&#39;s template. In about.js (our specific view): template: $(&#39;#template-about&#39;).html() In base.js: render: function() { var template = Handlebars.compile(this.template); this.$el.html(template()); return this; } As you can see, we have the makings of a scalable application using the ViewManager. This specific demo site only uses DOM manipulation, ie. it doesn&#39;t save or read data from a server. You could use this GitHub repo as a solid base for your next Backbone.js application, easily adding Models and extra functionality as needed, and the ViewManager will take most of your pain away when dealing with memory management and View lifecycle. Caveats This repo isn&#39;t a model for everything in Backbone.js; it doesn&#39;t deal with script tag management (hopefully you missed the 80 billion tags I included in index.html). We also aren&#39;t pre-compiling our Templates (we can use Grunt for this, for example), so we&#39;re missing out on a big performance win there. 
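To make the memory-management point concrete, here is a minimal sketch of the idea behind a region&#39;s swap method, using plain objects in place of real Backbone Views. The names and shape are illustrative only, not Dan&#39;s actual API:

```javascript
// Illustrative only: a region closes its current view before showing a new one.
// A real Backbone view would call undelegateEvents(), stopListening() and
// remove() inside close(); here close() is just whatever teardown the view needs.
function makeRegion() {
  var current = null;
  return {
    swap: function (view) {
      if (current) {
        current.close(); // tear down the old view so it can't become a zombie
      }
      current = view;
      view.render(); // the new view populates its element
      return view;
    }
  };
}
```

Each swap guarantees the previous View is shut down before the next one renders, which is exactly the pain the ViewManager takes away.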
And it&#39;s obvious, but I&#39;ll point it out: Backbone.js does leave a lot of the decisions for you to make, so every developer who deals with Backbone.js has a different take on its use. This repo isn&#39;t gospel for Backbone.js development. Use it, modify it and then find a nice workflow for your style of development. A small request If anyone can figure out, when I&#39;m using pushState, why I need to manually trigger the &amp;quot;home&amp;quot; route on initialisation and not just let the &amp;quot;routes&amp;quot; object take care of the initial routing: Backbone.history.start({ pushState: true, root: &amp;quot;/&amp;quot; }); app.router.navigate(&#39;home&#39;, { trigger: true }); I&#39;ll update this post with the solution (if anyone has it!). As always, if anyone has any problems or queries with the post/its code then feel free to tweet me or raise an issue on the GitHub repo and I&#39;ll do my bestest to help you out.</content>
  </entry><entry>
    <title>Jekyll In 15 Minutes</title>
    <link href="https://benhowdle.im/jekyll-in-15-minutes.html" />
    <updated>2013-06-02T00:00:00.000Z</updated>
    <id>https://benhowdle.im/jekyll-in-15-minutes.html</id>
    <content type="text">This post isn&#39;t going to go too much in depth about what Jekyll is, but more how to get going with it insanely quickly, however; Jekyll is a static-site generator. You give it Markdown, Liquid, HTML &amp;amp; CSS and it gives you a deployment ready site to upload to your server. Go read more on the site. Fire up your command line Now type these commands: gem install jekyll jekyll new my-awesome-site cd my-awesome-site jekyll serve --watch # =&amp;gt; Now browse to http://localhost:4000 What did we just do? Well, we installed Jekyll as a Ruby Gem (I should point out, that I have zero Ruby knowledge, so hopefully that gives you comfort...or you&#39;ve stopped reading). We then created a new Jekyll site, put ourself into the newly created folder and then tell Jekyll to start up a lightweight webserver. We can then access the site on the URL above. We add on the --watch flag to tell Jekyll to re-generate the site on save of any of our source files. Super rad lovely workflow. Folder structure Jekyll creates you a folder called &amp;quot;_site&amp;quot; every time it compiles your source files. This folder will contain everything you need to upload to your server to display your site. You don&#39;t mess with these files or this folder. Do. Not. Mess. Any folder you place in the root folder that is prefixed with an underscore, ie. &amp;quot;_layouts&amp;quot; is not copied to the &amp;quot;_site&amp;quot; folder on compilation. These folders are processed by Jekyll for output. If you create a folder called &amp;quot;img&amp;quot;, for example, Jekyll will directly copy that folder to the &amp;quot;_site&amp;quot; folder, untouched. If I create a folder called &amp;quot;archive&amp;quot;, and create an index.html file inside, then it&#39;s automatically available at &amp;quot;myawesomesite.com/archive/&amp;quot;. 
Layouts If I&#39;m creating a blog, I would have two &amp;quot;layout&amp;quot; files inside a &amp;quot;_layouts&amp;quot; folder - default.html and post.html. Default.html will contain the HTML for my menu, footer, etc...basically everything but the dynamic content. My post.html will contain the wrapper HTML for my blog post&#39;s content, ie. post title, a &amp;quot;tweet this&amp;quot; link, Disqus code, etc... In our layout files, we tell Jekyll where to place our dynamic content like so: {{ content }} Posts Sticking with the blog analogy, I&#39;ll need to be writing some posts sooner or later, so I&#39;ll create a &amp;quot;_posts&amp;quot; folder in the root. This will contain, in my case, several &amp;quot;.markdown&amp;quot; files; these are purely my posts&#39; content. Front-matter This is the YAML front-matter block. It&#39;s where you stick all of the configuration for that particular page/post/content. It&#39;s placed at the top of a file and looks something like this: --- layout: post tags: post title: How to be cool on the internet --- This will be processed by Jekyll during compilation; in this case, Jekyll will see that we&#39;ve set the &amp;quot;layout&amp;quot; value to be &amp;quot;post&amp;quot;, and this tells Jekyll to use the post.html (we created earlier) as its wrapper. Going back to our &amp;quot;archive&amp;quot; page, we want to output links to all the &amp;quot;posts&amp;quot; in our &amp;quot;_posts&amp;quot; folder. We have the following HTML in our archive/index.html file: {% for post in site.posts %} {{ post.title }} {{ post.date }} {% endfor %} What we&#39;ve done above is loop around all the files in our &amp;quot;_posts&amp;quot; folder. Any variables we set in our front-matter, like we did above, can be accessed in these loops, ie. post.title will equal &amp;quot;How to be cool on the internet&amp;quot; when it&#39;s iterating over the post above. More about front-matter. And...the rest? 
This is a nice point to wrap up. This isn&#39;t an exhaustive reference for Jekyll; when/if you need that, go to the very awesome docs. It&#39;s hopefully enough to get you to the point where you can get a blog up and running in ~20 mins! If you want a complete Jekyll blog to clone and play around with, check out Musings; it&#39;s a very small blog that I run, but it&#39;s a nice Jekyll starting point. Thanks to the 27 people on HelpMeWrite who wanted to read this!</content>
  </entry><entry>
    <title>Patterns for Managing Large Scale Backbone Applications</title>
    <link href="https://benhowdle.im/patterns-for-managing-large-scale-backbone-applications.html" />
    <updated>2013-03-24T00:00:00.000Z</updated>
    <id>https://benhowdle.im/patterns-for-managing-large-scale-backbone-applications.html</id>
    <content type="text">Backbone is frickin&#39; awesome. What I feel makes it awesome is the fact that it helps you write and structure better JavaScript code and not better &amp;quot;framework-x&amp;quot; code. It&#39;s amazingly intelligent but not opinionated, which is what causes problems for a lot of people when getting to grips with it (myself included). The site itself and corresponding docs don&#39;t really go beyond the syntax and methods Backbone provides; there&#39;s not much in terms of architecture and structure. Head on over to Google and you&#39;ll find a plethora of Backbone tutorials; most go over theory and the syntax, a few go over building a small app (a Todo app - the new &amp;quot;Hello World&amp;quot;), but very, very few actually delve into building multi-regioned, multi-stated, authentication-backed Backbone applications. I&#39;ve had the pleasure of building one of the latter described applications for the past few months at KashFlow and have picked up a few tips and patterns along the way which have made building this large scale application a heck of a lot easier. Quick caveat: They&#39;ve worked for me. They may not work for you. This is ok. So, without further waffle, let&#39;s get started... Object Structure I house all of my Backbone app code under one global object, ie: window.App = { Models: {}, Collections: {}, Routers: {}, Views: {}, init: function(){ App.router = new App.Routers.main(); Backbone.history.start(); } }; $(function() { App.init(); }); You&#39;ll notice I use CamelCase for Classes and Objects and lowercase for instances of those Objects and Classes, so at a quick glance, you can see what&#39;s what. 
A Model declaration/instantiation for example: App.Models.Quote = Backbone.Model.extend({}); and App.quote = new App.Models.Quote(); Namespacing Chances are, if you&#39;re building a large scale application, you&#39;ll be saving data back to a server for data persistence, and if you&#39;re doing that and using Backbone, chances are you&#39;re communicating with a REST API. So, you&#39;re familiar with the idea of resources. To make everything a little easier, I&#39;ll use examples from the actual application I&#39;m building; online accounting software. Namespace the crap out of everything: Objects, files, folders, Views, routes, etc...keep them all under namespaces. If you hit /quotes/ then the app&#39;s router will direct you towards the &amp;quot;quotes&amp;quot; function in the router, perform a .fetch() on the &amp;quot;quotes&amp;quot; Collection and load up &amp;quot;App.Views.quotes&amp;quot; to handle and delegate the responsibility for what the user is about to see; a list of their quotes. Namespacing is quite an obvious one but I think it&#39;s so important to firm this one up. Folder Structure Nothing too original but something a little like this: /models/ /quote.js /collections/ /quotes.js /views/ /quotes/ /quotes.js /quotesDetails.js Again, pretty standard, but within Views, I add an extra folder layer where I group the resource name, ie. quotes, customers, etc... Regions and DOM Event Management When I started with Backbone, I quickly realised that I was going to need a robust way of managing content within certain areas of the application&#39;s chrome (the layout - sidebar, header, footer, content, etc...). Luckily, Dan Harper was on hand to give me some indispensable code he&#39;d written and was using in his own Backbone applications. Dan has since put it on GitHub so you can use this resource in your applications: Backbone ViewManager. The general idea revolves around two concepts: Regions and one Base View. 
Adding Regions (Regions are shortcuts to DOM elements): Backbone.ViewManager.Core.addRegion(&#39;panel&#39;, &#39;#panel&#39;); Backbone.ViewManager.Core.addRegion(&#39;sidebar&#39;, &#39;#sidebar&#39;); Backbone.ViewManager.Core.swap(&#39;panel&#39;, new App.Views.quotes()); Backbone.ViewManager.Core.swap(&#39;sidebar&#39;, new App.Views.sidebarMain()); The &amp;quot;swap&amp;quot; method essentially takes the render().el of the passed-in View and inserts it into the DOM element. However, not before emptying the DOM element of the previous View and turning off all the events that were bound to it. All your views inherit from one Base View, like so: App.Views.quotes = Backbone.ViewManager.BaseView.extend({}); This allows the ViewManager to call methods like view.off(); we have the off() method declared on the Base View, so all sub-views inherit it. I&#39;ve found Dan&#39;s method to be substantial enough for me and haven&#39;t found any performance bottlenecks or zombie views so far. However, if you do want something a little more feature rich, check out Marionette.js. Templates I use Handlebars and, up until very recently, I used to house all of my templates in script tags in the index.html file, grab them by ID with JavaScript in my View&#39;s render function and compile them on the fly. Now I&#39;m cool and use Grunt.js. So I use the Grunt Contrib Handlebars plugin for Grunt, which means that I can keep all of my templates in separate files, ie. /templates/ /quotes/ /quotesDetails.hbs Grunt will precompile those template files into a global &amp;quot;JST&amp;quot; JavaScript Object, which means that in my View, I can access the template via: var template = JST[&amp;quot;templateName&amp;quot;]; and pass in data as before. Minimal change for performance increases and better file organisation. 
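To illustrate what precompilation buys you, here is a small sketch of the JST idea with a hand-written template function standing in for Grunt&#39;s generated output. The template name and data shape are made up for the example:

```javascript
// Illustrative only: grunt-contrib-handlebars generates a JST object mapping
// template paths to pre-compiled functions; here we hand-write one entry.
var JST = {
  'quotes/quotesDetails': function (data) {
    return 'Quote #' + data.id + ' for ' + data.customer;
  }
};

// A View's render can now call the function directly, with no
// Handlebars.compile() happening at runtime.
function renderTemplate(name, data) {
  return JST[name](data);
}
```

Calling renderTemplate with a template name and a data object returns the populated string, exactly as if you had passed the data into a compiled Handlebars template, minus the runtime compile step.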
What I don&#39;t use Require.js - This is probably the biggest thing that gets suggested to me, but so far, I&#39;ve not seen enough of its &amp;quot;wins&amp;quot; to convince me to retrofit it to this application. However, being the naive one that I am, it&#39;s probably amazing and hopefully some cleverer people in the comments section will point out its use cases and advantages. Conclusion Get your application to the point, maybe by using the above tips and patterns, where it doesn&#39;t feel like you&#39;re building a large scale application. Every moving part should feel very modular and self contained, even just by, initially, putting each View into separate files. This process is not a concrete or final solution and, as with everything on the internet, I imagine that in a couple of weeks a new way of building large scale Backbone apps will emerge and we&#39;ll all be able to take bits from it. But, until that point, take what you&#39;d like from the above mind dump and let me know if any of it needs further expansion or if I&#39;ve missed any tips or patterns that you&#39;ve used successfully.</content>
  </entry><entry>
    <title>useCapture demystified</title>
    <link href="https://benhowdle.im/useCapture-demystified.html" />
    <updated>2013-03-03T00:00:00.000Z</updated>
    <id>https://benhowdle.im/useCapture-demystified.html</id>
    <content type="text">Any guesses as to which will alert first? window.addEventListener(&amp;quot;click&amp;quot;, function(){alert(1)}, false); window.addEventListener(&amp;quot;click&amp;quot;, function(){alert(2)}, true); Answer: the second one. Explain that to me, yo! The third, now optional, parameter in addEventListener is &amp;quot;useCapture&amp;quot;, which means the listener fires during the &amp;quot;capture&amp;quot; phase rather than the &amp;quot;bubble&amp;quot; phase. By default, a listener fires during the bubble phase, as the event travels back up the DOM tree from the target element. However, if you set the useCapture flag to true, the listener fires during the capture phase, as the event travels down from the document towards the target, so it runs before any bubble-phase listeners. tl;dr &amp;quot;Events can be activated at two occasions: At the beginning (&amp;quot;capture&amp;quot;), and at the end (&amp;quot;bubble&amp;quot;).&amp;quot; from Unable to understand useCapture attribute in addEventListener Deeper understanding Javascript Events – Capturing And Bubbling</content>
  </entry><entry>
    <title>The Unseen Revisions</title>
    <link href="https://benhowdle.im/the-unseen-revisions.html" />
    <updated>2013-02-15T00:00:00.000Z</updated>
    <id>https://benhowdle.im/the-unseen-revisions.html</id>
    <content type="text">In the relatively brief time I&#39;ve spent learning to design and develop, I&#39;ve learnt a great amount, but the most scary and humbling thing I&#39;ve learnt is what I still have left to learn. It seems to be a rabbit hole of knowledge and understanding. Constantly feeling stupid whilst looking at what others are producing and achieving both drives and grounds me. The key thing that struck me very recently, and subsequently sparked me into putting these thoughts down, was that we constantly see results. Results of people&#39;s hard work, obsessive iterations and revisions. We see the popular page on Dribbble, 12 highly perfected designs, we see the Backbone source on GitHub. We see the end of a person&#39;s journey. The very gleaming, shiny version of their hard work, frustration and pain. Something that Jonathan Ive said in the film Objectified has always stuck with me (not a direct quote): the light on the side of the MacBook that indicates it&#39;s sleeping, which slowly &amp;quot;breathes&amp;quot; by getting lighter and darker continuously, took the engineers and industrial designers months and months to perfect, yet customers never see those months of deliberation and frustration; they wanted it that way. Design should feel effortless to the user. However, this is all well and good, but to someone else, ie. another developer or designer, it causes the bar to be continually raised and standards set higher. What do I do when I look at great design or amazing code? 20% admire it and 80% feel daunted by it. However, the whole point of this post is that we have to remember that we don&#39;t see the hours, days, months of crap iterations and awful revisions that people go through to reach their Nirvana. 
The next time you open up the popular page on Dribbble or browse the Underscore library or jQuery source, just think of the crazy amount they would have changed since their first iteration. In fact, more transparency would be very beneficial to everyone, ie. to see the starting point of a project. I would personally pay to see the first line of code that John Resig wrote for the jQuery library.</content>
  </entry><entry>
    <title>setTimeout - a temporary hack</title>
    <link href="https://benhowdle.im/settimeout.html" />
    <updated>2013-01-29T00:00:00.000Z</updated>
    <id>https://benhowdle.im/settimeout.html</id>
    <content type="text">The JavaScript below is perfectly legal: it grabs a DOM element that doesn&#39;t exist yet and then sets its text. No errors at all. setTimeout(function () { document.getElementById(&#39;myEl&#39;).innerText = &#39;Hey there, yo!&#39;; }, 0); var myDiv = document.createElement(&#39;div&#39;); myDiv.id = &#39;myEl&#39;; document.body.appendChild(myDiv); Great Odin&#39;s raven!? Imagine your code as building blocks, all stacked on top of each other; this is how the browser&#39;s engine executes your JavaScript - top to bottom. It processes one piece then the next, then the next, one at a time, essentially blocking the progress of execution until the current operation is complete. JavaScript is single-threaded, ie. it can&#39;t do more than one thing at a time, like a multi-threaded application can. What&#39;s actually happening above is that the setTimeout callback is added to the browser&#39;s event queue, right at the end, so it&#39;s not executed until all other operations in the current stack are carried out and complete. tl;dr setTimeout() buys you some time until the DOM elements are loaded (and rendering is complete), even if it&#39;s set to 0. Deeper understanding Resig&#39;s How JavaScript Timers Work</content>
  </entry><entry>
    <title>APIs</title>
    <link href="https://benhowdle.im/apis.html" />
    <updated>2013-01-24T00:00:00.000Z</updated>
    <id>https://benhowdle.im/apis.html</id>
    <content type="text">I must hear the word &amp;quot;API&amp;quot; about 20+ times a day. However, I&#39;ve noticed a few people recently, on Twitter and via email, asking &amp;quot;where do I begin with APIs?&amp;quot; or &amp;quot;how do I get started with APIs?&amp;quot;. Well I&#39;m hoping after reading this today, you&#39;ll 1. Know what an API is and 2. Not be so afraid to just get stuck in with them! I&#39;ve tried to write this tutorial whilst trying to remember what concepts confused me when I first started with APIs. So if I&#39;ve not explained something enough or left something out that you&#39;re struggling with, then let me know in the comments below or on Twitter - @benhowdle. AP-what? Without quoting Wikipedia or telling you what it stands for (because it&#39;s not really that helpful), I&#39;m going to describe exactly what an API is and what it does in my own words: An API is an exposed set of methods for a developer to use, without needing to know the working guts of the system. People often associate APIs with things like Twitter or Facebook or Foursquare, services like that. But an API has much broader use, ie. if you use JavaScript you may well have used .pushState(); if you have, then you&#39;ve used the History API. PushState is a method of the History API. Your arsenal What do you need to use an API? What languages can you use? Answer: any. From personal experience, I&#39;ve used APIs heavily with only two languages; JavaScript and PHP. 
For PHP, you&#39;d want to use cURL, like so: // Get cURL resource $curl = curl_init(); // Set some options curl_setopt_array($curl, array( CURLOPT_RETURNTRANSFER =&amp;gt; 1, CURLOPT_URL =&amp;gt; &#39;http://testcURL.com/?item1=value&amp;amp;item2=value2&#39; )); // Send the request &amp;amp; save response to $resp $resp = curl_exec($curl); // Close request to clear up some resources curl_close($curl); In JavaScript (jQuery for conciseness of code), you&#39;d use $.ajax, like so: $.ajax({ url: &amp;quot;test.html&amp;quot;, cache: false }).done(function( html ) { $(&amp;quot;#results&amp;quot;).append(html); }); So, $.ajax in JavaScript (jQuery) and cURL in PHP. That&#39;s it. Cool story &amp;quot;bro&amp;quot;, but how do I get my latest tweet? The PHP way: &amp;lt;?php function get_data($url){ $ch = curl_init(); $timeout = 5; curl_setopt($ch,CURLOPT_URL,$url); curl_setopt($ch,CURLOPT_RETURNTRANSFER,1); curl_setopt($ch,CURLOPT_CONNECTTIMEOUT,$timeout); $data = curl_exec($ch); curl_close($ch); return $data; } function showTweets($username){ $url=&#39;http://api.twitter.com/1/statuses/user_timeline.json?screen_name=&#39;.$username.&#39;&amp;amp;include_rts=true&amp;amp;count=1&#39;; $obj = json_decode(get_data($url)); foreach ($obj as $item) { $tweet = $item-&amp;gt;text; echo &#39;My latest tweet: &#39; . $tweet . &#39;&#39;; } } ?&amp;gt; And the JavaScript (jQuery) way: $.getJSON(&amp;quot;https://api.twitter.com/1/statuses/user_timeline/benhowdle.json?count=1&amp;amp;include_rts=1&amp;amp;callback=?&amp;quot;, function(data) { $(&amp;quot;#twitter&amp;quot;).html(data[0].text); }); HTTP Methods So far we&#39;ve dealt with reading data from an API, not sending it. Reading data is done through the HTTP method &amp;quot;GET&amp;quot;. Sending data, or posting data, is done with the HTTP method &amp;quot;POST&amp;quot;. 
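Before the examples, it helps to know what sending data usually looks like on the wire: a form-encoded string of key=value pairs. Here is a rough sketch of the serialisation jQuery performs when you hand $.ajax a data object (simplified; the real jQuery.param also handles nesting and arrays):

```javascript
// Roughly what jQuery does with a `data: obj` option: build an
// application/x-www-form-urlencoded string. '\u0026' is just an escaped
// ampersand, used here as the pair separator.
function toQueryString(obj) {
  var pairs = [];
  for (var key in obj) {
    if (obj.hasOwnProperty(key)) {
      pairs.push(encodeURIComponent(key) + '=' + encodeURIComponent(obj[key]));
    }
  }
  return pairs.join('\u0026');
}
```

Calling toQueryString with an object of names and values gives you the same sort of string the PHP example builds by hand in its foreach loop.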
In PHP, we can easily send (POST) data to a remote service with the following code, adapted from Execute a HTTP POST Using PHP CURL: //set POST variables $url = &#39;http://domain.com/get-post.php&#39;; $fields = array( &#39;lname&#39; =&amp;gt; urlencode($last_name), &#39;fname&#39; =&amp;gt; urlencode($first_name), &#39;title&#39; =&amp;gt; urlencode($title), &#39;company&#39; =&amp;gt; urlencode($institution), &#39;age&#39; =&amp;gt; urlencode($age), &#39;email&#39; =&amp;gt; urlencode($email), &#39;phone&#39; =&amp;gt; urlencode($phone) ); //url-ify the data for the POST $fields_string = &#39;&#39;; foreach($fields as $key=&amp;gt;$value) { $fields_string .= $key.&#39;=&#39;.$value.&#39;&amp;amp;&#39;; } $fields_string = rtrim($fields_string, &#39;&amp;amp;&#39;); //open connection $ch = curl_init(); //set the url, number of POST vars, POST data curl_setopt($ch,CURLOPT_URL, $url); curl_setopt($ch,CURLOPT_POST, count($fields)); curl_setopt($ch,CURLOPT_POSTFIELDS, $fields_string); //execute post $result = curl_exec($ch); //close connection curl_close($ch); And with JavaScript (jQuery) we can execute a POST in the following way: var obj = { first: &#39;ben&#39;, last: &#39;howdle&#39; }; $.ajax({ type: &#39;POST&#39;, url: &#39;destination.php&#39;, data: obj, success: function(data, textStatus, jqXHR) { // Handle a success state }, error: function(jqXHR, textStatus, errorThrown) { // Handle an error state } }); So now you can retrieve data from an API and send data to it as well. Gotchas! Now, you may be feeling a little more confident and rush to start pulling data from APIs using JavaScript; however, you may end up with an error in your console of something like: &amp;quot;XMLHttpRequest cannot load http://example.com Origin localhost is not allowed by Access-Control-Allow-Origin.&amp;quot;. What the hell? Well, this means that the API you&#39;re trying to grab data from hasn&#39;t enabled CORS (Cross Origin Resource Sharing - read more about CORS), so you have two options: 1. 
persuade the service to enable CORS, or 2. use JSONP. A simple modification to our code above and we can have JSONP added to our Twitter call, like so: $.ajax({ url: &amp;quot;https://api.twitter.com/1/statuses/user_timeline/benhowdle.json?count=1&amp;amp;include_rts=1&amp;quot;, dataType: &#39;jsonp&#39; }).done(function( data ) { console.log(data); }); Next steps: create an API client (A Beginner’s Guide To jQuery-Based JSON API Clients); build your own API (Create a RESTful Services API in PHP); move to API-centric development (API Centric Development); learn about REST APIs (Representational State Transfer); build a completely client-side web app communicating with a REST API (Future of Software Development Delivery).</content>
  </entry><entry>
    <title>On Heisenberg.js</title>
    <link href="https://benhowdle.im/on-heisenberg-js.html" />
    <updated>2013-01-11T19:12:00.000Z</updated>
    <id>https://benhowdle.im/on-heisenberg-js.html</id>
    <content type="text">Today, I released Heisenberg.js, a project base for building modern JavaScript applications. Features From the site itself, here are the key features: Not a framework, but a base project to help you structure your JavaScript code and files. Makes use of the Mediator Pattern, so you can trigger events around your application. Templating built in - Handlebars is included but feel free to change engines. Follows the Object Literal Pattern with one global namespace, usually &#39;myApp&#39; or &#39;chatApp&#39;. In this case, it&#39;s &#39;breakingBad&#39;. Included: jQuery, Underscore.js and Handlebars. A fully commented source to show you why it&#39;s doing what it&#39;s doing and how to build upon its foundation. The JavaScript version of the HTML5 Boilerplate. The thing that I hope makes this project succeed is that it&#39;s almost the same code you&#39;ve been writing in jQuery or JavaScript. It&#39;s just that I&#39;m hoping to provide a way for people to kick off their next project with a little more structure than maybe they&#39;ve had before. It potentially introduces some paradigms that you may not be familiar with: the Mediator Pattern. This is where, instead of directly referencing another Object/Module in your application, you send a &amp;quot;message&amp;quot; (you trigger an event) to a central Object, which then fires that event across your whole application. Any Modules that are listening out for that event can then perform an action based off of it. Data can be sent with these events as well. The idea is that you end up with lots of smaller, decoupled Modules as opposed to one big file with all functionality mixed together. The second thing that you may or may not be using is templating. How many times have you, maybe in the success callback of an AJAX call, built up a string of HTML and injected it into the DOM? I certainly have. 
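To make the contrast concrete, here is a toy stand-in for a template engine. This is NOT Handlebars itself, just a sketch of the idea it implements: the markup lives in a partial string, and data is poured in later instead of being concatenated inside your logic.

```javascript
// Minimal placeholder-filling sketch: swap {{key}} tokens for data values.
function render(template, data) {
  return template.replace(/\{\{(\w+)\}\}/g, function (match, key) {
    return data[key] === undefined ? '' : data[key];
  });
}

// Imagine this string is an HTML partial kept out of your JavaScript logic:
var partial = 'Tweet: {{text}} (by {{author}})';

console.log(render(partial, { text: 'Templating!', author: 'ben' }));
// Tweet: Templating! (by ben)
```

Handlebars does the same job on real HTML partials, with a far richer syntax (helpers, loops, escaping).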
Templating allows you to keep all HTML out of your JavaScript logic and tucked away in neat &amp;quot;partials&amp;quot;, ready to be called into action. Heisenberg.js comes with Handlebars as standard but is completely template-engine agnostic, so feel free to use any of the others. Whether you use this project in its intended manner is obviously completely up to you. It&#39;s basically jQuery but organised into files and Modules. My advice would be to build upon this solid base, but feel free to experiment within this architecture and find what works best for you. Future plans It&#39;s hosted on GitHub, and has already got a few suggestions for improvements/additions, which I&#39;ll gladly be integrating in the near future. The goal To be as widely used as HTML5 Boilerplate. I think with JavaScript, there&#39;s still quite a large middle ground between jQuery in a scripts.js file and full-blown Backbone.js applications. I hope this helps a lot of people organise their code and form their own opinions on architecture and coding style. Watch this space.</content>
  </entry><entry>
    <title>Underlying Passion</title>
    <link href="https://benhowdle.im/underlying-passion.html" />
    <updated>2013-01-05T14:12:00.000Z</updated>
    <id>https://benhowdle.im/underlying-passion.html</id>
    <content type="text">I believe that passion is the underlying reason why people and products succeed. Have you ever used a product and known, clearly, that there was no passion involved in the design, manufacturing or shipping of said product? Yep. That’ll be the lack of passion. Misalignment of a button, displeasing typography, odd placement of parts - all of these things, at first glance, might be misconstrued as accidental. Nothing is an accident. When a person builds or ships a product, be it digital or physical, their level of passion will directly affect their ability to check for these inaccuracies. People’s passion runs parallel with the way they conduct their design or development; when they make the conscious decision to put their name to a product, they send it out to the public in the full knowledge that it’s going to be critiqued and observed by all and sundry. Passion (or lack thereof) will shine through. The same principles apply when you look for work. In fact, what I’ve found is that the more deeply you care about what you do, the more frequently work tends to come your way. I don’t proclaim to be the most knowledgeable person in my field, but I make up for this with passion and an insatiable desire to be better. When you emit passion, people build an immediate trust with your knowledge and thinking and will be more inclined to work with you. So, my advice is simply: be passionate, in everything related to your field. Whether you design, develop, make, build or ship, make sure that you’re proud to have your name attached and let that passion be the first thing people experience when they use your product.</content>
  </entry><entry>
    <title>Mobile Last</title>
    <link href="https://benhowdle.im/mobile-last.html" />
    <updated>2012-12-30T20:28:37.000Z</updated>
    <id>https://benhowdle.im/mobile-last.html</id>
    <content type="text">&amp;quot;Mobile Last&amp;quot; - is this guy trolling?? Maybe. Maybe not. It&#39;s a slightly tongue-in-cheek dig at our community; what usually happens: Industry Big Wig says &#39;Mobile First (or similar) is the new way we do things&#39;, then everyone thinks &#39;right, we must now do Mobile First&#39;, then we start telling clients we are now a &#39;Mobile First&#39; agency or designer, and clients then think they must go with someone who implements &#39;Mobile First&#39;. I don&#39;t know about you, but I get very nervy when we start &amp;quot;unofficially&amp;quot; labelling approaches. &amp;quot;Mobile First&amp;quot;, &amp;quot;Responsive Web Design&amp;quot;, &amp;quot;Adaptive Web Design&amp;quot; and &amp;quot;Future Friendly&amp;quot;. I haven&#39;t got a problem with the actual approaches; it&#39;s just the furore that surrounds the use of them. Conversations on Twitter usually go like: &amp;quot;You&#39;re using LESS, I hate you&amp;quot;, &amp;quot;Well, you&#39;re using Sass, you&#39;re a douche&amp;quot;. Please. Who cares? The client? No. Your users? No. You and your team? Yep, those are the only people it affects. With my latest site redesign and domain name change, I actually did employ a &amp;quot;Mobile Last&amp;quot; approach. I designed the site, did the markup, wrote the templating stuff, did the CSS for desktop, THEN did the styles for tablet and mobile with a few lines of CSS. Hey, I know it&#39;s no Square or Karma, but it works for me and works for the content I have on my site. Most of this article was venting the frustration of our community jumping on trends, and partly because I wanted to write a post in iA Writer, save it to Dropbox and compile it with Jekyll.</content>
  </entry><entry>
    <title>Twitter Bootstrap</title>
    <link href="https://benhowdle.im/twitter-bootstrap.html" />
    <updated>2012-12-17T20:28:37.000Z</updated>
    <id>https://benhowdle.im/twitter-bootstrap.html</id>
    <content type="text">Twitter Bootstrap takes some flak. Some horrible flak, which I&#39;m certainly guilty of dishing out in the past. But after much thought, and a recent plethora of negative activity and animosity in the web community, I realised that it&#39;s not Twitter Bootstrap that&#39;s the problem; it&#39;s developers and designers not willing to go that extra mile. 24 Ways recently published this article. I&#39;ve never read a more half-assed title - &amp;quot;How to Make Your Site Look Half-Decent in Half an Hour&amp;quot;. Come on. &amp;quot;Half-decent&amp;quot;. Is this what you want to output? To be known for? I can&#39;t understand why, if you have an ounce of pride, you wouldn&#39;t want everything you release to be the best you could do. Twitter Bootstrap was made by @mdo and @fat for internal use at Twitter, and they released it because they thought it&#39;d be useful for rapid prototyping of web elements or UI components (correct me if I&#39;m wrong). Twitter Bootstrap can be likened to the default drop shadow setting in Photoshop. It&#39;s not that people hate their existence; it&#39;s that people see it in use and immediately think &amp;quot;you haven&#39;t tried&amp;quot;. Developers will hide behind the fact that they can&#39;t design. Well, you could have made the greatest plugin, web application or library in the world, but the moment you slap Twitter Bootstrap on your site for it, all initial value is lost and people just see a &amp;quot;cookie cutter website&amp;quot;. If you&#39;ve made a great product and you can&#39;t design, find a designer; there is a deluge of designers out there looking for a collaboration. There&#39;s no excuse. I had the same conflict when I made SVGeezy. I had written a plugin and was a little disenchanted that it wouldn&#39;t have a sweet landing page to accompany it. 
Luckily, the delightful Jack Smith had a free evening and said he&#39;d put together a landing page; he went off muttering something about a &amp;quot;kanye west hover effect on the title&amp;quot;. So my fairly simple plugin had a great landing page and I felt proud sending the link around. I wonder if I&#39;d have felt the same had I just whacked Twitter Bootstrap on it? My underlying thought on this is: people will always make tools to make other people&#39;s lives easier. Use them wisely. Don&#39;t be &amp;quot;just another Twitter Bootstrap site&amp;quot; and don&#39;t strive to make your site &amp;quot;half-decent looking&amp;quot;.</content>
  </entry><entry>
    <title>Smashing Magazine&#39;s The Mobile Book</title>
    <link href="https://benhowdle.im/smashing-magazines-the-mobile-book.html" />
    <updated>2012-12-11T21:04:15.000Z</updated>
    <id>https://benhowdle.im/smashing-magazines-the-mobile-book.html</id>
    <content type="text">Smashing Magazine sent me a preview of their upcoming book The Mobile Book, which initially might make you think, &amp;quot;Hmmm, that&#39;s a seemingly bold title&amp;quot;. It is. It&#39;s saying: this is the standard, the reference book for Mobile. Well, it is. I worked for a mobile web publishing company for 18 months, and the depth of knowledge provided by the experts in this book (and I mean experts) is extraordinary. It blew me away. It reminded me of the time I was learning jQuery and thought &amp;quot;I should really get to grips with JavaScript as well&amp;quot;; what I had learnt was the tip of the iceberg, and seeing what I still had to learn scared the crap out of me. This book provides that much-needed solid foundation for designing and building for Mobile; it fully equips you for the journey ahead. The book is honest - it even says it&#39;s an &amp;quot;artefact of its time&amp;quot;, which means it admits that one day we&#39;ll look back and think, gosh, can&#39;t believe we used to design/build for mobile as a separate entity. But this is ok. The industry is going through a period with RWD in which people seem to follow whatever rule/approach/pattern is popular on Twitter, but this book takes stock of everything and provides a diplomatic, comprehensive guide to understanding &amp;quot;Mobile&amp;quot;, delivered by (even though I&#39;m not keen on the exact word) current &amp;quot;thought-leaders&amp;quot; and people who have a real passion for the Mobile endeavours in our community. Buy it when it comes out. Oh, and if they ask, tell &#39;em Ben Howdle sent you.</content>
  </entry><entry>
    <title>Using jQuery custom events in your Object Literal and Observer Pattern applications</title>
    <link href="https://benhowdle.im/using-jquery-custom-events-in-your-object-literal-and-observer-pattern-applications.html" />
    <updated>2012-12-03T23:36:50.000Z</updated>
    <id>https://benhowdle.im/using-jquery-custom-events-in-your-object-literal-and-observer-pattern-applications.html</id>
    <content type="text">Harnessing jQuery&#39;s cooler features in your application can really enhance the performance and, sometimes more importantly, the maintainability of your web application. The short snippet below utilises the Observer Pattern and the Object Literal Pattern. To read more about these and other Design Patterns, I can&#39;t recommend Addy Osmani&#39;s excellent &amp;quot;Learning JavaScript Design Patterns&amp;quot; enough. Firstly, the snippet. var app = {}; app.mediator = $({}); app.walt = { init: function() { app.mediator.on(&quot;yo&quot;, this.yo); }, yo: function(event, params) { console.log(params.name); } }; app.hank = { init: function() { var data = { name: &quot;jesse pinkman&quot; }; app.mediator.trigger(&quot;yo&quot;, data); } }; app.walt.init(); app.hank.init(); What the above does is use an intermediary Object to handle the Publishing (triggering) and the Subscribing (on) of messages and data within your application. You may sometimes see the Observer Pattern referred to as &amp;quot;PubSub&amp;quot; - this is simply short for Publisher/Subscriber. One advantage of using this pattern is that you can remove objects from your code and things won&#39;t break. No object has working knowledge of how any other object works; each one just blindly sends messages, listens out for messages and reacts to those events. The result is a nice, decoupled application with a handsome degree of separation of concerns.</content>
  </entry><entry>
    <title>From jQuery to JavaScript and back again</title>
    <link href="https://benhowdle.im/from-jquery-to-javascript-and-back-again.html" />
    <updated>2012-10-30T20:57:36.000Z</updated>
    <id>https://benhowdle.im/from-jquery-to-javascript-and-back-again.html</id>
    <content type="text">I saw an article posted on Netmag the other day for which I contributed a couple of coding tips. The tip I&#39;m talking about in this post reads as follows: Ben also thinks that falling back on frameworks and libraries isn&#39;t always the best answer. &amp;quot;If you&#39;re coding a small project and always, for example, include jQuery, think &#39;Can I do this with Vanilla JS?&#39;&amp;quot; You may find that, indeed, you can do it in JavaScript, better and faster. See, I wrote this tip back in July sometime (only now has it been published), so my thoughts have somewhat evolved since then. The line that struck me was the last one: &amp;quot;you can do it in JavaScript, better and faster&amp;quot; - the real truth is that, no, I can&#39;t. John Resig and the jQuery team have put many, many man-hours into optimising jQuery to the max; I don&#39;t think you&#39;d find a line in the source that hasn&#39;t been put under a microscope, scrutinised, benchmarked and refactored. I think the problem with my quote, apart from it being an over-generalisation, is that we are constantly being shown polar opposites in the code quality of people&#39;s jQuery - from some truly awful code at one end of the spectrum to compact, snappy code at the other. jQuery can be used wildly and clumsily. The reason is that it abstracts a lot of the JavaScript &amp;quot;gotchas&amp;quot; into nice, easy function calls. Most errors are caught by jQuery and never seen by the user but, more dangerously, never seen by the developer. Someone will probably prove me wrong, but in my experience the bad code that I&#39;ve seen (my early stuff definitely included) is largely written with jQuery, as opposed to vanilla JavaScript. Coding in raw JavaScript teaches you to ration; it teaches you to be conscious of errors and mindful of constant optimisation. 
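That &#39;errors caught and never seen&#39; point can be sketched in plain, runnable JavaScript. The wrapper below is hypothetical (it is not jQuery, just a miniature of its no-op-on-empty-selection behaviour):

```javascript
// Hypothetical jQuery-style wrapper: methods silently do nothing
// when the wrapped value is missing, so mistakes stay invisible.
function $wrap(value) {
  return {
    addClass: function (name) {
      if (value) value.className += ' ' + name; // no-op when value is null
      return this;
    }
  };
}

var missing = null; // imagine getElementById found no element

$wrap(missing).addClass('active'); // no error - the bug goes unnoticed

var threw = false;
try {
  missing.className += ' active'; // raw JavaScript: the TypeError surfaces
} catch (e) {
  threw = true;
}
console.log(threw); // true
```

With the wrapper, the mistake sails past silently; written raw, it blows up where the developer can see it.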
My only piece of advice is to get a solid background in JavaScript, then move forward, appreciate the blood, sweat and tears Resig put into jQuery and use it to iron out those peculiarities that JavaScript often smacks us in the face with.</content>
  </entry><entry>
    <title>Set up your own GitHub page</title>
    <link href="https://benhowdle.im/set-up-your-own-github-page.html" />
    <updated>2012-10-21T23:48:58.000Z</updated>
    <id>https://benhowdle.im/set-up-your-own-github-page.html</id>
    <content type="text">Earlier, I set up my own GitHub page. There is a helpful guide on the GitHub site itself. However, I thought I&#39;d add a couple more points to it and explain my process a little. Your own GitHub page is just a plain ol&#39; repo - nothing special, it&#39;s just the name of it that matters. Make sure you create a new repo following the format [GITHUB_USERNAME].github.com; for me, that was &amp;quot;https://github.com/benhowdle89/benhowdle89.github.com&amp;quot;. Now you can place an index.html file in the root, with CSS &amp;amp; JS files as normal, and GitHub will automatically publish that repo at the &amp;quot;benhowdle89.github.com&amp;quot; URL. With my GitHub page, I wanted to display all of my GitHub repos and a small bit of &amp;quot;me&amp;quot; information from GitHub. I also decided to use the excellent HandlebarsJS as the templating solution - this makes sense, as the GitHub repo markup was going to be repeated and would largely exist as &#39;boilerplate&#39; code. Please feel free to Fork the repo and use it for your own page. To use the JavaScript for yourself, go to the bottom of &#39;app.js&#39; and replace my username with yours, see below: App.init(&#39;benhowdle89&#39;); // change this for yours! As far as CSS goes, feel free to customise it as much as you want for your own page. It should be noted that the set of files in the repo will actually work on any site, so you could put it on your own site and it would be fine. Any questions or feedback, please leave a comment below or tweet me @benhowdle.</content>
  </entry><entry>
    <title>The obsession with responsive websites</title>
    <link href="https://benhowdle.im/the-obsession-with-responsive-websites.html" />
    <updated>2012-09-06T20:31:13.000Z</updated>
    <id>https://benhowdle.im/the-obsession-with-responsive-websites.html</id>
    <content type="text">For good or bad, Ethan Marcotte coined the term Responsive Web Design a while ago and, from that point, #RWD was plastered across tutorials, Twitter and conferences; I think some people even got &amp;quot;RWD4 LYFE&amp;quot; tattooed across their knuckles. People went crazy for it. The humble pixel was outcast as villainous, inflexible and just plain wrong. Ems and percentages were the new kings, proudly thrust forward into stylesheets everywhere for type, layout and grids. But I think, somewhere along the line, people got silly. Now, no one is going to take my opinions in this article seriously unless I provide examples of what I mean. By doing this, I&#39;m going to have to link to certain sites that highlight my point. I really do hope that the owners and designers of these sites do not take offence at my words; it&#39;s not personal. My first issue: big sites. The thinking before RWD fever hit was that mobile and narrow viewports were the afterthought in design and development. You would slap a 960px width on your &#39;.wrap&#39; DIV and ahhh, most monitors and resolutions accounted for...bliss. Then people found Mobile First and Adaptive approaches and yes, fair enough, the web got better. Much better. The current state of the web is that people are far more conscientious about mobile and take great care in not constricting their designs, catering for a much wider audience - even people grabbing a drink from the fridge. But, on the flip side, it seems to have gone full circle and we&#39;re so focussed on &#39;small&#39; that we&#39;ve forgotten about &#39;big&#39;. These sites are not best viewed on a 21&amp;quot;+ iMac: Trent Walton, Zeldman, Stuff and Nonsense. I know, right - you have to sit back to avoid being punched in the face by the font size. This tweet from Mark Collins is in reference to the last link. 1700px-wide inputs! That&#39;s twice as big as the average site. 
There&#39;s nothing wrong with putting a &#39;max-width: 1300px;&#39; on your sites. Don&#39;t be afraid of the pixel. My second issue: attitude towards non-responsive designs. How many sites a day do you grab by the bottom right corner of the browser window and resize? All of them? Maybe so. The first thought across some people&#39;s minds is &amp;quot;Tut, it&#39;s not responsive&amp;quot;, whilst they blindly disregard the rest of the design. Wrong. Please don&#39;t get into a habit of doing this. Your users don&#39;t do it, nor should you. My third issue: lack of design in responsive sites. This quote from David Bushell pretty much sums it up: With so much focus on pure content we&#39;re in danger of losing our license to design. The supposedly superficial &#39;decoration&#39; is what makes a website compelling in a bustling commercial environment. From his article. Design with mobile in mind? Sure. But design it, don&#39;t just make it accessible. Final thoughts Design with your users in mind. Don&#39;t just design to be included in a &amp;quot;Top 30 Responsive Sites&amp;quot; post, or wildly jam in a jQuery plugin which inflates your type to fit any resolution (112px is too big). And please, does anyone like form controls? Who decided that navigation would be nice as a &amp;lt;select&amp;gt; element on mobile?</content>
  </entry><entry>
    <title>A quick start JavaScript class structure</title>
    <link href="https://benhowdle.im/a-quick-start-javascript-class-structure.html" />
    <updated>2012-07-22T11:25:37.000Z</updated>
    <id>https://benhowdle.im/a-quick-start-javascript-class-structure.html</id>
    <content type="text">A Google around reveals lots of ways to structure your JavaScript code. My method is nothing special, nothing new, and is probably covered hundreds of times across the web. But it&#39;s always useful to see different methods, right? Maybe you hadn&#39;t seen this way yet... var myClass = function() { var privateVar = &#39;do not look at me from the outside&#39;; return { publicVar: &#39;access me from anywhere&#39;, fname: &#39;&#39;, init: function(fname) { this.fname = (fname !== undefined) ? fname : &#39;guest&#39;; console.log(&#39;you\&#39;ve kicked this class off, &#39; + this.fname); }, getPrivateVar: function() { return privateVar; } }; }(); myClass.init(); // you&#39;ve kicked this class off, guest myClass.init(&#39;ben&#39;); // you&#39;ve kicked this class off, ben console.log(myClass.publicVar); // access me from anywhere console.log(myClass.privateVar); // undefined console.log(myClass.getPrivateVar()); // do not look at me from the outside I&#39;m always interested in people&#39;s methods of structuring their code, so stick a link to a Gist or a JSFiddle and share your ways in the comments section...</content>
  </entry><entry>
    <title>Hire Yourself, You Won&#39;t Look Back</title>
    <link href="https://benhowdle.im/hire-yourself-you-wont-look-back.html" />
    <updated>2012-07-05T23:02:02.000Z</updated>
    <id>https://benhowdle.im/hire-yourself-you-wont-look-back.html</id>
    <content type="text">Remember in Austin Powers where he met his past self? How well did they get on, right? Well, this is not really like that. What I mean by &amp;quot;hire yourself&amp;quot; is: you know all those times where you&#39;ve gone, &amp;quot;Can&#39;t wait to see if a client asks me to use [X] technology in a project&amp;quot; or, &amp;quot;wow, I get to design this!&amp;quot;. It probably won&#39;t happen, unless you&#39;re really lucky, or unless this happens all the time and I&#39;ve severely misjudged this post. As a developer, designer or any of those types, I&#39;m guessing you chose to do it. Not many people find a sign in a shop window saying, &amp;quot;WEB DESIGNER WANTED - ENQUIRE WITHIN&amp;quot;. It doesn&#39;t happen. So I&#39;m also guessing that you have an eternally unsatisfied perception of your own skills and knowledge, insatiably reaching for bigger and better goals? Just hire yourself. Do it. Start making your own timeline for learning. Experiment with new libraries and code when you want to. Thought of a side project? Do it. Release it. Thought of the next million-dollar idea? Get on that. The projects page on your website should be brimming with links and case studies of ideas that you&#39;ve made happen of your own volition. Don&#39;t get much client work? Fill your portfolio with your own stuff. Under strict NDA at your current workplace? Start a GitHub or Dribbble account and start filling it. Nothing looks better than this to a potential employer. Who likes an empty portfolio, right? And before anyone says, &amp;quot;Ben, you&#39;re just a tool and a hypocrite, where&#39;s your projects page?&amp;quot;, check it out muchachos: Projects. Adios amigos.</content>
  </entry><entry>
    <title>Say Yes To Everything</title>
    <link href="https://benhowdle.im/say-yes-to-everything.html" />
    <updated>2012-07-03T20:29:15.000Z</updated>
    <id>https://benhowdle.im/say-yes-to-everything.html</id>
    <content type="text">Preface The below is a suggestion. Only that. It&#39;s my method and it may not suit everyone. As you were. Intro Playing on the &amp;quot;step out of your comfort zone&amp;quot; cliche, saying &amp;quot;yes to everything&amp;quot; involves a lot more risk. Risk of embarrassment, and risk of not delivering something. Why Only doing what you already do is dangerous. I&#39;m not talking about if you&#39;re a designer, then just designing is dangerous - that&#39;s ridiculous. I&#39;m talking about taking your strength, whatever it may be, and pushing yourself to do an aspect of it that you might normally think you&#39;re too inexperienced for or unable to perform. The risk part of it involves saying yes to clients/potential clients. You can sit on development blogs or look at design tutorials for a long time and of course you&#39;re going to learn a heck load, but when you&#39;re in a situation where you&#39;ve quoted a number of days and a cost to perform a task, you get that unpleasant feeling of panic wash over you and a slight nauseating sensation of &amp;quot;I cannot do this&amp;quot;. But you have to. You&#39;ve promised and you&#39;ve committed. Pride is a big motivator here and plays a key role in this notion. If you lack pride then you&#39;re not going to mind letting someone down and gaining a sub-par reputation. Results Pushing yourself to this extreme every time you say yes to something is not the nicest method of learning, but you certainly set yourself on an accelerated learning path. Everything is real and everything is applied. How many times have you seen examples for OOP where they use code such as: class Animal{ public $type = &amp;quot;animal&amp;quot;; public function dance() { echo $this-&amp;gt;type . &amp;quot; dances!&#92;n&amp;quot;; return true; } } Sure, you get to learn principles, but you don&#39;t learn application of knowledge. And that is where you gain true understanding. 
Summary I hope this hasn&#39;t sounded too preachy; I hate that attitude people have of shoving their opinions down others&#39; throats. I would recommend this method as I&#39;ve employed it myself and it&#39;s got me to where I am now in a short space of time. The next time someone wants something doing and you&#39;re free, have a go at it - commit yourself. Let yourself feel that pressure. Now, I&#39;m not saying that if someone wants a Ruby application made and you ONLY know PHP you should say yes. They&#39;ll probably hate you for wasting their time. [DISCLAIMER] If you say yes to a piece of work and can&#39;t do it and it goes bad...we never met, capiche?</content>
  </entry><entry>
    <title>Others Are Better Than You, Get Over it</title>
    <link href="https://benhowdle.im/others-are-better-than-you-get-over-it.html" />
    <updated>2012-06-24T19:21:02.000Z</updated>
    <id>https://benhowdle.im/others-are-better-than-you-get-over-it.html</id>
    <content type="text">Preface Articles like this - http://michaelwright.me/critique make me sad because he is discouraged from giving critique, &amp;quot;This is why I, and others I know, are a lot more reluctant to provide critique. &amp;quot;I have followed Michael on various networks for a while now and once you see past his, sometimes, short manner (honest and blunt), you realise that the guy is trying to help you, not make you feel silly. I am the first to admit that when I was starting out, I didn&#39;t take criticism very well. As a web professional, your work is constantly on display. That means you have to be ok with receiving comments on it, the good and the bad. Others are better than you The &amp;quot;penny-drop&amp;quot; moment for me was realising that there are people that are actually better than you, of course I knew that there were better people out there than me, I&#39;m looking at you Sir Jonny Ive. But, what I mean is, when taking critique it&#39;s very easy to jump on the defensive, immediately defending and throwing back comments on why you did things or what something means. Further tips on this: How to take criticism. Sites like Forrst, Dribbble and even Twitter, to an extent, are perfect platforms for people to give feedback, yet some cannot take it. Some posts are even entitled &amp;quot;Please give me feedback on [x]&amp;quot;, then they complain when critique is given. Critique is never personal and it is always subjective, remember this. If you find yourself starting out, then there are going to be people that are better than you, if you have been a web person for 10+ years, there are always people that are better than you. Find yourself a small number of people that you consider better than you and target them. Get yourself to their level, then find better people, you&#39;ll accelerate your learning so much by constantly having a goal ahead of you. We&#39;re all a little bit competitive, right? It helps. 
You&#39;re still human, right? Some people fall into the trap of the &amp;quot;god complex&amp;quot; mentality: believing that, because they have excelled in their field, they can act as if they are better than others. These people are still human and were once starting out, just like every one of us. I&#39;m not going to name names because this isn&#39;t a good medium for it, but there are several people on Twitter whom others follow simply because they&#39;re a &amp;quot;Big Name&amp;quot;. Some are genuinely nice guys and gals and will take the time to help you out or hand out advice. Others...not so much. Move on. You don&#39;t need them! (I&#39;ve subsequently unfollowed a heck of a lot of &#39;em!) In conclusion...I know, thank god, right!? The next time someone gives you critique, listen to them. Ignore people on networks who just &amp;quot;troll&amp;quot;; they&#39;re not worth the time of day. I&#39;m well aware that a lot of the people I speak to on a daily basis are better than me; once I realised this, it helped me vastly and I&#39;ve gained so much knowledge from them. If you find yourself in the position where beginners are reaching out to you and asking for help, even if it&#39;s a silly question, help them. Don&#39;t become one of the elitists mentioned above. You probably asked silly questions when you started out. If you&#39;ve found this article preachy and irritating, then I apologise. It&#39;s been brewing for a while and I thought I&#39;d just get it out there. I hope it helps some people.</content>
  </entry><entry>
    <title>Newcomers to web design, don&#39;t be put off</title>
    <link href="https://benhowdle.im/newcomers-to-web-design-dont-be-put-off.html" />
    <updated>2012-01-05T21:33:14.000Z</updated>
    <id>https://benhowdle.im/newcomers-to-web-design-dont-be-put-off.html</id>
    <content type="text">I started out in 2009 creating websites. I began doing this in the evenings and at weekends because I realised that it was what I wanted to do. It took a fair amount of motivation to come home from a full-time job and learn new skills. I think the web design industry demands motivation; it keeps everyone on their toes. If you stay static and refuse to look at new technologies and findings, you get left behind. This is unfortunate but true. So stay open and hungry! I hope newcomers to web design (I&#39;ve only been in the industry for two-ish years, so I suppose I am still a newcomer) read this article and come away with a reinvigorated sense of optimism and passion. I got into the web because it&#39;s not just a 9-to-5 job. It&#39;s a lifestyle choice; it&#39;s the ultimate way to express yourself and create just about anything you want. Pictured above is a still from The Social Network. The film came out a while after I started developing, but I actually found it incredibly inspiring. I&#39;m not much of a Facebook supporter, but seeing Mark Zuckerberg&#39;s story told in this Hollywood way really spurred me on to keep creating and thinking about new ideas. When I began learning HTML &amp;amp; CSS from W3Schools, I was blissfully unaware of Twitter and of tutorials and blogs from other designers and developers. I&#39;m not sure if this helped or hindered me. On the &amp;quot;helped&amp;quot; side, I missed out on the petty arguments that can occur between certain &amp;quot;Big Names&amp;quot; in the industry, which to a newcomer can be off-putting and distracting from what actually matters. Another &amp;quot;helped&amp;quot; point is that I found my own way of learning: I wasn&#39;t influenced by the hottest trend at the time or the flavour-of-the-month CSS framework, so I could just learn the raw components of what I needed. 
However - and it&#39;s a big however - on the &amp;quot;hindered&amp;quot; side I missed out (briefly, maybe less than six months) on the ever-growing, ever-helpful, fantastic community we involve ourselves in. No joke: if I had not tapped into places like http://tutsplus.com and http://stackoverflow.com, I would not be where I am today. I decided not to go to university, and at the time I made that choice, the idea of working on the web wasn&#39;t even on my radar. So I had no structured learning. No one stood over me setting assignments and tests. But, to be honest, I don&#39;t feel like I missed out on the technical side of it. Sites like http://thenewboston.org/ provide so much learning material in easy-to-digest formats. I found that not going to university allowed me to learn by doing. Learn by creating. There are certain problems you do come across from not going to university, as I have found. The one that held me back the most was actually working with other developers and designers. When you are self-taught, it tends to be quite a lonely role for a while. I would certainly suggest finding someone to partner up with to hack around on projects and hey, even make some money building Mr Joe Bloggs&#39;s Shoe Shop website. It&#39;s a start. Learning from others is the most valuable tool, I find. Nothing beats human interaction to work through a problem. Good advice sometimes comes in the form of people telling you to RTFM; though it seems unhelpful at the time, go with it, suck it up and do it. You&#39;ll be a greater professional that way. The first website I (tried to) build was a music-related site backed by a MySQL database and powered by PHP. I found it a little more complex than I had anticipated and begrudgingly abandoned it. But, undeterred, I lowered my sights and made some more realistic creations. 
My advice would be to move slowly up the &amp;quot;skill ladder&amp;quot; and get a solid grounding in a technology or language before you move on to bigger things. I also found that side projects were (and are) the best thing for fuelling learning. I still create side projects now and find they satisfy my hankering to work with the web. My latest project is a movie web app which pulls data from TMDb and allows users to save a movie to their own &amp;quot;Bucket List&amp;quot; of films to watch before they die. Simple, but very relevant to what I enjoy - watching movies. This personal involvement makes it easier to get motivated to sit in front of my iMac and code instead of, say, sleep. Recently a list was put together of all the skills deemed necessary to become a frontend developer. Very useful, but very daunting to people just learning to make text italic. From that list, you could boil it down to HTML, CSS &amp;amp; JavaScript (plus PHP and MySQL if you&#39;re drawn to development as well as design). Responsive Web Design, Progressive Enhancement and fluid layouts are approaches to design, not skills. Don&#39;t even take notice of them when you are just starting out. Learn to code well or design beautifully; those terms are things to adopt after learning the core skills, not during. You don&#39;t want to cloud your learning. With the five skills listed above, you could adapt and pick up anything else on that list (OK, apart from Ruby maybe). So my closing statement is, unfortunately, an overused saying: &amp;quot;Stay Hungry. Stay Foolish&amp;quot;. A list of great learning resources (that I&#39;ve personally learnt from): http://tutsplus.com/ http://thenewboston.org/ http://css-tricks.com/ http://teamtreehouse.com/ http://davidwalsh.name/ http://www.smashingmagazine.com/ http://www.netmagazine.com/ http://thinkvitamin.com/ http://stackoverflow.com/ http://forrst.com/ http://line25.com/</content>
  </entry>
</feed>
