<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/">
    <channel>
        <title>Hyperbliss</title>
        <link>https://hyperbliss.tech/</link>
        <description>Stefanie Jane's personal blog, lab experiments, and more.</description>
        <lastBuildDate>Wed, 15 Apr 2026 18:39:04 GMT</lastBuildDate>
        <docs>https://validator.w3.org/feed/docs/rss2.html</docs>
        <generator>https://github.com/jpmonette/feed</generator>
        <language>en</language>
        <image>
            <title>Hyperbliss</title>
            <url>https://hyperbliss.tech/images/logo.png</url>
            <link>https://hyperbliss.tech/</link>
        </image>
        <copyright>All rights reserved 2026, Stefanie Jane</copyright>
        <item>
            <title><![CDATA[Regex Nightmares]]></title>
            <link>https://hyperbliss.tech/lab/regex-nightmares</link>
            <guid isPermaLink="false">https://hyperbliss.tech/lab/regex-nightmares</guid>
            <pubDate>Wed, 08 Apr 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[21 regular expressions dissected down to the molecular level. Interactive step-throughs, live testers, and the real-world disasters they caused.]]></description>
            <content:encoded><![CDATA[
21 regular expressions that range from "impressively clever" to "someone should have stopped you." Each one is dissected so you can understand exactly how the magic works, exactly where the madness begins, and (where applicable) exactly which production outages resulted.
]]></content:encoded>
            <author>stefanie@hyperbliss.tech (Stefanie Jane)</author>
        </item>
        <item>
            <title><![CDATA[The Terminal Renaissance: Designing Beautiful TUIs in the Age of AI]]></title>
            <link>https://hyperbliss.tech/blog/2026.04.04_terminal-renaissance</link>
            <guid isPermaLink="false">https://hyperbliss.tech/blog/2026.04.04_terminal-renaissance</guid>
            <pubDate>Sat, 04 Apr 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[Claude Code writes 4% of GitHub commits. Developers live in terminals more than ever. So why does nobody talk about designing them well? A strategy guide for building beautiful, AI-ready terminal interfaces: design principles, theme engines, and the automation layer that lets agents see what they build.]]></description>
            <content:encoded><![CDATA[
Something shifted.

It wasn't sudden. More like tectonic plates moving under the industry while everyone watched the AI hype cycle. But the evidence is hard to ignore. Claude Code now authors [4% of all public GitHub commits](https://newsletter.semianalysis.com/p/claude-code-is-the-inflection-point), 135,000 a day, doubling month over month. [69% of developers](https://devecosystem-2025.jetbrains.com/) keep a terminal open at all times. OpenCode, a terminal-native AI coding agent, hit 95,000 GitHub stars in two weeks. Ghostty, a GPU-accelerated terminal emulator, went nonprofit because its creator believed the terminal mattered enough to protect from acquisition.

The terminal isn't having a nostalgia moment. It's having a _platform_ moment.

And nobody's talking about how to design for it.

## The Three Forces

Three things happened at once, and the compound effect is bigger than any of them alone.

**AI agents chose the terminal.** Claude Code, Codex CLI, Gemini CLI, OpenCode: every serious AI coding tool lives in the shell. Not because terminals are trendy, but because the terminal is where _execution_ happens. IDE extensions suggest code. Terminal agents _write_ code, run tests, read logs, fix errors, and commit. The terminal became [an AI runtime nobody intentionally designed](https://adelzaalouk.me/2026/feb/22/terminals-agents-and-the-control-plane-nobody-built/), and it turns out to be a really good one.

**Modern tooling raised the floor.** A generation of Rust and Go tools quietly replaced the Unix standard library with versions that are faster, prettier, and more intuitive. ripgrep over grep. bat over cat. eza over ls. fd over find. yazi over ranger. zoxide over cd. lazygit over raw git. atuin over Ctrl+R. And tools like [ChromaCat](https://github.com/hyperb1iss/chromacat), which turns any terminal output into animated gradient art with plasma patterns, aurora effects, and 40+ themes, proved that the terminal could be genuinely _beautiful_, not just functional. The terminal got a glow-up that had nothing to do with AI; it just got _better_ as a daily environment.

**Terminal emulators became premium products.** Ghostty renders at 500fps with native GPU acceleration and platform-native UI. Kitty pioneered an inline image protocol that lets terminals _show_ things. WezTerm ships a built-in multiplexer. Rio runs on WebGPU. The modern terminal baseline is true color, font ligatures, Unicode everywhere, and rendering performance that puts some web apps to shame.

Put them together: more developers spending more time in terminals that are more capable than ever, building with frameworks that make terminal UIs genuinely enjoyable to create.

So where's the design language?

## The Missing Manual

Web developers have Material Design, Apple's Human Interface Guidelines, WCAG accessibility standards, and a research tradition going back decades. Mobile developers have platform-specific HIG documents, accessibility mandates, and component libraries that enforce consistency.

Terminal developers have... vibes.

I went looking for the equivalent. Here's what I found:

- **[clig.dev](https://clig.dev/)**: a solid CLI design guide that
  [explicitly excludes TUIs](https://clig.dev/#introduction): "Full-screen terminal programs are niche projects; very few of us will ever be in the position to design one."
- **Base16 / Tinted Theming**: a color system with 230+ palettes. Covers color
  only. Nothing on layout, interaction, navigation, or component patterns.
- **A 1983 ACM paper** on terminal interface design. The last (and essentially
  only) academic work on the subject.
- **awesome-tuis**: the most-starred TUI resource list on GitHub. It's a catalog
  of apps. Zero design resources.

There is no HIG for terminal applications. No accessibility standard. No cross-framework design system. No academic research tradition. The closest thing is library documentation for individual frameworks, useful but framework-specific and focused on _how to build_, not _what to build_.

This gap isn't just an oversight. It's a massive opportunity. Terminals are a medium with their own affordances (information density, keyboard-first interaction, spatial memory, graceful degradation across connection quality) and they deserve design thinking that's native to those strengths, not borrowed from the web.

What follows is my attempt to start filling that gap. Not theory: lessons from building five production TUI applications across two frameworks, with a design system that spans all of them.

## Designing for 80 Columns

The first thing you learn building terminal UIs: every cell matters in a way that pixels don't. A web developer can throw a 32px margin on something and it disappears into the layout. In a terminal, a single wasted column is a percentage of your real estate. The constraint shapes everything.

### Layout as Architecture

Terminal layouts aren't just arrangements; they're _architectures_ that determine how users build mental models of your app. After building five apps and studying [23 exemplar TUIs](https://github.com/hyperb1iss/hyperskills), I've found that almost every successful terminal app falls into one of seven patterns:

**Persistent Multi-Panel**: Everything visible at once, panels in fixed positions. lazygit, btop, and Unifly all use this. The magic is _spatial consistency_: users learn that "network traffic is top-right" and their eyes go there automatically. You never rearrange panels without explicit user action. The user's spatial memory _is_ the navigation.

**Miller Columns**: Three columns showing parent, current, and preview. yazi and ranger use this for file navigation. The insight: hierarchical data has a natural horizontal flow. You see where you came from (left), where you are (center), and where you're going (right). Elegant for anything tree-shaped.

**Drill-Down Stack**: Browser-like navigation into increasingly specific views. k9s does this beautifully for Kubernetes (cluster → namespace → deployment → pod → container → logs), with `:resource` jumps for power users. The pattern for deep hierarchies where showing everything at once would be chaos.

**Widget Dashboard**: Independent, self-contained widgets in a grid. btop and bottom take this approach for system monitoring. Each widget owns its own data lifecycle and rendering. Good when the relationship between data is "these are all about the same system" rather than "these are all about the same item."

**IDE Three-Panel**: Sidebar, main content, and detail/output. Iris Studio, harlequin, and most development tools use some variant. The layout metaphor is: _navigate_ (left), _work_ (center), _inspect_ (right). Tab bars give the main panel multiple personalities.

**Overlay/Popup**: Appears over the shell, does one thing, disappears. atuin and fzf embody this. No state between invocations. The terminal equivalent of a modal dialog, summoned when needed, gone when done, never disrupting your scrollback.

**Header + Scrollable List**: Fixed header with stats, scrollable data below, function bar at the bottom. htop and tig. The oldest pattern and still one of the most effective for any "view a list of things with summary stats" use case.

The choice isn't arbitrary. When I built Unifly (a network dashboard), persistent multi-panel was obvious: network state is best understood _all at once_, with your eyes learning where each metric lives. When I built Iris Studio (an AI git workflow), IDE three-panel was the right call, because you're working on one thing at a time but need navigation and context flanking the main content.

Picking the wrong layout is like picking the wrong data structure. Everything downstream gets harder.

### Seven Principles

I've codified the design patterns that work across all seven layout types into principles. I won't enumerate them as a numbered list; that's not how they work in practice. Instead, they're threads that run through every decision:

**Spatial consistency** is the foundation. Panels don't move. Tabs stay in order. The user builds a mental map of your app in the first minute and navigates by _location memory_ after that. Every time you shuffle the layout, you reset their spatial model to zero.

**Keyboard-first, mouse-optional** means every feature is reachable without a mouse, but mouse support isn't an afterthought either. The reason: terminal power users are keyboard people, but beginners discovering your app will click. Support both; optimize for keys.

**Progressive disclosure** is how you avoid the "wall of keyboard shortcuts" problem. Three tiers: a footer bar showing the 3-5 most important keys (always visible), a `?` help overlay with the full keybinding reference (on demand), and complete documentation for everything else. Beginners see the floor. Experts find the ceiling. Nobody reads a manual to get started.

**Semantic color** means color carries _meaning_, not decoration. Green means success. Red means danger. Yellow means caution. If you stripped all color from your app and it became unusable, your design is broken. Color should reinforce information hierarchy that's already established through layout, typography, and symbols. More on this shortly.

**Async everything** is non-negotiable in 2026. Never freeze the UI. File operations, network calls, AI generation: all background tasks with progress indicators. The user should always be able to press `Esc` and get back to a responsive interface. A TUI that hangs is a TUI that gets killed.

**Contextual intelligence** means your interface adapts to what the user is doing _right now_. Keybindings change when focus moves between panels. The status bar reflects current state. Help shows shortcuts that are actually available in this context. The UI earns trust by always being accurate about what's possible.

**Design in layers** is the principle I wish someone had told me on day one. Start with monochrome: is the app _usable_ with no color at all? Then add 16 ANSI colors: is the hierarchy _readable_? Then layer in true color: is it _beautiful_? Each tier is independent. Your app works on a monochrome SSH session _and_ looks stunning in Ghostty. That's not a tradeoff; it's a design discipline.

### The Vim Question

One pattern that emerged across every framework and every app I built: vim keybindings are the terminal lingua franca.

Not because every terminal user runs vim. But because `j`/`k` for up/down, `h`/`l` for left/right, `/` for search, `?` for help, `g`/`G` for top/bottom, and `Esc` to go back is the most information-dense navigation vocabulary ever designed. It's six keystrokes that handle 80% of navigation. And it's muscle memory for exactly the audience that builds and uses TUIs.

I structure keyboard interaction in four layers:

- **L0 (Universal)**: Arrow keys, Enter, Escape, `q` to quit. Shown in the
  footer. Anyone can use this.
- **L1 (Vim motions)**: `j`/`k`/`h`/`l`, `/`, `?`, `:`. Also shown in the
  footer. Terminal natives expect this.
- **L2 (Actions)**: Single mnemonic keys: `d` for delete, `s` for stage, `r` for
  refresh. Discoverable through the `?` help overlay.
- **L3 (Power)**: Composed commands, macros, configuration. Documentation only.
  The ceiling for experts who've invested the time.

Each layer is invisible until the user reaches for it. That's progressive disclosure applied to keyboard interaction.
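To make the layering concrete, here's a minimal Python sketch (the action names are illustrative, not from any of the apps): each layer is a plain dict, and a key resolves through the most specific layer first, so contextual bindings can shadow the universal ones.

```python
# Four-layer keymap sketch. L3 (power) lives in config/docs, so it's
# modeled here as an optional context layer passed in at dispatch time.

L0_UNIVERSAL = {"up": "cursor_up", "down": "cursor_down",
                "enter": "select", "esc": "back", "q": "quit"}
L1_VIM = {"j": "cursor_down", "k": "cursor_up",
          "h": "panel_left", "l": "panel_right",
          "/": "search", "?": "help"}
L2_ACTIONS = {"d": "delete", "s": "stage", "r": "refresh"}

def resolve(key, context_layer=None):
    """Look the key up most-specific-first; return None if unbound."""
    for layer in (context_layer or {}, L2_ACTIONS, L1_VIM, L0_UNIVERSAL):
        if key in layer:
            return layer[key]
    return None
```

The lookup order is the whole trick: when focus moves to a panel that supplies its own context layer, `d` can mean something different there without touching the global tables.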

## Color as Information Architecture

Color in a terminal is a _resource_, not a paintbrush. You have a constrained palette compared to the web, a wildly unpredictable rendering environment (users run every terminal emulator and theme combination imaginable), and an audience that may be looking at your app over SSH on a 16-color connection.

### The Three-Tier Model

The golden rule: **usable at 16 colors, beautiful at true color**.

Your app encounters terminals in three capability tiers:

**16 ANSI colors**: The foundation. These are the colors the user's terminal theme controls. When you say "red," the terminal decides what red looks like. This means your reds match their theme. The upside: automatic coherence. The downside: no fine control. Design with named ANSI colors and your app blends into any terminal. This is your SSH-over-a-bad-connection baseline.

**256 colors**: Extended palette with fixed colors. You gain control but lose theme coherence. Your specific shade of purple will look the same on every terminal, which means it may clash with their background. Use sparingly for emphasis; don't build your entire palette here.

**True color (24-bit)**: Full control. 16 million colors. This is where you make it beautiful. But always remember: it's an enhancement layer over a 16-color foundation, not a replacement for one.

Detection is straightforward: check `$COLORTERM` for `truecolor` or `24bit`. Check `$TERM` for `256color`. Respect `$NO_COLOR` unconditionally: if it's set, strip all color. This isn't just accessibility; it's professional courtesy.
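That detection logic fits in a few lines. A Python sketch of the convention just described (the function name and tier labels are mine):

```python
import os

def detect_color_tier(env=os.environ):
    """Return 'none', 'ansi16', 'ansi256', or 'truecolor' from
    conventional environment variables. NO_COLOR always wins."""
    if "NO_COLOR" in env:  # https://no-color.org convention
        return "none"
    if env.get("COLORTERM", "") in ("truecolor", "24bit"):
        return "truecolor"
    if "256color" in env.get("TERM", ""):
        return "ansi256"
    return "ansi16"  # the SSH-over-a-bad-connection baseline
```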

### Semantic Color Slots

The insight that changed how I think about terminal color: define colors by _function_, not appearance.

Instead of "this panel border is `#e135ff`," it's "focused panel borders use `accent.primary`." Instead of "errors are `#ff6363`," it's "errors use `status.error`." A semantic layer between your code and your colors.

Here's the vocabulary I use across all five apps:

- **text.primary**: Main body text. Off-white on dark backgrounds.
- **text.muted**: Secondary information, metadata, timestamps. Noticeably
  dimmer.
- **text.emphasis**: Headers, focused items. Bright, bold.
- **bg.base** → **bg.surface** → **bg.overlay**: Three background layers, each
  ~5-8% lighter. Creates depth without borders.
- **accent.primary**: Your brand color. Interactive elements, focused borders.
- **accent.secondary**: Supporting interactions. Secondary highlights.
- **status.success / .warning / .error / .info**: Exactly what they sound like.
- **git.staged / .modified / .untracked**: Domain-specific tokens for git apps.
- **diff.added / .removed**: Domain-specific tokens for diff views.

When your colors have semantic names, your entire app becomes theme-swappable overnight. Change the values behind the names; every screen updates instantly. I proved this across five apps with 20 different themes, same codebase, completely different personalities.
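A minimal sketch of the indirection in Python, with two hypothetical theme tables filling the same slots (the SilkCircuit hexes match the palette described later in the post; the second theme's values are placeholders):

```python
# UI code only ever asks for a slot name, so swapping themes
# touches zero rendering logic.

THEMES = {
    "silkcircuit-neon": {"accent.primary": "#e135ff",
                         "status.error": "#ff6363",
                         "text.primary": "#f8f8f2"},
    "cool-blue":        {"accent.primary": "#88c0d0",
                         "status.error": "#bf616a",
                         "text.primary": "#eceff4"},
}

class Theme:
    def __init__(self, name):
        self.slots = THEMES[name]

    def color(self, slot):
        # A KeyError here is a contract violation; catch it in tests.
        return self.slots[slot]

def focused_border_color(theme):
    # Rendering code references meaning, never hex.
    return theme.color("accent.primary")
```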

### Theming as Infrastructure

This is where most TUI developers stop: they pick some hex codes, scatter them through the codebase, and ship one look. Changing anything means grepping through 50 files.

I got tired of this after the second app. So I built [Opaline](https://github.com/hyperb1iss/opaline), a token-based theme engine for Ratatui that implements the semantic color model as actual infrastructure.

The pipeline:

> **Palette** (raw hex colors) → **Tokens** (semantic names that reference
> palette) → **Styles** (composed foreground + background + modifiers) →
> **Gradients** (multi-stop color interpolation)

Each layer references the one below it. Tokens like `text.primary` resolve to palette entries like `gray_50`. Styles like `keyword` compose a foreground token with bold. Gradients interpolate between palette entries for progress bars and visual effects.
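The gradient layer at the end of the pipeline is ordinary linear interpolation between palette entries. A rough Python sketch of the idea (not Opaline's actual code):

```python
def hex_to_rgb(h):
    h = h.lstrip("#")
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

def lerp_color(a, b, t):
    """Linearly interpolate between two hex colors, t in [0, 1]."""
    (r1, g1, b1), (r2, g2, b2) = hex_to_rgb(a), hex_to_rgb(b)
    return "#{:02x}{:02x}{:02x}".format(
        round(r1 + (r2 - r1) * t),
        round(g1 + (g2 - g1) * t),
        round(b1 + (b2 - b1) * t),
    )

def gradient(stops, n):
    """Sample n colors along a multi-stop gradient (e.g. a progress bar)."""
    out = []
    for i in range(n):
        pos = i / max(n - 1, 1) * (len(stops) - 1)
        seg = min(int(pos), len(stops) - 2)
        out.append(lerp_color(stops[seg], stops[seg + 1], pos - seg))
    return out
```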

The result: 20 builtin themes, including five [SilkCircuit](https://github.com/hyperb1iss/silkcircuit) variants (Neon, Soft, Glow, Vibrant, Dawn), plus Catppuccin, Dracula, Nord, Rose Pine, Gruvbox, Tokyo Night, and more. Every theme is validated against a contract test suite: 40+ tokens must be defined, 18+ styles must resolve, 5 gradients must interpolate correctly. Users can write their own themes as TOML files. Runtime switching costs nothing.

The bigger lesson isn't about Opaline specifically. It's that theming is _infrastructure_, the same way a design system is infrastructure for the web. If you want visual consistency across multiple apps, or you want to support user customization without chaos, you need a resolution pipeline with semantic indirection. Hardcoded hex codes in source files are a phase, not a strategy.

### SilkCircuit: A Terminal Design Language

To make the theme system concrete, I designed SilkCircuit as a cohesive visual identity for terminal applications. Not "use my colors," but "here's what a complete terminal design language looks like."

- **Electric Purple** (`#e135ff`): Brand, emphasis, focus states
- **Neon Cyan** (`#80ffea`): Interaction, file paths, tech elements
- **Coral** (`#ff6ac1`): Accents, hashes, constants
- **Electric Yellow** (`#f1fa8c`): Warnings, timestamps, attention
- **Success Green** (`#50fa7b`): Confirmations, additions, online states
- **Error Red** (`#ff6363`): Danger, deletions, offline states

Five variants prove the system works: Neon is electric and high-contrast. Soft is muted and comfortable. Glow adds bloom-like emphasis. Vibrant is saturated and bold. Dawn is a warm light theme. Same semantic slots, completely different energy. The design language is the _mapping_ from meaning to color, not the colors themselves.

## Five Apps, Two Frameworks

Theory is cheap. Here's what I actually learned by shipping.

### Unifly: The Dashboard

[Unifly](https://github.com/hyperb1iss/unifly) is a real-time network management dashboard for Ubiquiti UniFi controllers. Eight screens of live data: WAN traffic charts, device health, client lists, firewall rules, topology maps, event streams, historical analytics. Built in Rust with Ratatui.

**The design lesson: information density is a feature, not a problem.** Every cell on screen earns its place. WAN bandwidth charts use a dual-layer technique: HalfBlock area fills for the smooth body with Braille character line overlays for the crisp edge. Traffic bars use fractional block characters (`▏▎▍▌▋▊▉█`) for sub-cell precision that makes terminal charts feel surprisingly smooth. Status indicators use semantic symbols: `●` online, `○` offline, `◐` transitioning, `◉` pending adoption.
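The fractional-block trick is simple enough to sketch. Here's my own toy Python renderer (not Unifly's code): scale the value to eighths of a cell, draw full blocks, then pick the partial glyph for the remainder.

```python
BLOCKS = " ▏▎▍▌▋▊▉█"  # 0/8 through 8/8 of a cell

def bar(value, maximum, width):
    """Render a horizontal bar with sub-cell precision using eighth blocks."""
    if maximum <= 0:
        return " " * width
    frac = max(0.0, min(value / maximum, 1.0))
    eighths = round(frac * width * 8)
    full, rem = divmod(eighths, 8)
    out = "█" * full + (BLOCKS[rem] if rem else "")
    return out.ljust(width)  # pad so the bar always occupies `width` cells
```

With 8 glyph steps per cell, a 10-cell bar gets 80 distinct fill levels instead of 10, which is why terminal charts drawn this way look smooth.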

**The architecture lesson: never poll.** Unifly uses reactive streams, `tokio::watch` channels that push data changes to the UI. The TUI doesn't ask "has anything changed?" on a timer. It gets told when something changes. The difference in responsiveness is visceral.
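The push model translates to any language. Here's a toy Python/asyncio analogue of a `tokio::watch` channel (illustrative only): the consumer awaits a change notification instead of re-fetching on a timer.

```python
import asyncio

class Watch:
    """Toy analogue of tokio::watch: one slot holding the latest
    value; consumers sleep until the producer pushes a change."""

    def __init__(self, value):
        self.value = value
        self._changed = asyncio.Event()

    def send(self, value):
        """Producer side: store the new value and wake any waiters."""
        self.value = value
        self._changed.set()

    async def changed(self):
        """Consumer side: block until the next push, then read."""
        await self._changed.wait()
        self._changed.clear()
        return self.value

async def demo():
    stats = Watch({"wan_rx": 0})
    # Simulate a background poller/websocket task pushing fresh data.
    asyncio.get_running_loop().call_soon(stats.send, {"wan_rx": 1024})
    # The UI task wakes only when something actually changed.
    return await stats.changed()
```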

**The product lesson: the dual-product pattern.** Unifly ships as two binaries from the same codebase: `unifly` (CLI for scripting and automation, JSON output, composable with pipes) and `unifly-tui` (interactive dashboard for humans). One core, two faces. The CLI lets you `unifly devices --json | jq '.[] | select(.status == "offline")'`. The TUI lets you explore the same data visually, drill into details, restart devices. Neither is better; they serve different workflows.

![Unifly TUI tour showing real-time network stats, device health, and traffic charts](https://raw.githubusercontent.com/hyperb1iss/unifly/main/docs/static/img/unifly-tour.gif)

### Iris Studio: The AI Workflow

[Iris Studio](https://github.com/hyperb1iss/git-iris) is a six-mode AI-powered git workflow tool. Explore code semantically, generate commit messages, run code reviews, draft PR descriptions, create changelogs, write release notes, all from a three-panel TUI with a universal chat interface. Built in Rust with Ratatui.

**The design lesson: modes need visual identity.** Six modes could easily feel like six apps wearing a trench coat. Consistent three-panel layout across all modes (navigate left, work center, inspect right) with mode-specific content keeps it unified. Shift+letter shortcuts for mode switching build muscle memory fast.

**The architecture lesson: pure reducers make AI UIs predictable.** When an AI agent controls your UI, you need a state model you can reason about. Iris uses a Redux-style pure reducer where every state transition is a function from `(state, event) → (new state, side effects)`. No I/O inside the reducer. Agent responses flow through the same event system as keystrokes. This makes the entire UI testable, debuggable, and auditable.
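A hedged Python sketch of that shape (event names and fields are invented, not Iris Studio's actual types): the reducer returns a new state plus a *description* of side effects, and performs no I/O itself.

```python
def reducer(state, event):
    """Pure transition: (state, event) -> (new_state, effects)."""
    if event["type"] == "commit_message_generated":
        # An agent response arrives through the same event system
        # as a keystroke would.
        return {**state, "message": event["text"], "busy": False}, []
    if event["type"] == "regenerate_requested":
        # The side effect is described, not performed; a runtime
        # executes it and feeds the result back as another event.
        return ({**state, "busy": True},
                [{"effect": "call_model", "prompt": state["diff"]}])
    return state, []
```

Because nothing in `reducer` touches the outside world, every transition can be replayed, logged, and asserted on in tests — which is exactly what you want when an agent is driving the UI.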

**The interaction lesson: universal chat changes everything.** Press `/` in any mode and a chat overlay appears. Ask Iris to refine a commit message, explain a security finding in a review, or add detail to release notes, and it updates the content directly through tool calls. The AI isn't in a separate panel; it's accessible _from anywhere_ you're working. Context follows you.

![Iris Studio commit mode with three-panel layout and SilkCircuit Neon theme](https://raw.githubusercontent.com/hyperb1iss/git-iris/main/docs/images/git-iris-screenshot-1.png)

### q: The Minimalist

[q](https://github.com/hyperb1iss/q) is a tiny Claude Code CLI built with TypeScript, Bun, and Ink (React for terminals). One letter, four modes: query (fire-and-forget questions), pipe (Unix pipeline citizen), interactive (full TUI), and agent (tool-using AI).

**The design lesson: know when _not_ to be a TUI.** q's pipe mode is the opposite of a rich interface: raw text output, no markdown formatting, no code blocks, no decoration. It's a perfect Unix filter. `cat config.yaml | q "convert to json" > config.json`. The discipline is in _not_ rendering things when the context doesn't want rendering.

**The framework lesson: React's mental model works in terminals.** Ink maps React's component model directly to the terminal. `<Box flexDirection="column">`, `<Text color="cyan">`, `useState` for state, `useEffect` for side effects. If you know React, you know Ink. The conceptual overhead is near zero. Claude Code itself is built on Ink, and that's not a niche endorsement.

### Vigil: The Agent Orchestrator

[Vigil](https://github.com/hyperb1iss/vigil) is a PR lifecycle manager that dispatches AI agents to handle mechanical code review tasks. Card-based dashboard showing all your open PRs, six specialized agents (triage, fix, respond, rebase, evidence, learning), and a human-in-the-loop toggle that ranges from "approve every action" to "let agents run."

**The design lesson: state machines need visual language.** Vigil classifies PRs into five states: hot (needs attention now), waiting (blocked on something), ready (good to merge), dormant (stale), blocked (can't proceed). Each state maps to a color, an icon, and a card style. The dashboard _looks_ different when things are on fire versus when everything is calm. Color as information, not decoration.
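As a sketch, the state-to-visual mapping is just a lookup table keyed by the five states, with colors expressed as semantic slots (the glyphs and style names here are my assumptions, not Vigil's actual treatment):

```python
# Each PR state gets a color slot, an icon, and a card style,
# so the dashboard's look is derived entirely from classification.
PR_STATES = {
    "hot":     {"slot": "status.error",   "icon": "!", "style": "bold"},
    "waiting": {"slot": "status.warning", "icon": "~", "style": "normal"},
    "ready":   {"slot": "status.success", "icon": "+", "style": "bold"},
    "dormant": {"slot": "text.muted",     "icon": ".", "style": "dim"},
    "blocked": {"slot": "status.error",   "icon": "x", "style": "dim"},
}

def card_treatment(state):
    return PR_STATES[state]
```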

**The architecture lesson: the HITL/YOLO spectrum is a design decision.** Sometimes you want an agent to show you what it plans to do and wait for approval. Sometimes you want it to just handle things. The toggle between these modes is a UX feature, not a backend feature. It changes the entire interaction model of the dashboard. Building it taught me that human-AI control boundaries are UI design problems.

### Opaline: The Invisible One

[Opaline](https://github.com/hyperb1iss/opaline) is the theme engine underneath the other four. It doesn't have its own TUI. It _is_ the reason the other four look cohesive.

**What it taught: infrastructure is the unglamorous work that makes everything else possible.** Opaline is 20 builtin themes, a four-pass resolution pipeline, contract testing that validates every theme against a strict schema, and a `ThemeSelector` widget for drop-in theme pickers. Nobody sees it directly. Everyone benefits.

## Choosing Your Framework

I build in two frameworks, Ratatui (Rust) and Ink (TypeScript/React). Having shipped production apps in both, here's the actual decision guide:

**Reach for Ratatui** when your app is a dashboard, a monitor, or any data-heavy view that runs for hours. Immediate-mode rendering means you describe the entire UI every frame and the framework diffs the terminal buffer for you. Zero garbage collection pauses. Runs beautifully over SSH. Netflix, AWS, and OpenAI all ship Ratatui apps in production. It's the right tool for btop-shaped problems.

**Reach for Ink** when your app is conversational, agent-driven, or benefits from the npm ecosystem (syntax highlighting, markdown rendering, rich text). React's component model and hooks make state management familiar. Bun gives you fast startup and embedded SQLite. Claude Code is built on Ink. It's the right tool for chat-shaped problems.

**What they share is more interesting than how they differ.** Both ecosystems converge on the same design patterns: unidirectional data flow (events → state → render), vim keybindings as the default navigation model, footer key hints with `?` help overlays, semantic color systems, and action dispatch architectures. The framework is the least interesting choice you'll make. The design principles travel across both.

The real question isn't "Ratatui or Ink?" It's "what patterns does my app's data flow demand?" If you answer that well, the framework choice falls out naturally.

## Playwright for Terminals

Here's the part nobody else is talking about.

AI coding agents can write TUI code all day. Claude Code, Codex, Gemini CLI: they'll generate Ratatui components, Ink React trees, Bubbletea models without breaking a sweat. But they have a fundamental problem: **they're blind**.

When Claude Code runs your TUI app, it gets stdout text. It cannot see the layout. It cannot verify that panel borders line up. It cannot check that `j`/`k` navigates correctly. It cannot tell if the status bar is rendering in the right color. It's building a visual interface _without eyes_.

This isn't a theoretical gap. Claude Code's Bash tool [doesn't allocate a real PTY](https://github.com/anthropics/claude-code/issues/9881). Interactive programs hang. TUI apps corrupt terminal state. Gemini CLI shipped proper PTY support in October 2025; Claude Code still hasn't. The most capable AI coding agent in the world cannot interact with the category of applications we're building.

Web developers solved this problem years ago with Playwright and Cypress. The agent writes code, opens a browser, renders the page, inspects the DOM, takes screenshots, simulates interactions, and iterates. Test-driven development with eyes.

Terminals have nothing equivalent. Until now.

### ghostty-automator

I built [ghostty-automator](https://github.com/hyperb1iss/ghostty-automator), a purpose-built IPC layer for [Ghostty](https://ghostty.org/) that exposes the terminal's actual state to external processes.

Not scraped text. Not regex-parsed ANSI escape sequences. Not tmux pane captures. The terminal emulator itself tells you, through structured data over a Unix socket, exactly what's on screen: every cell's character, foreground color, background color, bold/italic/underline state, and cursor position. The full semantic state of the rendered terminal.

A [Python library](https://github.com/hyperb1iss/ghostty-automator-python) wraps this with Playwright-style async ergonomics:

- `terminal.send("cargo run")`: send a command
- `terminal.wait_for_text("Listening on")`: wait for specific output
- `terminal.screen()`: read what's on screen as text
- `terminal.cells()`: read styled cells with color and formatting
- `terminal.screenshot()`: capture a PNG
- `terminal.press("KeyJ")`: send keystrokes
- `terminal.click(row=5, col=20)`: click at a position
- `terminal.expect.to_contain("Dashboard")`: assert content

An [AI agent skill](https://github.com/hyperb1iss/ghostty-automator-python/tree/main/skills/ghostty-terminal-automation) wraps the whole thing so any Claude Code agent can install terminal automation in one command:

```bash
npx skills add hyperb1iss/ghostty-automator-python
```

The agent gets the full API: send commands, read screens, take screenshots, click cells, assert content. No MCP server configuration, no protocol wiring. Just install and go.

### The Loop

Put the whole stack together and something remarkable happens:

The AI agent has **design knowledge**: a [3,000-line TUI design skill](https://github.com/hyperb1iss/hyperskills) covering layout paradigms, color theory, interaction patterns, accessibility requirements, and anti-patterns ranked by real-world complaint frequency.

It has **theming infrastructure**: Opaline, so it can work with semantic colors and swap themes without touching the layout code.

It has **frameworks**: Ratatui and Ink, which it already knows how to use from training data and documentation.

And now it has **eyes and hands**: ghostty-automator, so it can run the app in a real terminal, see the rendered output, interact with it through keystrokes and mouse events, and verify that what it built matches what it intended.

The loop closes: **design → build → run → see → fix → repeat**. The same workflow web developers have had for years, finally available for terminal applications.

Most terminal automation approaches parse ANSI byte streams, capture tmux panes, or run headless emulators. ghostty-automator is different: purpose-built IPC where the emulator itself participates. No parsing, no scraping, no guessing. The terminal tells you its state because you asked in its native protocol.

This is Playwright for terminals. And I think it changes what's possible.

## What Comes Next

The terminal stopped being the environment you escaped from. It became the environment you returned to, because it's actually better for how serious work happens now.

AI agents made it the control plane for software development. Modern frameworks made it beautiful. A generation of Rust and Go tooling made it a pleasure to live in. And now the infrastructure exists for those same agents to build, test, and iterate on terminal interfaces autonomously.

We have design principles for a medium that never had them. We have theming systems that bring design-system rigor to the terminal. We have frameworks in multiple languages that make building TUIs genuinely enjoyable. And we have an automation layer that gives AI agents eyes.

What we need now is more people building beautiful things. The terminal is a canvas. The tools are ready. The renaissance is here.

---

**Projects mentioned in this post:**

- [Opaline](https://github.com/hyperb1iss/opaline): Token-based theme engine for
  Ratatui (20 builtin themes)
- [Unifly](https://github.com/hyperb1iss/unifly): UniFi network management CLI +
  TUI
- [Git-Iris](https://github.com/hyperb1iss/git-iris): AI-powered git workflow
  with Iris Studio TUI
- [q](https://github.com/hyperb1iss/q): The tiniest Claude Code CLI
- [Vigil](https://github.com/hyperb1iss/vigil): AI-powered PR lifecycle manager
- [ghostty-automator](https://github.com/hyperb1iss/ghostty-automator): Terminal
  automation IPC for Ghostty
- [ghostty-automator-python](https://github.com/hyperb1iss/ghostty-automator-python):
  Playwright-style Python API + AI agent skill
- [ChromaCat](https://github.com/hyperb1iss/chromacat): Terminal colorization
  with animated gradient patterns and 40+ themes
- [SilkCircuit](https://github.com/hyperb1iss/silkcircuit): Electric meets
  elegant — terminal design language and theme system
- [tui-design skill](https://github.com/hyperb1iss/hyperskills): 3,000-line TUI
  design knowledge base for AI agents

**Frameworks:**

- [Ratatui](https://ratatui.rs/): Rust terminal UI framework (18.7K stars, used
  by Netflix/AWS/OpenAI)
- [Ink](https://github.com/vadimdemedes/ink): React for the terminal
  (TypeScript)
- [Bubbletea](https://github.com/charmbracelet/bubbletea): Elm architecture for
  Go TUIs (40K stars)
- [Textual](https://github.com/Textualize/textual): Python TUI framework with
  CSS-like styling

**Further reading:**

- [AI coding tools are shifting to the terminal](https://techcrunch.com/2025/07/15/ai-coding-tools-are-shifting-to-a-surprising-place-the-terminal/),
  TechCrunch
- [Your terminal is an AI runtime now](https://adelzaalouk.me/2026/feb/22/terminals-agents-and-the-control-plane-nobody-built/),
  Adel Zaalouk
- [Claude Code is the Inflection Point](https://newsletter.semianalysis.com/p/claude-code-is-the-inflection-point),
  SemiAnalysis
- [Building a TUI Is Easy Now](https://hatchet.run/blog/tuis-are-easy-now),
  Hatchet
- [Learning From Terminals to Design the Future of User Interfaces](https://brandur.org/interfaces),
  Brandur
- [TUI Design](https://jensroemer.com/writing/tui-design/), Jens Roemer
]]></content:encoded>
            <author>stefanie@hyperbliss.tech (Stefanie Jane)</author>
        </item>
        <item>
            <title><![CDATA[Context Engineering: Orchestrating AI Agents for Maximum Impact]]></title>
            <link>https://hyperbliss.tech/blog/2026.01.26_context-engineering</link>
            <guid isPermaLink="false">https://hyperbliss.tech/blog/2026.01.26_context-engineering</guid>
            <pubDate>Mon, 26 Jan 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[A presentation on turning AI collaboration from chaos into zero-rework implementation through deliberate context engineering. Real numbers from building a complex voice assistant with 20+ parallel agents.]]></description>
            <content:encoded><![CDATA[
## The Problem

We've all been there. Vague prompts lead to endless back-and-forth. Context gets lost between sessions. Requirements surface mid-build. Code gets thrown away.

This isn't an AI problem. It's a _context_ problem.

## The Solution

**Context engineering** is the art of structuring information to maximize AI agent effectiveness—turning hours of work into minutes of zero-rework implementation.

I built a presentation demonstrating these principles using a real case study: bootstrapping **Project Haven**, a complex privacy-first voice assistant, with Claude Code.

The numbers speak for themselves:

| What               | Result       |
| ------------------ | ------------ |
| Parallel agents    | 20+          |
| Research documents | 80           |
| Lines of research  | 175,000      |
| Production code    | 47,000 lines |
| Time invested      | ~6 hours     |
| Rework commits     | **Zero**     |

## The Workflow

```
Brainstorm → Handoff → Questions → Swarm → Deep Dives → Decisions → Track → Ship
```

The presentation walks through each phase with real prompts, real agent conversations, and the actual techniques that made it work:

- **File references** over pasting content
- **Parallel swarms** for comprehensive research
- **Version currency** triggers for current docs
- **Question invitations** to surface assumptions
- **Deferred synthesis** for breadth-first exploration
- **Forced recommendations** to turn research into decisions

## The Meta Layer

Here's the fun part: the presentation about context engineering was itself built using context engineering. I mined conversation history from Haven, synthesized an 85KB case study, created detailed specs, and let parallel agents build the slides while [Sibyl](https://github.com/hyperb1iss/sibyl) tracked the work.

It's context engineering all the way down.

## See It Live

**[View the interactive presentation →](https://hyperb1iss.github.io/context-engineering-demo)**

Use arrow keys to navigate, Space to start the conversation widgets, and F for fullscreen. Best experienced on a big screen.

Source code on [GitHub](https://github.com/hyperb1iss/context-engineering-demo).

---

_Don't just prompt. Engineer the context._
]]></content:encoded>
            <author>stefanie@hyperbliss.tech (Stefanie Jane)</author>
        </item>
        <item>
            <title><![CDATA[HyperShell: Unifying Windows and Linux]]></title>
            <link>https://hyperbliss.tech/blog/2024.10.3_hypershell</link>
            <guid isPermaLink="false">https://hyperbliss.tech/blog/2024.10.3_hypershell</guid>
            <pubDate>Tue, 15 Oct 2024 00:00:00 GMT</pubDate>
            <description><![CDATA[HyperShell: a terminal setup that bridges Windows and Linux through WSL2, PowerShell, AstroNvim, and a curated set of modern CLI tools. One environment, both worlds.]]></description>
            <content:encoded><![CDATA[
## 🌟 Introduction

**HyperShell** is a terminal setup that bridges Windows and Linux. If you love Linux's flexibility but still use Windows for specific workloads, HyperShell gives you one cohesive environment across both.

---

## 🌈 The Hybrid Approach: Windows + WSL2

As a long-time Linux enthusiast who also appreciates certain aspects of Windows, I created HyperShell to blend the best of both worlds. Here's why this hybrid setup fits my workflow:

| Feature                  | Description                                                                                                       |
| ------------------------ | ----------------------------------------------------------------------------------------------------------------- |
| 🐧 **Linux Development** | Full Linux environment through WSL2 for all development work.                                                     |
| 🎵 **Music Production**  | Windows' audio driver support for DAWs like Ableton Live.                                                         |
| 🎮 **Gaming & Graphics** | Windows' gaming and graphics stack without rebooting.                                                             |
| 🔧 **Tool Flexibility**  | Switch between Linux and Windows tools freely, whether debugging code, editing video, or tinkering with hardware. |

---

## 🧰 Key Components of HyperShell

HyperShell brings together a curated set of tools:

1. 🖥️ **Windows Terminal**: A sleek, customizable command-line interface.
2. 🐧 **WSL2 with Ubuntu**: A fully integrated Linux experience right inside
   Windows.
3. 🔧 **PowerShell**: Enhanced for smoother interactions with the Windows
   system, including useful keybindings and productivity enhancements.
4. 📝 **AstroNvim**: A turbocharged Neovim setup (more on this below).
5. 🚀 **Starship**: A cross-shell prompt that's as customizable as it is
   beautiful.
6. 🔍 **FZF**: A fuzzy finder that makes searching files and history lightning
   fast.
7. 🌈 **LSD**: A modern replacement for `ls`, with lots of visual enhancements.
8. 🐙 **Git**: Version control with custom aliases for quick operations.
9. 🐳 **Docker**: Containerization, plus helpful aliases and integrations.
10. 🔑 **Keybindings and Custom Aliases**: Configured in both PowerShell and Zsh
    to boost efficiency.

---

## 🛠️ Setting Up HyperShell

Getting started:

1. Clone the repository:

   ```batch
   git clone https://github.com/hyperb1iss/dotfiles.git %USERPROFILE%\dev\dotfiles
   ```

2. Run the installation script (you'll need admin privileges):

   ```batch
   cd %USERPROFILE%\dev\dotfiles
   install.bat
   ```

This script takes care of:

- Installing essential tools via Chocolatey (PowerShell Core, Windows Terminal,
  Git, VS Code, Node.js, Python, Rust, Docker, and more).
- Setting up PowerShell modules for extended functionality.
- Configuring Git with custom aliases for a smoother workflow.
- Installing VS Code extensions commonly used by developers.
- Setting up WSL2 for seamless Linux integration.
- Installing and configuring Starship for a consistent prompt across shells.
- Setting up AstroNvim with pre-configured settings.

---

## 🌠 AstroNvim: The Neovim Setup

AstroNvim is my go-to Neovim configuration: feature-rich out of the box, without much hassle.

### Key Features:

- 🚀 Blazing fast startup time
- 📦 Easy plugin management
- 🎨 Beautiful and functional UI
- 📊 Built-in dashboard
- 🔍 Fuzzy finding with Telescope
- 🌳 File explorer with Neo-tree
- 👨‍💻 Powerful LSP integration

With HyperShell, AstroNvim is automatically installed and configured. To start using it, simply open Neovim:

```bash
nvim
```

For customization, head over to `~/AppData/Local/nvim/lua/user/init.lua`. AstroNvim comes with a ton of features out of the box, and you can tweak it to your heart's content.

---

## 🤖 Using HyperShell: Key Features and Commands

Key commands and keybindings:

### 🔍 Fuzzy Finding with FZF

- `Ctrl+f`: Fuzzy find files in the current directory and subdirectories
- `Alt+c`: Fuzzy find and change to a directory
- `Ctrl+r`: Fuzzy find and execute a command from history

### 📂 Enhanced Directory Navigation

- `cd -`: Go back to the previous directory
- `mkcd <dir>`: Create a directory and change into it
- `lt`: List files and directories in a tree structure

### 🎛️ Linux-style Aliases

- `ls`, `ll`, `la`: Colorful directory listings with LSD
- `cat`, `less`: Use bat for syntax highlighting
- `grep`, `find`, `sed`, `awk`: Use GNU versions for extended functionality
- `touch`, `mkdir`: Create files and directories
- `which`: Find the location of a command

### 🐧 WSL Integration

- `wsld`: Switch to the WSL environment
- `wslopen <path>`: Open a WSL directory in Windows Explorer
- `wgrep`, `wsed`, `wfind`, `wawk`: Run Linux commands from PowerShell

### 🐙 Git Shortcuts

- `gst`: Git status
- `ga`: Git add
- `gco`: Git commit
- `gpp`: Git push
- `gcp`: Git cherry-pick

### 🐳 Docker Shortcuts

- `dps`: List running Docker containers
- `di`: List Docker images

### 🔄 Reloading HyperShell

- `reload`: Reload the PowerShell profile to apply changes

These are just a few examples—explore the HyperShell dotfiles to discover more custom functions and aliases to speed up your workflow.

---

## 💪 Tips and Tricks

1. **Customize Key Bindings**: Modify keybindings in the PowerShell profile
   (`$PROFILE`) to suit your preferences.
2. **Extend Functionality**: Add your own aliases, functions, and scripts to the
   profile files to further tailor HyperShell to your needs.
3. **Explore Tools**: Dive deeper into the capabilities of tools like FZF, LSD,
   and bat. They have a lot to offer!
4. **Leverage WSL**: Make full use of WSL for Linux-specific tasks. You can
   easily switch between Windows and Linux environments.
5. **Learn Keybindings**: Commit the custom keybindings to muscle memory.
   They're designed to minimize hand movement and boost efficiency.
6. **Stay Updated**: Pull the latest changes from the dotfiles repo regularly to
   get updates and improvements.

---

## 🎬 Wrap-Up

HyperShell bridges Windows and Linux into one flexible terminal environment for developers, sysadmins, and creatives who work across both.

Check out the [GitHub repository](https://github.com/hyperb1iss/dotfiles) and give it a try. Open an issue or share your thoughts!
]]></content:encoded>
            <author>stefanie@hyperbliss.tech (Stefanie Jane)</author>
        </item>
        <item>
            <title><![CDATA[Creative Coding: The Birth of CyberScape]]></title>
            <link>https://hyperbliss.tech/blog/2024.09.29_developing_cyberscape</link>
            <guid isPermaLink="false">https://hyperbliss.tech/blog/2024.09.29_developing_cyberscape</guid>
            <pubDate>Mon, 30 Sep 2024 00:00:00 GMT</pubDate>
            <description><![CDATA[Building CyberScape: a high-performance interactive particle system inspired by the 8-bit demoscene, powering the header animation on hyperbliss.tech. Canvas2D, gl-matrix, object pooling, spatial partitioning, and adaptive rendering.]]></description>
            <content:encoded><![CDATA[
## Introduction

As a developer with roots in the 8-bit demoscene, I've always been fascinated by the art of pushing hardware to its limits to create stunning visual effects. The demoscene, a computer art subculture that produces demos (audio-visual presentations) to showcase programming, artistic, and musical skills[^1], taught me the importance of optimization, creativity within constraints, and the sheer joy of making computers do unexpected things.

When I set out to redesign my personal website, hyperbliss.tech, I wanted to capture that same spirit of innovation and visual spectacle, but with a modern twist. This desire led to the creation of CyberScape, an interactive canvas-based animation that brings the header of my website to life.

This post walks through how CyberScape works, the challenges I ran into, and the optimization techniques that make it run smoothly.

## The Vision

The concept for CyberScape was born from a desire to create a dynamic, cyberpunk-inspired backdrop that would not only look visually appealing but also respond to user interactions. I envisioned a space filled with glowing particles and geometric shapes, all moving in a 3D space and reacting to mouse movements. This animation would serve as more than just eye candy; it would be an integral part of the site's identity, setting the tone for the tech-focused and creative content to follow.

The aesthetic draws inspiration from classic cyberpunk works like William Gibson's "Neuromancer"[^2] and the visual style of films like "Blade Runner"[^3], blending them with the neon-soaked digital landscapes popularized in modern interpretations of the genre.

## The Technical Approach

### Core Technologies

CyberScape is built using the following technologies:

1. **HTML5 Canvas**: For rendering the animation efficiently. The Canvas API
   provides a means for drawing graphics via JavaScript and the HTML `<canvas>` element[^4].
2. **TypeScript**: To ensure type safety and improve code maintainability.
   TypeScript is a typed superset of JavaScript that compiles to plain JavaScript[^5].
3. **requestAnimationFrame**: For smooth, optimized animation loops. This method
   tells the browser that you wish to perform an animation and requests that the browser calls a specified function to update an animation before the next repaint[^6].
4. **gl-matrix**: A high-performance matrix and vector mathematics library for
   JavaScript that significantly boosts our 3D calculations[^7].

### Key Components

The animation consists of several key components:

1. **Particles**: Small, glowing dots that move around the canvas, creating a
   sense of depth and movement.
2. **Vector Shapes**: Larger geometric shapes (cubes, pyramids, etc.) that float
   in the 3D space, adding structure and complexity to the scene.
3. **Glitch Effects**: Occasional visual distortions to enhance the cyberpunk
   aesthetic and add dynamism to the animation.
4. **Color Management**: A system for handling color transitions and blending,
   creating a vibrant and cohesive visual experience.
5. **Collision Detection**: An optimized system for detecting and handling
   interactions between shapes and particles.
6. **Force Handlers**: Modules that manage attraction, repulsion, and other
   forces acting on shapes and particles.

## The Development Process

### 1. Setting Up the Canvas

The first step was to create a canvas element that would cover the header area of the site. This canvas needed to be responsive, adjusting its size when the browser window is resized:

```typescript
const updateCanvasSize = () => {
  const { width, height } = navElement.getBoundingClientRect()
  canvas.width = width * window.devicePixelRatio
  canvas.height = height * window.devicePixelRatio
  // Resizing the canvas resets its transform, so re-apply the DPI scale
  ctx.scale(window.devicePixelRatio, window.devicePixelRatio)
}

// Size the canvas once up front, then keep it in sync on resize
updateCanvasSize()
window.addEventListener('resize', updateCanvasSize)
```

This code ensures that the canvas always matches the size of its container and looks crisp on high-DPI displays.

### 2. Creating the Particle System

The particle system is the heart of CyberScape. Each particle is an instance of a `Particle` class, which manages its position, velocity, and appearance. With the integration of gl-matrix, we've optimized our vector operations:

```typescript
import { vec3 } from 'gl-matrix'

class Particle {
  position: vec3
  velocity: vec3
  size: number
  color: string
  opacity: number

  constructor(existingPositions: Set<string>, width: number, height: number) {
    this.resetPosition(existingPositions, width, height)
    this.size = Math.random() * 2 + 1.5
    this.color = `hsl(${ColorManager.getRandomCyberpunkHue()}, 100%, 50%)`
    this.velocity = this.initialVelocity()
    this.opacity = 1
  }

  update(deltaTime: number, mouseX: number, mouseY: number, width: number, height: number) {
    // Update position based on velocity
    vec3.scaleAndAdd(this.position, this.position, this.velocity, deltaTime)

    // Apply forces (e.g., attraction to mouse)
    if (vec3.distance(this.position, vec3.fromValues(mouseX, mouseY, 0)) < 200) {
      vec3.add(
        this.velocity,
        this.velocity,
        vec3.fromValues(
          (mouseX - this.position[0]) * 0.00001 * deltaTime,
          (mouseY - this.position[1]) * 0.00001 * deltaTime,
          0,
        ),
      )
    }

    // Wrap around edges
    this.wrapPosition(width, height)
  }

  draw(ctx: CanvasRenderingContext2D, width: number, height: number) {
    const projected = VectorMath.project(this.position, width, height)
    ctx.fillStyle = this.color
    ctx.globalAlpha = this.opacity
    ctx.beginPath()
    ctx.arc(projected.x, projected.y, this.size * projected.scale, 0, Math.PI * 2)
    ctx.fill()
  }

  // ... other methods
}
```

This implementation allows for efficient updating and rendering of thousands of particles, creating the illusion of a vast, dynamic space. The use of gl-matrix's `vec3` operations significantly improves performance for vector calculations.
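
One piece the snippet leans on but doesn't show is `ColorManager`. Here's a minimal sketch of what such a helper could look like; the hue bands are illustrative, not the values CyberScape actually ships:

```typescript
// Illustrative sketch of a ColorManager; the real class's hue ranges may differ.
class ColorManager {
  // Hue bands that read as "cyberpunk": cyans, electric blues, purples, magentas.
  private static readonly hueRanges: [number, number][] = [
    [180, 220], // cyans / electric blues
    [280, 320], // purples / magentas
  ]

  static getRandomCyberpunkHue(): number {
    const [min, max] = this.hueRanges[Math.floor(Math.random() * this.hueRanges.length)]
    return min + Math.random() * (max - min)
  }

  static getRandomCyberpunkColor(): string {
    return `hsl(${Math.round(this.getRandomCyberpunkHue())}, 100%, 50%)`
  }
}

console.log(ColorManager.getRandomCyberpunkColor()) // e.g. "hsl(301, 100%, 50%)"
```

Centralizing hue selection like this is what later makes it easy to keep colors consistent across particles, shapes, and glitch effects.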

### 3. Implementing Vector Shapes

To add more visual interest, we created a `VectorShape` class to represent larger geometric objects. With gl-matrix, we've enhanced our 3D transformations:

```typescript
import { vec3, mat4 } from 'gl-matrix'

abstract class VectorShape {
  vertices: vec3[]
  edges: [number, number][]
  position: vec3
  rotation: vec3
  color: string
  velocity: vec3

  constructor() {
    this.position = vec3.create()
    this.rotation = vec3.create()
    this.velocity = vec3.create()
    this.color = ColorManager.getRandomCyberpunkColor()
  }

  abstract initializeShape(): void

  update(deltaTime: number) {
    // Update position and rotation
    vec3.scaleAndAdd(this.position, this.position, this.velocity, deltaTime)

    vec3.add(this.rotation, this.rotation, vec3.fromValues(0.001 * deltaTime, 0.002 * deltaTime, 0.003 * deltaTime))
  }

  draw(ctx: CanvasRenderingContext2D, width: number, height: number) {
    const modelMatrix = mat4.create()
    mat4.translate(modelMatrix, modelMatrix, this.position)
    mat4.rotateX(modelMatrix, modelMatrix, this.rotation[0])
    mat4.rotateY(modelMatrix, modelMatrix, this.rotation[1])
    mat4.rotateZ(modelMatrix, modelMatrix, this.rotation[2])

    const projectedVertices = this.vertices.map((v) => {
      const transformed = vec3.create()
      vec3.transformMat4(transformed, v, modelMatrix)
      return VectorMath.project(transformed, width, height)
    })

    ctx.strokeStyle = this.color
    ctx.lineWidth = 2
    ctx.beginPath()
    this.edges.forEach(([start, end]) => {
      ctx.moveTo(projectedVertices[start].x, projectedVertices[start].y)
      ctx.lineTo(projectedVertices[end].x, projectedVertices[end].y)
    })
    ctx.stroke()
  }

  // ... other methods
}
```

This abstract class serves as a base for specific shape implementations like `CubeShape`, `PyramidShape`, etc. These shapes add depth and structure to the scene, creating a more complex and engaging visual environment. The use of gl-matrix's matrix operations (`mat4`) significantly improves the efficiency of our 3D transformations.

### 4. Adding Interactivity

To make CyberScape responsive to user input, we implemented mouse tracking and used the cursor position to influence particle movement:

```typescript
canvas.addEventListener('mousemove', (event) => {
  const rect = canvas.getBoundingClientRect()
  mouseX = event.clientX - rect.left
  mouseY = event.clientY - rect.top
})

// In the particle update method:
if (vec3.distance(this.position, vec3.fromValues(mouseX, mouseY, 0)) < 200) {
  vec3.add(
    this.velocity,
    this.velocity,
    vec3.fromValues(
      (mouseX - this.position[0]) * 0.00001 * deltaTime,
      (mouseY - this.position[1]) * 0.00001 * deltaTime,
      0,
    ),
  )
}
```

This creates a subtle interactive effect where particles are gently attracted to the user's cursor, adding an engaging layer of responsiveness to the animation.

### 5. Implementing Glitch Effects

To enhance the cyberpunk aesthetic, we added occasional glitch effects using pixel manipulation:

```typescript
class GlitchEffect {
  apply(ctx: CanvasRenderingContext2D, width: number, height: number, intensity: number) {
    const imageData = ctx.getImageData(0, 0, width, height)
    const data = imageData.data

    for (let i = 0; i < data.length; i += 4) {
      if (Math.random() < intensity) {
        const offset = Math.floor(Math.random() * 50) * 4
        // ?? keeps legitimate 0 channel values and only falls back past the buffer end
        data[i] = data[i + offset] ?? data[i]
        data[i + 1] = data[i + offset + 1] ?? data[i + 1]
        data[i + 2] = data[i + offset + 2] ?? data[i + 2]
      }
    }
    }

    ctx.putImageData(imageData, 0, 0)
  }
}
```

This effect is applied periodically to create brief moments of visual distortion, reinforcing the digital, glitchy nature of the cyberpunk world we're creating.

## Performance Optimizations

Making it look good is one thing. Making it run smoothly on every device is where it gets interesting. In the spirit of the demoscene, where every CPU cycle and byte counts[^8], I optimized aggressively.

### 1. Efficient Rendering with Canvas

The choice of using the HTML5 Canvas API was deliberate. Canvas provides a low-level, immediate mode rendering API that allows for highly optimized 2D drawing operations[^9].

```typescript
const ctx = canvas.getContext('2d')

function draw() {
  // Clear the canvas
  ctx.clearRect(0, 0, canvas.width, canvas.height)

  // Draw background
  ctx.fillStyle = 'rgba(0, 0, 0, 0.1)'
  ctx.fillRect(0, 0, canvas.width, canvas.height)

  // Draw particles and shapes
  particlesArray.forEach((particle) => particle.draw(ctx))
  shapesArray.forEach((shape) => shape.draw(ctx))

  // Apply post-processing effects
  glitchManager.handleGlitchEffects(ctx, width, height, timestamp)
}
```

By carefully managing our draw calls and using appropriate Canvas API methods, we ensure efficient rendering of our complex scene.

### 2. Object Pooling for Particle System

To avoid garbage collection pauses and reduce memory allocation overhead, we implement an object pool for particles. This technique, commonly used in game development[^10], significantly reduces the load on the garbage collector, leading to smoother animations with fewer pauses:

```typescript
class ParticlePool {
  private pool: Particle[]
  private maxSize: number

  constructor(size: number) {
    this.maxSize = size
    this.pool = []
    this.initialize()
  }

  private initialize(): void {
    for (let i = 0; i < this.maxSize; i++) {
      this.pool.push(new Particle(new Set<string>(), 0, 0))
    }
  }

  public getParticle(width: number, height: number): Particle {
    if (this.pool.length > 0) {
      const particle = this.pool.pop()!
      particle.reset(new Set<string>(), width, height)
      return particle
    }
    return new Particle(new Set<string>(), width, height)
  }

  public returnParticle(particle: Particle): void {
    if (this.pool.length < this.maxSize) {
      this.pool.push(particle)
    }
  }
}
```

### 3. Optimized Collision Detection

We optimize our collision detection by using a grid-based spatial partitioning system, which significantly reduces the number of collision checks needed:

```typescript
class CollisionHandler {
  public static handleCollisions(shapes: VectorShape[], collisionCallback?: CollisionCallback): void {
    const activeShapes = shapes.filter((shape) => !shape.isExploded)
    const gridSize = 100 // Adjust based on your needs
    const grid: Map<string, VectorShape[]> = new Map()

    // Place shapes in grid cells
    for (const shape of activeShapes) {
      const cellX = Math.floor(shape.position[0] / gridSize)
      const cellY = Math.floor(shape.position[1] / gridSize)
      const cellZ = Math.floor(shape.position[2] / gridSize)
      const cellKey = `${cellX},${cellY},${cellZ}`

      if (!grid.has(cellKey)) {
        grid.set(cellKey, [])
      }
      grid.get(cellKey)!.push(shape)
    }

    // Check collisions only within the same cell and neighboring cells
    grid.forEach((shapesInCell, cellKey) => {
      const [cellX, cellY, cellZ] = cellKey.split(',').map(Number)

      for (let dx = -1; dx <= 1; dx++) {
        for (let dy = -1; dy <= 1; dy++) {
          for (let dz = -1; dz <= 1; dz++) {
            const neighborKey = `${cellX + dx},${cellY + dy},${cellZ + dz}`
            const neighborShapes = grid.get(neighborKey) || []

            for (const shapeA of shapesInCell) {
              for (const shapeB of neighborShapes) {
                if (shapeA === shapeB) continue

                const distance = vec3.distance(shapeA.position, shapeB.position)

                if (distance < shapeA.radius + shapeB.radius) {
                  // Collision detected, handle it
                  this.handleCollisionResponse(shapeA, shapeB, distance)

                  if (collisionCallback) {
                    collisionCallback(shapeA, shapeB)
                  }
                }
              }
            }
          }
        }
      }
    })
  }
}
```

This approach ensures that we only perform expensive collision resolution calculations when shapes are actually close to each other, a common optimization technique in real-time simulations[^11].

### 4. Efficient Math Operations with gl-matrix

One of the most significant optimizations we've implemented is the use of gl-matrix for our vector and matrix operations. This high-performance mathematics library is specifically designed for WebGL applications, but it's equally beneficial for our Canvas-based animation:

```typescript
import { vec3, mat4 } from 'gl-matrix'

class VectorMath {
  public static project(position: vec3, width: number, height: number) {
    const fov = 500 // Field of view
    const minScale = 0.5
    const maxScale = 1.5
    const scale = fov / (fov + position[2])
    const clampedScale = Math.min(Math.max(scale, minScale), maxScale)
    return {
      x: position[0] * clampedScale + width / 2,
      y: position[1] * clampedScale + height / 2,
      scale: clampedScale,
    }
  }

  public static rotateVertex(vertex: vec3, rotation: vec3): vec3 {
    const m = mat4.create()
    mat4.rotateX(m, m, rotation[0])
    mat4.rotateY(m, m, rotation[1])
    mat4.rotateZ(m, m, rotation[2])

    const v = vec3.clone(vertex)
    vec3.transformMat4(v, v, m)
    return v
  }
}
```

By using gl-matrix, we benefit from highly optimized vector and matrix operations that are typically faster than hand-rolled, object-based vector math in JavaScript. This is particularly important for our 3D transformations and projections, which are performed frequently in the animation loop.

### 5. Render Loop Optimization

We use `requestAnimationFrame` for the main render loop, ensuring smooth animation that's in sync with the browser's refresh rate[^12]:

```typescript
let lastTime = 0

function animateCyberScape(timestamp: number) {
  const deltaTime = timestamp - lastTime
  if (deltaTime < config.frameTime) {
    animationFrameId = requestAnimationFrame(animateCyberScape)
    return
  }
  lastTime = timestamp

  // Update logic
  updateParticles(deltaTime)
  updateShapes(deltaTime)

  // Render
  draw()

  // Schedule next frame
  animationFrameId = requestAnimationFrame(animateCyberScape)
}

// Start the animation loop
requestAnimationFrame(animateCyberScape)
```

This approach allows us to maintain a consistent frame rate while efficiently updating and rendering our scene. By using `deltaTime`, we ensure that our animations remain smooth even if some frames take longer to process, a technique known as delta timing[^13].
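
Delta timing is worth a tiny self-contained illustration: because movement is scaled by elapsed milliseconds, a second of simulation covers the same distance whether it's sliced into 60 frames or 30:

```typescript
// Frame-rate-independent movement: advance position by velocity * elapsed ms,
// the same update the particle loop performs via vec3.scaleAndAdd.
function advance(position: number, unitsPerMs: number, frames: number, frameMs: number): number {
  for (let i = 0; i < frames; i++) {
    position += unitsPerMs * frameMs
  }
  return position
}

// Both runs simulate 990 ms, so they travel the same distance
// (up to floating-point noise) despite different frame counts.
const fast = advance(0, 0.1, 60, 16.5) // ~60 fps
const slow = advance(0, 0.1, 30, 33)   // ~30 fps
console.log(Math.abs(fast - slow) < 1e-9) // true
```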

### 6. Lazy Initialization and Delayed Appearance

To improve initial load times and create a more dynamic scene, we implement lazy initialization for particles:

```typescript
class Particle {
  // ... other properties
  private appearanceDelay: number
  private isVisible: boolean

  constructor() {
    // ... other initializations
    this.setDelayedAppearance()
  }

  setDelayedAppearance() {
    this.appearanceDelay = Math.random() * 5000 // Random delay up to 5 seconds
    this.isVisible = false
  }

  updateDelay(deltaTime: number) {
    if (!this.isVisible) {
      this.appearanceDelay -= deltaTime
      if (this.appearanceDelay <= 0) {
        this.isVisible = true
      }
    }
  }

  draw(ctx: CanvasRenderingContext2D) {
    if (this.isVisible) {
      // Actual drawing logic
    }
  }
}

// In the main update loop
particlesArray.forEach((particle) => {
  particle.updateDelay(deltaTime)
  if (particle.isVisible) {
    particle.update(deltaTime)
  }
})
```

This technique, known as lazy loading[^14], allows us to gradually introduce particles into the scene, reducing the initial computational load and creating a more engaging visual effect. It's particularly useful for improving perceived performance on slower devices.

### 7. Adaptive Performance Adjustments

We implement an adaptive quality system that adjusts the number of particles and shapes based on the window size and device capabilities:

```typescript
class CyberScapeConfig {
  // ... other properties and methods

  public calculateParticleCount(width: number, height: number): number {
    const isMobile = width <= this.mobileWidthThreshold
    let count = Math.max(this.baseParticleCount, Math.floor(width * height * this.particlesPerPixel))
    if (isMobile) {
      count = Math.floor(count * this.mobileParticleReductionFactor)
    }
    return count
  }

  public getShapeCount(width: number): number {
    return width <= this.mobileWidthThreshold ? this.numberOfShapesMobile : this.numberOfShapes
  }
}

// In the main initialization and resize handler
function adjustParticleCount() {
  const config = CyberScapeConfig.getInstance()
  numberOfParticles = config.calculateParticleCount(width, height)
  numberOfShapes = config.getShapeCount(width)

  // Adjust particle array size
  while (particlesArray.length < numberOfParticles) {
    particlesArray.push(particlePool.getParticle(width, height))
  }
  particlesArray.length = numberOfParticles

  // Adjust shape array size
  while (shapesArray.length < numberOfShapes) {
    shapesArray.push(ShapeFactory.createShape(/* ... */))
  }
  shapesArray.length = numberOfShapes
}

window.addEventListener('resize', adjustParticleCount)
```

This ensures that the visual density of particles and shapes remains consistent across different screen sizes while also adapting to device capabilities. This type of dynamic content adjustment is a common technique in responsive web design and performance optimization[^15].
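One practical note: `resize` fires continuously while the window is being dragged, so rebuilding the particle and shape arrays on every event is wasteful. In practice you'd debounce the handler. Here's a minimal sketch; the `debounce` helper is a generic utility, not part of the CyberScape codebase:

```typescript
// Trailing-edge debounce: collapses a burst of calls into a single
// invocation after `waitMs` of quiet. Each new call resets the timer,
// so the wrapped function runs once the resizing has settled.
function debounce<A extends unknown[]>(
  fn: (...args: A) => void,
  waitMs: number
): (...args: A) => void {
  let timer: ReturnType<typeof setTimeout> | undefined
  return (...args: A) => {
    if (timer !== undefined) clearTimeout(timer)
    timer = setTimeout(() => fn(...args), waitMs)
  }
}

// Usage with the resize handler from above:
// window.addEventListener('resize', debounce(adjustParticleCount, 150))
```

150 ms is a reasonable default: short enough that the scene adapts promptly once the drag stops, long enough to skip the intermediate events.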

## Challenges and Lessons Learned

Developing CyberScape wasn't without its challenges. Here are some of the key issues I faced and the lessons learned:

1. **Performance Bottlenecks**: Initially, the animation would stutter on mobile
   devices. Profiling the code revealed that the particle update loop and collision detection were the culprits. By implementing object pooling, spatial partitioning for collision detection, and adaptive quality settings, I was able to significantly improve performance across all devices. The introduction of gl-matrix for vector and matrix operations provided an additional performance boost.

2. **Browser Compatibility**: Different browsers handle canvas rendering
   slightly differently, especially when it comes to blending modes and color spaces. I had to carefully test and adjust the rendering code to ensure consistent visuals across browsers. Using the `ColorManager` class helped standardize color operations across the project.

3. **Memory Management**: Long-running animations can lead to memory leaks if
   not carefully managed. Implementing object pooling, ensuring proper cleanup of event listeners, and using efficient data structures were crucial in maintaining stable performance over time. The use of gl-matrix's stack-allocated vectors and matrices also helped in reducing garbage collection pauses.

4. **Balancing Visuals and Performance**: It was tempting to keep adding more
   visual elements, but each addition came at a performance cost. Finding the right balance between visual complexity and smooth performance was an ongoing challenge. The adaptive quality system helped in maintaining this balance across different devices.

5. **Responsive Design**: Ensuring that the animation looked good and performed
   well on everything from large desktop monitors to small mobile screens required careful consideration of scaling and adaptive quality settings. The `CyberScapeConfig` class became instrumental in managing these adaptations.

6. **Code Organization**: As the project grew, maintaining a clean and organized
   codebase became increasingly important. Adopting a modular structure with classes like `ParticlePool`, `ShapeFactory`, and `VectorMath` helped in keeping the code manageable and extensible. The integration of gl-matrix required some refactoring but ultimately led to cleaner, more efficient code.

These challenges echoed many of the limitations I used to face in the demoscene, where working within strict hardware constraints was the norm. It was a reminder that even with modern web technologies, efficient coding practices and performance considerations are still crucial.

## Conclusion

CyberScape blends the spirit of the demoscene with modern web capabilities. Efficient Canvas rendering, object pooling, spatial partitioning, and gl-matrix for high-performance math operations all combine to produce complex interactive graphics that run smoothly on modest hardware.

The modular structure (`Particle`, `VectorShape`, `ColorManager`, `GlitchEffect`) makes it easy to experiment with new features. Next steps might include WebGL for GPU-accelerated rendering, more advanced spatial partitioning, or Web Workers for offloading heavy computations.

Building CyberScape reminded me why the demoscene hooked me in the first place: the joy of making computers do unexpected things within tight constraints. The tools have evolved dramatically, but the creative challenge is the same.

## References

[^1]: Polgár, T. (2005). Freax: The Brief History of the Computer Demoscene. CSW-Verlag.

[^2]: Gibson, W. (1984). Neuromancer. Ace.

[^3]: Scott, R. (Director). (1982). Blade Runner [Film]. Warner Bros.

[^4]: Mozilla Developer Network. (2023). Canvas API. https://developer.mozilla.org/en-US/docs/Web/API/Canvas_API

[^5]: TypeScript. (2023). TypeScript Documentation. https://www.typescriptlang.org/docs/

[^6]: Mozilla Developer Network. (2023). window.requestAnimationFrame(). https://developer.mozilla.org/en-US/docs/Web/API/window/requestAnimationFrame

[^7]: gl-matrix. (2023). gl-matrix Documentation. http://glmatrix.net/docs/

[^8]: Reunanen, M. (2017). Times of Change in the Demoscene: A Creative Community and Its Relationship with Technology. University of Turku.

[^9]: Fulton, S., & Fulton, J. (2013). HTML5 Canvas: Native Interactivity and Animation for the Web. O'Reilly Media.

[^10]: Nystrom, R. (2014). Game Programming Patterns. Genever Benning.

[^11]: Ericson, C. (2004). Real-Time Collision Detection. Morgan Kaufmann.

[^12]: Grigorik, I. (2013). High Performance Browser Networking. O'Reilly Media.

[^13]: LaMothe, A. (1999). Tricks of the Windows Game Programming Gurus. Sams.

[^14]: Osmani, A. (2020). Learning Patterns. https://www.patterns.dev/posts/lazy-loading-pattern/

[^15]: Marcotte, E. (2011). Responsive Web Design. A Book Apart.
]]></content:encoded>
            <author>stefanie@hyperbliss.tech (Stefanie Jane)</author>
        </item>
        <item>
            <title><![CDATA[Designing for Emotion: Crafting Immersive Web Experiences Across Devices]]></title>
            <link>https://hyperbliss.tech/blog/2024.09.01_designing-for-emotion</link>
            <guid isPermaLink="false">https://hyperbliss.tech/blog/2024.09.01_designing-for-emotion</guid>
            <pubDate>Wed, 28 Aug 2024 00:00:00 GMT</pubDate>
            <description><![CDATA[How to build web experiences that make people feel something, not just click through. Color psychology, interactive storytelling, multi-sensory design, and the hard parts of getting it right across every screen size.]]></description>
            <content:encoded><![CDATA[
We get consumed by the latest frameworks, coding techniques, and aesthetic trends. But step back for a second: _Are we just building websites, or are we creating experiences that people actually remember?_ Emotional design goes beyond functionality and visuals to forge genuine connections.

## Why Emotional Design Matters

Think about a website or app that left you feeling inspired, understood, or delighted. It wasn't just the interface or the load times. It was how it **made you feel**.

1. **Lasting impressions**: Turn mundane interactions into moments people
   remember.
2. **Loyalty**: Emotional resonance brings people back and builds trust.
3. **Engagement**: Emotionally compelling content invites sharing, discussion,
   and community.

## Crafting Emotional Experiences Across Devices

Users jump between desktops, tablets, and smartphones constantly. Designing with emotion means tailoring the approach for each screen.

### 1. Color Psychology: Evoke the Right Feelings

Colors are emotional triggers. Understanding color psychology lets you set the right mood on every device.

- **Desktop**: Use expansive screens for rich gradients and subtle hues that
  create immersive environments.
- **Tablet**: Adaptive color schemes that respond to interactions like swipes or
  taps provide immediate emotional feedback.
- **Smartphone**: High-contrast colors make essential elements pop on smaller
  screens.

> **Tip:** On mobile, a minimalist palette with one dominant color makes a
> stronger emotional statement than a complex array of hues.

### 2. Typography and Microcopy: Words That Speak Volumes

Font choices and copy profoundly affect how users perceive your brand.

- **Desktop**: Variable fonts and dynamic type that respond to scrolling add a
  layer of engagement.
- **Tablet**: Responsive typography that adjusts between portrait and landscape
  keeps things readable.
- **Smartphone**: Concise, friendly microcopy turns routine messages into
  personable interactions.

> **Example:** Swap a generic "Submit" button for "Let's Make Magic!" to bring
> excitement to a form interaction.

### 3. Storytelling Through Interactive Design

Humans are natural storytellers and story listeners. Interactive design elements weave a narrative users can participate in.

- **Desktop**: Parallax effects, animated illustrations, and interactive
  infographics guide users through a compelling story.
- **Tablet**: Gesture-based interactions like drag-and-drop or tilt-to-reveal
  features make users feel part of the experience.
- **Smartphone**: Vertical storytelling formats (like social media stories)
  present content in engaging, bite-sized pieces.

> **Case study:** An environmental org could show a virtual tree that grows as
> the user navigates through the site, symbolizing their contribution.

### 4. Personalization: Unique User Journeys

Personal touches make users feel valued.

- **Desktop**: Customizable interfaces where users tailor dashboards or themes
  to their preferences.
- **Tablet**: Contextual cues like time of day or user behavior to adapt content
  dynamically.
- **Smartphone**: AI-driven suggestions for content or features based on
  individual patterns.

### 5. Engaging the Senses: Beyond Visuals

Appealing to multiple senses deepens emotional connections.

- **Desktop**: Subtle soundscapes that enhance the mood without overwhelming.
- **Tablet**: Haptic feedback providing tactile responses to interactions, like
  a gentle vibration upon completing a task.
- **Smartphone**: Micro-animations that respond to user input, giving satisfying
  visual feedback.

> Always provide options to mute sounds or disable animations. Accessibility
> matters.
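The escape hatch can be wired up with two standard browser features: the `prefers-reduced-motion` media query and `navigator.vibrate()`. A minimal sketch, assuming nothing about your framework (`prefersReducedMotion` and `tryVibrate` are hypothetical helper names):

```typescript
// Check the user's OS-level motion preference. Guarded so the code is
// safe to import in non-browser environments (SSR, tests).
function prefersReducedMotion(): boolean {
  const w = (globalThis as any).window
  return !!w && w.matchMedia("(prefers-reduced-motion: reduce)").matches
}

// Fire a gentle haptic pulse only if the user hasn't opted out and the
// device supports it (vibrate() is absent on desktop and iOS Safari).
function tryVibrate(pattern: number | number[]): boolean {
  if (prefersReducedMotion()) return false
  const nav = (globalThis as any).navigator
  if (!nav || typeof nav.vibrate !== "function") return false
  return nav.vibrate(pattern)
}

// e.g. a 30 ms pulse on task completion:
tryVibrate(30)
```

The same pattern applies to soundscapes and micro-animations: feature-detect, respect the preference, and degrade silently rather than erroring.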

### 6. Inclusivity in Emotional Design

Emotional design should work for a diverse audience.

- **Desktop**: Adjustable text sizes, screen reader compatibility, proper
  contrast.
- **Tablet**: Universal symbols and clear language that transcend barriers.
- **Smartphone**: Voice commands and dictation for users with different
  abilities.

Inclusivity broadens your audience _and_ enriches the emotional depth of your design by acknowledging and valuing diversity.

## Balancing Emotion with Functionality

Emotional richness doesn't replace practical design.

- **User-centric**: Emotional elements should _enhance_ usability. Navigation
  and core functions stay efficient.
- **Consistency**: A harmonious emotional tone across all touchpoints builds
  trust.
- **Performance**: Optimize load times and responsiveness. Emotional engagement
  evaporates when a site is sluggish.
- **Ethics**: Use emotional triggers responsibly. No manipulative tactics.

## Measuring Emotional Impact

Don't just build it and hope.

- **User feedback**: Interviews, surveys, and usability tests focused on
  emotional responses.
- **Behavioral analytics**: Session duration, click-through rates, conversion
  rates.
- **A/B testing**: Compare emotional design approaches and measure what
  resonates.

Use insights to refine continuously. The design should evolve with your users.

## Looking Ahead

- **AI personalization**: Adapting content in real-time based on interaction
  patterns.
- **VR/AR**: Immersive technologies open new frontiers for emotional engagement.
- **Biometric feedback**: Wearable devices provide real-time data on user state,
  enabling adaptive interfaces that respond to stress or excitement levels.

## Bringing It All Together

Emotional design is a shift toward more human-centered digital experiences. By integrating emotional elements across devices, you build meaningful relationships with your users, not just functional platforms.

Next time you're deep in design and development, look beyond the code and pixels. Ask: **How will this make users feel?** The answer could transform your project from something forgettable into something people talk about.
]]></content:encoded>
            <author>stefanie@hyperbliss.tech (Stefanie Jane)</author>
        </item>
    </channel>
</rss>