Analog TV Simulator
The physics-accurate old TV and VHS simulator for iPhone, iPad and Mac. Camera tube to CRT phosphor chemistry — simulated from first principles. Every artefact emerges from the signal itself. No shortcuts.
Early access via TestFlight: iPhone & iPad · Mac · Mac alpha build (unsigned, testing only)
See It In Action
NTSC — 3.58 MHz subcarrier dot crawl
PAL — Hanover bars (V-switch phase error)
SECAM — FM pre-emphasis fire at colour transitions
System A 405-line — Image Orthicon, P4 phosphor
Multipath — composite-domain echo interference
VHS — oxide dropout, head-switch noise, colour-under drift
Degauss — PTC thermistor field decay sweeps a rainbow wash across the shadow mask
Worn VHS tape — oxide dropout, head-switch noise, colour-under drift
VCR tracking loss — head/tape azimuth misalignment noise bars
V-hold roll — vertical sync separator threshold drift
SSAVI/Z-TAC — sinusoidal H-oscillator hunt + whole-field colour-negative inversion
The Full Simulation Pipeline
Real composite television worked end-to-end: a broadcast camera tube converted the scene to electrons, those electrons were encoded into a single voltage waveform, optionally recorded to magnetic tape, transmitted over RF, received, decoded, and finally driven into a CRT phosphor screen. The app replicates the complete chain — 9 GPU compute and render passes per frame — with no stage skipped and no effect faked in post.
Before a single sample of composite signal is written, the source image passes through a broadcast camera model calibrated against primary source measurements. Each of eight tube and sensor types applies, in order:

- telecine pulldown jitter (NTSC 3:2 cadence-gated — slipping only at A-frame boundaries, not every frame)
- 3-tube prism-block colour registration error (R and B channels independently offset; G is the reference)
- single-tube stripe-filter chroma bandwidth limiting (Newvicon: horizontal Gaussian on chroma only, luma stays sharp — models the ~1/3 chroma bandwidth of a frequency-interleaved stripe-filter faceplate)
- aperture MTF with horizontal-only scan ringing (tube optics diffract along the scan direction)
- sensor gamma and black-level pedestal (γ = 0.65 from Hamamatsu Sb₂S₃ datasheets; γ ≈ 1.0 for Plumbicon PbO)
- white-balance colour temperature drift that settles as the tube warms up from a cold start
- highlight bloom and knee compression
- Image Orthicon asymmetric dark halo (secondary-electron scatter weighted toward the scan direction per RCA datasheets)
- vertical smear, upward only (charge drains toward the shift-register readout end)
- illumination-dependent temporal lag (non-linear IIR — brighter scenes produce less lag, as the electron beam sweeps more charge from a brighter target)
- a slow burn-in accumulator that persists for approximately 115 seconds at 60 fps
- spatially-coherent film grain
- signal-dependent electronic noise scaling as √signal per Poisson statistics

Test patterns bypass this stage entirely.
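The illumination-dependent lag stage reduces to a small non-linear IIR step. A minimal sketch in Python, assuming a linear blend between the dim-scene (~25% residual) and bright-scene (~5%) figures cited for the Vidicon; the app's calibrated curve is not published here, and all names are mine:

```python
def lag_coefficient(brightness, k_dim=0.25, k_bright=0.05):
    # Residual-charge fraction surviving one frame, interpolated between a
    # dim-scene value (~25% residual) and a bright-scene value (~5%).
    # The linear blend is an illustrative assumption, not the app's curve.
    b = min(max(brightness, 0.0), 1.0)
    return k_dim + (k_bright - k_dim) * b

def apply_lag(prev_value, new_light):
    # One IIR step per frame: residue from the previous frame decays faster
    # when the incoming scene is brighter (the beam sweeps more charge).
    k = lag_coefficient(new_light)
    return new_light + k * max(prev_value - new_light, 0.0)
```

A bright object that vanishes leaves a fraction of itself on the following dark frames, decaying geometrically — the comet-tail behaviour described in the camera sections.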
The camera output is converted to a composite signal buffer — a flat array of floating-point voltage samples. Sample counts range from 790 per line for 405-line System A to 1135 for PAL; colour standards are sampled at 4× their subcarrier frequency, and monochrome standards use an equivalent rate for timing consistency. Subcarrier phase geometry (radians per sample and radians per line) is pre-computed on the CPU and passed as shader parameters: every colour standard uses the same GPU encoder with no hardcoded format branches. SECAM is the exception — FM requires continuous phase across adjacent samples, so a dedicated per-line CPU kernel handles it, writing into the same composite buffer consumed by the decode shader.
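The phase geometry itself is simple to precompute. A sketch (function name mine) of the two values handed to the shared encoder, assuming the 4×-subcarrier sampling described above:

```python
import math

def phase_geometry(subcarrier_cycles_per_line, oversample=4):
    # At 4x-subcarrier sampling, phase advances a quarter cycle per sample.
    # Per line, only the fractional cycle count matters: it sets how the
    # chroma pattern shifts from one scan line to the next.
    radians_per_sample = 2.0 * math.pi / oversample
    radians_per_line = (subcarrier_cycles_per_line % 1.0) * 2.0 * math.pi
    return radians_per_sample, radians_per_line

# NTSC: 227.5 cycles/line  -> half-cycle (180 deg) phase flip each line
# PAL:  283.75 cycles/line -> 270 deg advance each line (before the 25 Hz offset)
```

The 180° line-to-line flip is what makes the 1-line comb filter work for NTSC; PAL's 270° advance is why its delay-line comb runs on the demodulated U/V instead.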
The buffer now contains what would be on the coaxial cable between a broadcast transmitter and your television aerial. Sync pulses at −40 IRE, blanking at 0, picture from 0 to 100 IRE, colour burst on the back porch — all in one waveform. Luminance, chrominance, and timing are inseparable, exactly as they were in real hardware. Everything downstream operates on this buffer. There is no separate colour signal, no separate luma signal — there is only the composite.
Six effects are injected directly into the composite buffer before decode:

- multipath ghosts (three independent delayed echoes at configurable amplitudes and delays, each operating on the full composite waveform including subcarrier — chroma smear and hue shifts from multipath are genuine consequences, not approximations)
- airplane flutter (two-oscillator beat modulation of ghost amplitude at ~0.68 and ~0.51 Hz)
- mains hum bars (50/60 Hz sinusoidal baseline modulation that rolls upward at the mains–field beat frequency)
- co-channel interference (a beating carrier producing a drifting diagonal "venetian blind" pattern)
- impulse noise (broadband RF spikes from vehicle ignition and motors)
- IF filter ringing (a 15-tap Kaiser-windowed FIR bandpass kernel run at the composite sample rate, reproducing the passband roll-off and stop-band ripple of a real IF strip — transitions ring at the subcarrier frequency, overshoot luminance edges, and smear chroma in the time domain)

Because every effect enters before the decoder, they interact authentically with the comb filter, chroma demodulator, and SECAM FM discriminator — not as a visual overlay, but as corrupted signal.
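The multipath stage is the simplest of these to picture: each echo is the same waveform, delayed and attenuated, summed back in. A hedged sketch (names and the sample-domain framing are mine):

```python
def add_ghosts(composite, echoes):
    # 'echoes' is a list of (delay_in_samples, amplitude) pairs. Each echo
    # adds a delayed copy of the FULL waveform, subcarrier included, so
    # chroma smear and hue shifts fall out of the decode stage rather than
    # being painted on afterwards.
    out = list(composite)
    for delay, amp in echoes:
        for i in range(delay, len(composite)):
            out[i] += amp * composite[i - delay]
    return out
```

Because the colour burst is part of the waveform, a delayed echo also shifts the decoded burst phase — which is why real ghosting drags hue along with it.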
Receiver-side dynamic effects are modelled alongside the static signal chain. Noise is injected with a 1/f spatial spectrum — 70% flat Gaussian at the composite sample rate plus 30% spatially-correlated low-frequency grain (4-sample and 16-sample horizontal groupings, 2-line vertical grouping, each octave at −3 dB) — giving the characteristic coarser-textured snow of real analog receivers rather than fine uniform digital grain. The colour decoder runs two feedback loops: an Automatic Chrominance Control (ACC) loop measures burst amplitude each field and adjusts chroma gain with a 1-field attack / 2-field decay time constant — at high noise levels the burst estimate is degraded and gain oscillates, producing a subtle saturation modulation at 2–5 Hz characteristic of low-SNR analogue colour. An Automatic Frequency Control (AFC) subcarrier PLL tracks burst phase through a 1 Hz loop filter; when signal is weak the noisy burst estimate drives a slow hue wobble at the AFC bandwidth, a behaviour familiar from marginal fringe reception. Ref: Benson & Whitaker, Television Engineering Handbook (1992).
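The ACC loop's asymmetric time constant can be sketched as a one-pole step per field. This is an assumed form (names mine); the app's actual loop filter may differ:

```python
def acc_step(gain, burst_amplitude, target=1.0, attack_fields=1.0, decay_fields=2.0):
    # Chroma gain chases target/burst. Per the text, the loop pulls gain
    # down within ~1 field (attack) and lets it recover over ~2 fields
    # (decay). A noisy burst estimate therefore makes the gain, and with
    # it the saturation, wobble at a few hertz.
    desired = target / max(burst_amplitude, 1e-6)
    fields = attack_fields if desired < gain else decay_fields
    return gain + (desired - gain) / fields
```

Feeding this a burst amplitude jittered by noise reproduces the low-SNR saturation modulation described above.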
When a VCR format is selected, the composite buffer passes through a tape-domain processing stage before decode — exactly as a real VCR inserted a lossy record-and-playback loop between the tuner and the television. Eight formats are modelled from primary specifications. Consumer formats (VHS SP/LP/EP, Betamax, S-VHS, Hi8) use a colour-under heterodyne system: chroma is stripped, downconverted to a low carrier (629 kHz for VHS, 688 kHz for Betamax, 732 kHz for Hi8), recorded separately, and upconverted on playback. Speed variation in the tape transport FM-modulates this low carrier, producing hue instability proportional to Δφ = 2πf·Δt — visible as colour shimmer on saturated areas. Luma is recorded as an FM carrier whose deviation directly determines horizontal resolution: VHS SP (3.4–4.4 MHz, 1.0 MHz deviation → 240 TVL), S-VHS (5.4–7.0 MHz, 1.6 MHz deviation → 420 TVL), Hi8 (5.7–7.7 MHz, 2.0 MHz → 415 TVL). U-Matic uses the same colour-under principle at 688 kHz but with a higher-quality transport giving lower flutter. Betacam SP records Y, R-Y, and B-Y as separate FM tracks (Y: 6.8–8.8 MHz; C: 5.6–7.3 MHz, CTDM), eliminating colour-under entirely — no hue jitter, Y-SNR above 50 dB. Mechanical artifacts are also modelled: tape skew (tension-induced timing error that grows linearly from the bottom to the top of the frame, bending the top of the picture), wow and flutter (capstan speed variation of 0.02–0.1% wrms producing sinusoidal horizontal line-timing oscillation at 0.5–3 Hz), tape dropout (random white-level substitutions of 1–15 µs from missing oxide particles, Poisson-distributed across lines), head-switching noise (the two-head transition 6.5H before vertical sync, visible in the lower overscan as a noise band), and the aperture-correction detail-enhancer circuit present in consumer decks from the mid-1980s (a high-frequency shelf of +3–6 dB at ~2 MHz, producing the characteristic slight ringing on VHS-enhanced material).
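The Δφ = 2πf·Δt relation is worth making concrete. A worked form in Python; the example numbers are illustrative, not measured values:

```python
import math

def hue_error_degrees(carrier_hz, timing_error_s):
    # A transport timing error dt on a colour-under carrier f appears,
    # after playback upconversion, as a phase (hue) error of 2*pi*f*dt.
    return math.degrees(2.0 * math.pi * carrier_hz * timing_error_s)

# Illustrative: a 100 ns flutter excursion on the 629 kHz VHS carrier
vhs_example = hue_error_degrees(629e3, 100e-9)   # ~22.6 degrees of hue shift
```

A timing error far too small to see as geometry still rotates hue by a very visible amount — which is why colour-under formats shimmer on saturated areas while Betacam SP, with no colour-under stage, does not.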
Generation loss is modelled discretely: each re-recording adds a full tape noise floor, further limits bandwidth, and de-saturates chroma.
Decode runs as sequential GPU compute passes on the composite buffer, operating at the full composite sample rate throughout — no downsampling occurs until the final texture write. First, a 1-line comb filter (signalYCSeparate) separates luminance and chrominance into independent float buffers: Y = (cur + prev) / 2, C = (cur − prev) / 2. This works identically for NTSC and PAL because the subcarrier inverts phase every line, so averaging cancels chroma and differencing cancels luma. The dot-crawl slider blends the separated signals back toward raw composite, reproducing the imperfect comb behaviour of televisions without a delay-line chip. Second, signalChromaFIR demodulates the C buffer to baseband I/Q or U/V using Hann-windowed quadrature FIR filters with the correct per-channel bandwidth from the broadcast specification: NTSC I channel (1.3 MHz, 11 taps), NTSC Q channel (0.4 MHz, 35 taps), PAL U/V (1.3 MHz symmetric, 13 taps), VCR-format chroma (0.5 MHz, matching the colour-under carrier bandwidth). PAL V-line switching is applied at this stage. Third, for PAL, signalPALComb applies a 2-line glass delay-line comb to the U/V buffer: iq[n] = (iq[n] + iq[n−1]) / 2. Because the PAL subcarrier advances 270° per line, U is in phase on adjacent lines while V inverts; averaging doubles U and cancels residual V error — the glass delay-line principle that eliminates Hanover bars (the fine horizontal hue banding visible on sets without a delay line chip). The first active line is passed through unchanged to avoid saturation loss from averaging with the zeroed VBI above it. Fourth, signalLumaFIR limits luma bandwidth with a 13-tap Hann-windowed sinc LPF to 4.2 MHz for NTSC or 5.0 MHz for PAL — scaled by the front-panel bandwidth control. A final GPU pass reads the separated Y and I/Q (or U/V) buffers, applies the standard colour matrix, and writes the output texture.
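The first-stage separation is compact enough to restate directly. The Y/C split from signalYCSeparate, in Python:

```python
def yc_separate(cur_line, prev_line):
    # Y = (cur + prev) / 2 cancels chroma, because the subcarrier inverts
    # phase on adjacent lines; C = (cur - prev) / 2 cancels luma.
    y = [(c + p) / 2.0 for c, p in zip(cur_line, prev_line)]
    chroma = [(c - p) / 2.0 for c, p in zip(cur_line, prev_line)]
    return y, chroma
```

With a toy signal of constant luma 0.5 plus a line-inverting chroma of ±0.2, the split recovers both components; real content only approximates the ideal of line-to-line similarity, and the residue is where dot crawl and cross-colour come from.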
SECAM takes a separate path entirely. A dedicated signalSECAMFMDecode kernel runs before the Y/C stages, scanning the composite buffer with a 12-tap Hann-windowed bandpass filter centred on 4.250 MHz (even lines, Db) or 4.406 MHz (odd lines, Dr), demodulating each channel via FM limiting and differential phase discrimination, and writing pre-computed Y, Db, and Dr into dedicated float buffers. The absent channel on each line is reconstructed from the previous line's delay buffer. The final composite decode pass reads these pre-computed values directly so that the FM chain runs once at the composite sample rate rather than per output pixel — and all SECAM artifacts (fire, chroma smear, FM noise interactions) emerge from the signal without special-casing in the texture write. The artifacts that emerge across all standards — dot crawl, cross-colour, chroma smear, ringing, SECAM fire — are genuine consequences of the decode algorithm encountering real signal content.
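SECAM's line-alternate reconstruction is easy to mis-state, so here is the delay-buffer behaviour as a sketch. The (is_even, value) input format is my own simplification; the app works on demodulated float buffers:

```python
def reconstruct_db_dr(demodulated_lines):
    # Each line carries only one colour-difference signal: Db on even
    # lines, Dr on odd. The other component is held over from the
    # previous line's delay buffer, so every output line has both.
    db, dr = 0.0, 0.0
    out = []
    for is_even, value in demodulated_lines:
        if is_even:
            db = value
        else:
            dr = value
        out.append((db, dr))
    return out
```

The held-over component is always one line stale, which is why SECAM's vertical colour resolution is halved relative to its luma resolution.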
A state machine models the CRT power cycle — preheat, on, and cooling. Each frame it computes: raster geometry during deflection collapse, EHT discharge voltage, shutdown spot parameters (beam core intensity, phosphor halo radius, position jitter scaling with tube age), the high-voltage sag coefficient that causes the raster to breathe with scene brightness, arc probability, and ageing level. Every downstream rendering decision — phosphor excitation gating, EHT brightness scaling, arc flash rendering — is derived from this physics state, not from ad-hoc visual parameters.
The decoded frame drives a per-pixel temporal accumulator gated by the beam scan window. Inside the beam window each pixel accumulates new excitation weighted by a per-channel decay constant drawn from measured phosphor persistence data (RCA TPM-1508A, 1961). Outside the window each pixel simply fades — exactly as a real phosphor does while the beam is elsewhere on screen. For single-compound phosphors (P31 green, P45 white, P4 B&W, P3 amber) the decoded signal is first collapsed to luma and then re-tinted with each phosphor's measured CIE emission chromaticity — eliminating false-colour subcarrier cross-talk that would arise if a full-colour path were applied to a chemically monochromatic screen.
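The accumulator's per-pixel rule can be sketched in a few lines. The combination rule here is an assumption for illustration; the app's excitation weighting and per-channel constants come from the measured persistence data:

```python
def phosphor_pixel_step(stored, excitation, persistence, in_beam_window):
    # 'persistence' is the fraction of emission surviving one frame for
    # this colour channel. Outside the beam window the pixel only fades;
    # inside it, fresh excitation takes over when brighter than the residue.
    faded = stored * persistence
    return max(faded, excitation) if in_beam_window else faded
```

Long-persistence phosphors (high persistence values) trail behind motion; short ones flicker more but smear less — both behaviours fall out of the same per-pixel step.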
The final render pass reads the accumulated phosphor texture and the tube physics state. During normal operation: EHT sag UV scaling (raster contracts 1–5% on bright scenes, proportional to ageing), scanline darkening from beam width, halation bloom from lateral glass scattering, shadow mask / aperture grille / slot mask overlay (sub-pixel, screen-pixel-locked), barrel distortion from the curved faceplate, convergence geometry offset per gun, vignette. During power transitions: warmup centre spot, or the full shutdown sequence — vertical raster collapse, horizontal bright line, centre spot rendered as two overlapping Gaussians (tight beam core + wide phosphor halo, P22 green-biased), HV arc flashes (probability scaling with tube age), and optional spot-killer blanking. All service-menu geometry (pincushion, trapezoid, tilt) is applied at UV sampling.
Broadcast Camera Emulation
Before a frame enters the composite encoder, the simulation models what happened in the studio. Each tube and sensor type is calibrated against primary source measurements — manufacturer datasheets, peer-reviewed papers, and broadcast engineering literature — so that the physical characteristics of the original hardware determine the parameter values, not aesthetic intuition. Lag, halation, noise, burn-in, vertical smear, and film grain enter the composite encoder as if the camera had produced them. Every artifact that emerges downstream — dot crawl, comb filter interaction, chroma smear — is there because the signal carried it from the sensor.
Full camera controls reference →
The most widely deployed broadcast camera tube of the 1950s and 60s. The Vidicon projects the scene onto a photoconductor (antimony trisulfide) that accumulates a charge pattern proportional to light intensity, then scans it with a low-velocity electron beam. Low cost and reasonable sensitivity made it standard for news cameras, studio cameras, and early video surveillance.
The photoconductor's slow charge-release rate produces the Vidicon's defining flaw: temporal lag. When a bright object moves, the charge pattern doesn't clear instantly — it leaves a comet tail streaking behind the object. Hamamatsu Sb₂S₃ datasheets (types N513, 7262A, 7735A, 8844) measure ~25% residual at the 3rd frame under dim illumination (0.05 fc) dropping to ~5% at 2 fc: the electron beam sweeps more charge from a brighter target. This illumination dependence is physically modelled — lag is not a fixed constant but scales with scene brightness. Static bright objects burn a slowly-decaying residue into the photoconductor; measured persistence in tube literature is on the order of minutes, corresponding to a decay half-life of approximately 115 seconds. The aperture MTF diffraction is horizontal (scan-direction), γ = 0.65 is the value stated explicitly in Hamamatsu datasheets for all Sb₂S₃ types, and colour stability during warm-up is modelled from cold start.
Developed by Philips in 1965, the Plumbicon replaced antimony trisulfide with a lead oxide (PbO) p-i-n photoconductive layer that drained its charge pattern far more rapidly. Levitt (SMPTE, 1970) measured less than 2% signal remaining at the 3rd field under 2 fc studio lighting. Mullard's XQ1410 production datasheet (1987) records 7% at Is=20 nA under dim conditions — the discrepancy is illumination: the PbO junction sweeps nearly completely at studio brightness but retains measurable lag at minimum sensitivity. These two measurements were reconciled and used directly to calibrate the simulation's illumination-dependent lag model. Later diode-gun variants (Mullard XQ5002, Philips/Amperex) reached 59–60 dB SNR and 700+ TVL limiting resolution.
Colour Plumbicon cameras used a 3-tube prism block — one tube per primary colour — whose physical registration drifted with temperature and mechanical age. This produced colour fringing on high-contrast edges — coloured halos around text and object boundaries — one of the most recognisable signatures of vintage broadcast footage. The green tube is the phase reference; red and blue tubes drift independently, producing the characteristic asymmetric fringing seen on original broadcast recordings. Highlight bloom from PbO saturation and colour temperature drift during warm-up are calibrated from the same Mullard datasheet series.
Introduced by Hitachi in 1974, the Saticon uses a selenium–arsenic–telluride (SeAsTe) photoconductor with a tighter, sharper aperture MTF response than the Plumbicon — Hitachi's PF-Z31A reached 750 TVL limiting resolution and 58 dB SNR. Temporal lag is comparable to the Plumbicon and slightly higher in controlled tests (engineering literature consistently places Saticon lag between Plumbicon and Vidicon). Studio cameras equipped with Saticons were prized for clean, crisp rendering of fine detail. The 3-tube Saticon configuration became the preferred choice for high-end production cameras in the late 1970s and 1980s.
Like the Plumbicon, 3-tube Saticon cameras produced colour fringing from tube-to-tube registration drift — typically more pronounced than Plumbicon because the thinner SeAsTe target layer required tighter mechanical tolerances to achieve its MTF advantage. The Saticon's other distinctive artifact is a subtle vertical smear — a short trail above bright objects — caused by photocarrier diffusion along the vertical axis of the selenium layer. Because diffusion is a physical process with a defined direction relative to the readout geometry, the smear is unidirectional: it extends upward from saturated highlights, not downward or bidirectionally.
The Image Orthicon dominated professional broadcast from the late 1940s until the Vidicon displaced it in the early 1960s. Photons struck a photocathode that emitted electrons which were magnetically focused onto a target; a low-velocity return beam scanned the target and fed secondary electrons into an electron multiplier. The result was extraordinary sensitivity — Image Orthicons could operate in candle-lit conditions that would defeat any later tube.
That sensitivity came at a price. The returning secondary electron beam produced a distinctive dark halo around bright objects — the IO's most recognisable signature. RCA technical documentation records a secondary emission ratio of 4–5 electrons emitted per incident photoelectron: those that miss the collection mesh return to the target and suppress signal in the surrounding area. Critically, secondary electrons scatter preferentially along the horizontal scan direction due to the geometry of the electron optics and collection mesh — making the halo asymmetric rather than circular. The strongest darkening appears on the horizontal leading edge of bright objects (the crescent shape visible on original IO footage), with weaker suppression on vertical edges. The halo model uses scan-direction-weighted gradient magnitudes with a leading-edge asymmetry term derived from this physical geometry. Combined with heavy temporal lag, a ~40 dB SNR ceiling (IO noise has a low-frequency spatial structure distinct from the white noise of later tubes), and a long burn-in decay, the Image Orthicon produces a visual aesthetic unlike any camera before or since.
Developed by Matsushita (Panasonic) in 1974, the Newvicon uses a cadmium zinc telluride (CdZnTe) photoconductor in place of the antimony trisulfide of the Vidicon. CdZnTe offers higher sensitivity, lower dark current, and a flatter spectral response into the red, but retains meaningful temporal lag — Mullard's 1987 production data measures 8–15% residual signal at 60 ms (≈ 3 fields at PAL), compared to 15–25% for Sb₂S₃ Vidicons under equivalent conditions. The result is comet tails that are shorter than a Vidicon's but far more pronounced than any Plumbicon or Saticon of the same era. Modelled on the Panasonic PK-958 service manual specification: 300 horizontal TVL, >45 dB SNR, minimum illuminance 7 lux at F/1.4, 2/3" tube diameter.
The Newvicon's most distinctive feature is its single-tube colour design. Where Plumbicon and Saticon cameras split light through a glass prism block onto three separate tubes — one per primary colour — the PK-958 achieves colour by placing a frequency-interleaved stripe filter directly on the Newvicon's faceplate. Alternating filter stripes at sub-millimetre pitch spatially encode colour as a high-frequency carrier on the photoconductive surface; the camera's internal electronics demodulate this carrier and reconstruct the colour signal. The demodulation low-pass filter is the bandwidth bottleneck: the colour output bandwidth is approximately one-third of the luminance bandwidth — roughly 100 TVL where the tube delivers 300 TVL of luminance detail. The simulation models this as a horizontal Gaussian low-pass filter applied exclusively to the chroma channel before composite encode; luminance is left untouched. The net effect is that colours bleed and smear horizontally — a soft, watercolour quality — while fine monochrome detail remains sharp. The blurred-chroma signal then propagates through the full composite encode and decode chain, where composite chroma bandwidth (~1.3 MHz NTSC/PAL) acts on it further.
Eliminating the prism block also eliminates 3-tube registration drift entirely — there is only one tube to misalign. The PK-958's colour fringing is effectively zero under normal conditions, in contrast to Plumbicon and Saticon cameras where prism-block drift produces characteristic coloured halos around high-contrast edges. The trade-off — reduced colour resolution for better portability and lower cost — made the Newvicon and related single-tube designs (Trinicon, Saticon-stripe) the dominant choice for ENG (electronic news gathering) and consumer camcorders through the late 1970s and 1980s.
The first generation of charge-coupled device cameras entered broadcast production in the early 1980s. CCDs eliminated lag, burn-in, and comet trails entirely — but introduced new electronic artifacts. The most distinctive is vertical smear: when a bright light source saturates a pixel well, excess charge bleeds through the vertical shift register during readout. The physics of CCD readout are asymmetric — charge transfers in one direction along the column — so the smear streak is strictly upward, not bidirectional. This is a commonly misrepresented artifact: many simulations show a bilateral pillar, but physical shift-register overflow produces a one-sided column above the overloaded source.
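The one-sidedness is easy to express in code. A sketch of column readout overflow, with illustrative threshold and bleed values and an assumed row ordering (row 0 = top of frame):

```python
def ccd_column_smear(column, well_capacity=1.0, bleed=0.3):
    # Walk the column from the readout end upward, carrying overflow from
    # any pixel above well capacity. The streak lands only at lower row
    # indices -- strictly above the overloaded source, never below it.
    out, carry = [], 0.0
    for value in reversed(column):
        out.append(value + carry)
        carry += max(value - well_capacity, 0.0) * bleed
    out.reverse()
    return out
```

A saturated pixel mid-column produces a bright streak above itself and leaves the rows below untouched — the one-sided pillar, as opposed to the bilateral one many simulations draw.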
Early 3-chip CCD cameras had larger prism-block manufacturing tolerances than tube cameras — the silicon die mounting process was less precise than the glass tube alignment used in 3-tube Plumbicon rigs — producing more severe colour registration drift across the frame. Signal-dependent shot noise in silicon follows Poisson statistics: noise amplitude scales as √signal, so absolute noise peaks in the midtones rather than the shadows, unlike the fixed-floor dark noise of tube cameras.
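The √signal scaling translates directly into a noise model. A sketch with an illustrative gain constant:

```python
import math
import random

def add_shot_noise(signal, gain=0.01, rng=None):
    # Poisson shot noise approximated as Gaussian with sigma ~ sqrt(signal):
    # absolute noise grows with brightness, so shadows stay cleaner than
    # under the fixed-floor dark noise of tube cameras.
    rng = rng or random.Random(0)
    return [s + rng.gauss(0.0, gain * math.sqrt(max(s, 0.0))) for s in signal]
```

Zero signal picks up zero shot noise; in a tube-noise model the dark floor would dominate instead, which is the visible difference between CCD and tube shadows.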
Before videotape became practical for feature film broadcast in the 1970s, movies and pre-recorded programming reached audiences via the film chain — a movie projector aimed at a broadcast camera in a darkened telecine suite. Converting 24 fps film to 25 fps PAL or 29.97 fps NTSC required a mechanical intermittent mechanism that introduced its own characteristic instabilities.
The film chain's characteristic instabilities all have mechanical causes. Pulldown jitter — vertical frame position variation — occurs at film-frame boundaries, when the intermittent sprocket engages and releases. In NTSC 3:2 pulldown, a 24 fps film frame is held for either 2 or 3 video fields in the repeating A-A-B-B-B pattern: jitter occurs at the A-frame transitions (every 5 video frames), not continuously. Simulating jitter on every frame, as most approaches do, produces a visually incorrect result — too much motion, wrong cadence. Weave (horizontal instability from film shrinkage and gate pressure variation) is independent of the pulldown cadence and occurs frame-to-frame. Film grain is photochemical — silver halide crystals clustered in spatial patches, correlated within a frame, independent between frames. Film gamma (≈ 0.75) and the extended shadow detail of photochemical processes give film-originated material its distinctive tonal separation from directly-captured video.
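The cadence gating can be made precise with a small sketch (function and output format mine):

```python
def pulldown_fields(film_frames):
    # NTSC 3:2 pulldown: alternate film frames are held for 2 then 3
    # video fields (A-A-B-B-B ...). Jitter is gated to the start of each
    # 4-film-frame cadence cycle (the A-frame), not applied every frame.
    fields, jitter_field_indices = [], []
    for i, frame in enumerate(film_frames):
        if i % 4 == 0:                       # A-frame engages here
            jitter_field_indices.append(len(fields))
        fields.extend([frame] * (2 if i % 2 == 0 else 3))
    return fields, jitter_field_indices
```

Eight film frames map to 20 fields (10 video frames), with jitter points 10 fields apart — every 5 video frames, matching the cadence described above.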
By 1983 the pieces of portable video had existed for a decade — VHS recorders, shoulder-mount camera heads, battery packs — but they remained separate, heavy, and expensive enough to be the domain of broadcasters and well-funded news organisations. The Panasonic PK-958 was among the first wave of equipment aimed at the emerging prosumer market: a compact shoulder-mount VHS movie camera that coupled directly to a portable VHS recorder deck and placed broadcast-style video production within reach of sports clubs, schools, and serious enthusiasts. It was not a fully integrated "camcorder" in the later all-in-one sense, but the camera head and deck were designed as a matched pair and routinely sold together — the combination weighed around 4.5 kg, compared to over 15 kg for an equivalent professional ENG rig.
The camera section used a single 2/3" Newvicon tube (Matsushita's cadmium zinc telluride photoconductor) with a frequency-interleaved stripe colour filter on the faceplate. The specified performance — 300 TVL horizontal resolution, greater than 45 dB signal-to-noise ratio, minimum scene illuminance 7 lux at F/1.4 — was competitive with professional ENG cameras of a few years earlier. Single-tube colour design eliminated the prism block and three-tube registration problems of broadcast cameras, accepting reduced colour bandwidth (roughly one-third of luminance resolution, ~100 TVL colour vs 300 TVL luminance) in exchange for lower weight, lower cost, and better low-light sensitivity from the full faceplate area contributing to luminance.
A feature that distinguished the PK-958 from simpler consumer cameras of the period was its built-in character title generator, driven by a Fujitsu MB88303 TVDC (Television Display Controller) chip. The MB88303 is a dedicated single-chip character generator: it maintains a display RAM, drives a 5×7 dot matrix character ROM containing 64 characters across four groups (A–M, N–Z and symbols, digits and punctuation, extended symbols and kanji), and outputs two video signals — VOW (video output white, the character pixels at full white level) and VOB (video output black, a black-level background box behind each character cell). A Display Control Register on the chip controls three independent functions: BLK blanks the entire display, BLKB disables the background boxes while leaving character pixels visible, and BLINK enables global character blinking at approximately 1 Hz. Individual characters in display RAM carry a per-character blink flag (bit 6 of each RAM location), so specific words or digits can be made to blink independently of others. Four grid modes selected by the HSZ and VSZ registers set the character size: from a fine 9-row × 20-column layout to a coarse 3-row × 7-column layout suitable for large text. The PK-958's physical keyboard exposed the alphanumeric characters and the most common punctuation marks, but the full MB88303 ROM — including the up-arrow, filled and background-only block characters, double-dash, raised comma, reference mark (※), moon and sun kanji (月, 日), and house/telephone glyph — was not fully accessible from the factory keyboard.
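The per-character blink flag is a one-bit affair; a minimal sketch of the display-RAM packing the paragraph describes (helper names are mine):

```python
BLINK_BIT = 0x40   # bit 6 of each display-RAM location, per the description

def pack_ram_entry(char_code, blink=False):
    # The 64-character ROM implies a 6-bit character code; bit 6 carries
    # the per-character blink flag, so individual words can blink alone
    # even when the global BLINK control bit is the only clock.
    if not 0 <= char_code < 64:
        raise ValueError("MB88303 character codes are 6-bit")
    return char_code | (BLINK_BIT if blink else 0)

def entry_blinks(entry):
    return bool(entry & BLINK_BIT)
```

Packing the flag into the same byte as the code is what lets the chip decide blink state per cell during raster scan-out, with no separate attribute memory.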
The app models the PK-958 at two levels. The Newvicon camera tube simulation (described above) reproduces the optical and electronic characteristics of the 2/3" CdZnTe tube — stripe-filter colour bandwidth reduction, lag, temporal smearing, and sensitivity. Entirely separately, the title generator is a complete software emulation of the MB88303 chip: the full 64-character ROM is reproduced from the original datasheet patterns, all four grid size modes are implemented, the Display Control Register bits (BLK, BLKB, BLINK) are individually controllable, and per-character blink flags are stored per-cell. Title output is injected directly into the composite signal before the encoder, so characters pass through the full encode–decode chain and emerge with the same bandwidth limiting, dot crawl, chroma bleed, and phosphor glow as anything else in the signal. Unlike the original PK-958 keyboard, the app's on-screen keyboard provides access to the complete MB88303 character set — every glyph in the ROM, including the extended symbols and kanji, is typeable.
All Simulated Standards
Every standard (NTSC, PAL, SECAM, and more) gets its own timing geometry, subcarrier frequency, and signal path. The app does not reskin one format to approximate another. Four further standards — D-MAC, D2-MAC, MUSE, and HD-MAC — are covered in the MAC & HDTV section below.
The BBC launched 405-line television in 1936, making it the oldest electronic TV standard. The signal carries only luma — no colour subcarrier is ever generated. System A used AM vestigial-sideband transmission with positive modulation (full white = peak carrier) — the opposite polarity from every later standard.
The 525-line frame structure was standardised in 1941; the 29.97 fps rate arrived with NTSC colour in 1953. The monochrome variant uses the same timing as NTSC-M but generates no subcarrier. Sampled at 14.318 MHz (4× the NTSC colour subcarrier frequency) for timing consistency.
Before PAL and SECAM arrived in the mid-1960s, most of Europe broadcast 625-line monochrome. Uses the same line and frame geometry as PAL and SECAM but carries only luma. Sampled at 17.734 MHz — the highest resolution of the 25 fps standards at 1135 samples per active line.
France's 819-line System E had more lines than any colour system that came after. Broadcast on VHF with an 8 MHz video bandwidth, it delivered extraordinary vertical resolution for its era. Requires a 22 MHz sampling rate; active line width is 1074 samples. Pure luma — no colour subcarrier.
NTSC encodes colour as two signals — I (orange–cyan) and Q (green–magenta) — modulated 90° apart onto a 3.58 MHz subcarrier. A 7.5 IRE pedestal lifts black level above blanking. Because receivers must lock an oscillator to a burst that can drift, poorly-adjusted sets produce a visible hue shift — giving rise to "Never The Same Colour".
Electrically identical to NTSC-M except the 7.5 IRE pedestal is absent — black is at blanking level (0 IRE). The result is a slightly wider usable contrast range. On a correctly calibrated monitor the difference is invisible; on a misaligned set, NTSC-J material through an NTSC-M decoder looks slightly washed out.
NTSC-4.43 uses the 525-line 29.97 fps line structure of NTSC-M but shifts the colour subcarrier to 4.43361875 MHz — the PAL frequency. This format arose as a practical solution: VCRs sold in PAL countries often recorded from an NTSC source, and multistandard televisions could display the PAL-frequency chroma directly without a converter.
The colour encoding remains YIQ (NTSC colour matrix) with no V-switch. The burst is fixed at 180°, the same reference as standard NTSC. Because the subcarrier is higher at the same line rate, the composite buffer is wider — 1127 samples per line versus 910 for NTSC-M. The dot-crawl pattern is denser and higher-frequency than standard NTSC, matching the faster subcarrier.
Uses quadrature modulation like NTSC but flips the phase of the V (red–cyan) component on every line. A TV with a 1-line delay averages adjacent lines, cancelling phase errors. This self-correcting mechanism makes PAL far more tolerant of signal reflections and transmitter drift — no hue knob required.
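The self-correction can be shown with complex chroma phasors. In this simplified model a constant transmission phase error hits two adjacent lines; `pal_chroma` and `delay_line_decode` are illustrative names, not the app's API:

```python
import cmath
import math

def pal_chroma(u, v, v_switch, phase_err_deg):
    """Chroma phasor for one line: U on the real axis, V on the
    imaginary axis with its sign alternating per line (the V-switch).
    A transmission phase error rotates the whole phasor."""
    err = cmath.exp(1j * math.radians(phase_err_deg))
    return complex(u, v if v_switch else -v) * err

def delay_line_decode(line_now, line_prev):
    """Average the current line with the previous one.

    Conjugating the previous line flips its V sign back and negates its
    phase error, so the error cancels into a small saturation loss
    instead of a hue shift."""
    return (line_now + line_prev.conjugate()) / 2
```

With a phase error θ the decoded phasor comes out as (U + jV)·cos θ — the hue angle is exact, only saturation drops slightly, which is precisely why no hue knob is required.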
Brazil is the only country that combined the PAL V-switching colour system with the 525-line 29.97 fps NTSC frame structure. The result requires a unique subcarrier frequency — 3.575611 MHz — to keep harmonics aligned with the line rate. Incompatible with both standard NTSC and PAL.
Uses 625-line 25 fps timing but shifts the subcarrier to 3.582056 MHz for harmonic compatibility with South American IF infrastructure inherited from NTSC-era equipment. Restores the 7.5 IRE pedestal that standard PAL lacks, matching NTSC-M black levels. Active line width is 917 samples.
PAL-60 uses PAL colour encoding — YUV colour matrix, V-switch, ±135° burst alternation — on the 525-line 29.97 fps NTSC frame structure. The subcarrier is 4.43361875 MHz, the same as standard PAL. It was widely used by game console manufacturers to run NTSC-speed games on PAL-region displays: the TV saw a familiar PAL subcarrier with PAL encoding, and most late-model PAL sets could lock to it.
The V-switch gives PAL-60 the same hue-error self-cancellation as standard PAL. Its dot-crawl texture differs visibly from NTSC-4.43 — the V-switch creates a finer herringbone pattern compared to NTSC-4.43's coarser directional bands. The composite buffer is 1127 samples per line (NTSC line period at PAL sample rate), versus 1135 for standard PAL.
SECAM uses frequency modulation — like FM radio — instead of amplitude/phase. The two colour signals (Db, Dr) are transmitted on alternating lines: even lines carry Db on a 4.250 MHz subcarrier, odd lines carry Dr on a 4.406 MHz subcarrier. The receiver stores the previous line in a glass delay line and combines current and stored values to reconstruct both components at every line. FM is immune to amplitude variations, giving excellent colour stability across weak or multipath signals — at the cost of being incompatible with amplitude-modulated systems.
Three signal-processing stages are applied to reach broadcast accuracy. First, LF pre-emphasis: a one-pole 85 kHz high-pass filter amplifies colour transitions before FM modulation, countering the FM noise spectrum that rises with frequency — the same principle as FM radio pre-emphasis but optimised for the SECAM deviation parameters. Second, the Bell/Cloche amplitude filter: the transmitted carrier amplitude varies with instantaneous frequency according to A(f) = M₀ × |1 + j16F| / |1 + j1.26F|, where F = f/f₀ − f₀/f and f₀ = 4.286 MHz. This weighted-carrier pre-distortion keeps the demodulated baseband flat when FM receiver detectors have the characteristic FM noise response. Third, decoder de-emphasis: a one-pole IIR integrator (inverse of the encoder HPF) restores flat colour response after FM discrimination — applied per-line so that state resets at each horizontal sync, matching real SECAM decoder hardware.
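The Cloche amplitude weighting — the magnitude of the standard SECAM complex weighting (1 + j16F)/(1 + j1.26F) — is easy to evaluate directly. A sketch with the carrier amplitude normalised to M₀ = 1 (the broadcast spec defines M₀ as an absolute level):

```python
import math

F0 = 4.286e6  # cloche centre frequency, Hz

def cloche_amplitude(f, m0=1.0):
    """SECAM weighted-carrier ('Bell'/'Cloche') amplitude at
    instantaneous frequency f.

    A(f) = m0 * |1 + j16F| / |1 + j1.26F|, with F = f/f0 - f0/f.
    The carrier is weakest at f0 and rises as the FM deviation moves
    either side — illustrative sketch, not the app's filter code.
    """
    F = f / F0 - F0 / f
    return m0 * math.sqrt(1 + (16 * F) ** 2) / math.sqrt(1 + (1.26 * F) ** 2)
```

The two rest frequencies (4.250 MHz for Db, 4.406 MHz for Dr) sit slightly off the 4.286 MHz minimum, so even an unmodulated line carries a bit more than the minimum carrier amplitude.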
No colour burst is transmitted. SECAM identification uses a sequence of nine alternating-polarity bursts during the vertical blanking interval (lines 7–15 of each field), allowing switching equipment to distinguish SECAM from PAL.
SECAM-L is the terrestrial broadcast variant of SECAM used in France from 1967 to the analogue switch-off in 2011. The FM chroma encoding is identical to standard SECAM — Db on 4.250 MHz, Dr on 4.406 MHz, with the same Bell/Cloche amplitude filter and LF pre-emphasis. What sets System L apart is the video modulation polarity: while every other major broadcast standard (NTSC, PAL, SECAM-B/G/D/K) uses negative modulation (white = minimum carrier, black = maximum), System L uses positive modulation (white = maximum carrier, black = minimum). This is the same polarity used by Systems A, B, and C before colour.
The physical consequence is a distinctive noise character. In negative modulation, thermal noise concentrates near white (the low-carrier region of the AM envelope), where it is perceptually least visible because human vision is less sensitive to noise in bright areas. In positive modulation, the AM envelope is small at black, so noise — ∝ 1/carrier amplitude — concentrates in dark, shadow areas. The result is a characteristic grain in underexposed or shadow regions, perceptually quite different from the highlight-adjacent noise of PAL and NTSC. The FM discriminator's capture effect naturally rejects this AM noise from the chroma path, so only luma is affected.
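The 1/carrier noise weighting can be sketched in a few lines. The 10% residual carrier floor here is an assumption for illustration (real transmitters specify a minimum carrier level); the point is only where the noise gain peaks under each polarity:

```python
def carrier_amplitude(luma, positive_modulation, residual=0.1):
    """AM carrier envelope for a luma level in [0, 1].

    Positive modulation (System L): white = maximum carrier.
    Negative modulation (NTSC/PAL/SECAM-B/G): black = maximum carrier.
    'residual' is an assumed floor so the carrier never vanishes.
    """
    level = luma if positive_modulation else 1.0 - luma
    return residual + (1.0 - residual) * level

def noise_gain(luma, positive_modulation):
    """Demodulated AM noise scales as 1/carrier amplitude."""
    return 1.0 / carrier_amplitude(luma, positive_modulation)
```

Under positive modulation the gain peaks at black — shadow grain; under negative modulation it peaks at white — highlight-adjacent noise.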
System L also specifies a wider luma bandwidth: 6 MHz versus the 5 MHz of PAL-B/G and standard SECAM, allowing nominally higher horizontal resolution (NTSC-M, by contrast, carries only 4.2 MHz of luma). The simulation applies this via a per-standard normalised FIR cutoff (6.0/17.734 MHz = 0.3384), independently scalable by the front-panel bandwidth control. Positive-modulation noise scaling is applied in the signal domain, on the same composite buffer that feeds the FM discriminator — ensuring the FM capture effect correctly rejects chroma contamination without any special-casing in the decode path.
MAC & Satellite HDTV Standards
MAC (Multiplexed Analogue Components) abandoned the quadrature subcarrier entirely. Instead of hiding colour inside a bandwidth-shared composite waveform, MAC time-division multiplexes luma and chroma into separate slots on every line — colour before brightness, no cross-interference possible. Developed in Europe and Japan through the 1980s and 1990s, these standards were designed for direct-broadcast satellite and drove the world's first public HDTV services.
These standards run a GPU signal-domain encoder — one Metal thread per scan line, computing time slots in parallel with proper bandwidth-limited luma sampling and quincunx sub-pixel offsets (MUSE). Time-division multiplexing is inherently data-parallel, making the GPU both the faster and more accurate choice.
D-MAC was the first mass-market MAC standard, transmitted by British Satellite Broadcasting (BSB) and early Astra satellite services. Each 64 µs line is divided into three regions: a 12 µs digital data burst carrying sound (adaptive delta modulation, 6 channels) and DATV control data, followed by a chroma slot carrying one colour component at 3:1 time compression (351 samples out of a 20.25 MHz stream), followed by a luma slot at 3:2 compression (702 samples). Chroma is line-sequential — Cb on even active lines, Cr on odd — requiring a one-line delay for reconstruction at the receiver, identical in principle to SECAM's delay line but entirely in the digital domain.
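The D-MAC line budget follows directly from the figures above — an arithmetic check, not the app's encoder code:

```python
FS_DMAC = 20.25e6   # D-MAC sample rate, Hz
LINE_S = 64e-6      # line period, seconds

def dmac_line_layout():
    """Sample counts for the three D-MAC line regions at 20.25 MHz,
    derived from the quoted durations and compression ratios."""
    total = round(FS_DMAC * LINE_S)      # 1296 samples per 64 us line
    digital = round(FS_DMAC * 12e-6)     # 12 us data/sound burst -> 243
    chroma = 351                         # one colour component, 3:1 compressed
    luma = 702                           # luma, 3:2 compressed
    assert digital + chroma + luma == total
    return {"total": total, "digital": digital, "chroma": chroma, "luma": luma}
```

The three regions tile the line exactly: 243 + 351 + 702 = 1296 samples, with no guard samples left over in this simplified accounting.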
Because luma and chroma never share the same time slot, there is no cross-colour, no dot crawl, and no chroma bandwidth limitation from subcarrier interference. The only chromatic artefact is the one-line chroma delay, which on a moving vertical edge produces a slight colour fringe — the MAC equivalent of Hanover bars.
D2-MAC is D-MAC at half the sample rate — 10.125 MHz instead of 20.25 MHz, giving 648 samples per line. The half-bandwidth design fit within existing cable TV channel allocations (7 MHz PAL channels) while preserving the time-division architecture and digital audio. Transmitted by TDF's Telecom 1 satellite, Canal+ Numerique, and used extensively for cable television in France, Germany, and Scandinavia through the early 1990s.
Picture quality is comparable to a good PAL broadcast — the half sample rate reduces luma horizontal resolution to approximately the PAL equivalent, but without any of PAL's cross-colour or dot-crawl artifacts. The chroma slot carries 175 samples (3:1 compression of the 525-sample chroma bandwidth); luma carries 351 samples.
MUSE (Multiple Sub-Nyquist Sampling Encoding) was NHK's solution to transmitting 1125-line Hi-Vision HDTV within the bandwidth of a single analogue satellite transponder. The 16.2 MHz sample stream carries 480 samples per line in a 29.63 µs line period. Each line is divided into three fields: an 11-sample HD digital header carrying frame synchronisation, subframe ID, and digital sound; a 94-sample C chroma field carrying one colour component at 4:1 time compression; and a 374-sample YM luma field carrying full-bandwidth luminance.
The core MUSE innovation is 4-field quincunx sub-Nyquist spatial sampling. Static scenes accumulate samples across four successive fields, each offset by half a pixel in horizontal and vertical, reconstructing the full 1125-line Hi-Vision resolution. Moving areas — detected by inter-field difference — revert to a single-field lower resolution, trading resolution for temporal accuracy on motion. For static content, MUSE achieves essentially the full Hi-Vision picture quality within the 16.2 MHz bandwidth. NHK began public MUSE broadcasts via BS-2b in June 1989, making it the world's first public HDTV satellite service, operating continuously until July 2007.
The app simulates the full 4-field quincunx reconstruction: each field is encoded into its own composite buffer with the correct sub-pixel spatial offset, and the decoder averages all four to recover full Hi-Vision luma resolution on static content. Moving content blurs softly across the 4-frame accumulation window — the same characteristic the original MUSE receivers exhibited when motion adaptation was inactive.
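The static-scene accumulation can be sketched as follows. The half-pixel offset ordering here is an assumption for illustration — the real MUSE subsampling sequence is more intricate, and a per-pixel motion test would gate the averaging in an actual decoder:

```python
def muse_field_offset(field):
    """Sub-pixel (dx, dy) offset for one of the four MUSE fields.

    Successive fields shift by half a pixel so that four static fields
    interleave into a full-resolution grid (assumed ordering)."""
    offsets = [(0.0, 0.0), (0.5, 0.0), (0.0, 0.5), (0.5, 0.5)]
    return offsets[field % 4]

def reconstruct_static(fields):
    """Average four offset fields sample-by-sample to recover full
    resolution on static content; moving content blurs across the
    4-field window, as described above."""
    n = len(fields)
    return [sum(samples) / n for samples in zip(*fields)]
```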
HD-MAC (also called D2-HDMAC) was the European Broadcasting Union's answer to NHK's MUSE — a 1250-line 50 Hz HDTV system transmitted within the D-MAC 20.25 MHz bandwidth. The 1250/50 source signal was compressed by NEC/Philips' Band-width Reduction and Enhancement (BRE) processing into a 625/50 compatible stream. Three BRE modes were defined in ETS 300 352: an 80 ms frame buffer for static content (full HDTV resolution), a 40 ms mode, and a 20 ms mode for fast motion. The transmitted line structure carries 349 chroma samples and 697 luma samples per 64 µs line — all parameters specified precisely in the ETS 300 352 standard from the EBU/ETSI Joint Technical Committee.
HD-MAC reached limited public service at the 1992 Barcelona Olympics and 1992 Albertville Winter Games, making it the first European HDTV broadcast. EBU field trials continued through 1993–1994, but the MAC family was superseded by DVB digital broadcasting before HD-MAC reached mass deployment. The standard was formally withdrawn around 1995.
Before Electronic Scanning
Version 2 — In TestFlight Beta
Mechanical television predates electronic scanning entirely. A motor-driven Nipkow disk — a flat aluminium plate with holes punched in an Archimedean spiral — spins at a fixed speed, sweeping each hole across the scene one line at a time. The resulting signal is pure amplitude-modulated luminance with a bandwidth of only ~12 kHz. At the receiver, the same signal drives a neon lamp viewed through an identical spinning disk. No colour subcarrier. No sync pulses embedded in the waveform. No composite encoding at all. The BBC broadcast the world's first regular mechanical television service in September 1932, using John Logie Baird's 30-line system, and shut it down in November 1935 when the 405-line electronic system proved decisively superior.
This is a completely separate signal pipeline from the composite/CRT chain. The app's mechanical TV mode bypasses every stage of the electronic pipeline — no YIQ encoding, no RF modulation, no CRT phosphor model — and replaces it with a 1D AM waveform processed through six GPU compute passes.
In January 1884, a 23-year-old German engineering student named Paul Gottlieb Nipkow filed patent DE30105 — titled Elektrisches Teleskop — with the Imperial Patent Office in Berlin. He described a rotating disk with holes punched in an Archimedean spiral that could, in principle, scan a scene one line at a time and transmit it electrically. Each hole was offset inward from the previous by exactly one line pitch; as the disk spun, the holes swept the entire image area in sequence. The patent was purely theoretical. No photoelectric device capable of converting the resulting light variations into an electrical signal with sufficient sensitivity existed. Nipkow never built a working system. The patent expired unworked. Four decades passed.
The technology eventually caught up in the mid-1920s. In June 1925 Charles Francis Jenkins demonstrated "radiovision" — a 48-line system — in Washington DC. In October 1925 Scottish inventor John Logie Baird produced the first recognisably televised human face in his attic laboratory in Soho, London: the face of a ventriloquist's dummy named Stooky Bill, scanned by a 30-hole disk spinning at 750 RPM. Within four years the BBC was broadcasting a regular mechanical television service — the world's first — on 261 metres medium wave. You could buy a Baird Televisor from a shop for £26 and watch fifteen minutes of television a day. In Germany, Fernseh AG (a consortium including Baird Television and Bosch) pushed the technology further: 90-line broadcasts from Berlin in 1932, then 180-line transmissions in 1935 that were used to cover the Berlin Olympics. In the United States, CBS engineer Peter Goldmark built a sequential-field colour system using a spinning filter wheel — approved by the FCC as the US colour standard in October 1950, then overturned in 1953 when the all-electronic NTSC system proved incompatible with existing black-and-white receivers. Mechanical television was always transitional: extraordinary for its moment, obsolete the moment something better arrived.
A Nipkow disk is a flat aluminium plate with N holes arranged in an Archimedean spiral — one hole per scan line, each displaced inward by one line pitch from the last. As the disk spins, each hole sweeps across a fixed rectangular aperture cut into a mask in front of it. On the transmitter side, a bright lamp shines through the spinning disk from behind, projecting a moving spot of light that traverses the subject one line at a time — 30 lines at 12.5 frames per second, 750 RPM. A subject sat in a narrow illuminated zone inches from the disk, in an otherwise darkened enclosure, under intense arc-lamp illumination. Because the photocell of the era required maximum contrast, studio subjects wore heavy black-and-white stage makeup. The photocell behind the subject converted the reflected spot brightness into a varying electrical voltage — pure AM luminance, nothing more. No blanking pedestal. No sync pulses. No colour subcarrier. Just brightness.
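The spiral hole geometry is simple to construct: one hole per line, equal angular spacing, each hole one line pitch further in. The dimensions below are illustrative placeholders, not measurements of the Baird disk:

```python
import math

def nipkow_holes(n_lines=30, r_outer_mm=180.0, line_pitch_mm=2.0):
    """(x, y) hole centres for an Archimedean-spiral Nipkow disk.

    Hole k sits at angle 2*pi*k/N and radius r_outer - k*pitch, so as
    the disk rotates, successive holes sweep adjacent scan lines across
    the fixed aperture. Dimensions are illustrative, not the surviving
    Televisor disk's measurements.
    """
    holes = []
    for k in range(n_lines):
        theta = 2 * math.pi * k / n_lines
        r = r_outer_mm - k * line_pitch_mm
        holes.append((r * math.cos(theta), r * math.sin(theta)))
    return holes
```

One full disk revolution therefore scans one complete frame: 30 holes, 30 lines, 12.5 revolutions per second at 750 RPM.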
At the receiver, the same signal drove a small neon discharge lamp — an indicator tube containing neon gas at low pressure — placed close to an identical spinning disk. Neon glows orange-red; the emission spectrum, measured against the CIE colour-matching functions, produces chromaticity coordinates of approximately x=0.578, y=0.350 — a deep reddish-orange. The image appeared through the aperture in the receiver disk as the holes swept it. The Baird Televisor image measured roughly 35 mm × 50 mm — about the size of a postage stamp. Viewed through the water-lens magnifier that the production Televisor included (a glass tank of water acting as a cylindrical lens), it could be enlarged to perhaps 100 mm wide. It had to be viewed in a completely darkened room; ambient light from a window would wash it out entirely. It flickered at 12.5 frames per second — well below the flicker-fusion threshold of human vision — so the image pulsed visibly with every frame. Contemporary accounts from BBC viewers describe it as "a reddish glow" and "a flickering, dancing light — one becomes accustomed to it after a few minutes." The image had approximately 30 horizontal lines and perhaps 30 distinguishable picture elements per line. A close-up face was recognisable. A landscape was not. And yet this was television — a word so new it barely existed — and it was miraculous.
Every numerical parameter in the simulation is traced to a primary source — no values are assumed or borrowed from other simulators. The Nipkow disk geometry comes from physical measurements of a surviving Baird Televisor disk documented in the literature; the neon lamp persistence from Cobine's 1941 analysis of gaseous discharge physics; the signal bandwidth from Denton's 1933 oscilloscope measurements of the live BBC 30-line waveform; the arc sag formula from Nipkow's own 1884 patent geometry. Primary sources accessed include: Nipkow's original patent (DE30105, 1885); four Baird patents (GB269658, GB285738, GB292185, GB440485); Denton's definitive 1933 engineering paper in the Journal of the IEE; Burns's two-volume history from the IEE; the BBC Written Archives; the NIST Atomic Spectra Database for the neon emission spectrum; Goldmark's 1942 Proceedings of the IRE paper on the CBS colour wheel; Cobine's Gaseous Conductors; engineering literature from Fernseh AG, Jenkins, and Bell Labs; and the collections of the National Science and Media Museum, Bradford.
Every artefact is injected into the 1D luminance waveform before reconstruction — not into the output texture. Horizontal softness comes from bandwidth-limiting the waveform; trailing smear follows the arc scan path because it is applied to the waveform in arc-scan order; jitter is a timing phase error on the arc-start of each line, not a post-process x-translate. No effect is faked in the spatial domain.
Samples input luminance along circular arc paths — not horizontal lines. Each hole traces a circular arc as the disk rotates (sag ≈ 0.785 mm at mid-radius for the Baird Televisor; Burns 2000 §4.3). Arc geometry is derived from Nipkow's 1885 patent DE30105. The output is a 1D float buffer [N lines × W samples].
Zero-phase single-pole IIR lowpass applied to each scan line in both directions (Parks & Burrus §2.2). The Baird 30-line system had a physical bandwidth of ~12.5 kHz — a cutoff of ~9.5% of Nyquist at 700 px output. The resulting blur follows the arc path, not horizontal, because it is applied in arc-scan order. This gives ≈ 33 resolvable picture elements per line, matching Denton's 1933 measurement.
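A zero-phase one-pole lowpass is just the same recursive filter run forward and then backward, cancelling the phase delay (the filtfilt idea). A minimal sketch — list-based, with zero initial state, not the GPU implementation:

```python
def one_pole_lowpass(x, alpha):
    """Single forward pass of a one-pole IIR lowpass (state starts at 0)."""
    y, state = [], 0.0
    for s in x:
        state = alpha * state + (1.0 - alpha) * s
        y.append(state)
    return y

def zero_phase_lowpass(x, alpha):
    """Forward-backward application: the second (reversed) pass cancels
    the first pass's group delay, so the blur stays centred on each
    picture element instead of trailing it."""
    fwd = one_pole_lowpass(x, alpha)
    bwd = one_pole_lowpass(fwd[::-1], alpha)
    return bwd[::-1]
```

Applied in arc-scan order, the resulting softness follows the arc path exactly as the paragraph above describes.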
Gaussian noise added to the 1D waveform (Box-Muller, Knuth Vol.2 LCG), modelling photocell noise and atmospheric interference on the AM signal. Because noise is added before persistence, the neon lamp smears it — exactly as in the physical system. Historical SNR ≈ 20 dB for strong 30-line reception.
Asymmetric single-pole IIR in arc-scan order: the neon ionises near-instantly (~µs) but deionises in ~1 ms (Cobine 1941 §8). Modelled as state = max(input, α·prev + (1−α)·input). At Baird parameters, τ ≈ 131 samples, α ≈ 0.9924. The smear follows the arc curvature — a forward-only pass in waveform order.
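The stated recurrence transcribes directly into code — fast attack (the `max` lets the output jump up instantly), slow exponential decay:

```python
def neon_persistence(x, alpha=0.9924):
    """Fast-attack, slow-decay envelope along the waveform.

    The neon ionises near-instantly but deionises in ~1 ms, so the lamp
    follows rising brightness immediately and smears falling edges:
    state = max(input, alpha*prev + (1-alpha)*input), per the model above.
    """
    out, state = [], 0.0
    for s in x:
        state = max(s, alpha * state + (1.0 - alpha) * s)
        out.append(state)
    return out
```

Because the pass runs in waveform (arc-scan) order, the decay tail lands along the arc, not along a horizontal raster line.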
Reconstructs the waveform into the output texture as arc strips with inter-line dark gaps (hole fill fraction 0.48, measured from a surviving Televisor disk; Burns 2000 §4.3). Per-line arc-start jitter (from motor speed variation) shifts where each line's arc begins. Neon colour from the Ne emission spectrum (NIST: CIE 1931 x≈0.578, y≈0.350 → orange-red).
Two-pass separable Gaussian bloom (horizontal then vertical) applied to the reconstructed image. A neon discharge tube's penumbra extends several millimetres beyond the plasma column (Cobine 1941). This is the only post-reconstruction stage; it is a spatial post-process because it models a physical light-spreading mechanism, not a signal artefact.
Five named historical systems are available as one-tap presets, each setting the correct lines, frame rate, and aspect ratio from primary sources.
John Logie Baird's system, transmitted by the BBC from September 1929 as experimental broadcasts and from September 1932 as a regular public service — the world's first. The disk had 30 holes in an Archimedean spiral; at 750 RPM, each hole swept one line at 375 lines/second. The unusual 7:3 portrait aspect (taller than wide when viewed through the disk aperture) resulted from the hole geometry and the placement of the viewing window relative to the disk spin axis. The BBC closed the service in November 1935 after the Selsdon Committee selected the 405-line Marconi-EMI electronic system. Bandwidth ≈ 12.5 kHz; ≈ 33 resolvable pixels/line (Denton 1933); neon lamp deionisation ≈ 1 ms (Cobine 1941).
Charles Francis Jenkins broadcast 48-line mechanical television from W3XK in Wheaton, Maryland from July 1928. Unlike Baird's system, Jenkins used a prismatic disk rather than a Nipkow disk for some transmitters, but the 48-line parameter set is the best-documented. Jenkins's broadcasts initially carried only silhouettes (the photocell sensitivity of the era required high-contrast sources); halftone images became practical later. The FRC granted him an experimental licence in 1929; commercial aspirations stalled, and the station closed in 1932 after the FRC declined to renew it.
Fernseh AG (a consortium including Baird Television Ltd and Bosch) developed a 90-line mechanical system demonstrated in Berlin from 1932. It transmitted at 25 fps, aligned with the 50 Hz European mains frequency to avoid beat flicker at the scanning rate. The signal bandwidth for 90 lines at 25 fps is approximately 135 kHz — ten times Baird's 30-line system — requiring a different receiver design. The 90-line system was a transitional technology; Germany moved to electronic scanning in 1935 for the Berlin Olympics service.
The 180-line system demonstrated by Fernseh AG at the 1935 Berlin Radio Exhibition represented the practical ceiling of mechanical TV. At this resolution the disk had 180 holes in its spiral at an outer diameter that made precise manufacturing extremely challenging. The signal bandwidth reached ~270 kHz. The system was rendered obsolete at the same exhibition, where the Telefunken-developed 441-line electronic system was demonstrated in the adjoining hall. The German public service that covered the 1936 Berlin Olympics used electronic scanning at 180 lines, then quickly expanded to 441 lines — the mechanical era was over.
CBS engineer Peter Goldmark developed a sequential-field colour system using a rotating three-sector filter wheel — red, green, and blue segments that synchronised with the disk — instead of a three-tube electronic camera. The NTSC approved the CBS Color system as the US colour standard in October 1950, but RCA challenged it legally and in the marketplace with their fully electronic, NTSC-compatible system. CBS Color required a spinning disk at the receiver; existing black-and-white sets were incompatible. The FCC reversed its decision in 1953 and adopted the RCA/NTSC system. The CBS system is the only mechanical TV standard to have been approved as a national broadcast colour standard. At 405 lines / 48 fps, the required bandwidth is approximately 5 MHz. Reference: Goldmark et al., Proc. IRE Vol. 30, April 1942.
Retro Camera App
Version 2 — In TestFlight Beta
AnalogCam is a second mode inside the app: a point-and-shoot camera experience built on top of the full signal domain simulation. The same physics that power Expert mode — composite signal encoding, camera tube characteristics, VCR tape degradation, CRT phosphor chemistry — runs underneath. The interface gets out of the way: a look strip, a shutter, a flip button. Everything the simulation engine knows how to do is accessible through a swipe or a tap.
The simulation is not simplified or approximated for AnalogCam. A look named "Home Movie" is not a filter applied over a clean frame — it is a specific configuration of the same GPU compute pipeline that Expert mode exposes as forty sliders. The difference is presentation: twelve curated built-in looks with evocative names instead of technical labels, and a signal side panel for the four controls that matter most visually. Users who want to dig deeper can cross into Expert mode at any time and refine a look from AnalogCam down to the photocell noise floor.
Each look is a complete signal configuration — TV standard, camera tube, VCR format, CRT phosphor — tuned to a specific era and aesthetic. Applied with one tap.
Warm amber cast, limited colour bandwidth, dot crawl on high-contrast edges, P22 phosphor bloom. What colour television looked like when it first arrived.
Clean signal, warm whites, strong scanlines, well-maintained P22 phosphor. The look of American primetime at its broadcast quality peak.
Y/C crosstalk, chroma smear from the 500 kHz colour bandwidth, tracking noise, head-switch glitch at frame bottom. Worn tape from a cardboard sleeve.
Cleaner than VHS, boosted colour saturation, slight halation around bright objects. The neighbourhood barbecue, the school play, the holiday that never gets watched again.
The superior colour accuracy of PAL's delay-line decoder, a cool blue-green cast, faint Hanover bars in saturated areas. Public television from the continent.
Heavy phosphor bloom, hot whites that crush the highlights, kinescope grain. Before colour. Before composite. Before any of it. The original television aesthetic.
High contrast, harsh white balance, near-clean signal with the Betacam SP Y FM carrier at 6.8–8.8 MHz. The look of the evening news from a tape that was recorded over twice.
FM chroma fire at colour transitions (the SECAM artefact that no other standard produces), warm contrast, vivid flare on vertical colour edges. The look of French television when it was still SECAM.
Wide bandwidth, minimal noise, near-clinical sharpness from the MAC time-multiplexed signal. What the dish on the side of the house was for.
Desaturated, gritty, low bandwidth. Shot on a single-tube camera with available light. The texture of archive documentary footage from an era when light was expensive.
Deeper blacks than American NTSC, slightly cooler white point, crisp chroma. The different gamma target that Japanese studios used for CRTs calibrated to 0 IRE black.
Switches the full app into mechanical TV mode. Orange-red neon glow, 12.5 fps flicker, 30-line arc-scan geometry. The Baird 30-line standard — described in detail above.
The signal side panel — accessible by swiping from the right edge of the viewfinder — exposes four controls: Noise, Colour, Bloom, and Curvature. These are the four parameters that produce the most immediate visual change in any look. They are the same parameters that exist in Expert mode, affecting the same underlying simulation, with no approximation or shortcut. Users who want finer control can cross into Expert mode at any time and refine a look down to individual signal domain parameters — then return to AnalogCam with those settings applied.
Photos and video are captured to the Camera Roll with look metadata embedded in EXIF. Created looks can be saved by name and shared as preset files. Custom looks appear in the look strip alongside the built-in catalog and persist across sessions. The look strip, shutter, and side panel all work one-handed in portrait — designed to the same standard as the best camera apps on the platform.
Authentic Artifacts
These imperfections aren't applied over a clean image. They arise naturally from simulating the actual signal path — the same cause as on real hardware.
Full signal controls reference →
PAL decodes colour by adding the current line's chroma to a stored copy from the line before (the 1H delay line), relying on the V-switch polarity inversion to cancel colour errors. If the gain of the delay-line path differs from the gain of the direct path — a manufacturing tolerance in the glass delay line or its drive amplifier — the cancellation is incomplete. The residual V component alternates sign every line because the V-switch alternates, so even lines get a little extra red–difference and odd lines get a little less. The result is horizontal bars of alternating saturation running across the entire picture, one bar per line pair — named after Hannover, the city where Walter Bruch's Telefunken laboratory developed PAL.
The physics: with a fractional gain mismatch g, the residual V amplitude is Vresidual = V × g × 0.4 on one polarity and −V × g × 0.4 on the next. The HANOVER slider sets g from 0 (perfect delay line) to 1.0 (±40% V-gain error — an extreme, degraded set). On grey and luma-only content the bars are invisible: there is no V component to be mishandled.
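The stated formula transcribes directly — a worked example rather than the shader code:

```python
def hanover_residual(v, g, line):
    """Residual V after imperfect delay-line cancellation.

    g is the fractional gain mismatch (0 = perfect delay line, 1.0 =
    the slider's extreme); the sign alternates with the V-switch,
    giving one bar per line pair.
    """
    sign = 1.0 if line % 2 == 0 else -1.0
    return sign * v * g * 0.4
```

With v = 0 (grey or luma-only content) the residual is zero on every line — exactly why the bars vanish on monochrome material.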
The PAL 1H delay line must store exactly one line period — 64 µs at the composite sample rate, corresponding to 283.75 subcarrier cycles. Real glass delay lines are cut to this length to within a fraction of a cycle, but temperature, ageing, and manufacturing spread mean the delay is never perfect. A timing error of δ samples at 4×fsc (≈ 56 ns per sample) rotates the delayed subcarrier by δ × 90°. The comb adder no longer receives a perfect phase conjugate from the delayed line: the U/V cancellation is incomplete, and U and V components mix into each other in proportion to sin(δ × 90°).
At δ = ½ sample (45° error) the effect is a faint hue shimmer on saturated content. At δ = 1 sample (90°) hues rotate noticeably around the colour wheel and saturation reduces. At δ = 2 samples (180°) the delayed signal is phase-inverted: U and V effectively swap, turning red content cyan and blue content yellow. Positive and negative offsets produce mirror-image hue rotations — δ = +1 and δ = −1 shift opposite colours in opposite directions. Luma is unaffected at any offset because it is extracted before the comb runs. The DL ERR slider ranges from −2 to +2 samples; fractional values are interpolated.
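The mixing behaviour can be modelled as a rotation of the (U, V) pair by δ × 90°. This is a simplified model of the effect described above, not the comb-filter implementation:

```python
import math

def delay_line_crosstalk(u, v, delta_samples):
    """U/V mixing from a delay-line timing error of delta samples at
    4x fsc: each sample of error rotates the delayed subcarrier 90 deg,
    so U and V mix in proportion to sin/cos of that angle."""
    theta = math.radians(90.0 * delta_samples)
    u_out = u * math.cos(theta) + v * math.sin(theta)
    v_out = v * math.cos(theta) - u * math.sin(theta)
    return u_out, v_out
```

At δ = 2 (180°) both components invert — red content turns cyan, blue turns yellow — and δ = +1 versus −1 rotate hues in opposite directions, matching the mirror-image behaviour of the DL ERR slider.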
VCR Tape Emulation
In the 1970s–1990s, the path from tuner to television almost always passed through a VCR. Each recording format imposes a distinct set of physics constraints: the FM carrier frequency limits horizontal resolution, the colour-under heterodyne system introduces chroma phase noise, the mechanical transport produces skew, flutter, dropout, and head-switching bands. All format parameters are derived from primary specifications — IEC 60774 series, Sony engineering documents, and SMPTE standards. The simulation does not apply these as post-process filters over a decoded image — each stage operates directly on the composite waveform at the full 14.318 MHz (NTSC) or 17.734 MHz (PAL) sampling rate, including true colour-under IIR bandwidth limiting and FM noise triangle modelling. The Fisher-Price PXL-2000 is a deliberate outlier: it records a 120×90 pixel grayscale image directly onto audio cassette using FM modulation at 180 kHz — an entirely different architecture that bypasses the composite pipeline and the conventional tape-signal chain entirely.
The dominant consumer format (1976–2008). Luma is FM-modulated at 3.4–4.4 MHz NTSC (3.8–4.8 MHz PAL), a 1.0 MHz deviation that limits horizontal resolution to approximately 240 TVL. Chroma is heterodyned down to 629 kHz (NTSC) / 627 kHz (PAL) — the colour-under system — and recorded separately below the luma FM carrier. On playback the up-converter is phase-locked to the recovered sync, but residual tape-speed jitter that differs between the sync and chroma paths produces hue instability (Δφ = 2π × 629 kHz × Δt). LP and EP modes use the same carrier frequencies but narrower track pitches (LP: ~29 µm NTSC; EP: ~12.5 µm vs SP 58 µm), increasing azimuth crosstalk and dropout rate while leaving bandwidth nominally unchanged. A consumer aperture-correction circuit (high-frequency shelf +3–6 dB at ~2 MHz) was standard from ~1985, adding characteristic edge ringing. Y-SNR ≈ 42 dB.
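The hue-instability formula above (Δφ = 2πfΔt) gives a feel for why the colour-under carrier frequency matters — a small numeric sketch:

```python
import math

F_CU_VHS_NTSC = 629e3       # VHS NTSC colour-under carrier, Hz
F_CU_BETA_NTSC = 688e3      # Betamax NTSC colour-under carrier, Hz

def hue_error_deg(f_carrier, jitter_s):
    """Hue error from differential tape-speed jitter between the sync
    and chroma paths: delta_phi = 2*pi*f*delta_t, in degrees."""
    return math.degrees(2 * math.pi * f_carrier * jitter_s)
```

A single microsecond of differential jitter already swings the VHS hue by over 200°; the same jitter at Betamax's higher 688 kHz carrier produces a proportionally larger error, as noted in the Betamax entry below.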
Sony's competing consumer format. The head drum diameter is 74.5 mm versus VHS's 62 mm, giving a 21% faster head writing speed (6.9 m/s vs 5.8 m/s). This allows a wider FM deviation of ~1.3 MHz (3.5–4.8 MHz NTSC) versus VHS's 1.0 MHz, yielding approximately 250 TVL — marginally sharper than VHS SP. The colour-under carrier is 688 kHz (NTSC) / 689 kHz (PAL), higher than VHS's 629 kHz. Because Δφ = 2πfΔt, a higher carrier produces a larger phase error per unit of time-jitter; however, Betamax machines tended to have slightly tighter servo tolerances which compensated. The format war was decided by tape duration, not picture quality: early Beta I held only 60 minutes.
Super-VHS raises the luma FM carrier to 5.4–7.0 MHz (NTSC), widening the deviation to 1.6 MHz. This lifts horizontal resolution to approximately 420 TVL — exceeding NTSC broadcast (~330 TVL). Critically, the colour-under system is unchanged from standard VHS: chroma is still heterodyned to 629 kHz (NTSC). S-VHS therefore has dramatically sharper luma but identical chroma quality and stability to standard VHS. The S-Video connector separates Y and C in the baseband output to prevent luma-chroma cross-talk at the TV input, but the tape itself still records colour-under. S-VHS camcorders were a popular prosumer format through the 1990s.
Sony's 8mm upgrade applies the same principle as S-VHS to the 8mm cassette: the FM carrier is raised to 5.7–7.7 MHz with a 2.0 MHz deviation, achieving approximately 415 TVL. The colour-under carrier is 743 kHz (NTSC) / 732 kHz (PAL) — higher than VHS's 629 kHz, giving marginally more hue jitter per unit of tape-speed variation. Hi8 camcorders competed directly with S-VHS for the prosumer market through the 1990s; the compact 8mm cassette made Hi8 a popular acquisition format among many independent videographers.
The first cassette format adopted for broadcast ENG. The wider ¾" tape, faster head writing speed (8.54 m/s), and higher FM deviation (1.6 MHz, 3.8–5.4 MHz lo-band NTSC) give approximately 280 TVL from a machine with significantly better servo engineering than consumer decks. Flutter is typically below 0.02% wrms. Colour-under sits at 688.373 kHz (NTSC lo-band). U-Matic was the dominant ENG format from the mid-1970s until Betacam displaced it in the early 1980s; hi-band SP variants (5.6–7.2 MHz) pushed resolution to ~330 TVL. The look of U-Matic lo-band — moderately sharp, slightly warm, occasional dropout — is the visual language of 1970s TV news.
Betacam SP solves the fundamental limitation of colour-under by abandoning it entirely. Luma (Y) and the two colour-difference signals (R-Y, B-Y) are recorded as separate FM tracks: Y at 6.8–8.8 MHz (2.0 MHz deviation), R-Y and B-Y time-compressed 2:1 and interleaved on the C track at 5.6–7.3 MHz (CTDM). Because there is no heterodyne, there is no tape-speed-induced hue jitter. Y-SNR exceeds 50 dB; C-SNR exceeds 52 dB — roughly 8 dB better than VHS SP in every channel. The precision direct-drive drum and capstan servo keeps flutter below 0.02% wrms. Betacam SP became the standard for broadcast field acquisition from the mid-1980s through the mid-2000s, when it was displaced by IMX and DVCAM digital formats. No edge enhancement circuit — the signal goes directly to downstream equipment.
The PXL-2000 (1987–1989) was a toy camcorder that recorded not to video tape but to a standard audio cassette running at approximately 16.875 IPS — roughly nine times the normal playback speed. The image signal was FM-modulated at a 180 kHz system clock (90 kHz Nyquist limit) and written to the left channel of the cassette. Playback recovered the FM signal and fed it into a digital framestore driven by a Sanyo LA 7306M ASIC, which read out the 120×90 pixel monochrome image at the NTSC frame rate (~30 Hz), repeating frames from a CCD array captured at 15 fps. The result is a striking visual language: blocky 120×90 pixel resolution, monochrome, with a characteristic grainy low-bit-depth texture, and subtle horizontal warping from motor flutter scaled to the nine-times-faster transport. There is no FM luma carrier, no colour-under system, and no composite signal — the PXL-2000 simulation bypasses the normal tape-signal pipeline entirely.
What the simulation can confirm from primary sources: The 120×90 CCD array and 10,800 total pixel count are specified in US Patent 4,875,107A ("90×120 element matrix"). The 15 fps acquisition rate, ~16.875 IPS (9×) tape speed, 180 kHz system clock, 90 kHz low-pass filter cutoff, and Sanyo LA 7306M ASIC designation are all from the same patent. The cassette transport frequency shift — motor cogging at ~4.5/27/72 Hz versus a standard deck's ~0.5/3/8 Hz — follows directly from the 9× speed ratio.
What the simulation estimates (no primary measurement found): CCD gamma is left linear (γ = 1.0) — the patent does not document it. Fixed pattern noise (column ~3%, row ~1%) and temporal noise (~5% amplitude) are typical values for CCD sensors of the era, not measured from actual hardware. Quantisation depth is estimated at 6-bit (64 levels) — the patent's "90 exposure levels" refers to auto-iris range, not image bit depth. Vertical CCD smear (overflow threshold ~85%, coefficient ~10%) and wow/flutter amplitude (~0.25/0.15/0.08%) are engineering approximations that have not been validated against real footage. All estimated parameters are explicitly marked in the shader source.
All nine format presets snap the tape, mechanical, and signal sliders to documented values. Individual controls remain fully adjustable — combine VHS EP with maximum dropout and tracking error for aged-tape degradation, or set Betacam SP generations to 3 to model a dub chain that lost its way.
The colour-under and FM luma processing runs on the raw composite waveform — at 14.318 MHz (NTSC) or 17.734 MHz (PAL) — not on a decoded image. Each scanline is processed sequentially so the IIR filter state that real tape physics require is maintained across every sample in the line. Four stages run in order for every active scanline in every frame:
Luma (Y) and chroma (C) are separated at the composite sampling rate using a 1-line delay comb: C[n] = (composite[n] − composite_prev_line[n]) / 2. At 4× subcarrier sampling the adjacent-line subcarrier phase difference is exactly 180° for NTSC and 270° for PAL, so subtracting the previous line cancels the line-correlated luma while the anti-phase chroma reinforces — the half factor restores its amplitude. This is the same operation a real VHS deck's output comb filter performs — but here it runs on the encoded waveform before any decode pass, so downstream effects (IF ringing, group delay, ghosting) have already shaped the signal correctly.
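A minimal Python sketch of the 1-line comb (illustrative only — the app performs this in a Metal kernel on the full-rate composite waveform):

```python
def comb_separate(line: list, prev_line: list):
    """1-line delay comb: chroma = half the line difference, luma = half the sum.
    Works because chroma is anti-phase on adjacent lines (180 deg for NTSC at
    4x-subcarrier sampling) while luma is, ideally, line-correlated."""
    chroma = [(a - b) / 2 for a, b in zip(line, prev_line)]
    luma   = [(a + b) / 2 for a, b in zip(line, prev_line)]
    return luma, chroma

# Toy signal: constant luma 0.5 plus a chroma carrier that inverts line-to-line
carrier = [1, 0, -1, 0] * 2                 # 4x-subcarrier samples
line_a  = [0.5 + 0.2 * c for c in carrier]
line_b  = [0.5 - 0.2 * c for c in carrier]  # next line: chroma anti-phase
luma, chroma = comb_separate(line_b, line_a)
print(luma[0], chroma[0])                   # luma ~0.5, chroma ~-0.2 (recovered)
```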
In FM recording, thermal noise density is proportional to carrier frequency. The VHS FM carrier sweeps from 3.4 MHz at blanking/black to 4.4 MHz at peak white (NTSC), so brighter areas are recorded at a higher frequency and arrive at the playback head with more noise. The simulation adds Gaussian noise to the extracted luma channel with amplitude scaled by 0.4 + 0.6 × (Y / 0.714) — 40% of the baseline noise floor at black, rising to 100% at white (100 IRE ≈ 0.714 normalised units). Bright highlights on a VHS tape are therefore roughly 2.5× noisier than deep blacks in SNR-limited conditions, matching the characteristic grainy, slightly fizzy appearance of highlights on a worn tape.
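The noise-scaling law can be expressed directly (Python sketch; `base_sigma` is an illustrative noise floor, not a value from the app):

```python
import random

def noise_scale(y: float) -> float:
    """Noise amplitude multiplier: 40% of the floor at black, 100% at peak
    white (100 IRE ~ 0.714 normalised), per the FM noise triangle."""
    return 0.4 + 0.6 * min(max(y / 0.714, 0.0), 1.0)

def fm_triangle_noise(y_samples, base_sigma=0.01, seed=1):
    """Add Gaussian noise whose amplitude tracks the luma level."""
    rng = random.Random(seed)
    return [y + rng.gauss(0.0, base_sigma * noise_scale(y)) for y in y_samples]

# Highlights carry 2.5x the noise amplitude of deep blacks:
print(noise_scale(0.714) / noise_scale(0.0))   # 2.5
```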
The extracted chroma passes through a 1-pole IIR lowpass at ~400 kHz — the one-sided recording bandwidth of the VHS colour-under track. The pole is placed at α = 0.839 at the NTSC composite sampling rate and α = 0.868 at the PAL rate, derived as α = exp(−2π × 400 kHz / fs). Because the IIR state is maintained across every sample in the scanline — rather than reset per pixel — the filter integrates causally over the full active line, exactly as it would on a real signal. This bandwidth limit is what restricts VHS chroma to approximately 40 TVL horizontal resolution versus ~240 TVL for luma, producing the characteristic horizontal colour smearing where saturated edges bleed into adjacent areas. Each additional generation of dubbing narrows the colour-under bandwidth by a further 10%, matching observed chroma degradation in multi-generation dub chains.
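A sketch of the causal 1-pole stage, using the α = exp(−2πfc/fs) pole placement described above (Python for illustration; the app runs this per scanline in a compute shader):

```python
import math

def colour_under_lowpass(chroma, fs_hz, fc_hz=400e3, generations=1):
    """1-pole IIR lowpass on extracted chroma. State persists across the
    whole scanline (causal), never reset per pixel. Each dub generation
    narrows the bandwidth by a further 10%."""
    fc = fc_hz * (0.9 ** (generations - 1))
    alpha = math.exp(-2 * math.pi * fc / fs_hz)
    y, out = 0.0, []
    for c in chroma:
        y = alpha * y + (1 - alpha) * c
        out.append(y)
    return out

# Pole coefficients at the two composite sampling rates, as quoted in the text:
print(round(math.exp(-2 * math.pi * 400e3 / 14.318e6), 3))   # 0.839 (NTSC)
print(round(math.exp(-2 * math.pi * 400e3 / 17.734e6), 3))   # 0.868 (PAL)
```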
Residual capstan speed variation shifts the phase of the recovered colour-under carrier on playback. The phase error is δφ = 2π × fcu/fs × timing_error, where fcu is the colour-under centre frequency (629 kHz NTSC, 688 kHz PAL). The bandwidth-limited chroma is rotated by δφ using a causal identity that is exact at 4× subcarrier sampling: since adjacent samples are π/2 apart in subcarrier phase, the IIR output from the previous sample is the quadrature component of the current one. The rotation is therefore C_new[n] = Cf[n]·cos(δφ) − Cf[n−1]·sin(δφ) — an exact carrier rotation with no lookahead into the filter output. Timing jitter is modelled as the beat of two capstan oscillators (~1.2 Hz wow × ~0.8 Hz flutter) scaled by the Flutter control, with an abrupt phase step near the head-switch line (~7 lines before V-sync) modelling servo phase mismatch between the two rotating drum heads.
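The quadrature-rotation identity is easy to verify numerically (Python sketch; a pure 4×-sampled carrier rotated by 90° should become its own negative quadrature):

```python
import math

def rotate_chroma(cf, dphi):
    """Carrier rotation exact at 4x-subcarrier sampling: the previous filtered
    sample is the quadrature of the current one, so
    C_new[n] = Cf[n]*cos(dphi) - Cf[n-1]*sin(dphi)."""
    out = [cf[0]]
    for n in range(1, len(cf)):
        out.append(cf[n] * math.cos(dphi) - cf[n - 1] * math.sin(dphi))
    return out

# A pure subcarrier at 4x sampling: cos(pi/2 * n). Rotating by 90 degrees
# should give cos(pi/2 * n + pi/2) = -sin(pi/2 * n).
sc = [math.cos(math.pi / 2 * n) for n in range(8)]
rotated = rotate_chroma(sc, math.pi / 2)
```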
VHS helical-scan mechanics produce a distinct family of distortions for each transport mode. The drum (62 mm diameter, 1,800 rpm NTSC / 1,500 rpm PAL) rotates continuously while tape speed determines how many recorded tracks the heads cross per revolution.
When the tape stops, the drum continues rotating at full speed. Each head traces a straight diagonal line across the stationary tape — a path that is not aligned with the recorded helical tracks (~3° helix angle). Partway through each revolution, the head crosses from one track onto the adjacent track (which has opposite azimuth, ±6° difference). This transition produces a horizontal noise bar spanning 4–20 scan lines, its vertical position adjustable by the tracking control (which shifts the exact tape tension and therefore the crossover point). Above the bar the picture is clean; below it the head is reading the adjacent azimuth-mismatched track, causing ~25 dB of signal attenuation and chroma phase scrambling. Three-head decks position a third video head to land cleanly on the track centre in pause mode, eliminating the bar entirely.
On pause engagement, the CTL (control track) pulse train — the servo reference that keeps the drum phase locked to the tape's recorded fields — is no longer available. The VCR's capstan servo loses its timing reference and the drum's phase relationship to the recorded sync pulses shifts by a random amount, typically 35–220 scan lines. The television's vertical hold PLL (a second-order loop with a ±40-line pull-in window) must re-acquire sync from the new position. If the shift is within the pull-in range, the frame rolls then re-locks within 2–4 frames. If it exceeds the capture window, the picture rolls continuously until the PLL hunts to the new sync position. After lock, residual drum servo jitter (±2–3 lines, correlated noise with τ ≈ 300 ms) produces a subtle ongoing frame waver characteristic of low-grade consumer decks. The simulation models this with a 2nd-order PLL: on pause entry a random phase offset is injected into the PLL's sync target; the loop then rolls, overshoots, and settles with authentic timing constants.
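A toy second-order (proportional-integral) loop reproduces the described pull-in behaviour (Python sketch; the gains and frame budget are illustrative, not the app's calibrated constants):

```python
def vhold_relock(offset_lines, pull_in=40.0, kp=0.2, ki=0.02, frames=200):
    """PI loop pulling a vertical phase offset back toward zero. Offsets
    beyond the pull-in window roll continuously instead of locking."""
    if abs(offset_lines) > pull_in:
        return None                  # outside the capture window: continuous roll
    phase, integ = float(offset_lines), 0.0
    for f in range(frames):
        err = -phase
        integ += ki * err            # integrator path
        phase += kp * err + integ    # proportional path + accumulated correction
        if abs(phase) < 0.5:         # within half a line: call it locked
            return f + 1
    return None

print(vhold_relock(30))    # inside the window: locks after some frames
print(vhold_relock(120))   # beyond +/-40 lines: None (rolls continuously)
```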
At ×5 tape speed (166.75 mm/s NTSC SP), the tape advances 5,565 µm per drum revolution — enough to cross approximately five recorded tracks (track pitch along tape = 1,113 µm). The heads alternate azimuth (A: +6°, B: −6°) between adjacent tracks. The playback head reads: same-azimuth track → full signal; opposite-azimuth track → ~30 dB rejection → noise. This produces the characteristic venetian-blind effect: alternating horizontal bands of recognisable picture and luminance noise, each band occupying approximately 1/10 of the frame height at ×5. Chroma is completely absent — the VHS colour-under phase-coherence system requires adjacent-field phase alignment that is destroyed at off-speed transport. The stripe pattern scrolls upward during cue (forward) and downward during review (reverse), at a rate of approximately (S−1)/S × frame_height per frame, where S is the speed ratio. At ×5, the full pattern cycles through one frame in about 167 ms.
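The scroll-rate arithmetic from the paragraph above, as a quick check (Python; the ×5 case cycles in about five frames):

```python
def shuttle_scroll_fraction(speed_ratio: float) -> float:
    """Fraction of frame height the noise-bar pattern advances per frame
    during cue/review: (S - 1) / S."""
    return (speed_ratio - 1.0) / speed_ratio

# At x5 the pattern repeats after 5 frames — roughly 167 ms at ~30 fps:
frames_per_cycle = 1 / (1 - shuttle_scroll_fraction(5))
print(frames_per_cycle * 1000 / 29.97)   # ~167 ms
```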
Consumer VHS decks achieve slow motion by electronic field repetition — playing at normal tape speed and repeating each captured field 3 or 5 times before advancing to the next. The still-frame noise bar is present and stationary during each repeated field; motion appears as discrete steps rather than smooth interpolation. Professional 3-head decks (Panasonic AG series, JVC editing decks) position additional heads to land cleanly on the track at 1/3 or 1/5 tape speed, eliminating the noise bar and producing smooth sub-speed motion. Audio pitch is reduced proportionally to tape speed (linear audio and HiFi both shift by the speed factor).
From 1975 through the mid-2000s, three analogue copy protection systems were deployed commercially on VHS releases. All three operate on the composite waveform — the resulting artefacts appear only on copies made through a VCR, never on the original signal as received by a television. All are simulated here from first principles on the live composite buffer. Requires Signal Domain mode. Full controls reference →
Two distinct pulse types are inserted into the Vertical Blanking Interval — commonly lines 10–20 per field — in pairs. The first is a positive bright-white pulse at 100–125 IRE, up to 125% of peak white. On a television this is invisible — VBI lines are blanked — but the recording VCR’s AGC circuit reads the excessive level and attenuates gain to compensate. Active video then records dark. Over 6–8 seconds the AGC slowly recovers, the picture brightens, then the next VBI field fires again — a rhythmic ~0.13 Hz cycling. When the composite level falls far enough, burst amplitude drops below the chroma decoder’s ACC lock threshold and colour disappears entirely.
Alongside each bright-white pulse is a separate negative pseudo-sync pulse: a narrow dip to sync-tip level or below (≥1 µs), immediately followed by the positive peak-white pulse (≥3 µs). The H-sync separator cannot distinguish these from genuine horizontal sync; each false H-sync leaves the H-PLL at a wrong phase when active video begins. All affected lines near the top of frame share the same per-frame horizontal offset — a coherent block displacement decaying exponentially over ~40 lines as the PLL re-locks: the “flagging” artefact. These negative pulses also reach the V-sync separator and, at high amplitude, cause continuous rolling.
Susceptibility varies considerably by deck design. Many JVC VHS decks manufactured after approximately 1985 incorporated countermeasures that reduce or eliminate the AGC effect on copies. Some televisions exhibit faint flagging from protected content even on direct display, if their sync separator bandwidth is sufficient to respond to VBI pulses. Later implementations also embedded a CGMS-A flag in VBI line 20, signalling “copy never” to compliant digital recording devices.
All Macrovision AGC and sync disruption artefacts, plus Analogue Copy Protection colorstripe. At regular intervals (every 17 scan lines for Level 2, every 25 for Level 3), a group of 2 or 4 consecutive scan lines has its colour burst phase inverted: 180° for NTSC, alternating ±90° for PAL. The VHS colour-under AFC circuit cannot ignore the inverted burst — it locks to the wrong phase during each group. On playback the chroma decoder sees phase-inverted colour information and renders complementary hues: a face appears cyan, a blue sky turns orange, greens go magenta.
At each group boundary the AFC must hunt back to the correct phase — a settling process spanning ~4 scan lines during which chroma amplitude collapses to near zero. At full ACP level these dropout zones occupy a significant fraction of each period. The alternating bands of inverted colours and grey dropout make protected content completely unrecognisable on a copy.
The earliest deployed system and mechanically the simplest. The V-sync equalising pulses and serrations are truncated from 4.7 µs to approximately 1.5 µs. Standard television sets tolerate the truncated pulses and lock normally — the CCIR-specified V-sync separator window is wide enough to accept them. VCR recording circuits, however, use a narrower gate tuned to the standard 4.7 µs serration timing. The truncated pulses fall below threshold; the VCR’s V-oscillator loses its sync reference and free-runs at a slightly off-nominal frequency determined by the V-Hold pot setting. On playback the recorded copy rolls continuously, one full frame approximately every 10–20 seconds under typical conditions — unwatchable, but not from any electronic noise, purely from vertical synchronisation failure.
signalCopyProtectColorstripe and the chroma demodulator AFC model in signalChromaFIR. signalCopyguardVSync and the V-oscillator drift model in MetalRenderer.
RF Interference Simulation
Every effect is injected into the composite signal itself — not applied as a post-process filter — so it interacts with the decoder's comb filters and demodulators exactly as it would on real hardware.
Audio Simulation
Analog TV audio was a complete engineering system — FM subcarriers, intercarrier mixing, stereo multiplexing, pre-emphasis curves, and finally a small speaker in a wooden box. Every artefact in this chain is modelled and audible in the app.
Full audio controls reference →
Television Receiver Controls
These sliders model the physical adjustment controls found inside a real CRT television. They operate exclusively on the decoded composite signal — the encoded waveform is never modified — so sync pulses and colour burst remain pristine.
Full receiver controls reference →
Every colour television contained a fixed colour decoder tuned to a single broadcast standard. In the real world, sets sometimes received signals they were never designed to decode — an NTSC set picking up PAL-M in Brazil, a PAL set brought to North America, or a consumer importing a TV from a different region. The results were not clean black-and-white: they produced a specific, physically deterministic set of artefacts that this app models from first principles.
Subcarrier beat. Every colour standard places its chroma subcarrier at a slightly different frequency. The encoder modulates colour onto its carrier; the decoder demodulates with its own reference oscillator. When the two frequencies don't match, they produce a beat at the difference frequency — a sinusoidal phase error that sweeps across the picture horizontally and vertically, creating a travelling herringbone pattern of colour oscillation. The beat frequency determines how many times the colour phase cycles across the active picture width:
| Signal | Decoder | Beat frequency | Effect |
|---|---|---|---|
| NTSC-M (3.579 MHz) | PAL (4.433 MHz) | ~854 kHz | Rapid colour oscillation, ~14 cycles across line + continuous roll |
| PAL (4.433 MHz) | NTSC (3.579 MHz) | ~854 kHz | Same beat, wrong colour matrix, V-switch not applied |
| PAL-M (3.576 MHz) | NTSC (3.579 MHz) | ~3.9 kHz | Extremely slow beat — barely one cycle every ~4 lines; subtle shimmer |
| PAL-M (3.576 MHz) | PAL-N (3.582 MHz) | ~6.4 kHz | Very close match; slight hue wobble across wide areas |
| PAL-N (3.582 MHz) | PAL (4.433 MHz) | ~851 kHz | Strong beat, same frame rate so no V-Hold roll |
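The beat frequencies in the table follow directly from the exact subcarrier values (Python; the frequencies below are the standard broadcast figures, NTSC rounded to the nearest Hz):

```python
SUBCARRIER_HZ = {
    "NTSC-M": 3_579_545.0,      # 315/88 MHz, rounded
    "PAL":    4_433_618.75,
    "PAL-M":  3_575_611.49,
    "PAL-N":  3_582_056.25,
}

def beat_khz(signal: str, decoder: str) -> float:
    """Cross-standard chroma beat = |encoder fsc - decoder reference fsc|."""
    return abs(SUBCARRIER_HZ[signal] - SUBCARRIER_HZ[decoder]) / 1e3

print(round(beat_khz("NTSC-M", "PAL")))        # 854 (kHz)
print(round(beat_khz("PAL-M", "NTSC-M"), 1))   # 3.9 (kHz)
```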
PAL V-switch on NTSC. PAL encodes the V (red-difference) chroma component with alternating polarity on successive lines — +V on even lines, −V on odd — to make colour errors cancel in adjacent pairs. This line-alternating inversion, called the V-switch, requires a 1H delay line in the decoder. An NTSC signal has no V-switch; feeding it into a PAL decoder means the delay line alternately adds and subtracts V from adjacent lines rather than cancelling error. The result is strong vertical colour banding at every pair of lines — essentially a manufactured version of Hanover bars — superimposed on the beat pattern.
Frame-rate mismatch and V-Hold roll. NTSC standards run at ~29.97 fps; PAL and SECAM at 25 fps. A receiver calibrated for one field rate cannot lock vertical sync to the other: its vertical oscillator runs at its own free-running frequency while sync pulses arrive at the wrong rate, and the picture rolls continuously. The Colour Decoder picker automatically adds the correct fractional frequency offset to the vertical oscillator model so the picture rolls at the physically correct rate — approximately 5 Hz for PAL/NTSC cross-use. The V-Hold knob becomes a trim around this forced offset, exactly as on a real set.
Wrong colour matrix. NTSC and PAL use slightly different colour-difference matrices — YIQ vs. YUV — and different hue reference conventions. Decoding a PAL signal through an NTSC matrix produces shifted hues and desaturated colours; decoding NTSC through a PAL matrix over-saturates reds and shifts greens toward yellow. Neither is dramatic on its own but adds to the overall cross-standard appearance.
SECAM cross-standard. SECAM is fundamentally different from NTSC and PAL: it transmits colour as FM deviation on line-sequential carriers (4.25 MHz for Db, 4.406 MHz for Dr) rather than as quadrature amplitude modulation. This makes cross-standard decoding qualitatively different from the NTSC/PAL/PAL-M/PAL-N family.
A QAM receiver (NTSC or PAL) presented with a SECAM signal tries to demodulate the FM carriers as if they were quadrature colour. The NTSC bandpass is centred at 3.58 MHz — the SECAM carriers at 4.25–4.41 MHz are almost completely outside it, so almost no chroma energy reaches the demodulator. The PAL bandpass at 4.43 MHz is closer to Db (4.25 MHz) and Dr (4.41 MHz), so slightly more energy gets through, but it is frequency-modulated: the QAM demodulator sees phase variations from the FM deviation as low-level, incoherent colour noise. Either way, the result is a largely monochrome picture with a faint chromatic shimmer — exactly what French emigrants experienced when they brought their SECAM sets to Germany or the UK.
A SECAM receiver presented with an NTSC or PAL signal tries to FM-discriminate a QAM signal. The QAM subcarrier has constant instantaneous frequency (it encodes colour in the amplitude and phase of a carrier at a fixed frequency), so the FM discriminator sees almost zero frequency deviation. The output is predominantly luma with negligible colour — again effectively monochrome.
All effects are computed from the underlying signal physics — subcarrier phase maths, actual line-phase offsets, the PAL delay-line model, and the SECAM FM discriminator — not post-processed.
Conditional Access — Pay TV
From the mid-1970s to the mid-1990s, cable and satellite operators used analog scrambling to restrict premium channels to paying subscribers. These techniques operated entirely on the composite video signal — modifying sync pulses, inverting picture polarity, shifting the active video window, or reordering scan lines — before it reached the viewer's receiver. Without a matching descrambler, the result was a specific and distinctive visual artefact for each system. All five major techniques are simulated here from first principles, operating on the same composite waveform used by the rest of the signal chain.
The horizontal sync pulse normally sits at −40 IRE — below blanking level — where the television's sync separator circuit detects it and locks the horizontal oscillator to 15,625 Hz (PAL) or 15,734 Hz (NTSC). Sync suppression raises the sync tip to blanking level (0 IRE), so the separator finds no negative-going edges to lock to. The horizontal oscillator then free-runs at its own frequency, which drifts continuously relative to the incoming signal.
The result: each successive scan line starts at a slightly different horizontal position, shearing the picture diagonally. Within a fraction of a second the picture tears completely and begins rolling. The picture content is tonally intact — correct brightness and colour — but the geometry is destroyed. On some sets with wide-hysteresis sync separators, a brief ghost of the image flickers through before sync is lost entirely.
Used by Oak Orion, Jerrold, and early MSO cable head-ends. Later combined with video inversion in more sophisticated systems. Defeated by "sync restoration" circuits (an LM1881 sync separator driving a monostable to regenerate clean sync) available as hobby kits for under $30 by 1985.
VideoCipher II, deployed by General Instrument in 1986, combined two techniques: sync suppression (to destroy picture lock) and per-line pseudo-random video inversion (to make the picture content unrecognisable even if lock were somehow restored).
Video inversion flips the active picture on each scan line about the blanking level: what was white becomes black and vice versa. The inversion decision for each line is drawn from a linear feedback shift register (LFSR) seeded from an encrypted value delivered in the vertical blanking interval. Without the correct seed, approximately half the lines are inverted — chosen differently on every line — producing a dense horizontal noise pattern with no coherent image information. The audio was independently encrypted using 56-bit DES, making VC-II the first consumer satellite system with true cryptographic protection.
The visual result is unmistakeable: a rolling, tearing field of alternating normal and inverted scan lines. Individual lines are sharp — the encoder has not added any noise — but the combination of sync suppression and line inversion makes the content completely unrecognisable. VC-II was eventually defeated by pirate descramblers that extracted the LFSR seed from the VBI by brute force; the successor VC-II RS (Renewable Security) moved the seed storage to a smartcard.
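The per-line inversion scheme can be sketched with a generic 16-bit LFSR (illustrative only — VC-II's actual polynomial and encrypted seeding are not documented here):

```python
def lfsr16(state: int) -> int:
    """One Galois LFSR step; 0xB400 is a well-known maximal-length 16-bit tap mask."""
    lsb = state & 1
    state >>= 1
    if lsb:
        state ^= 0xB400
    return state

def scramble_field(lines, seed=0xACE1):
    """Per-line pseudo-random inversion about blanking (0.0), VC-II style:
    the LFSR's low bit decides whether each line is complemented."""
    state, out = seed, []
    for line in lines:
        state = lfsr16(state)
        out.append([-s for s in line] if state & 1 else list(line))
    return out
```

Because inversion is an involution, a descrambler holding the same seed simply runs the identical function again to recover the picture.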
Videocrypt, designed by News Datacom (later NDS Group) and deployed on Sky's Astra 1A satellite from 1989, used a more sophisticated technique: line cut-and-rotate. The sync pulses and colour burst were left completely intact — the receiver could lock normally and the picture appeared stable — but each of the 287 active scan lines was circularly rotated by a different pseudo-random amount.
Circular rotation treats each scan line as a ring buffer. If the line has N samples and the cut point is k, the output is: samples k through N−1, followed by samples 0 through k−1. This is equivalent to cutting the line at a random horizontal point and swapping the two pieces. With a different cut point on every line — chosen from 256 possible byte-aligned positions in the real system — the picture is shredded horizontally. The lines are all correctly timed, correctly coloured, and correctly positioned vertically, but each one starts at a different horizontal position.
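The cut-and-rotate operation is only a few lines of code (Python sketch; the descrambler just rotates back by the same cut point):

```python
def cut_and_rotate(line, k):
    """Videocrypt-style circular rotation: cut at sample k, swap the pieces."""
    return line[k:] + line[:k]

def unrotate(line, k):
    """Descrambler: rotate back by the same cut point."""
    return line[-k:] + line[:-k] if k else list(line)

line = list(range(10))
scrambled = cut_and_rotate(line, 7)   # [7, 8, 9, 0, 1, ..., 6]
assert unrotate(scrambled, 7) == line
```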
The visual result is highly distinctive: a stable, locked picture where every horizontal line is shifted sideways by a different amount. Large flat-colour regions (sky, skin, backgrounds) appear as a staircase of correctly-coloured horizontal strips at random horizontal offsets — a pattern immediately recognisable to anyone who owned a satellite dish in 1990s Britain. In Videocrypt-1, the audio was transmitted in the clear and was fully audible on any receiver. The image was unrecognisable; the sound was not.
The cut-point for each line changed every 2.5 seconds (125 PAL fields), derived from an LFSR seeded by data in VBI lines 16–17, decrypted by the subscriber's smartcard. Videocrypt was defeated in stages: first by Season Interface hardware, then by emulator cards exploiting weaknesses in each successive card generation (07, 09, 0B, 0C, Sky 10). Each card generation was broken within months of release.
Nagravision, developed by Kudelski Group, scrambled vertically rather than horizontally. The active scan lines of each field were divided into groups of 32 and the lines within each group were permuted — reordered according to a pseudo-random permutation derived from the current decryption key. The pixel content of each line was completely untouched; only the temporal sequence of lines within the group was rearranged.
This is the complement of Videocrypt's approach: where Videocrypt destroys the horizontal coherence of each line while preserving vertical order, Nagravision destroys the vertical order of lines while preserving the horizontal content of each one. The sync and colour burst are untouched; the picture locks perfectly. Individual scan lines are sharp, correctly coloured, and correctly proportioned — they simply appear at wrong vertical positions.
The visual effect is a horizontal shredder: recognisable fragments of the image (a mouth, part of a logo, a strip of sky) appear in the correct horizontal position but at random heights, distributed across the frame according to the shuffled group's permutation. A viewer without a descrambler sees what looks like a correctly-exposed, correctly-coloured photograph that has been cut into 32-line horizontal strips and randomly reassembled.
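The group permutation and its inverse can be sketched as follows (Python; the key-to-permutation mapping here is an illustrative stand-in for Nagravision's actual PRNG):

```python
import random

def permute_groups(lines, key, group_size=32, inverse=False):
    """Nagravision-style shuffle: reorder lines within each 32-line group
    using a key-seeded pseudo-random permutation. Line contents untouched;
    inverse=True applies the reverse mapping (the descrambler)."""
    rng = random.Random(key)
    out = []
    for start in range(0, len(lines), group_size):
        group = lines[start:start + group_size]
        order = list(range(len(group)))
        rng.shuffle(order)               # same key -> same permutation sequence
        if inverse:
            inv = [0] * len(order)
            for dst, src in enumerate(order):
                inv[src] = dst
            order = inv
        out.extend(group[i] for i in order)
    return out
```

Scrambling and descrambling with the same key round-trips exactly, since the inverse permutation is derived from the identical PRNG sequence.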
Nagravision used a 64-bit key (Nagra 1), refreshed every few fields, delivered in the VBI. Nagravision 2 strengthened the crypto and introduced smartcard renewal. The system was used by Canal+, TPS, and many cable operators across France, Switzerland, and Belgium.
SSAVI (Suppressed Sync, Active Video Inversion) was developed by Scientific Atlanta and used by Zenith Electronics under the Z-TAC name. It combined two techniques on the composite waveform: sync tip raising and field-level video inversion.
Rather than fully suppressing the H-sync pulse, SSAVI raised the sync tips to blanking level (0 IRE) — leaving a partial sync reference in the waveform. This subtle difference had a dramatic consequence: the receiver's horizontal oscillator did not lose lock entirely and free-run (which produces a linear sawtooth displacement), but instead its AFC loop began to hunt — oscillating sinusoidally around the true line frequency. The result was a smooth sinusoidal S-curve pattern sweeping across the frame, with approximately 3–4 complete cycles per picture height. The picture content within each line slid left and right in a continuous wave rather than in independently-random jumps. Where the displacement was large, colour fringing appeared naturally at the wrap points as the subcarrier phase was discontinuous.
At the same time, a VBI control bit toggled the polarity of the entire active video on a field-by-field basis — approximately 50% of fields were inverted. Inversion was a full composite polarity flip: luma was complemented (white became black, black became white, mid-grey stayed mid-grey) and the chroma phase shifted 180°, producing a colour-negative image with all hues transposed to their complements. On non-subscribing receivers this toggled pseudo-randomly each field, giving an alternating negative/positive flicker superimposed on the sinusoidal tearing.
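The two SSAVI components described above can be sketched as follows (Python; the displacement amplitude and per-field phase constants are illustrative, not the app's values):

```python
import math

def ssavi_line_offset(line_idx, field_idx, lines_per_field=262,
                      cycles_per_height=3.5, amp_px=18.0):
    """Sinusoidal AFC-hunt displacement: a smooth S-curve with ~3-4 complete
    cycles per picture height, drifting slightly from field to field."""
    phase = 2 * math.pi * cycles_per_height * line_idx / lines_per_field
    return amp_px * math.sin(phase + 0.3 * field_idx)

def ssavi_invert(sample, invert_field):
    """Field-level polarity flip about mid-grey: white and black swap,
    mid-grey is its own complement (chroma phase rotates 180 degrees)."""
    return 1.0 - sample if invert_field else sample
```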
Three additional receiver-side effects compounded the scrambling. The burst gate is triggered by the H-sync separator; with raised sync preventing the separator from firing, the gate never opened — the colour PLL received no burst reference and its flywheel drifted slowly in phase, cycling hue over several seconds. The Automatic Colour Control loop, receiving no burst amplitude signal, ramped its gain to its clamp maximum, boosting chroma well above nominal and giving the colour-negative image its characteristic vivid, oversaturated appearance. And because SSAVI's H-oscillator was hunting (not free-running), the V-countdown circuit accumulated phase errors from the varying H-period, causing the picture to drift and re-lock episodically rather than rolling continuously — the SSAVI FAQ describes the result as a "scrolling, tilting, and swinging picture."
The combined visual effect — smooth sinusoidal S-curves sweeping through the image, with every other field an oversaturated colour negative and intermittent vertical drift — was immediately recognisable and unlike any other scrambling system of the period.
These systems were used in US cable television, SMATV, and MDS/MMDS wireless cable. Scientific Atlanta supplied SSAVI equipment to TCI, Cox, and Cablevision. Zenith Z-TAC was used by subscription TV operators in several US markets. Defeated by sync-restoration and inversion-detection circuits that identified the toggled field flag and the raised-sync displacement pattern.
All five techniques operate on the composite waveform before the IF chain — exactly as they did in real transmitters. The receiver's sync circuits, noise, IF ringing, and colour decoder all act on the already-scrambled signal, so interactions between scrambling artefacts and other effects (VCR recording of a scrambled channel; cross-standard decoding of a scrambled signal) emerge naturally from the signal maths.
CRT Display Simulation
Most CRT simulations post-process a finished frame with scanline overlays and bloom. This app models the full physical lifecycle of a real CRT — phosphor chemistry, electron-gun scan geometry, and the complete vacuum-tube power sequence: heater warmup, raster collapse, EHT discharge, and the glowing centre spot left on the screen after the beam finally dies. All from first principles, not image filters.
Every real CRT phosphor continues glowing after the electron beam has moved on. The persistence curve is exponential: when the beam cuts to black, luminance follows L_n = decay^n × L_0 — halving every fixed number of frames. The app models this with a per-pixel, per-channel IIR filter running at full resolution in a Metal compute kernel:
phosphor_new[c] = decay[c] × phosphor_prev[c] + (1 − decay[c]) × excitation[c]
The (1 − decay) weight is critical: it normalises the accumulation so that a steady constant input converges to exactly that input level. There is no brightness creep regardless of the decay factor — a consequence of the IIR steady-state theorem (setting phosphor_new = phosphor_prev = ss gives ss = excitation). Per-channel decay constants model the P22 phosphor triad used in virtually all consumer colour CRTs: red (Y₂O₂S:Eu) is the most persistent (decay multiplied by 1.15), green (ZnS:Cu,Al) is the reference (1.0×), and blue (ZnS:Ag) decays fastest (0.85×). All values are clamped to ≤ 0.985 so the system always converges. The user-facing Phosphor Decay slider scales the midpoint; the per-channel spread is always maintained.
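As a check on the convergence claim, here is a minimal Python sketch of the per-channel IIR. The base decay value of 0.80 is an illustrative midpoint, not the app's constant; the P22 multipliers and the 0.985 clamp are taken from the text.

```python
# Per-channel phosphor IIR sketch (names and base decay are illustrative).
BASE = 0.80
MULT = {"r": 1.15, "g": 1.0, "b": 0.85}        # P22 persistence hierarchy
decay = {c: min(BASE * m, 0.985) for c, m in MULT.items()}

def step(prev, excitation):
    """One frame: phosphor_new = decay*prev + (1-decay)*excitation."""
    return {c: decay[c] * prev[c] + (1.0 - decay[c]) * excitation[c]
            for c in prev}

# A steady input converges to exactly that input level (no brightness creep):
state = {"r": 0.0, "g": 0.0, "b": 0.0}
target = {"r": 0.5, "g": 0.5, "b": 0.5}
for _ in range(1000):
    state = step(state, target)

# After a cut to black, red (slowest decay) lingers longest:
lit = dict(state)
for _ in range(3):
    lit = step(lit, {"r": 0.0, "g": 0.0, "b": 0.0})
```

After 1000 steady frames every channel sits at the input level to within rounding, and three black frames leave red above green above blue, which is the warm trailing fringe the text describes.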
The accumulator is double-buffered: one texture is bound as access::read (the previous frame's accumulated phosphor), the other as access::write (the current output). On the next frame the roles swap. This ping-pong scheme provides full 16-bit floating-point precision per channel with zero GPU readback or CPU round-trips. The final CRT render pass always samples the just-written (current) phosphor texture, not the raw decode.
Slow-scan modes track a normalised beam position (beamScanLine) that advances by speed each frame and wraps at 1.0. The shader receives two parameters — beamScanTop and beamScanBottom — that define the active excitation window for the current frame; rows outside this range receive only the decay term. When speed ≥ 1.0 the accumulator is forced to [0, 1] and the scan line counter is reset, preventing a stale mid-screen window from persisting after the user turns the effect off.
Eight authentic phosphor types are available, each calibrated from primary source measurements — manufacturer CIE datasheets, RCA technical publications, and peer-reviewed phosphor literature. Each type shifts the IIR decay constants, rendered emission colour, and shutdown spot tint. Full technical detail and source citations for all eight types are in the dedicated Phosphors section below.
P22 (colour), P31 (oscilloscope green), P45 (white, sub-ms), P4 (B&W television white), P3 (amber radar), P7 (dual-persistence radar: white flash → yellow-green afterglow), P11 (deep blue photographic), P24 (cyan-green flying-spot scanner).
Monochrome phosphors (all except P22) convert the decoded composite signal to luma before accumulation, eliminating false-colour chroma artifacts that would otherwise accumulate on a physically single-compound screen. P7 uses this same luma path despite its two-layer composition — the white-to-yellow-green colour shift emerges from per-channel asymmetric decay of a luma input, not from the composite chroma.
P22 is a three-component phosphor system — separate red, green, and blue chemical compounds in a dot triad — that can reproduce colour by independent excitation of each dot. All other types (P31, P45, P4, P3, P7, P11, P24) are physically incapable of colour output: each is a single spectral emitter, or in P7's case a two-layer cascade whose colour behaviour arises from temporal decay physics rather than colour-channel separation.
This creates a problem for a composite television simulator. The decoded signal is always RGB, and naively accumulating that RGB into P31, P45, P4, P3, P11, or P24 produces false-colour artifacts — visible chroma dot-crawl and hue fringing across what should be a uniform-colour display. A P31 oscilloscope screen doesn't show residual NTSC subcarrier as a red-green-blue shimmer: it shows monochrome green.
The fix is physically correct. When a monochrome phosphor is selected, the decoded RGB frame is collapsed to luma using ITU-R BT.709 coefficients (Y = 0.2126R + 0.7152G + 0.0722B) before entering the IIR accumulator. The scalar is then multiplied by the phosphor's characteristic emission colour to produce the native tint — green for P31, warm white for P4, cool white for P45, amber for P3, deep blue for P11, cyan-green for P24. Single-compound phosphors also use a uniform decay constant across all three output channels.
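A minimal sketch of the luma collapse and tint. The BT.709 coefficients are from the text; the P31 emission triple here is an illustrative placeholder, not the app's calibrated constant.

```python
# Collapse decoded RGB to BT.709 luma, then tint by the phosphor's
# characteristic emission colour.
def luma_709(r, g, b):
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

P31_EMISSION = (0.0, 1.0, 0.2)   # illustrative green tint, not calibrated

def monochrome_excite(r, g, b, emission):
    y = luma_709(r, g, b)
    return tuple(y * e for e in emission)

# Residual subcarrier shimmer that decodes as pure red still excites
# the screen as green light only:
out = monochrome_excite(1.0, 0.0, 0.0, P31_EMISSION)
```

Whatever false colour the decoder produces, the screen can only emit its single spectral band scaled by luma.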
P7 uses the same luma path despite its dual-layer composition. The white-flash → yellow-green afterglow colour shift emerges from per-channel asymmetric decay applied to a luma input: blue decays fast (minimal 563 nm emission from the slow layer), green decays slowest. Accumulating raw composite RGB would instead let subcarrier chroma artifacts drift independently per channel, producing false colour — not the phosphor's physical emission.
A real vacuum-tube CRT television goes through distinct physical phases when powered on and off that are entirely absent from solid-state flat-panel simulations. The app models both the tube and solid-state modes.
Power-on warmup (tube mode): When mains is applied, the cathode heater filament requires 15–30 seconds to reach operating temperature. Two independent physical processes run in parallel: the B+ rail and EHT build slowly on an RC curve (τ ≈ 30% of warmup time), controlling raster size; cathode emission follows Richardson-Dushman thermionic kinetics (T² × exp(−W/kT), τ ≈ 20% of warmup time), controlling brightness. The result is a brief undeflected spot at screen centre while the H-oscillator locks (~5% of warmup time), followed by a slowly expanding raster that remains dim until cathode emission builds — the experience familiar from any B&W TV of the era. A configurable warmup duration (3–30 s) and HV discharge time control the exact pacing.
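The two parallel warmup processes can be sketched as independent exponentials with the time-constant fractions quoted above. Folding the Richardson-Dushman term into a single effective exponential is my simplification, not the app's exact model.

```python
import math

# Warmup sketch: EHT/raster and cathode emission rise independently.
WARMUP_S = 20.0                     # user-set duration (3-30 s range)
TAU_EHT = 0.30 * WARMUP_S           # raster size (B+ / EHT build-up)
TAU_EMISSION = 0.20 * WARMUP_S      # brightness (cathode emission)

def raster_size(t):
    return 1.0 - math.exp(-t / TAU_EHT)

def brightness(t):
    return 1.0 - math.exp(-t / TAU_EMISSION)
```

Because the emission time constant is shorter, brightness leads raster size throughout warmup: a small, dim picture brightens before it reaches full deflection.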
Warmup spot colour is phosphor-dependent: Different phosphor compounds require different threshold beam voltages to activate. At cold-cathode beam voltages the spot colour may differ significantly from the phosphor's operating colour: for P4 (B&W TV white), the ZnS:Ag blue sub-component requires higher voltage than ZnCdS:Ag, so the cold spot appears yellow-green (ZnCdS:Ag only) rather than white; for P7 (dual-layer radar), only the fast ZnS:Ag layer fires initially; for P22 (colour TV), the green gun dominates at low current, making the spot green-white. Single-compound phosphors (P31, P3, P11, P24) maintain constant hue from first emission because there is only one chemical compound to activate — the spot is the same colour as steady-state, just dimmer.
Shutdown sequence (tube mode): When mains is removed, the deflection oscillators stop instantly. The vertical scan capacitor drains first (small µF, τ ≈ 0.1–0.35 s): the picture squeezes vertically into a bright horizontal line — the same beam energy now concentrated on a thin phosphor strip, producing a dramatic brightness increase. The horizontal scan capacitor drains slightly slower (τ ≈ 0.35–0.55 s): the line shrinks horizontally toward a single bright centre spot. The EHT capacitor bleeds through the bleeder resistor (τ ≈ 0.4–1.5 s via ~10 MΩ): residual 20–30 kV continues driving the beam. On sets without a spot killer, this glowing spot was bright enough to permanently burn the phosphor in seconds — a notorious hazard for careless engineers.
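The shutdown cascade can be sketched with three exponential drains using representative values from the ranges above; the brightness model (same beam energy over a shrinking area) is a deliberate simplification.

```python
import math

# Shutdown timing sketch (representative time constants, in seconds).
TAU_V, TAU_H, TAU_EHT = 0.2, 0.45, 0.8

def collapse(t):
    """Vertical extent, horizontal extent, residual EHT, brightness."""
    v = math.exp(-t / TAU_V)          # vertical scan drains first
    h = math.exp(-t / TAU_H)          # horizontal follows
    eht = math.exp(-t / TAU_EHT)      # residual beam energy
    brightness = eht / max(v * h, 1e-4)   # energy concentrated on less area
    return v, h, eht, brightness
```

Early in the sequence the vertical extent is already far smaller than the horizontal, giving the bright horizontal line; brightness rises even as total beam energy falls.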
Shutdown spot model: The spot is rendered as two overlapping Gaussians — a tight "beam core" (σ ≈ 1.2% screen radius) representing the physical beam diameter, and a wide "phosphor halo" (σ ≈ 13%) from light scattering through the glass faceplate. On P22 phosphor the spot has a green-white tint (the green gun has the highest output efficiency at high current density). As tube age increases, weaker EHT regulation produces a brighter discharge — up to 60% more intense at maximum ageing. Worn capacitors add position jitter as the discharge current becomes noisier.
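The two-Gaussian spot profile, with the sigmas from the text; the relative weight of the halo is an assumed value.

```python
import math

# Shutdown spot: tight beam core plus wide faceplate-scatter halo.
SIGMA_CORE, SIGMA_HALO = 0.012, 0.13   # fractions of screen radius
CORE_GAIN, HALO_GAIN = 1.0, 0.15       # relative weights (assumed)

def spot(r):
    g = lambda s: math.exp(-0.5 * (r / s) ** 2)
    return CORE_GAIN * g(SIGMA_CORE) + HALO_GAIN * g(SIGMA_HALO)
```

A few percent of screen radius away from centre the core has vanished and only the soft halo remains, which is what gives the spot its glow rather than a hard dot.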
EHT sag / size breathing: A bright scene draws more beam current, which loads the flyback transformer and droops the extra high tension supply. Lower EHT means less deflection amplitude — the raster contracts by 1–4% on very bright scenes. The app computes this per-frame from scene brightness × ageing factor, applied as a UV scale before rendering. The effect is negligible on new tubes (ageing below 20%) and becomes clearly visible on old, unregulated sets.
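The per-frame size computation reduces to a one-line scale factor; the 4% ceiling follows the 1-4% range quoted above.

```python
# EHT sag sketch: raster contracts with scene brightness x ageing.
MAX_CONTRACTION = 0.04   # 4% at full brightness on a fully aged tube

def raster_scale(scene_brightness, ageing):
    """UV scale factor applied before rendering (1.0 = nominal size)."""
    return 1.0 - MAX_CONTRACTION * scene_brightness * ageing
```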
HV arcing: Residual 20–30 kV during EHT bleed-down can arc across aged or worn insulation on the anode cap or flyback secondary winding. The brief plasma discharge emits intense blue-white light (ionised gas). Arc probability scales with tube age and remaining EHT level, so worn tubes arc more frequently and unpredictably during shutdown.
Spot killer: Many post-1970 sets included a spot killer circuit — a relay that blanks the cathode beam the instant mains power is cut, preventing the shutdown spot from burning the phosphor. With the spot killer enabled, shutdown is clean and dark. Disabling it reveals the full shutdown glow sequence as it would appear on an unprotected tube.
Solid-state mode: From the mid-1980s, solid-state switch-mode power supplies replaced the vacuum-tube HV section. Warm-up time dropped to near zero and beam cutoff on power-down is essentially instant. In solid-state mode all power transitions are immediate — no warmup spot, no raster collapse, no shutdown sequence — matching the behaviour of sets like the Sony Trinitron KV-series from 1987 onward.
A real CRT television contained a hidden service menu — internal adjustments accessible only to a repair technician — covering the geometry, convergence, and high-voltage regulation of the tube. The app exposes the same controls, allowing the display to be deliberately mis-adjusted as an ageing set would gradually drift, or corrected back to factory spec.
HV regulation: Compensates EHT sag on bright scenes. An automatic HV regulator in a well-maintained set slightly expands the raster to counteract the droop caused by high beam current. Adjusting this control below its nominal value replicates the look of an unregulated or worn set where the picture visibly "breathes" in size with scene content.
Pincushion: Corrects the inward or outward bowing of horizontal and vertical lines caused by the geometry of the deflection yoke relative to the curved faceplate. A common fault on ageing yoke assemblies as the ferrite core relaxes. Increasing pincushion makes the sides of the raster bulge inward; decreasing it bows them outward.
Trapezoid: Corrects asymmetric vertical height between the left and right edges of the raster, caused by asymmetric inductance drift in the left and right halves of the deflection coil. A trapezoidal raster is characteristic of early or worn horizontal yoke windings.
Tilt: Rotates the entire raster slightly clockwise or anticlockwise, correcting physical CRT neck rotation or yoke ring misalignment. On vintage sets this was adjusted by physically rotating the yoke clamp; the service menu models it as a pure rotation applied before all other geometry.
Convergence: Adjusts the relative landing positions of the red, green, and blue electron beams on the phosphor triad. Perfect convergence means all three beams land on their respective phosphor dots at every point on the screen. As the yoke epoxy ages and relaxes, the beams drift apart — colour fringing appears at edges and fine detail becomes a rainbow smear. The app models convergence drift as a position-dependent UV offset per gun that increases toward the screen edges, matching the real yoke creep pattern.
Every colour CRT manufactured from the late 1960s onward included a built-in degaussing coil — a loop of wire wound around the inside of the bezel, driven through a positive-temperature-coefficient (PTC) thermistor. When a cold set is powered on, the PTC draws a large initial current through the coil. As the thermistor self-heats over ~1–2 seconds, its resistance rises dramatically, reducing the coil current to near zero. The coil therefore generates a naturally decaying alternating magnetic field — exactly the waveform needed to demagnetise the shadow mask.
Rather than painting a rainbow tint over the image, the app models the actual beam physics. The Lorentz force F = q(v × B) deflects each electron beam laterally. The coil wraps the bezel, so by the Biot-Savart law its field is tangential at every screen position — perpendicular to the radius vector from screen centre. This tangential field drives the characteristic swirling pattern. As the PTC heats and current decays, the field oscillation frequency slows and the amplitude envelope collapses, sweeping increasingly coarse arcs across the mask before dying away.
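A sketch of the decaying tangential displacement field. The time constant, oscillation rate, and strength are illustrative; what matters is the geometry, so the offset at every point is exactly perpendicular to the radius vector.

```python
import math

# Degauss sketch: tangential deflection with a PTC-style decaying envelope.
TAU = 0.6    # current-decay time constant, seconds (illustrative)
F0 = 50.0    # initial mains-rate oscillation, Hz

def degauss_offset(u, v, t, strength=0.02):
    """UV displacement at screen position (u, v); centre at (0.5, 0.5)."""
    rx, ry = u - 0.5, v - 0.5
    r = math.hypot(rx, ry) + 1e-6
    tx, ty = -ry / r, rx / r                 # radius rotated 90 degrees
    envelope = math.exp(-t / TAU)            # PTC heats, current collapses
    phase = math.sin(2 * math.pi * F0 * envelope * t)   # slowing oscillation
    amp = strength * envelope * phase * r
    return amp * tx, amp * ty
```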
Inline gun differential: In a colour CRT the red, green, and blue cathodes are arranged horizontally in the gun neck (the inline geometry). The same external magnetic field deflects them by slightly different amounts because each travels a marginally different path to its phosphor dot. The simulation models this as a per-gun horizontal UV offset proportional to the local field amplitude — producing genuine colour fringing from adjacent image content rather than an additive tint.
Dot-pitch cross-excitation (P22 only): When a beam is displaced by a non-integer number of phosphor dot pitches, it partially excites the adjacent wrong-colour dot type. The three P22 dot types (R, G, B) are spaced 120° apart in the triad, so the cross-excitation weighting follows a cos² partition at 0°/120°/240° — cycling smoothly through correct, +1 cyclic shift, and −1 cyclic shift as displacement varies across the screen. This produces the characteristic colour bands on a uniform-grey input even without any colour in the source. Monochrome phosphors (P31, P4, P45, P3, P7, P11, P24) emit a single spectral band and therefore show no colour during degauss — only the geometric beam displacement and a brief PSU brightness perturbation.
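The cos² partition can be written directly: one dot pitch of displacement maps to 120° of phase, and the three weights cycle through correct, +1, and −1 dot types. Raw cos² terms at 0°/120°/240° sum to a constant 1.5, so the normalisation to unit total here is my assumption.

```python
import math

# cos^2 triad cross-excitation: displacement (in dot pitches) -> weights
# for the correct, +1 cyclic, and -1 cyclic dot types.
def triad_weights(displacement_pitches):
    theta = 2 * math.pi * displacement_pitches / 3.0   # 1 pitch = 120 deg
    w = [math.cos(theta - k * 2 * math.pi / 3) ** 2 for k in range(3)]
    s = sum(w)          # identically 1.5 for any theta
    return [x / s for x in w]
```

At zero displacement two thirds of the excitation lands on the correct dot; at exactly one pitch it has cycled onto the +1 neighbour, which is how a uniform grey input acquires colour bands as displacement varies across the screen.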
PSU sag / flash: The coil current peak loads the mains power supply rail, causing a brief uniform-brightness flash at the start of the cycle followed by a slight overall dim as the supply droops. This is applied as a global scalar to the final rendered frame — the power supply has no spatial selectivity, so it affects every pixel identically.
Duration: ~1.8 s (built-in auto-coil) · P22 cross-excitation: cos² triad model · Monochrome: PSU sag only, no colour
Children in the 1970s and 80s discovered that touching a colour television screen with a refrigerator magnet produced spectacular — and often permanent — colour distortion. The app recreates this with an interactive simulation: tap the MAGNET button, then drag a finger across the video area. The image reacts as if a real permanent disc magnet were being pressed against the glass faceplate.
Beam deflection physics: A permanent magnet held against the glass generates a concentrated fringe field. At the screen surface the field is predominantly radial — pointing toward or away from the touch point. The Lorentz force F = q(v × B) acts on beams travelling in −z toward the screen: a radial in-plane B component creates beam deflection perpendicular to B — that is, tangential rather than radial. The visual result is a swirl or vortex: all beams near the touch point are deflected in the same angular direction (clockwise), with concentric colour rings appearing at multiples of the phosphor dot pitch. Displacement magnitude falls off as a dipole-like 1/(r² + soft²) from the contact point and is capped at 10% of screen width at the pole.
Inline gun differential (P22 only): A colour CRT's inline trigun has red and blue cathodes at ±gunSep from the central green cathode along the horizontal axis of the neck. The same external field therefore deflects them by slightly different amounts, creating lateral colour fringing that grows with the horizontal component of the magnet field — a visually distinctive signature of inline-gun geometry. Monochrome phosphors (P31, P4, P45, P3, P7, P11, P24) have a single electron gun; the gun differential does not apply and is suppressed.
Dot-pitch cross-excitation (P22 only): As each beam is displaced from its nominal phosphor dot, it begins to excite adjacent wrong-colour dots. The cos² 120°-triad cross-excitation model (shared with the degauss simulation) determines the RGB mix reaching each output pixel. Near the touch point, displacement can exceed several dot pitches, producing intense colour scrambling. The cross-excitation strength saturates at ~55% at large displacements. Monochrome phosphors show only the geometric warp — the single-compound emitter has no colour to scramble.
Shadow mask magnetisation — permanent purity damage: In a real CRT, the steel shadow mask is a ferromagnetic material with significant remanence. While the magnet is held against the screen, the mask accumulates magnetisation proportional to the magnet's field strength and the contact duration — hold longer or use a stronger magnet to build up a more severe stain. When the finger lifts, the external field is gone but the mask retains its remanence. This residual field continues to deflect beams via the same tangential Lorentz mechanism: on a colour (P22) CRT it produces fixed colour swirl patches (purity errors); on monochrome phosphors it produces geometric warp without colour scrambling, since there is only one beam and one phosphor compound. Both persist across sessions. Dragging the magnet across the screen paints a continuous smear of damage — a line, arc, or any shape — exactly as a real magnet dragged across a television would leave a trail of purity errors across the shadow mask. The magnetisation map is stored as a 128×128 field texture and persisted permanently to device storage; it survives app restarts exactly as physical damage would. Pressing DEGAUSS triggers the PTC coil sequence, applying a decaying AC field that clears the entire texture over ~1.8 seconds.
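The remanence lifecycle (accumulate while held, persist after release, clear under a decaying degauss field) can be sketched on a coarse grid. The app uses a 128×128 persisted texture; the grid size and rates here are illustrative.

```python
# Remanence sketch: magnetisation builds with contact time and strength,
# then a degauss sweep demagnetises the whole grid.
N = 16
field = [[0.0] * N for _ in range(N)]

def hold_magnet(cx, cy, strength, frames):
    for _ in range(frames):
        for j in range(N):
            for i in range(N):
                r2 = ((i - cx) ** 2 + (j - cy) ** 2) / N ** 2
                field[j][i] += 0.01 * strength / (r2 + 0.01)

def degauss(frames=60):
    for _ in range(frames):
        for j in range(N):
            for i in range(N):
                field[j][i] *= 0.85   # decaying AC field demagnetises

hold_magnet(8, 8, 1.0, 30)
peak = max(max(row) for row in field)
degauss()
residual = max(max(row) for row in field)
```

Holding longer or pressing harder raises the accumulated stain, exactly as described; the degauss sweep leaves only a negligible residue.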
Field strength reference: A permanent ring magnet at 8 mm produces approximately 10 mT (100 gauss) at the surface. True dipole field falloff is 1/r³; the simulation uses 1/(r² + soft²) which matches the near-field character of a contact magnet where r is much smaller than the magnet diameter. At 1 cm the field is roughly 8× weaker than at contact; beyond 5 cm it is negligible — consistent with why CRT televisions survived unshielded speakers at normal distances.
Strength slider: 0.05–1.0 · CW tangential swirl warp: all phosphors · Colour scramble: P22 only · Drag to paint smears/lines · 128×128 magnetisation field · Persists across restarts
In 1949, the Soviet Union released the КВН-49 — the USSR's first mass-produced television receiver. The name was an acronym of its three designers: Кенигсон, Варшавский, and Николаевский. It was tiny, it was expensive (900 roubles — about six months' salary for an average worker), and its screen measured only 105 × 140 mm — about the size of a cigarette packet. Soviet engineers solved that problem in a way that became one of the most recognisable objects in post-war Soviet domestic life.
The solution was a separate accessory that attached to the front of the set: a rectangular tank filled with distilled water or glycerol, sealed behind a curved glass face and clamped onto the bezel. This water-filled magnifier enlarged the tiny image to something watchable from across a room. It worked. It was also optically terrible in the most wonderful ways.
Barrel distortion — the fishbowl effect: A water-filled positive meniscus lens produces strong barrel distortion. The centre of the image is magnified significantly more than the edges, pulling the corners inward and giving the characteristic convex, fishbowl appearance. On a КВН-49, a moderately curved face produced barrel coefficients far higher than any CRT faceplate — the entire raster appeared to be painted on the inside of a glass bowl. The simulation maps a 0–1 strength slider to a barrel coefficient of 0–2.4×, compared to 0–0.25× for the standard CRT curvature control, reproducing the dramatic curvature of the original lens at full strength.
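A sketch of the slider mapping and warp. The 0-2.4× coefficient range is from the text; the exact form of the radius mapping (centre magnified (1+k)× more than the rim) is my assumption.

```python
# Barrel warp sketch: display radius -> source sample radius.
def lens_sample_radius(r, slider):
    k = 2.4 * slider           # slider 0-1 maps to coefficient 0-2.4x
    return r * (1.0 + k * r * r) / (1.0 + k)
```

At zero strength the mapping is the identity; at full strength the centre of the source fills most of the display while the rim stays pinned, producing the fishbowl look.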
Chromatic aberration: Water has an Abbe number of approximately 55.9, meaning it disperses blue light more than red. Through a thick curved water lens, the red, green, and blue components of the image focus at slightly different distances and are displaced by slightly different angles. Toward the periphery, the three colour channels split visibly — sharp edges grow coloured fringes, and fine detail blooms into a rainbow smear at the outer third of the image. The simulation models this as a radial per-channel UV offset growing with distance from the lens centre: blue displaced slightly outward (more magnified — water bends short wavelengths more strongly), red inward (less magnified), green held at the nominal position.
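The per-channel radial split reduces to three slightly different magnifications. The coefficients here are illustrative; only the ordering (blue outward, red inward, green nominal) is from the text.

```python
# Chromatic aberration sketch: per-channel radial scale about lens centre.
CA = {"r": -0.01, "g": 0.0, "b": +0.01}   # illustrative strengths

def channel_radius(r, channel, strength=1.0):
    return r * (1.0 + CA[channel] * strength * r * r)
```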
Fluid surface ripple: The water or glycerol in the lens tank was not perfectly still. Thermal convection from the CRT's own heat, vibration from the power transformer, and the natural meniscus curvature of the liquid all contributed a slow, gentle wave motion to the lens geometry. A КВН-49 that had been on for an hour would have a noticeably different optical character than one just switched on. The simulation applies two beating sinusoidal waves per axis at approximately 2 Hz, driven by the frame counter, producing a slow optical undulation that varies across the surface without ever quite repeating.
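The beating-wave geometry can be sketched as two sinusoids per axis near 2 Hz, driven by the frame counter; the specific frequencies and amplitudes are illustrative.

```python
import math

# Fluid ripple sketch: two beating sinusoids per axis, ~2 Hz.
def ripple(u, v, frame, fps=60.0):
    t = frame / fps
    du = 0.002 * (math.sin(2 * math.pi * (2.0 * t + 3 * u + v)) +
                  math.sin(2 * math.pi * (1.9 * t + 5 * u)))
    dv = 0.002 * (math.sin(2 * math.pi * (2.1 * t + 4 * v)) +
                  math.sin(2 * math.pi * (2.0 * t + 2 * u + 3 * v)))
    return du, dv
```

The slightly detuned frequencies mean the pattern never quite repeats, matching the slow undulation described.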
Circular lens mount vignette: The magnifier tank had a circular aperture cut into its metal front panel — the viewing window. Outside this aperture, the metal frame was fully opaque. The circular boundary, with a soft optical roll-off at the edge from the lens glass terminating against the mount, produced a hard circular vignette that the user would have seen as a black bezel around the enlarged image. The simulation renders this as photometric black beyond the aperture radius, with a 22 mm soft edge matching the optical termination of a real lens mount.
Gyroscope parallax (iPhone & iPad): Tilting a real magnifying lens relative to the object behind it shifts which part of the image is enlarged — tilt right and the magnified centre moves left across the CRT face. On iPhone and iPad, the simulation reads device attitude from CoreMotion at 60 Hz, using roll and pitch to shift the effective centre of the lens warp in real time. The reference attitude is captured when the lens is first enabled, so whatever orientation the device is held in becomes the neutral position. Parallax sensitivity scales with lens strength — a more strongly curved lens shifts more visibly for the same physical tilt angle, matching the angular geometry of real optics.
КВН-49 in Soviet culture: More than a technical curiosity, the КВН-49 became a genuine cultural artefact. By the early 1950s, groups of neighbours would crowd into whichever apartment had one — the magnifier lens pushed viewable distance out to perhaps three metres, enough for ten people if they sat close together. The name КВН was later borrowed for Клуб Весёлых и Находчивых (Club of the Merry and Resourceful), the Soviet comedy improvisation programme that debuted in 1961 and is still broadcast today — one of the longest-running television formats in history, spanning the Soviet era, its collapse, and thirty years of independent broadcasting. Surviving КВН-49 sets, complete with their glass tank, are prized by Soviet-era collectors. The combination of the tiny screen, the ungainly magnifier, and the ritual of gathering to watch state television through a tank of water is one of the more specific windows into everyday Soviet domestic life that a physical object can offer.
Screen: 105 × 140 mm · Standard: 625-line 50 Hz (OIRT System D) · Lens fill: distilled water or glycerol · Production: 1949–1962 · Barrel coefficient: 0–2.4× · CA: Abbe V_d ≈ 55.9 (water) · Parallax: CoreMotion attitude 60 Hz, iOS/iPadOS
CRT Phosphor Chemistry
The final stage of the simulation models the chemical physics of the phosphor screen itself. Every phosphor type is characterised from primary source data: EIA designation tables, RCA technical publications, manufacturer CIE chromaticity datasheets, and peer-reviewed phosphor literature. Each type is rendered using an IIR temporal accumulation kernel running at full resolution: phosphor_new = decay × phosphor_prev + (1 − decay) × excitation. The (1 − decay) normalisation prevents brightness creep: a steady input always converges to exactly that input level regardless of decay factor.
Colour derivation methodology. Every phosphor emission colour in the simulation is derived from published CIE xy chromaticity coordinates using a fixed three-step pipeline: (1) convert CIE xy to XYZ tristimulus values at Y=1 — i.e. X = x/y, Z = (1−x−y)/y; (2) multiply by the 3×3 ITU-R BT.709 / sRGB matrix to get linear Rec.709 RGB values; (3) normalise so the dominant channel equals a reference brightness chosen to avoid clipping in the shader accumulator. Negative RGB values (which occur for colours outside the Rec.709 gamut, such as the deep blue of P11 or P24) are clamped to zero rather than extrapolated — the clamped colour is still the closest Rec.709 representation of the true emission. The per-channel ratios produced by this pipeline determine the rendered hue exactly; the overall scale factor is a free parameter that sets display brightness without changing colour. No value in the simulation is hand-tuned — all are computed from this formula applied to published source data.
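The three steps can be written out directly using the standard XYZ to linear Rec.709 matrix. Fed the RCA P31 measurement, this sketch reproduces the figures quoted in the P31 section.

```python
# CIE xy -> linear Rec.709, per the three-step pipeline described above.
M = [( 3.2406, -1.5372, -0.4986),
     (-0.9689,  1.8758,  0.0415),
     ( 0.0557, -0.2040,  1.0570)]

def xy_to_rec709(x, y, reference=1.0):
    # (1) xy -> XYZ tristimulus at Y = 1
    X, Y, Z = x / y, 1.0, (1.0 - x - y) / y
    # (2) XYZ -> linear RGB, clamping out-of-gamut negatives to zero
    rgb = [max(0.0, m0 * X + m1 * Y + m2 * Z) for m0, m1, m2 in M]
    # (3) normalise so the dominant channel equals the reference level
    peak = max(rgb)
    return [c * reference / peak for c in rgb]

p31 = xy_to_rec709(0.245, 0.523, reference=1.15)   # RCA P31 measurement
p4 = xy_to_rec709(0.270, 0.300)                    # P4 blend chromaticity
```

For P31 the red channel clamps to zero and the blue/green ratio lands at ~0.20; for P4 the blue/red ratio (which normalisation cannot change) matches the quoted 1.361/0.665.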
P22 is a composite EIA designation for the tri-phosphor chemistry used in virtually every consumer colour CRT produced between 1954 and the end of CRT production. Three separate phosphor compounds form the dot triad: Red — Y₂O₂S:Eu (europium-doped yttrium oxysulfide, 611 nm), Green — ZnS:Cu,Al (530 nm), Blue — ZnS:Ag (450 nm).
Each compound has a different decay rate. Red is the slowest (~3–6 ms to 10%), blue the fastest (~0.5–2 ms). The asymmetry creates the characteristic warm trailing fringe visible on fast-moving objects on original CRT footage. The simulation uses per-channel IIR decay multipliers: red × 1.15, green × 1.0 (reference), blue × 0.85 — calibrated from the RCA TPM-1508A measured persistence hierarchy for each compound. White point: 9300 K, the standard for Japanese consumer sets from the 1980s.
CIE primaries: R(0.610, 0.340) G(0.290, 0.600) B(0.150, 0.060) — SMPTE-C / EBU P22 standard. Decay: R 3–6 ms, G 1–3 ms, B 0.5–2 ms.
P31 (ZnS:Cu or ZnS:Cu,Ag) is the phosphor behind the "green screen" aesthetic of 1970s–80s computing — the Apple IIe, DEC VT terminals, Amdek and Zenith monitors, and the vast majority of analogue oscilloscopes. Sub-millisecond persistence (0.01–1 ms to 10%) means no visible trailing at 60 Hz. P31 is 2.2× brighter than the older P1 willemite green.
The RCA TPM-1508A (1961) measures the emission at CIE x=0.245, y=0.523 — vivid green, not yellow-green. Applying the XYZ→Rec.709 pipeline: X=0.468, Y=1.000, Z=0.443 → R=0, G=1.439, B=0.291. The B≈0.29 is real — the P31 emission band extends from ~490 nm to ~560 nm where z̄ is non-negligible, so some blue tristimulus is present even though the visual appearance is pure green. Normalised to G=1.15 for the shader accumulator: colourTemp = (0.000, 1.150, 0.232). The hue ratio B/G = 0.202 is preserved exactly. Phosphor Technology Ltd independently confirms CIE xy (0.287, 0.521) for P31, consistent with the slightly varying ZnS:Cu formulations across manufacturers.
Peak: 520–530 nm · CIE (0.287, 0.521) · Decay: 0.01–1 ms · Rel. luminance: 100 (reference)
P45 has a very flat spectral response across the visible range and an approximately D65 white point — neither warm nor cool. It was developed for applications requiring accurate tonal reproduction at high scan rates: broadcast camera viewfinders, radar displays requiring fast phosphor response, and some early medical imaging CRTs. Sub-millisecond decay (shorter than P31) makes it suitable for high-repetition-rate applications where persistence would cause temporal smear.
Decay: < 1 ms · White point: ~D65 · Uniform RGB spectral response
P4 is a two-component blend designed to produce a visually white emission for consumer monochrome television receivers. The blue sub-component (ZnS:Ag, ~455 nm) and yellow-green sub-component (ZnCdS:Ag, ~565 nm) combine to give a measured CIE blend of x=0.270, y=0.300 — a cool white of approximately 8500 K when measured from a fresh tube. The RCA TPM-1508A (1961) records the blend at this chromaticity; decay to 10% is 60 µs. Applying the XYZ→Rec.709 pipeline: X=0.900, Y=1.000, Z=1.433 → R=0.665, G=1.063, B=1.361, normalised to G=1: colourTemp = (0.626, 1.000, 1.281) — a distinctly blue-biased white, matching the cool 8500 K colour temperature.
The warm, slightly yellowed appearance associated with vintage B&W television sets is not the phosphor's natural emission — it is caused by yellowing of the aluminium backing layer (and sometimes the front glass) with age and UV exposure. A fresh P4 tube is distinctly cool-white. The simulation uses the measured cool-white emission; use the brightness and contrast controls to simulate an aged, warm set.
Power-on spot colour: At cold-cathode beam voltages (below ~1–2 kV), the ZnS:Ag blue sub-component does not activate — its threshold voltage is higher than that of ZnCdS:Ag. The warmup centre spot therefore appears yellow-green rather than white, showing only the ZnCdS:Ag emission at ~565 nm. CIE coordinates for this component alone are approximately (0.462, 0.526) → Rec.709 (1.24, 1.00, 0.00). As the EHT rises during warmup, the ZnS:Ag component activates and the spot transitions to the full cool-white blend. The shutdown spot, driven by residual EHT at normal operating voltage, shows the full P4 white.
CIE blend (0.270, 0.300) · Warmup spot CIE (0.462, 0.526) · Decay: 60 µs to 10% · Source: RCA TPM-1508A (Oct 1961) · ~8500 K fresh
P3 (zinc beryllium silicate with manganese activator — Zn₈BeSi₅O₁₉:Mn, also written ZnBeSiO₄:Mn) was one of the first phosphors used in operational radar displays, deployed extensively in WWII ground and airborne radar equipment. Its amber-orange emission at 602 nm was chosen partly for night-vision preservation — the long-wavelength amber does not suppress scotopic (dark-adapted) vision as severely as green or white phosphors, an important consideration for operators in darkened radar rooms.
P3 has a 13 ms exponential decay — substantially longer than P31 — giving a visible trailing afterglow on moving targets. CIE xy approximately (0.54, 0.43), computed from the 602 nm spectral locus: deep saturated amber-orange, similar to sodium street lighting. The beryllium compound is toxic and carcinogenic; production ceased around 1950 and it was replaced by safer alternatives. It is essentially unencountered today except in surviving WWII-era equipment. Decay model: exponential (Dyall 1948 classification).
Applying the XYZ→Rec.709 pipeline at CIE (0.54, 0.43): X=1.256, Y=1.000, Z=0.039 → R=1.363, G=1.012, B=−0.003. The blue value is negative and clamped to zero. This is not a rounding error — it is a physical constraint. The CIE z̄(λ) colour-matching function is essentially zero above 560 nm; the Mn-activated silicate emitting at 602 nm therefore produces zero blue tristimulus by physics. Any non-zero blue in a P3 model is wrong. colourTemp = (1.363, 1.060, 0.000), normalised to G=1.06.
Peak: 602 nm · CIE ~(0.54, 0.43) · Rec.709: (1.363, 1.06, 0) · B=0 by physics (z̄≈0 at 602 nm) · Decay: 13 ms · Obsolete ~1950
P7 is a cascade (two-layer) phosphor used throughout WWII and post-war radar PPI (Plan Position Indicator) displays and in early oscilloscopes for capturing one-time transient events. Its distinctive character comes from two separate decay mechanisms operating simultaneously.
The outer fast layer (ZnS:Ag or B*-ZnS:Ag on CdS substrate) is directly excited by the electron beam. It emits blue-white at 440 nm and decays within ~1 ms — essentially instant at display frame rates. The inner slow layer ((Zn,Cd)S:Cu) is excited photoluminescently by the UV and blue output of the fast layer. It emits yellow-green at 563 nm and has an inverse power-law decay lasting well over one minute in low ambient light.
The visual result: a fresh radar sweep trace appears blue-white; as the beam moves on and the fast layer decays, the slow layer's yellow-green persistence becomes dominant. Old traces glow warmly amber-green; new traces are cold blue-white. Many P7 radar displays were fitted with an orange-amber plastic filter over the tube face to suppress the blue flash entirely, making the display appear uniformly yellow-orange.
The simulation approximates the dual-layer physics with per-channel asymmetric IIR: B × 0.62 (fast — minimal blue in 563 nm emission), G × 1.22 (slowest — yellow-green dominant), R × 1.02 (moderate red in 563 nm). A separate slow-decay buffer would more accurately replicate the power-law tail, but the IIR approximation correctly captures the white-flash → yellow-green fade within typical playback timescales. Luma input is used to prevent composite chroma artefacts accumulating as false colour on a display that should show only phosphorescence colour.
Peaks: 440 nm (fast, < 1 ms) + 563 nm (slow, > 60 s) · Composite CIE ~(0.358, 0.524) · jniemann66 gist
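The per-channel asymmetric IIR reduces to a few lines. A minimal sketch, using illustrative decay factors rather than the app's constants, shows the white-flash to yellow-green fade emerging purely from per-channel decay rates:

```python
# Per-channel phosphor persistence as an asymmetric IIR:
# attack is instant (max), decay is a per-channel multiply per frame.
# Decay factors are illustrative placeholders, not the app's values.
DECAY = {"r": 0.90, "g": 0.97, "b": 0.62}  # green slowest, blue fastest

def step(persistence, frame):
    """Advance one frame: each channel keeps the brighter of the fresh
    input and the decayed previous value."""
    return {c: max(frame[c], persistence[c] * DECAY[c]) for c in DECAY}

# A single white flash followed by black frames: blue dies quickly,
# green lingers -- the P7 white-flash -> yellow-green fade.
state = {"r": 0.0, "g": 0.0, "b": 0.0}
state = step(state, {"r": 1.0, "g": 1.0, "b": 1.0})   # the flash
for _ in range(10):
    state = step(state, {"r": 0.0, "g": 0.0, "b": 0.0})  # afterglow only
```

After ten frames the blue channel has collapsed while green remains well above half brightness, which is the qualitative behaviour the dual-layer physics produces.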
P11 (ZnS:Ag,Cl) emits a deep blue at 460 nm with approximately 2 ms persistence. Its primary use was in oscilloscopes equipped with camera hood attachments for recording waveforms on photographic film — a common technique before digital storage oscilloscopes. Blue light at 460 nm exposes orthochromatic and panchromatic film far more efficiently than P31's green, allowing high-brightness waveform photography without overdriving the tube.
The phosphor's relative luminance to the human eye is only 25% of P31 (the eye is less sensitive at 460 nm than at 520 nm), but its relative writing speed — a measure of how quickly the trace can be written before the film is underexposed — matches P31 at 100% because the film's blue sensitivity compensates. A P11 oscilloscope trace appears visually dim compared to a P31 scope at the same beam current, but the photographs were well-exposed. Notable application: the Apollo Guidance Computer's DSKY (Display-Keyboard) indicator unit used a P11 CRT for its numeric readout.
Applying the XYZ→Rec.709 pipeline at the spectral locus near 450 nm (CIE x≈0.135, y≈0.060): X=2.250, Y=1.000, Z=13.417 → R=−0.934→0, G=0.253, B=14.102. The G/B ratio = 0.253/14.102 = 0.01794 — green is 56× weaker than blue. Red is negative (outside Rec.709 gamut) and clamped to zero. colourTemp = (0.000, 0.025, 1.380), scaling G/B to B=1.38. This is a physically pure deep blue with essentially no contamination from other primaries. Any significant green or red component in a P11 model is wrong.
Peak: ~450–460 nm · G/B = 0.01794 (56× blue dominant) · Rec.709: (0, 0.025, 1.38) · Decay: 2 ms · Rel. luminance: 25 · Apollo DSKY
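The pipeline arithmetic above can be reproduced with the standard linear XYZ→Rec.709 matrix. A short sketch (matrix coefficients rounded to four decimals; the app's final normalisation step is not shown):

```python
def xy_to_linear_rgb(x, y):
    """CIE xy chromaticity (at Y = 1) to linear Rec.709 RGB via the
    standard XYZ -> RGB matrix. Out-of-gamut channels come out
    negative and must be clamped."""
    X, Y, Z = x / y, 1.0, (1.0 - x - y) / y
    r = 3.2406 * X - 1.5372 * Y - 0.4986 * Z
    g = -0.9689 * X + 1.8758 * Y + 0.0415 * Z
    b = 0.0557 * X - 0.2040 * Y + 1.0570 * Z
    return r, g, b

# P11 near the 450 nm spectral locus:
r, g, b = xy_to_linear_rgb(0.135, 0.060)
clamped = (max(r, 0.0), g, b)   # red is out of gamut: clamp to zero
ratio = g / b                   # lands near the 0.018 quoted above
```

Running it gives R ≈ −0.93, G ≈ 0.253, B ≈ 14.10, matching the figures in the text.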
P24 (ZnO:Zn) has one of the shortest persistence times of any designated phosphor: approximately 1.5 µs to 10% (Jankowiak 2010; consistent with the 1–10 µs range in EIA TEP-116 literature). Its peak emission is 505–507 nm — a cyan-green distinctly bluer than P31 — with CIE xy approximately (0.19, 0.40), placing it in the aquamarine-teal region of the chromaticity diagram.
The primary application is the flying-spot scanner: a CRT spot scans across photographic film or a transparency at very high speed, and a photomultiplier or photodiode array samples the transmitted light point by point as the spot moves. The 1.5 µs decay is essential to avoid image smear at scan speeds measured in millions of pixels per second — any persistence longer than the sampling interval would blur the output. P24 is also used in some beam-index colour CRT displays where a single-gun tube requires a precise timing reference. Its relative luminance is only 8% of P31 — P24 is visually very dim, a characteristic tradeoff for extreme temporal speed.
Peak: 505–507 nm · CIE (0.19, 0.40) · Decay: 1.5 µs · Rel. luminance: 8 · Flying-spot scanners
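The decay-versus-scan-speed constraint is simple arithmetic. A sketch, assuming a 10 Mpx/s scan rate purely for illustration:

```python
# How far does afterglow smear at flying-spot scan rates?
# The decay figures come from the text; the pixel rate is an assumption.
p24_decay_to_10pct = 1.5e-6   # seconds (P24)
p3_decay = 13e-3              # seconds (P3, for comparison)
pixel_rate = 10e6             # pixels/second, an assumed scan rate

# Pixels still above 10% brightness behind the spot:
p24_smear_px = p24_decay_to_10pct * pixel_rate   # 15 px of trailing glow
p3_smear_px = p3_decay * pixel_rate              # 130,000 px: the whole frame
```

At this rate P24 trails only 15 pixels of glow, while a 13 ms phosphor such as P3 would smear across the entire frame, which is why flying-spot work demands the fastest registered phosphors.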
Test Cards & Patterns
Every major analogue television operation began each broadcast day by transmitting a test card — an image engineered to reveal faults in the entire signal chain from camera to living room. The app includes fourteen industry-standard patterns and one geometric test chart, spanning 1939 to the 1990s.
Seven vertical bars at 75% amplitude provide a reference for every primary and secondary colour plus white, black, and blanking. An engineer adjusting a monitor to these bars could guarantee colour and luminance accuracy across the entire chain from studio to transmitter. SMPTE standardised the 75% variant (rather than 100%) to keep peak white within the safe operating range of VTR heads.
Full-saturation 100% bars drive every colour to its legal peak. Used for encoder calibration and to verify that a system can handle maximum colour excursion without clipping. On a vectorscope, the six colour dots should land precisely on their target boxes. The pattern exposes gain errors invisible at 75%.
The definitive NTSC colour-bar test pattern, developed at the CBS Technology Center by Hank Mahler. Seven 75%-saturation bars fill the top two-thirds. A narrow castellation strip below the bars enables blue-only alignment — when the blue channel is isolated, the bars and castellations merge into four equal blue bands, revealing any hue or saturation error instantly. The bottom quarter carries −I and +Q chrominance squares for vectorscope phase alignment, a 100% white reference, and the PLUGE (Picture Line-Up Generation Equipment) trio: sub-black (invisible), true black, and just-above-black, for precise monitor black-level calibration. CBS deliberately placed it in the public domain; it became the universal broadcast reference bar standard and earned a Technology & Engineering Emmy in 2002.
Eight segments rotating through the full colour wheel in 45° hue steps. The defining feature is the vectorscope display: the six non-white segments form a smooth circle rather than six isolated dots, making gain and phase errors immediately obvious as distortions of the ring. A favourite field-alignment tool because it reveals problems invisible on conventional bars.
Ten equal-voltage steps from video black to white — a staircase waveform on the waveform monitor. Used to check gamma correction (the spacing should appear perceptually uniform on a correctly gamma-corrected display), contrast linearity, and the step response of the decoder. One of the oldest electronic test patterns, predating colour television.
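A sketch of why the staircase reads gamma correction as well as linearity, assuming a typical CRT transfer exponent of 2.2:

```python
# Ten equal-voltage steps from black to white, and the light output a
# CRT with gamma 2.2 would produce for each. The 2.2 exponent is a
# typical assumed value, not a measured one.
GAMMA = 2.2

steps_v = [i / 9 for i in range(10)]          # equal voltage steps, 0..1
steps_light = [v ** GAMMA for v in steps_v]   # CRT light output

# The first step above black spans ~0.8% of peak luminance, the top
# step ~23%. Equal voltage steps give wildly unequal luminance steps,
# so the staircase reveals whether the camera-side gamma correction
# has pre-distorted the signal to make the end-to-end response linear.
```

On a correctly gamma-corrected chain the displayed steps appear perceptually uniform; any crushing of the dark steps or compression of the bright ones points at a gamma or linearity fault.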
Six zones of sinusoidal bursts at increasing frequencies — 500 kHz, 1.0, 2.0, 3.0, 3.58, and 4.2 MHz for NTSC. Amplitude attenuation across zones shows the frequency response of the complete video path. Engineers could read off the usable bandwidth at a glance without a spectrum analyser. A staple of broadcast quality-control checks on every link in a video chain.
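A multiburst line is straightforward to synthesise. This sketch assumes a 13.5 MHz sample rate and a fixed zone length, both illustrative choices:

```python
import math

# One multiburst scan line: six zones of sine bursts at the NTSC
# packet frequencies. Sample rate and zone length are assumptions.
FS = 13.5e6                     # samples/s (BT.601 rate, assumed)
ZONE = 100                      # samples per burst zone
FREQS = [0.5e6, 1.0e6, 2.0e6, 3.0e6, 3.58e6, 4.2e6]

line = []
for f in FREQS:
    for n in range(ZONE):
        t = n / FS
        # Bursts sit at 50% amplitude, swinging +/- 25%:
        line.append(0.5 + 0.25 * math.sin(2 * math.pi * f * t))
```

Measuring the peak-to-peak amplitude of each zone at the far end of a video path reads off the frequency response at those six frequencies directly on a waveform monitor.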
Evenly-spaced horizontal and vertical lines on black, often supplemented with safe-action and safe-title boundary boxes. Used to measure geometric linearity (straight lines should stay straight), pincushion and barrel distortion in CRT monitors, and to confirm that titles will remain within the guaranteed visible area on domestic sets. Generated electronically — no physical card required.
Named for Philips instrument code 5544, this card became the standard close-down and alignment image for PAL broadcasters across Europe for nearly three decades. Its central circle tests horizontal and vertical linearity simultaneously; colour bars verify chroma accuracy; corner circles and dot matrix reveal convergence errors. Broadcast nightly by dozens of national services from the BBC to ARD and ORF before each station signed off.
SVG: ebnz / Wikimedia Commons · CC BY 2.5
Designed by BBC engineer George Hersee and first broadcast on BBC Two on 2 July 1967, Test Card F became the defining close-down image of British television. At its centre, Hersee's eight-year-old daughter Carole plays noughts-and-crosses with a Pierrot clown doll — a deliberate choice of a natural, flesh-toned subject to challenge colour cameras. Surrounding her: a central cross-hatch for linearity, grey steps, colour patches, and a resolution wedge. Test Card F remained in continuous use on the BBC for over three decades and still appears occasionally during technical breakdowns, making it one of the longest-running broadcast images in television history.
Image: © BBC · English Wikipedia (fair use)
The Soviet Union's unified engineering test card, standardised under GOST 7845 and mandatory for all Warsaw Pact broadcasters. Combines a resolution wedge, colour patches, geometric shapes, and a central circle for simultaneous evaluation of resolution, colour accuracy, and deflection linearity. Transmitted daily by Gostelradio and remained in service through the 1990s. The designation УЭИТ stands for Универсальная Электронная Испытательная Таблица — Universal Electronic Test Chart.
SVG: Tucvbif / Wikimedia Commons · Public Domain
A CRT convergence alignment chart for colour monitors. Dense concentric circles, crosshairs, and dot matrices enabled engineers to align the three electron guns of Trinitron and shadow-mask tubes to within fractions of a millimetre. The "бис" suffix denotes a revised edition of the original ТИТ-0249 specification. Used throughout Soviet broadcast facilities and television factories for studio monitor calibration.
SVG: Tucvbif / Wikimedia Commons · CC BY-SA 3.0
Designed for the US Air Force to measure aerial camera resolution. The chart's six groups of six element-pairs — spatial frequency doubling from one group to the next, with sixth-root-of-two steps between elements — allow visual determination of the limiting resolution in line pairs per millimetre. Television engineers adopted it to characterise camera and lens systems; the central circle indicates depth-of-field uniformity. Still the reference resolution target in optics and imaging, seven decades after its introduction.
SVG: Setreset / Wikimedia Commons · CC BY-SA 3.0
A resolution test chart standardised specifically for television cameras and systems by the Electronic Industries Association. Radial wedges at the chart perimeter give horizontal and vertical resolution simultaneously; the ratio between their limiting frequencies shows how well a system preserves the aspect ratio of fine detail. Complemented the USAF chart by addressing TV-specific considerations like interlace and bandwidth asymmetry.
SVG: Wikimedia Commons · Public Domain (PD-self)
Designed by RAI engineer Renato Moretti for Italy's national broadcaster, the Monoscopio — named for the monoscope, the camera tube that generated a fixed test image — appeared as a close-down pattern on RAI 1 and RAI 2 throughout the 1950s and 60s. Its characteristic white oval on a black field, surrounded by geometric precision marks, made it one of the most distinctive close-down images in European television. Broadcast at the end of each evening's schedule, it became a cultural marker for a generation of Italian viewers.
The most recognised monochrome test card in US broadcast history. Designed by RCA for NBC, the card depicted a Native American chief in headdress surrounded by resolution wedges, aspect-ratio marks, and a grey scale. It appeared during off-air hours from 1939 until the mid-1950s, when NBC transitioned to colour test patterns. A generation of American engineers learned to align a television receiver against this image — it remains an icon of the early broadcast era.
SVG: Alex Microbe / Wikimedia Commons · Public Domain
macOS — External Video Sources
The Video Input source routes any connected capture device through the full analogue simulation pipeline. Capture cards, USB webcams, virtual cameras, and iPhone Continuity Camera all appear as selectable inputs. Pick a device, and its output becomes the signal that the composite encoder, VCR emulation, RF chain, and CRT renderer all operate on — as if it were a broadcast camera feed.
Elgato, AVerMedia, Magewell, Blackmagic Design, and other HDMI/SDI capture cards appear automatically in the device list. Connect a games console, VCR, camcorder, or any HDMI source and route its output through the entire simulation — composite encoding, comb filter artefacts, RF noise, and CRT phosphor — at full frame rate. A Plumbicon tube or Early CCD emulation layer can be applied on top before the signal enters the composite chain, so the capture feed acquires the same temporal lag, colour fringing, and noise floor as original broadcast camera footage.
This is the primary use case for Video Input on macOS: running real hardware output through a physically-modelled analogue broadcast chain, rather than approximating the look in post. The capture card's raw HDMI frames are never stored; processing is entirely on-device in real time.
Any software virtual camera — OBS Virtual Camera, Camo by Reincubate, mmhmm, and similar tools — appears alongside hardware devices in the picker. Route a live composited scene, a media player output, or any other virtual camera source directly into the analogue simulation. Because macOS treats virtual cameras as standard AVCaptureDevice instances, the integration is seamless: no special configuration required.
With macOS Ventura and later, a nearby iPhone can act as a high-resolution wireless camera. As a Continuity Camera it appears as a standard capture device in the list. The iPhone's 4K sensor feeds into the simulation at full quality — then gets degraded through whichever analogue path is selected. The full camera tube emulation layer applies, so you can run the iPhone sensor through a 1960s Vidicon model, composite encode it at NTSC bandwidth, and display it on a P4 shadow-mask phosphor.
Any UVC-compliant USB webcam appears in the device list alongside the built-in FaceTime HD camera. Video Input lets you select a specific device explicitly, rather than accepting whatever macOS designates as the system default. Snapshot and recording are available for all Video Input sources, exactly as they are for the built-in camera.
Video Input is macOS only. Device enumeration uses AVCaptureDevice.DiscoverySession — no additional entitlements or screen recording permissions are required beyond the standard camera authorisation already needed for the built-in camera source. Frames are processed entirely on-device; no video data is stored or transmitted.
macOS — Live Screen Region
The Screen Capture source lets you select any region of your screen and feed it live into the analogue simulation pipeline. A resizable 4:3 selection box floats over your desktop — whatever is inside that box becomes the signal source. Games, video players, design tools, other apps: anything visible on screen can be routed through the composite encoder, VCR emulation, RF chain, and CRT renderer in real time.
A floating, resizable panel with a cyan border appears over your screen. Drag it anywhere and resize it — the 4:3 aspect ratio is locked, so the captured region always matches analogue TV's native geometry. Move the box over any part of your desktop and the simulation updates in real time. The selection window itself is invisible to the capture API; only its contents are captured.
Position the selection box over a game running in a window and every frame passes through the full composite encode, luma/chroma separation, VCR tape noise, RF inter-carrier buzz, and P22 phosphor render. Modern 3D rendering becomes a period-accurate CRT image at broadcast quality — without any video capture hardware. Snapshot and screen recording are available for the result.
Place the box over a video playing in QuickTime, IINA, VLC, Safari, or any other player and the live frames enter the simulation. Colour graded footage, archival transfers, or digitised VHS — the signal is re-encoded at composite baseband rates, degraded by the configured standard and artefacts, and rendered through the CRT model as if it were a live broadcast transmission.
Capture a presentation, UI prototype, or design mock and show your audience what it would have looked like on a period television. The simulation runs at 30 fps with sub-frame latency, so live interaction — typing, scrolling, animations — is visible through the CRT in real time. Use screen recording to export the result as a video file.
Screen Capture is macOS only and requires Screen Recording permission (prompted on first use). Capture uses ScreenCaptureKit — no video data is stored or transmitted. The selection window is excluded from capture so the cyan border frame does not appear in the output.
Video Sources — Studio & Network
Everything that can feed the analogue chain, and everything the processed signal can reach. Each source enters at the same point in the pipeline — the composite encoder receives voltage samples regardless of origin. NDI®, Syphon, and RTMP let the output travel beyond the screen.
The device's built-in camera feeds live frames into the simulation. On iPhone and iPad, both the front and rear cameras are selectable; the rear camera delivers the highest resolution. On Mac, the built-in FaceTime HD camera is the default. The camera tube emulation layer sits in front of the composite encoder — raw sensor output passes through whichever tube model is selected (Vidicon, Plumbicon, Image Orthicon, Early CCD) before any composite processing begins.
A still image or video file from your photo library can be loaded as a static or looping source. The selected media is decoded to a pixel buffer and fed into the composite encoder exactly as the camera would be — the same tube emulation, encoding, VCR, and CRT stages apply. This is useful for applying the full analogue chain to existing footage or reference images. All processing is on-device; the original file is never modified. Video files play back with a draggable progress bar for seeking; in full-screen mode the transport controls auto-hide after three seconds of inactivity and reappear on any tap.
Video files can be imported directly from the Files app on iOS ("Import from Files…") or from the file system on macOS ("Import Video File…", ⌘⇧O), supplementing the Photos library picker. Supported containers include MP4, MOV, WebM, AVI, and WMV. A two-path decode architecture uses AVAssetReader for fast MP4/MOV demux and falls back to AVPlayer for containers — such as AVI — that AVFoundation can open but AVAssetReader cannot demux directly. All files pass through the same composite encode, VCR, RF, and CRT pipeline as any other source; the original file is never modified. The transport bar is seekable by drag, and auto-hides in full-screen mode.
Select any NDI® source on the local network and its video stream feeds directly into the analogue simulation pipeline. Any NDI-capable software or hardware on the same LAN — vMix, Wirecast, OBS with the NDI plugin, PTZ cameras, Newtek hardware, or another device running the app — appears in the source picker automatically via mDNS discovery. The received frames pass through the composite encoder, VCR emulation, RF chain, and CRT renderer identically to any other source. On both iOS and macOS the app connects at the highest available bandwidth, enabling reception of uncompressed or near-uncompressed NDI sources without quality reduction. NDI® is a registered trademark of Vizrt NDI AB.
NDI® Input and Output are available as a one-time optional in-app purchase from within the app.
Fifteen reference test patterns — SMPTE 75% and 100% colour bars, SMPTE ECR-1-1978, Rainbow Bars, Gray Steps, Multiburst, Crosshatch/Safety Grid, Philips PM5544, BBC Test Card F, УЭИТ-2, ТИТ-0249-бис, USAF-1951, EIA-1956, RAI Monoscopio, and RCA Indian Head — feed the composite encoder as built-in static sources. Eight arcade games (Paddle Ball, Alien Invasion, Brick Breaker, Worm, Space Rocks, City Defense, Bug Hunt, Gravity Lander) and a Libretro core loader provide animated sources that produce genuine analogue artefacts because their pixel output is re-encoded at composite baseband — dot crawl, chroma smear, and subcarrier interference all arise naturally from the signal, not from an overlay.
Outputs
The fully processed frame — after composite encode, comb filter decode, VCR noise, RF chain, and CRT phosphor — can be broadcast as an NDI® stream on the local network. Any NDI-capable receiver (vMix, OBS NDI plugin, Resolume, Wirecast, another device running the app) can subscribe and receive the output in real time. The stream is named from the device hostname. NDI output and NDI input can run simultaneously: the app can receive a source from one machine, process it through the analogue chain, and feed the result to another.
NDI® is a registered trademark of Vizrt NDI AB. NDI® Input and Output are available as a one-time optional in-app purchase from within the app.
On the direct-download Mac build, the rendered Metal texture can be published as a Syphon server each frame. Any Syphon-compatible application — VDMX, Resolume, MadMapper, CoGe, Millumin, Isadora, TouchDesigner with the Syphon plugin, and others — can subscribe and receive the output with sub-frame latency, with no screen capture or video encoding involved. Syphon is unavailable in the App Store build because the App Store sandbox prevents the cross-process texture sharing that Syphon requires.
Syphon is copyright © 2010 bangnoise (Tom Butterworth) & vade (Anton Marini), distributed under the BSD 3-Clause License.
Stream the processed CRT output live to YouTube Live, Twitch, Facebook Live, Restream, or any RTMP-compatible ingest endpoint. Enter your RTMP URL and stream key in the NETWORK section of the sidebar.
Mac direct streaming: Toggle "Stream from Mac" in the NETWORK section. No iPhone or iPad required. The rendered output is blitted from the GPU each frame, H.264-encoded by HaishinKit, and pushed to the ingest server in real time alongside the app's audio output.
iOS streaming via Broadcast Extension: On iPhone or iPad, open Analog TV and then long-press the screen-record button in Control Center. Select Analog TV as the broadcast destination and tap Start Broadcast. The extension captures the CRT output and streams it via RTMP using the credentials configured on the Mac (shared via App Group).
Both paths share the same RTMP URL and stream key — configure once on the Mac, use either path. A 2–5 second end-to-end delay is normal for RTMP regardless of platform.
NDI® is a registered trademark of Vizrt NDI AB. The NDI SDK is used under the NDI SDK License Agreement. Syphon is copyright © 2010 bangnoise (Tom Butterworth) & vade (Anton Marini), distributed under the BSD 3-Clause License.
Video Sources — Vintage Games
Eight built-in games feed live output directly into the composite encode/decode pipeline — the same path used by the camera, test cards, and every other source. There is no special handling. The game renders a frame, that frame becomes voltage samples, those samples pass through sync insertion, subcarrier modulation, comb filtering, quadrature demodulation, and finally the CRT phosphor stage. Every artefact that applies to a real video signal applies here too. A ninth option, Libretro, allows loading any compatible emulator core — effectively an unbounded library of additional hardware.
All eight built-in games are independent, clean-room implementations written from scratch. Game mechanics — rules, physics, and scoring — are not protectable by copyright under applicable law (ideas and systems are explicitly excluded from copyright protection; 17 U.S.C. § 102(b) and equivalent provisions in other jurisdictions). The original arcade hardware patents have all expired. No code, sprite graphics, audio samples, or other assets from any commercial release are reproduced or included. The games are presented as signal-source demonstrations.
A custom-written paddle-ball game rendered at 320×240, feeding through the full composite encode/decode chain. Left paddle: W/S keys on macOS, touch on iOS. The right paddle is AI-controlled. Score is rendered with a chunky 5×7 pixel font — the kind of bitmap lettering a hardware character generator of 1973 would produce.
Because the game output enters the simulation at the encoder stage, the analogue artefacts aren't applied over the top — they arise from the signal itself. Chroma crawl appears at the paddle edges. The ball's high-contrast transitions produce dot crawl at the subcarrier frequency. Scanline structure and phosphor bloom are determined by whichever CRT and phosphor preset is active. A Vidicon source will add lag to the ball's trail; a Plumbicon will bloom the ball highlight. Set the standard to PAL and the subcarrier geometry changes; set it to NTSC-J and the pedestal drops. The game does not know any of this — it just writes pixels.
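The dot-crawl mechanism can be seen in a toy encode/decode loop. The subcarrier here is a scaled-down stand-in (one quarter of the sample rate, not the real 3.58 MHz), but the cross-colour effect is the same: high-frequency luma at a sharp edge leaks into the chroma detector.

```python
import math

# Subcarrier as a fraction of the sample rate: a scaled-down stand-in.
FSC = 0.25

def encode(y, u, v, n):
    """Composite sample: luma plus quadrature-modulated chroma."""
    ph = 2 * math.pi * FSC * n
    return y + u * math.sin(ph) + v * math.cos(ph)

# A sharp white-to-black luma edge with zero chroma anywhere:
samples = [encode(1.0 if n < 30 else 0.0, 0.0, 0.0, n) for n in range(64)]

def demod_u(sig, start, width=8):
    """Crude synchronous chroma detector: multiply by carrier, average."""
    acc = sum(sig[n] * math.sin(2 * math.pi * FSC * n)
              for n in range(start, start + width))
    return 2 * acc / width

flat_u = demod_u(samples, 8)    # flat white area: no false chroma
edge_u = demod_u(samples, 28)   # window straddling the edge: false chroma
```

In the flat region the detector correctly reads zero chroma; across the edge, the luma transition demodulates as a non-zero chroma value, which a real decoder paints as the shimmering colour fringe of dot crawl.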
Paddle Ball is useful as a diagnostic as much as a game. The high-contrast geometry — white rectangles on black — stresses every stage of the pipeline in a predictable, repeatable way. The ball velocity is constant, so temporal lag, motion blur, and phosphor persistence are easy to compare across camera tube settings.
Original implementation inspired by Pong (Atari, 1972). The ball-and-paddle mechanic is a game rule, not protectable by copyright; the original Magnavox/Atari patent (US3659285) expired in 1989. No code or assets from any commercial release are included.
Inspired by Space Invaders (Taito, 1978). Eleven columns of five alien rows — Squid, Crab, and Octopus types — each rendered with authentic two-frame sprites. A UFO crosses the top at irregular intervals. Four shields decompose under fire at the pixel level: each hit removes a circular radius of bitmap, replicating the original shield erosion behaviour. Three lives, wave system, marching bass-note rhythm that accelerates as the alien count falls. The monochrome pixel field passes through the composite chain, where horizontal bands of colour (an artefact of the original arcade's cellophane overlay) appear as the subcarrier interacts with the high-contrast grid geometry.
Original implementation. The descending-alien-grid mechanic is a game rule and is not copyrightable. All sprite graphics are original pixel artwork; the audio is synthesised from scratch. No code or assets from any commercial release are reproduced.
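The pixel-level shield erosion amounts to clearing a disc from a bitmap per hit. A sketch with illustrative shield dimensions and hit radius:

```python
# Each hit clears a circular radius of the shield bitmap.
# Shield size and radius here are illustrative, not the app's values.
def erode(shield, cx, cy, radius=3):
    """Remove a disc of pixels centred on the hit point (cx, cy)."""
    r2 = radius * radius
    for y in range(len(shield)):
        for x in range(len(shield[0])):
            if (x - cx) ** 2 + (y - cy) ** 2 <= r2:
                shield[y][x] = 0
    return shield

shield = [[1] * 22 for _ in range(16)]   # solid 22x16 shield block
erode(shield, 11, 0)                     # a hit on the top edge
```

Repeated hits carve overlapping craters until shots pass clean through, exactly the behaviour that makes the original shields decay organically rather than in fixed chunks.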
Inspired by Breakout (Atari, 1976). Eight rows of fourteen bricks in four colour bands — Red, Amber, Green, Yellow, each worth increasing points. Ball angle is determined by where it strikes the paddle: centre returns straight, edges return at up to ±70°. Speed is clamped to 220 px/s; the simulation adds a slight speed increase on brick destruction to replicate the original game's pacing. Each brick colour has a distinct pitch, so the audio composition changes as layers are cleared. The high-contrast brick field and rapid ball transitions stress the composite pipeline's chroma demodulator — dot crawl appears at every brick edge.
Original implementation. The brick-breaking mechanic is a game rule and is not protectable by copyright. All graphics and audio are written from scratch. No code or assets from any commercial release are reproduced.
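The paddle-angle rule above is a one-line mapping from hit position to return angle. A sketch using the ±70° and 220 px/s figures from the text; the linear mapping itself is an assumption:

```python
import math

MAX_ANGLE = 70.0   # degrees of deflection at the paddle edge (from text)
SPEED = 220.0      # px/s, the clamped ball speed (from text)

def bounce_velocity(hit_offset):
    """hit_offset in [-1, 1]: -1 = left edge, 0 = centre, +1 = right edge.
    Returns (vx, vy) with constant speed; only the direction changes."""
    angle = math.radians(MAX_ANGLE * hit_offset)
    return SPEED * math.sin(angle), -SPEED * math.cos(angle)

vx_c, vy_c = bounce_velocity(0.0)   # centre hit: straight up, full speed
vx_e, vy_e = bounce_velocity(1.0)   # edge hit: 70 degrees of deflection
```

Keeping the speed magnitude constant while steering only the angle is what lets the player aim the ball without the physics running away, matching the original game's feel.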
Inspired by Snake (Nokia, 1998) and Blockade (Gremlin Industries, 1976). A 40×28 grid of 8×8 pixel cells. The intro screen lets the player choose between wall-death and wrap-around modes — the latter allows the worm to re-enter from the opposite edge, which changes the risk profile entirely. Speed ramps by 0.8 cells per second for every five apples collected. The blocky cell grid and bright-on-dark pixels produce the characteristic crawl artefacts that a real television would show on this kind of content: subcarrier dot patterns at every green edge, scanline structure visible in each 8×8 cell.
Original implementation. The worm-eats-food genre derives from Blockade (Gremlin Industries, 1976) and predates any single rights-holder; game mechanics are not copyrightable. No code or assets from Nokia or any other commercial release are reproduced.
Inspired by Asteroids (Atari, 1979). A vector arcade game rendered in software using a P31 phosphor buffer — the green phosphor used in Asteroids arcade monitors. Each frame, the phosphor decays by ×0.80, producing the characteristic slow-green trail on every line drawn. The result is rendered as BGRA pixels (r=p/4, g=p, b=p/8) and fed into the composite chain. Three rock sizes split on impact; a UFO appears at intervals; hyperspace teleports the ship to a random position with approximately a 20% chance of appearing inside a rock. Ship inertia is modelled as a two-axis velocity accumulator with zero friction. The slow-decay phosphor, when passed through the composite encoder and a Vidicon camera tube, produces layered motion trails that go well beyond any post-process filter.
Original implementation. Rock physics, ship inertia, and saucer mechanics are game rules and are not protectable by copyright. The polygon geometry, phosphor simulation, and all audio are written from scratch. No code or assets from any commercial release are reproduced.
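The phosphor buffer described above is compact to express. A sketch of the ×0.80 decay and the r=p/4, g=p, b=p/8 channel mapping from the text, with buffer size and stroke positions illustrative:

```python
# Asteroids-style P31 phosphor buffer: each frame the glow decays by
# x0.80, fresh line pixels write at full intensity, and the green
# phosphor value maps to RGB as (p/4, p, p/8) -- figures from the text.
DECAY = 0.80

def advance(buffer, strokes):
    """Decay last frame's glow, then draw this frame's line pixels."""
    buffer = [p * DECAY for p in buffer]
    for i in strokes:
        buffer[i] = 1.0
    return buffer

def to_rgb(p):
    return (p / 4, p, p / 8)   # dim red, full green, faint blue

buf = [0.0] * 8
buf = advance(buf, [2])   # beam draws at pixel 2
buf = advance(buf, [3])   # beam moves on; pixel 2 begins its trail
buf = advance(buf, [4])   # pixel 2 is now at 0.8^2 = 0.64: the slow trail
```

Because this decaying buffer is then fed through the composite encoder rather than composited on top, the trail picks up the same chroma and lag artefacts as any other bright moving object.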
Inspired by Missile Command (Atari, 1980). Three missile bases (10 rounds each) defend six cities against incoming warheads. A crosshair tracks mouse position or arrow keys; fire launches an interceptor arc to the crosshair position, detonating on arrival as an expanding then contracting circular explosion. On iOS, touch controls the crosshair directly. The wave system escalates: surviving cities and bases score a bonus at wave end. At game over the full city panorama detonates in sequence. The arcing interceptor trajectories, large circular blast zones, and the multi-colour incoming missile trails all produce distinct composite artefacts — colour fringing at each arc edge, subcarrier cross-modulation at the explosion boundary, chroma phase error in the bright detonation flash.
Original implementation. The anti-missile defence mechanic and city-survival concept are game rules and are not protectable by copyright. All explosion geometry, city artwork, and audio are written from scratch. No code or assets from any commercial release are reproduced.
Inspired by Centipede (Atari, 1980). A twelve-segment bug winds through a mushroom field, reversing direction each time it hits an edge or mushroom. Shooting a segment splits it in two; a mushroom spawns at the death point (each mushroom has four hit points). A spider bounces erratically through the player zone, changing direction irregularly. A flea drops vertically, planting mushrooms at random positions in its trail. Each enemy type has a distinct audio signature — segment impact, spider bounce, flea drop, and rapid-fire tones. The dense mushroom field of small bright dots is particularly hard on the composite chain: chroma crawl appears throughout the field, and each mushroom's sharp edge produces a fine ringing artefact from the composite encoder's low-pass filter.
Original implementation. The segmented-enemy, mushroom-field mechanic is a game rule and is not protectable by copyright. All sprite graphics, colour palettes, and audio are written from scratch. No code or assets from any commercial release are reproduced.
Inspired by Lunar Lander (Atari, 1979) — one of the first arcade games built on Atari's vector-graphics hardware, the same board that would later power their space-shooter titles. The player guides a descent vehicle down to the surface under authentic lunar gravity (1/6 g), balancing fuel against the need to slow the descent and correct horizontal drift. Three landing zones are marked with flags and multipliers (2×, 3×, 5×): the highest-value pad is also the narrowest. Touching down gently earns a full bonus scaled by the multiplier plus a fuel bonus; a hard landing is survivable at reduced score; any other contact is a crash. The game gives you three ships; lose them all and a game-over screen shows your final score and high score.
Three thrusters control the lander: Left arrow fires the left-side RCS thruster (pushing the craft right), Right arrow fires the right-side RCS thruster (pushing left), and Up or Fire fires the main descent engine directly upward. A fourth input — Fire 2 — triggers the abort thruster, delivering four times the normal thrust at four times the fuel cost for emergency situations. The lander always stays vertically oriented; only the three thrusters alter velocity. Collision is detected at both physical footpad tips independently, so a half-on, half-off approach crashes rather than landing. As the lander descends below 150 units of altitude the viewport zooms in, replicating the original arcade approach zoom. The HUD shows fuel percentage, altitude, horizontal velocity, and vertical velocity with live red highlighting when either speed exceeds safe thresholds. A low-fuel warning flashes below 100 units.
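The thruster model above can be sketched in a few lines. This is an illustrative toy, not the app's code: the gravity, thrust, and fuel constants are assumptions, and only the abort thruster's 4× thrust at 4× fuel cost comes from the text.

```python
# Illustrative toy of the three-thruster lander model. Gravity, thrust,
# and fuel constants are assumptions; only the abort thruster's
# 4x-thrust / 4x-fuel relationship comes from the description above.
LUNAR_G = 1.62 / 60.0  # assumed per-tick lunar gravity (1/6 g, arbitrary scale)
THRUST = 0.05          # assumed per-tick thruster acceleration
BURN = 1.0             # assumed fuel units consumed per thruster per tick

def step(state, left, right, main, abort, dt=1.0):
    """Advance one physics tick. state = (vx, vy, fuel)."""
    vx, vy, fuel = state
    ax = ay = 0.0
    if left and fuel >= BURN:        # left RCS pushes the craft right
        ax += THRUST; fuel -= BURN
    if right and fuel >= BURN:       # right RCS pushes the craft left
        ax -= THRUST; fuel -= BURN
    if main and fuel >= BURN:        # main engine fires straight up
        ay += THRUST; fuel -= BURN
    if abort and fuel >= 4 * BURN:   # abort: 4x thrust at 4x fuel cost
        ay += 4 * THRUST; fuel -= 4 * BURN
    vy -= LUNAR_G * dt               # gravity always pulls down
    return (vx + ax * dt, vy + ay * dt, fuel)
```

Because the lander stays vertically oriented, velocity is the only state the thrusters touch; position integration and footpad collision would sit on top of this.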
On the composite chain: the bright white vector wireframe against a black sky produces sharp luminance edges with almost no chroma content — much as the original monochrome vector monitor looked when fed through an RF modulator. The main engine exhaust is a white-to-orange plume of clustered particles below the engine bell; the RCS puffs appear as short blue-white bursts from the sides of the descent stage. Both introduce brief chroma transients into the otherwise achromatic scene, producing faint subcarrier fringing around the flame areas that the composite decoder resolves as a flicker of warm colour.
Original implementation. The physics of descent under gravity, the thruster-control mechanic, and the landing-zone scoring concept are unprotectable game rules. All geometry, physics constants, scoring logic, and audio are written from scratch. No code or assets from any commercial release are reproduced.
Libretro is an open API for emulator cores — the same interface used by RetroArch. The app can load any compatible .dylib core placed in ~/Library/Application Support/AnalogTV/Cores/. Drop in an Atari 2600, NES, Game Boy, or any other supported core, then load a ROM via the in-app file browser. The core's video output — rendered at whatever resolution and pixel format the original hardware used — feeds directly into the composite pipeline.
This means the NES's limited colour palette, the 2600's scanline-level timing artifacts, and the characteristic dot patterns of hardware sprite rendering all enter the simulation as raw pixel data, then get composite-encoded and decoded in full. The chroma subcarrier interacts with the hardware's colour encoding the same way it would have on a real television connected to the original console via RF or composite out. You supply the cores and ROMs; the simulation handles the rest.
macOS only. Libretro core compatibility depends on the individual core. The app does not bundle any cores or ROM files. Users are responsible for ensuring they have the legal right to use any ROMs they load.
Video Sources — Teletext
Teletext is a broadcast data system standardised as ETS 300 706 (Level 1.5) that encodes digital pages into the vertical blanking interval of a standard analogue television signal. Each page is a 40-column × 25-row grid of characters drawn from a hardware character set (SAA5050 and equivalents) — fixed-width monospaced cells, seven foreground colours, mosaic graphics, flashing text, and double-height rows. The app renders a live Teletext service as a video source that feeds the full composite encode/decode pipeline, so the sharp block graphics interact with the subcarrier in exactly the way real Teletext would on a PAL television: dot crawl at colour boundaries, chroma smear on the coloured headers, and phosphor bloom on white text against black backgrounds.
Navigation matches a real remote: swipe left/right to cycle pages, or tap the four coloured fastext buttons (red, green, yellow, blue) in the corner to jump to linked pages. The page number is always shown in the header row alongside a live clock.
The index page. A magazine-style directory listing all available pages with short descriptions. Fastext buttons navigate directly to Headlines, Weather, Calendar, and Sport.
Live news headlines fetched from an RSS feed and refreshed every five minutes. Up to 24 headlines are displayed across two pages (P150 and P151). The default feed is TechCrunch (techcrunch.com/feed/), used under the TechCrunch RSS Terms of Use — headlines are attributed and tapping opens the full article. A custom RSS feed URL can be entered in Settings. Headlines are normalised to printable ASCII before rendering because the SAA5050 character generator supports only the 7-bit ASCII range; typographic quotes, em-dashes, and non-breaking spaces are substituted automatically.
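The normalisation step reduces to a substitution pass. The mapping table below is an assumption about the app's exact substitutions; the behaviour it sketches (replace known typographic characters, drop anything else outside printable ASCII) follows the description above.

```python
# Sketch of headline normalisation to the SAA5050's printable 7-bit range.
# The substitution table is an assumed mapping, not the app's actual one.
SUBSTITUTIONS = {
    "\u2018": "'", "\u2019": "'",   # typographic single quotes
    "\u201C": '"', "\u201D": '"',   # typographic double quotes
    "\u2013": "-", "\u2014": "-",   # en dash, em dash
    "\u00A0": " ",                  # non-breaking space
    "\u2026": "...",                # ellipsis
}

def normalise(headline: str) -> str:
    out = []
    for ch in headline:
        ch = SUBSTITUTIONS.get(ch, ch)
        # Anything still outside printable ASCII (32..126) is dropped.
        out.append("".join(c for c in ch if 32 <= ord(c) < 127))
    return "".join(out)
```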
Current conditions (P400), hourly forecast (P401), and five-day outlook (P402) sourced from Apple WeatherKit. Requires location permission; falls back to a placeholder page when permission is denied or unavailable.
The current month in a fixed-width calendar grid with today's date highlighted. Day names across the top, week rows below — laid out to fill a standard 40×25 Teletext page. Requires calendar access on iOS; on macOS it reads from the system EventKit store.
Shows the currently playing track from the system Music library: artist, album, and track title in the style of a 1980s teletext music chart page. Requires music library access.
Reserved pages that appear as static placeholders in the current release. The fastext links and page structure are in place; live data providers will be added in a future update.
News data attribution: The default headlines feed is TechCrunch, used under the TechCrunch RSS Terms of Use. Headlines are displayed as short text strings with source attribution; tapping opens the full article. No article body text, images, or other content is reproduced. Users may substitute any RSS feed URL in Settings.
Signal Accuracy
Each standard is sampled at 4× its native subcarrier frequency: 14.318 MHz for NTSC-M/J, 17.734 MHz for PAL/SECAM/625B&W, 14.303 MHz for PAL-M, 14.328 MHz for PAL-N, 8 MHz for System A, 22 MHz for System C.
Phase — radians per sample and per line — is pre-computed per standard on the CPU. All colour formats share the same GPU encoder and decoder kernels with no hardcoded format branches in the shaders.
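The pre-computation reduces to two constants per standard. A sketch, not the app's code: at 4× oversampling the per-sample increment is exactly π/2, and with 910 samples per NTSC line (227.5 subcarrier cycles × 4, an assumption about the app's line length) the per-line residual comes out to π — the half-cycle line-to-line offset behind NTSC dot crawl.

```python
import math

def phase_constants(f_sc_hz, samples_per_line, oversample=4):
    """Subcarrier phase advance in radians per sample, and the residual
    phase advance per line, for a standard sampled at oversample x f_sc."""
    f_s = oversample * f_sc_hz                      # composite sample rate
    rad_per_sample = 2 * math.pi * f_sc_hz / f_s    # exactly pi/2 at 4x
    rad_per_line = (rad_per_sample * samples_per_line) % (2 * math.pi)
    return rad_per_sample, rad_per_line
```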
REF mode bypasses all noise and CRT effects so you can verify the raw encode–decode roundtrip. Useful for confirming that a given standard's subcarrier, pedestal, and burst geometry are correct.
SYNC mode gates simulation updates to the standard's native field rate — 29.97 fps for NTSC-family, 25 fps for PAL/SECAM/B&W — using a drift-corrected timer rather than the display's 60 Hz refresh.
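One way to build such a drift-corrected gate (a sketch, not the app's implementation) is to derive every frame deadline from the stream origin by exact rational arithmetic, instead of repeatedly adding a rounded floating-point period.

```python
from fractions import Fraction

NTSC_RATE = Fraction(30000, 1001)  # 29.97... fps, exact
PAL_RATE = Fraction(25, 1)         # 25 fps

def frame_deadline(origin_s, frame_index, rate):
    """Absolute presentation time for a frame. Each deadline is computed
    from the origin, so float rounding never accumulates, unlike the
    naive next = now + 1/rate approach."""
    return origin_s + float(frame_index / rate)
```

Frame 30000 of an NTSC stream lands at exactly 1001 seconds, which a float-period accumulator would miss by a growing margin.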
The built-in waveform monitor and vectorscope show actual signal values — the same measurements you'd see on real broadcast test equipment. The vectorscope traces the I/Q or U/V constellation of the decoded signal.
GPU compute shaders process every output sample in parallel — each thread computes φ(n) = 2π(f_sc + dev·sig)·n independently. This is correct for amplitude/phase systems like NTSC and PAL, but FM requires continuous phase accumulation: φ(n) = φ(n−1) + 2π·f_inst(n). Computing this on the GPU would require a prefix sum over 946 samples per line — serialising the entire active line and negating any throughput benefit. Instead, SECAM encoding uses a dedicated CPU kernel: all 576 active lines run in parallel (each line is phase-independent), but within each line the FM phase integral is computed sequentially, sample by sample. The CPU kernel writes into the same composite buffer consumed by the GPU decode pass, keeping the rest of the pipeline unchanged.
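The two phase models can be contrasted in a few lines (Python for illustration; the app's kernels are GPU/CPU code, and the frequencies here are arbitrary):

```python
import math

def encode_instantaneous(f_inst, f_s):
    """GPU-style instantaneous phase: sample n computes
    phi(n) = 2*pi*f_inst(n)*n/f_s, as if the current frequency had been
    present since n = 0. Phase jumps at every frequency step."""
    return [math.cos(2 * math.pi * f * n / f_s) for n, f in enumerate(f_inst)]

def encode_accumulated(f_inst, f_s):
    """CPU-style true FM: phase is the running integral of frequency,
    phi(n) = phi(n-1) + 2*pi*f_inst(n)/f_s, so the carrier sweeps
    continuously through transitions. Inherently sequential per line."""
    out, phase = [], 0.0
    for f in f_inst:
        phase += 2 * math.pi * f / f_s
        out.append(math.cos(phase))
    return out
```

For a constant frequency the two agree (up to a one-sample offset); they diverge exactly at frequency transitions, which is where the difference becomes visible.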
This matters for fidelity because GPU instantaneous-phase encoding produces a phase discontinuity at every colour transition — the carrier jumps to a new frequency as if it had always been there, rather than smoothly sweeping through the transition. The CPU accumulated-phase encoder produces the physically correct FM transient: a brief over- or under-deviation as the carrier sweeps to its new steady-state frequency. This transient is exactly the cause of SECAM fire — the coloured fringe at vertical bar edges unique to SECAM receivers. It is not achievable without continuous phase accumulation.
The CPU kernel also implements the full broadcast processing chain in the correct order: deviation limiting (clamping Db/Dr to ±1.0 per CCIR 624 §4.4), LF pre-emphasis (85 kHz one-pole HPF applied to the baseband colour signal before modulation), FM phase accumulation, and Bell/Cloche carrier amplitude weighting applied to the FM output. The GPU decoder applies the inverse: FM discriminate via atan2(z₁×conj(z₀)), then a sequential per-line IIR de-emphasis pass, then Y/C matrix.
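The discriminator and de-emphasis stages can be sketched as follows. The atan2-of-product form matches the description above; the de-emphasis coefficient here is illustrative, not the CCIR value.

```python
import cmath

def fm_discriminate(z):
    """Instantaneous frequency (radians/sample) from consecutive samples
    of the analytic (complex baseband) signal: arg(z[n] * conj(z[n-1]))."""
    return [cmath.phase(z[n] * z[n - 1].conjugate()) for n in range(1, len(z))]

def de_emphasize(x, alpha=0.8):
    """Sequential one-pole IIR low-pass, the inverse of the encoder's
    one-pole HF pre-emphasis. alpha is illustrative, not the CCIR value."""
    out, y = [], 0.0
    for s in x:
        y = alpha * y + (1 - alpha) * s
        out.append(y)
    return out
```

A pure tone exp(i·ω·n) discriminates to a constant ω, which is why this form recovers the baseband colour signal directly from the FM carrier.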
MAC and MUSE encoding uses one Metal compute thread per scan line — all 625 (D-MAC/D2-MAC/HD-MAC) or 1125 (MUSE) lines encode in parallel. Within each thread, time slots are assigned deterministically — sync, chroma, luma, blanking — and luma is sampled through the same bandwidth-limiting FIR used by NTSC/PAL/SECAM encode (sampleInputBandLimited), correctly attenuating horizontal detail to match the broadcast standard's luma bandwidth. Time-division multiplexing is inherently data-parallel, so the GPU is both the natural and the more accurate home for this encode — unlike SECAM's FM phase accumulation, which is fundamentally sequential.
MUSE applies per-field quincunx sub-pixel offsets (half-pixel horizontal and/or vertical) across a 4-field cycle, and the decoder accumulates all four composite ring buffers to reconstruct the full Hi-Vision luma resolution. Each field is written into its own dedicated composite buffer; the decode pass reads all four and averages them. Static scenes benefit from the full 4× spatial interleaving; moving content blurs softly across the 133 ms accumulation window, matching a MUSE receiver without motion adaptation. The ring fills over the first four frames (~130 ms cold-start) and resets when switching standards.
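The reconstruction itself reduces to an average over the four field buffers. A toy sketch: the offset table is an assumed pattern for illustration (and is not applied here, which corresponds to a static scene), while the averaging is the operation described above.

```python
# Toy model of the 4-field MUSE ring. The quincunx offset table is an
# assumed pattern; this sketch averages without applying offsets, which
# corresponds to a static scene.
QUINCUNX = [(0.0, 0.0), (0.5, 0.0), (0.0, 0.5), (0.5, 0.5)]  # (dx, dy) in pixels

def muse_reconstruct(fields):
    """Average four equal-length field buffers sample-by-sample."""
    assert len(fields) == 4
    return [sum(samples) / 4.0 for samples in zip(*fields)]
```

Identical fields reconstruct exactly; fields that differ (motion) average toward a blur, matching the soft 133 ms smear described above.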
RF interference, ghost/multipath, IF filter ringing, and chroma decode all operate on the composite buffer at the full composite sample rate — 14.318 MHz for NTSC, 17.734 MHz for PAL. No effect is applied as a pixel-domain post-process: every distortion enters and propagates through the waveform, interacting with subcarrier phase exactly as it would in real hardware. Y/C separation uses a 1-line comb at composite rate; chroma is demodulated by Hann-windowed quadrature FIR filters with per-channel bandwidth (NTSC I: 1.3 MHz, Q: 0.4 MHz; PAL U/V: 1.3 MHz); luma is bandwidth-limited by a 13-tap sinc LPF before the final texture write.
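A 1-line comb exploits the subcarrier's 180° phase inversion between adjacent NTSC lines: summing adjacent lines cancels chroma and leaves luma; differencing cancels luma and leaves chroma. A minimal sketch:

```python
# Minimal 1-line comb filter, assuming the 180-degree line-to-line
# subcarrier phase inversion of NTSC. Operates at composite rate.
def comb_separate(prev_line, cur_line):
    y = [(a + b) * 0.5 for a, b in zip(prev_line, cur_line)]  # sum: luma
    c = [(b - a) * 0.5 for a, b in zip(prev_line, cur_line)]  # difference: chroma
    return y, c
```

The quadrature FIR demodulation and sinc low-pass stages then run on the separated chroma and luma respectively.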
The decay formula new = d × prev + (1−d) × E converges to exactly E under constant input. Setting new = prev = ss: ss = d × ss + (1−d) × E → ss(1−d) = (1−d) × E → ss = E. No brightness creep regardless of d.
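The fixed-point argument is easy to verify numerically:

```python
# Numeric check of the fixed point above: iterating
# new = d*prev + (1-d)*E converges to exactly E for any 0 <= d < 1.
def iterate_decay(d, E, prev=0.0, steps=500):
    for _ in range(steps):
        prev = d * prev + (1 - d) * E
    return prev
```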
P22 phosphor decay times at 60 Hz: Red (Y₂O₂S:Eu) 3–6 ms → decay ≈ 0.82–0.93; Green (ZnS:Cu,Al) 1–3 ms → decay ≈ 0.70–0.83; Blue (ZnS:Ag) 0.5–2 ms → decay ≈ 0.55–0.75. The app maps the user slider to the green midpoint and derives R and B by ×1.15 and ×0.85.
App Store Reviews
“Miles better than any retroarch CRT shader! Tons of features to play with. This brings so much memories!!”
“This does a MUCH better job of properly fuzzing old console video game screenshots than any of the emulation CRT filters I’ve tried. PS1 and PS2 games are particularly well-affected. I’m very impressed. Fantastic work. Thank you.”
“Love it! The best CRT simulation I’ve seen, extremely fun to mess with.”
“Awesome.”
Under the Hood
The simulation is accurate. It is also not infinite. This page explains both.
Every distortion the app produces — noise, ghosts, IF ringing, dot crawl, chroma smear, intermodulation — emerges from operations on the composite waveform itself. The signal is encoded at 4× subcarrier frequency (14.318 MHz for NTSC, 17.734 MHz for PAL), corrupted in that domain, and decoded back to pixels. Nothing is painted on afterwards. If something looks like crosstalk between luma and chroma, it is because real interference occurred in the composite buffer.
The pipeline is driven by a rectangular pixel frame. Digital video has no blanking interval, no sync pulses, no vertical blanking lines — those are properties of a transmission format, not of a camera feed. The simulation models the active picture area only. VBI lines (lines 1–21 in NTSC, 1–22 in PAL), equalising pulses, broad pulses, and front/back porch timing are all synthesised separately rather than falling out of a continuous waveform generator.
A true end-to-end transmitter model would represent the entire frame — including blanking — as a continuous 1D time-domain signal at the composite sample rate. That is a fundamentally different architecture: a software-defined transmitter rather than a per-line texture pipeline. It would produce VBI content “for free” but would also need every VBI line explicitly programmed (VBI content is always synthetic — even in real broadcast, teletext and VITS were injected by separate equipment, not derived from the camera). The performance cost would be substantial with no improvement to the visible picture.
Most of the composite pipeline runs on the GPU as Metal compute shaders — thousands of threads processing lines in parallel. SECAM is the exception: FM encoding requires continuous phase accumulation (each sample depends on the previous one), which cannot be parallelised within a line, so SECAM encode runs on the CPU with lines in parallel and samples in sequence. MAC/MUSE encoding uses one GPU thread per scan line — time-division multiplexing is inherently data-parallel, so the GPU is both the natural and the more accurate home. Every format converges on the same composite buffer before the shared GPU decode pass.
For anything visible on screen the simulation is physically correct: subcarrier phase, burst geometry, Y/C crosstalk, filter transients, phosphor decay. For signal-level detail that lies outside the active picture — exact sync tip voltage, VBI data integrity, front-porch duration — the simulation is representative rather than measured. The waveform monitor and vectorscope show real values from the composite buffer, so you can verify the active-picture signal against broadcast specifications directly.
REF mode bypasses all noise, CRT, and VCR effects. It encodes a clean signal and decodes it with no further processing. Any colour error or bandwidth loss visible in REF mode is a true encode–decode artefact — inherent to the composite standard itself, not introduced by the simulation. This is the baseline that every other mode builds on.