VisionShift XR: Executive Project Report

Human-AI Strategic Collaboration for Urban Accessibility Dominance


1. Inspiration: The Empathy Catalyst

The genesis of VisionShift XR was a fundamental observation about modern urbanism: the Empathy Gap. While city planners and civil engineers operate within strict regulatory frameworks (like the ADA), the lived experience of the city remains fragmented and hostile for millions of people with sensory or physical impairments.

Our inspiration was to move beyond the checklist. We wanted to create a system where a planner doesn't just "verify" a curb ramp—they feel the friction of an impassable route. By leveraging the 14-hour pressure cooker of the RealityShift XR Hackathon, we aimed to bridge the divide between "compliance" and "human impact."


2. The Problem: The Invisible Barrier

Urban environments are often designed for the "mean user"—a healthy, able-bodied individual. This results in Urban Friction, where small design oversights accumulate into major social exclusion barriers.

Analytical Framework

We define the Urban Friction Index ($U_f$) as a function of barrier density, barrier severity, and the accessibility reach of the affected user profile (a code sketch follows the definitions below):

$$U_f = \sum_{i=1}^{n} \frac{B_i \cdot S_i}{\Gamma(A)}$$

Where:

  • $B_i$ is a discrete barrier identified in the spatial field.
  • $S_i$ is the severity coefficient (where $S_i \in [1, 10]$).
  • $\Gamma(A)$ represents the accessibility reach of the specific user impairment profile.
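
To make the index concrete, here is a minimal TypeScript sketch of the computation. The `Barrier` interface and `accessibilityReach` parameter are hypothetical illustrations, not the shipped data model; each identified barrier is treated as $B_i = 1$.

```ts
// Hypothetical data model for illustration; the production spatial schema may differ.
interface Barrier {
  id: string;
  severity: number; // S_i, constrained to [1, 10]
}

// Urban Friction Index: each identified barrier contributes its severity,
// scaled down by the accessibility reach Γ(A) of the active impairment
// profile. A larger reach dilutes friction; a smaller reach amplifies it.
function urbanFrictionIndex(barriers: Barrier[], accessibilityReach: number): number {
  if (accessibilityReach <= 0) throw new Error("Accessibility reach must be positive");
  return barriers.reduce((sum, b) => sum + b.severity, 0) / accessibilityReach;
}
```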

The problem isn't just a lack of ramps; it's a lack of Interoperable Data Visualization. Planners cannot easily synthesize how a visual impairment (like Glaucoma) interacts with a physical barrier (like obstructive signage) in a unified analytical space.


3. The Solution: VisionShift XR

VisionShift XR is an Executive Command Framework that utilizes a Dual-Intelligence Architecture to transform urban accessibility auditing.

  • Human Authority: Provides the vision, ethical mapping, and intuitive site selection.
  • AI Intelligence (Google Gemini 1.5): Acts as the reasoning engine, generating Impact Narratives that transform raw spatial data into persuasive executive briefing notes.

How It Works

  1. Simulation Layer: Users select a "Visual Profile" (e.g., Cataracts, Glaucoma) which applies real-time shaders to the 3D environment.
  2. Interactive Auditing: The user moves through a Digital Twin of a city block. Upon identifying a barrier, they trigger an "Audit Point."
  3. Cognitive Synthesis: The system sends the spatial coordinates, barrier type, and simulation context to the Gemini 1.5 Flash model (see the sketch after this list).
  4. Impact Reporting: The AI returns a technical yet empathetic analysis, explaining the specific risk posed by that barrier to that user profile.
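
The cognitive synthesis step, as a minimal sketch. The `AuditPoint` shape and prompt wording are illustrative inventions; the call itself uses the @google/genai SDK's `generateContent` method.

```ts
import { GoogleGenAI } from "@google/genai";

// Hypothetical audit payload; field names are illustrative, not the real schema.
interface AuditPoint {
  position: [number, number, number]; // world-space coordinates in the Digital Twin
  barrierType: string;                // e.g. "obstructive signage"
  visualProfile: string;              // e.g. "Glaucoma"
}

const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

async function generateImpactNarrative(audit: AuditPoint): Promise<string> {
  const response = await ai.models.generateContent({
    model: "gemini-1.5-flash", // assumed model id; swap in the deployed one
    contents:
      `Audit point at (${audit.position.join(", ")}). ` +
      `Barrier: ${audit.barrierType}. Active visual profile: ${audit.visualProfile}. ` +
      `Explain the specific risk this barrier poses to this user profile.`,
  });
  return response.text ?? "";
}
```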

4. Technical Architecture & Tech Stack

Building a "winning" project required a stack that balanced speed with professional-tier fidelity.

  • Frontend: React 19 for high-performance state management and TypeScript for type-safe spatial vector math.
  • Rendering: Three.js via React Three Fiber (R3F) for the 3D city engine.
  • XR Layer: @react-three/xr using the WebXR standard for cross-platform compatibility (wired up in the sketch after this list).
  • Intelligence: Google Gemini API (@google/genai) for the multimodal impact analysis.
  • Styling: Tailwind CSS 4.0 utilizing a "Professional Polish" design system to ensure executive-level UI/UX.
  • Dynamics: Motion (framer-motion) for fluid UI transitions that reinforce a sense of "system command."
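
To show how the rendering and XR layers compose, a minimal scaffold, assuming @react-three/xr v6's `createXRStore` API; component names and scene contents are placeholders.

```tsx
import { Canvas } from "@react-three/fiber";
import { XR, createXRStore } from "@react-three/xr";

const xrStore = createXRStore();

// Placeholder scene; the real Urban Core geometry mounts inside <XR>.
export function AuditCanvas() {
  return (
    <>
      <button onClick={() => xrStore.enterVR()}>Enter Audit Mode</button>
      <Canvas>
        <XR store={xrStore}>
          <ambientLight />
          {/* city-block Digital Twin goes here */}
        </XR>
      </Canvas>
    </>
  );
}
```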

5. The Build Process: 14 Hours of Strategy

The build was structured into four executive phases:

  1. Phase I: Environment Mapping: Sculpting the low-poly but highly stylized "Urban Core" in Three.js.
  2. Phase II: Shader Integration: Developing CSS and Three.js material filters to simulate visual impairments.
  3. Phase III: Cognitive Pipeline: Connecting the spatial UI to the Gemini API, ensuring the prompts were engineered for "Executive Briefing" style outputs.
  4. Phase IV: Layout Dominance: Implementing the 3-column command dashboard to provide a "Mission Control" feel to the auditor.

6. Challenges & Lessons Learned

The Rendering Challenge

Managing real-time visual filters across a WebGL context within a browser's resource limits was non-trivial. We learned to use Radial Masking in CSS to simulate advanced Glaucoma (tunnel vision) without the overhead of complex post-processing shaders, achieving smooth 60fps performance even in "Audit Mode."
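
For reference, the tunnel-vision effect can be approximated with a single overlay element; this is a simplified sketch of the idea, not our exact filter values.

```tsx
import type { CSSProperties } from "react";

// Full-screen overlay layered above the WebGL canvas: a transparent centre
// fading to opaque edges approximates advanced-glaucoma tunnel vision
// without any post-processing pass.
export const glaucomaMask: CSSProperties = {
  position: "fixed",
  inset: 0,
  pointerEvents: "none", // clicks pass through to the 3D scene
  background:
    "radial-gradient(circle at 50% 50%, transparent 12%, rgba(0, 0, 0, 0.97) 38%)",
};

// Usage: render <div style={glaucomaMask} /> as a sibling of the <Canvas>.
```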

The Prompt Alignment Challenge

Initial AI outputs were too generic. We learned that to achieve "Intelligent First" status, we had to provide the model with a "System Persona"—it isn't just an LLM; it is an Urban Accessibility Expert.
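
A sketch of the persona injection, using the @google/genai `systemInstruction` config field; the wording is a paraphrase, not our production prompt.

```ts
import { GoogleGenAI } from "@google/genai";

const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

// The persona rides along as a system instruction, so every audit request
// is answered in the voice of a domain expert rather than a generic chatbot.
async function auditWithPersona(observation: string): Promise<string> {
  const response = await ai.models.generateContent({
    model: "gemini-1.5-flash", // assumed model id
    contents: observation,
    config: {
      systemInstruction:
        "You are an Urban Accessibility Expert. Write concise, " +
        "executive-briefing analyses that pair technical risk assessment " +
        "with human impact.",
    },
  });
  return response.text ?? "";
}
```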

The Mathematical Realization

We realized that accessibility is often measured linearly, but impact compounds non-linearly: risk stays low while barriers are sparse, then climbs sharply once they accumulate past a threshold. We modeled this "Risk Probability" ($P_r$) as a logistic function:

$$P_r(x) = \frac{1}{1 + e^{-k(x - x_0)}}$$

Where $x$ is the cumulative barrier severity, $x_0$ is the severity threshold at which risk crosses 50%, and $k$ controls how sharply risk escalates.
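
As a worked example, a direct TypeScript transcription of $P_r$; the default constants are illustrative, not calibrated values.

```ts
// Logistic risk curve: P_r(x) = 1 / (1 + e^(-k (x - x0)))
//   x:  cumulative barrier severity along a route
//   x0: severity threshold at which risk crosses 50%
//   k:  steepness of the escalation
function riskProbability(x: number, k = 0.8, x0 = 5): number {
  return 1 / (1 + Math.exp(-k * (x - x0)));
}

// riskProbability(5)  === 0.5 (at the threshold)
// riskProbability(10) ≈ 0.98  (well past the threshold)
```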


7. Future Scalability: The Autonomous City

VisionShift XR is a foundation for a larger Urban Intelligence Ecosystem.

  1. Crowdsourced Auditing: Expanding to allow citizens to log real-world barriers using AR on their smartphones.
  2. Automatic Remediation Prediction: Using Gemini to not just identify barriers, but to propose 3D-printable or modular architectural fixes.
  3. FHIR Integration: Correlating urban barrier data with healthcare outcomes (using the Agents Assemble healthcare integration) to show how inaccessible cities directly lead to higher injury rates in specific demographics.

8. Strategic Closing

The RealityShift Hackathon provided the perfect platform to demonstrate that XR is not a gimmick—it is a transformation of perception. By combining human empathy with AI's analytical depth, we have built a tool that doesn't just show you the city; it shows you the future of a city without barriers.

Ariadne-Anne DEWATSON-LE'DETsambali
Executive Lead, VisionShift XR Strategic Systems
April 18, 2026
