Inspiration

In U.S. hospitals, 700,000 to 1 million patients fall every year.
Of these falls:

  • 80–90% are unassisted
  • Most occur in the patient’s room
  • Commonly during bed exits or transfers
  • Often when no staff are present

The impact is severe:

  • 30–50% of falls result in injury
  • ~10% cause serious harm
  • Falls contribute to approximately 11,000 hospital deaths annually

Current hospital safety systems rely heavily on nurse call buttons and bed alarms, but these systems fail in the moments that matter most.

If a patient falls or becomes unstable, they may be unable to reach a call button at all.

At the same time, nurses are caring for more patients than ever. They are:

  • Reporting burnout at rates around 45%
  • Managing multiple high-risk rooms at once
  • Enduring constant alarms
  • Making life-or-death decisions under intense time pressure

Beyond falls, hospitals also struggle to monitor early warning behaviors that often precede harm, including:

  • Repeated bed-exit attempts
  • Excessive agitation or restlessness
  • Abnormal convulsive movements

These behaviors are common in elderly or cognitively impaired patients and significantly increase risk, yet few tools exist to reliably detect these physical movement patterns.


What It Does

MedEye uses computer vision to flag safety-relevant events in hospital rooms, including:

  • Bed exits
  • Agitation
  • Falls
  • Seizures

These events are surfaced as real-time notifications on a nurse-facing dashboard.

When an alert is triggered, clinicians can:

  • Review the flagged event
  • Confirm its relevance
  • Take appropriate action

MedEye also logs structured, time-stamped data on bed exits and other unsafe behaviors, enabling:

  • Historical review
  • Trend tracking
  • Safer shift handoffs
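As a rough sketch, the structured, time-stamped log entries described above might take a shape like the following in TypeScript. The type and field names are our illustration, not MedEye's actual schema:

```typescript
// Hypothetical shape of a logged safety event (field names are illustrative).
type SafetyEventType = "bed_exit" | "agitation" | "fall" | "seizure";

interface SafetyEvent {
  id: string;            // unique event id
  roomId: string;        // which room the camera monitors
  type: SafetyEventType; // the flagged event category
  occurredAt: string;    // ISO-8601 timestamp, enables trend tracking
  confirmed: boolean;    // set once a clinician reviews the alert
}

// Example record as it might appear in a shift-handoff review:
const example: SafetyEvent = {
  id: "evt-001",
  roomId: "4B-12",
  type: "bed_exit",
  occurredAt: "2024-03-01T02:14:00Z",
  confirmed: true,
};
```

Keeping events in a flat, typed record like this is what makes historical review and trend queries straightforward later.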

Rather than diagnosing patients or replacing clinical judgment, MedEye:

  • Identifies risk signals and abnormal motion patterns
  • Prioritizes abnormal events
  • Keeps clinicians firmly in the loop for all decisions

How We Built It

We built MedEye around a video-driven monitoring pipeline that processes:

  • Pre-recorded patient room videos
  • An optional live camera input from the CameraPage

For the demo, the frontend ingests a curated set of sample videos in which patients either perform bed exits or remain safely in bed, at varying frequencies and times of day.

These videos simulate realistic inpatient scenarios and allow us to observe patterns such as repeated nighttime bed exits.


Videos are captured and analyzed using the Overshoot real-time vision SDK, which extracts structured signals related to:

  • Posture
  • Movement
  • In-bed presence

The structured outputs are sent to a Node.js backend, where we track indicators such as:

  • The number of bed-exit attempts
  • The timing of those attempts

We log these indicators in a Supabase database, enabling analysis of patient behavior over time.

By comparing current activity to prior patterns, the system can infer when behavior deviates from baseline—such as unusually frequent bed exits at night—and flag those events as higher-priority risks.
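The baseline comparison above could be sketched as a simple statistical check. The two-sigma threshold and function shape are illustrative assumptions, not the system's actual model:

```typescript
// Illustrative baseline check: flag when the current night's bed-exit count
// deviates sharply from the patient's historical nightly counts.
function exceedsBaseline(history: number[], current: number, sigmas = 2): boolean {
  if (history.length === 0) return false; // no baseline established yet
  const mean = history.reduce((a, b) => a + b, 0) / history.length;
  const variance =
    history.reduce((a, b) => a + Math.pow(b - mean, 2), 0) / history.length;
  const std = Math.sqrt(variance);
  // With near-zero variance, any increase over the mean counts as deviation.
  return std < 1e-9 ? current > mean : current > mean + sigmas * std;
}

// A patient who averaged ~1 nightly bed exit over the past week suddenly has 5:
const flagged = exceedsBaseline([1, 0, 2, 1, 1, 0, 2], 5);
```

The same shape of check extends naturally to other indicators, such as agitation episodes per shift.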


Challenges We Ran Into

Our biggest challenge was distinguishing signal noise from real risk in Overshoot's output. Harmless movement would be picked up, while actionable risks could be missed if the prompt was not carefully tuned.

We addressed this by:

  • Tightening prompt language
  • Explicitly defining what does not constitute a risk event
  • Forcing outputs into discrete states rather than free-text descriptions

Raw frame-level signals fluctuate rapidly, so we implemented debouncing logic to detect meaningful events (e.g., a bed exit vs. repositioning).
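The debouncing idea can be sketched as follows. The state names, class shape, and frame threshold are illustrative assumptions, not the actual implementation:

```typescript
// Illustrative debouncer: raw frame-level states flicker, so a bed-exit event
// is emitted only after "out_of_bed" persists for N consecutive frames.
type FrameState = "in_bed" | "repositioning" | "out_of_bed";

class BedExitDebouncer {
  private streak = 0;
  private fired = false;

  constructor(private readonly minFrames: number) {}

  // Feed one frame's state; returns true exactly once per sustained exit.
  push(state: FrameState): boolean {
    if (state === "out_of_bed") {
      this.streak++;
      if (this.streak >= this.minFrames && !this.fired) {
        this.fired = true;
        return true; // sustained exit: emit a single event
      }
    } else {
      this.streak = 0; // brief repositioning resets the counter
      this.fired = false;
    }
    return false;
  }
}

const debouncer = new BedExitDebouncer(3);
const frames: FrameState[] = [
  "in_bed", "out_of_bed", "repositioning", // a flicker: no event
  "out_of_bed", "out_of_bed", "out_of_bed", // sustained: one event
  "out_of_bed",                             // still out: no duplicate
];
const events = frames.filter((f) => debouncer.push(f)).length;
```

This is what lets the system tell a genuine bed exit apart from a patient briefly repositioning.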


What We Learned

Working on MedEye strengthened our general startup skills, especially around scoping the right problem before building.

Grounding decisions in real data helped us:

  • Prioritize what features mattered most
  • Avoid overbuilding

We also learned how important it is to balance ambition with feasibility in a regulated space like healthcare—and continuously ensure that a human is in the loop of our automation.


What’s Next for MedEye

A key next step is expanding our safety checks to cover less frequent but high-impact events that complete the monitoring workflow. This includes:

  • Bed-rail interactions (attempts to climb over rails)
  • Floor-presence detection to accelerate response after a fall

In parallel, we plan to build a patient-specific learning profile that adapts alert sensitivity based on historical behavior—distinguishing, for example, a chronically restless sleeper from new-onset agitation.

Together, these additions would:

  • Reduce false alarms
  • Ensure rare but critical events are not missed

Built With

  • Overshoot real-time vision SDK
  • Node.js
  • Supabase