Neuro Glove — Early Tremor Detection Wearable
Inspiration
One of our teammates lost their grandmother to Parkinson's disease. Watching a loved one's autonomy erode gradually — the tremors, the falls, the slow loss of independence — is a particular kind of grief. Parkinson's is not curable, but its progression can be managed, and early detection matters enormously. The earlier a patient receives a diagnosis, the sooner they can begin treatment, adapt their lifestyle, and maintain quality of life for longer.
We built Neuro Glove because we wanted that window of early detection to be accessible — not just in a clinical setting, but in everyday life.
What it does
Neuro Glove is a wearable glove with an embedded ESP32 microcontroller, a 9-axis IMU, and five fingertip pressure sensors. It streams real-time accelerometer data over Bluetooth Low Energy to a web dashboard, where an on-device machine learning model analyzes hand motion for the hallmark signature of Parkinson's resting tremor: a rhythmic oscillation in the 3–7 Hz frequency band. When sustained tremor is detected, the dashboard raises an alert. Fall detection runs in parallel on the firmware.
How we built it
Hardware. The glove uses an Adafruit ICM20948 IMU communicating over I2C, five FSR pressure sensors on the fingertips wired through voltage dividers, and an ESP32 WROOM-32E as the compute and BLE radio. The firmware is written in C++ using the Arduino framework via PlatformIO.
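The fingertip FSRs are read through voltage dividers, so the firmware sees an ADC count that must be mapped back to sensor resistance (and hence pressure). A minimal sketch of that mapping, assuming a 10 kΩ pull-down resistor and the ESP32's 12-bit ADC — both illustrative values, not necessarily the exact board configuration:

```python
# Sketch: convert a raw ESP32 ADC reading from an FSR voltage divider
# back to sensor resistance. The 10 kΩ fixed resistor and ideal 12-bit
# ADC are assumptions for illustration.

VCC = 3.3          # supply voltage (V)
R_FIXED = 10_000   # fixed divider resistor (ohms) -- assumed value
ADC_MAX = 4095     # 12-bit ADC full scale

def fsr_resistance(adc_counts: int) -> float:
    """FSR on top, fixed resistor on bottom:
    Vout = VCC * R_fixed / (R_fsr + R_fixed), solved for R_fsr."""
    v_out = VCC * adc_counts / ADC_MAX
    if v_out <= 0:
        return float("inf")  # no pressure: FSR is effectively open
    return R_FIXED * (VCC / v_out - 1)
```

Lower FSR resistance (harder press) pulls the divider output higher, so resistance decreases monotonically with ADC counts.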
Machine learning. We trained on the PADS dataset from PhysioNet — 469 patients, continuous wrist accelerometer recordings at 100 Hz across structured clinical tasks. Our model is a Gradient Boosted Tree classifier operating on spectral features rather than raw signal windows.
For each 4-second window of 400 samples, we first remove the gravity component using a high-pass filter:
$$a_{\text{linear}}[t] = a_{\text{raw}}[t] - g[t], \quad g[t] = \alpha \cdot g[t-1] + (1-\alpha) \cdot a_{\text{raw}}[t]$$
with $\alpha = 0.98$. We then apply a Hanning window and compute the real FFT, extracting 17 features per window: tremor band energy $E_{3\text{–}7}$, low-band energy $E_{<3}$, high-band energy $E_{>7}$, the tremor ratio $E_{3\text{–}7} / E_{\text{total}}$, and dominant frequency — each computed independently across all three accelerometer axes — plus cross-axis mean tremor energy and spectral entropy:
$$H = -\sum_{i} p_i \log p_i, \quad p_i = \frac{|X(f_i)|^2}{\sum_j |X(f_j)|^2}$$
Low entropy indicates a periodic, tremor-like signal. These 17 features are normalized and passed to a 100-estimator gradient boosted classifier, which outputs $P(\text{tremor})$. A majority vote across 10 consecutive windows triggers an alert.
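The per-window pipeline above — gravity removal, Hanning window, real FFT, band energies, dominant frequency, and spectral entropy — can be sketched as follows. Function and feature names are illustrative, and for brevity this computes every feature per axis, whereas the write-up computes spectral entropy and mean tremor energy once across axes:

```python
import numpy as np

FS = 100      # sampling rate (Hz), matching the PADS recordings
WIN = 400     # 4-second window
ALPHA = 0.98  # gravity estimator coefficient from the write-up

def remove_gravity(a_raw):
    """One-pole low-pass tracks gravity; subtracting it high-passes the signal."""
    g = np.empty(len(a_raw))
    g[0] = a_raw[0]
    for t in range(1, len(a_raw)):
        g[t] = ALPHA * g[t - 1] + (1 - ALPHA) * a_raw[t]
    return a_raw - g

def window_features(axis):
    """Spectral features for one axis of a 400-sample window."""
    x = remove_gravity(np.asarray(axis, dtype=float)) * np.hanning(WIN)
    spec = np.abs(np.fft.rfft(x)) ** 2           # power spectrum
    freqs = np.fft.rfftfreq(WIN, d=1.0 / FS)     # 0.25 Hz resolution
    e_total = spec.sum() + 1e-12                 # guard against all-zero windows
    p = spec / e_total                           # normalized spectrum
    entropy = -np.sum(p[p > 0] * np.log(p[p > 0]))
    tremor = (freqs >= 3) & (freqs < 7)
    return {
        "E_3_7": spec[tremor].sum(),
        "E_lt3": spec[freqs < 3].sum(),
        "E_gt7": spec[freqs >= 7].sum(),
        "tremor_ratio": spec[tremor].sum() / e_total,
        "dominant_freq": freqs[np.argmax(spec)],
        "spectral_entropy": entropy,
    }
```

Feeding this a synthetic 5 Hz oscillation riding on a 9.8 m/s² gravity offset yields a dominant frequency at 5 Hz and a tremor ratio near 1, since the gravity estimator strips the DC component before the FFT.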
Inference. The entire model runs in the browser in JavaScript — the GBT is exported as a JSON tree structure and evaluated client-side. There is no cloud dependency and no latency beyond BLE transmission.
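Evaluating an exported GBT is just repeated tree walks plus a sigmoid over the summed leaf scores. A minimal sketch of that idea (shown in Python for readability; the node schema with `feature`, `threshold`, `left`, `right`, `value` keys and the learning rate are assumptions, since the actual export format isn't specified):

```python
# Hypothetical GBT-as-JSON evaluator: each tree is a nested dict,
# internal nodes split on a feature/threshold pair, leaves hold a score.
import math

def eval_tree(node, x):
    """Walk one decision tree down to a leaf and return its score."""
    while "value" not in node:
        branch = "left" if x[node["feature"]] <= node["threshold"] else "right"
        node = node[branch]
    return node["value"]

def predict_proba(trees, x, learning_rate=0.1):
    """Sum scaled leaf scores across all trees; squash with a sigmoid."""
    score = sum(learning_rate * eval_tree(t, x) for t in trees)
    return 1.0 / (1.0 + math.exp(-score))
```

The same logic ports directly to client-side JavaScript, which is what makes the no-server, no-quantization deployment practical.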
Challenges
The naive model learned the wrong thing. Our first approach trained a 1D CNN directly on raw accelerometer windows labeled by patient diagnosis. It achieved 70% accuracy in cross-validation — but when we deployed it, the model would fire whenever the hand pointed downward. It had learned gravity orientation and patient-level movement style, not tremor. Throwing away a working model mid-hackathon and rebuilding from first principles was a hard call, but the right one.
Gravity contamination. Raw accelerometer data includes a $\sim 9.8\ \text{m/s}^2$ static component along whichever axis points down. Any model trained on raw values conflates orientation with dynamics. The high-pass filter was essential to make features orientation-invariant — but we had to be careful not to apply it twice (once in firmware and once in the browser), which initially caused all readings to converge to zero.
Toolchain reliability on macOS. The CH340 USB-serial driver required for flashing the ESP32 is broken on Apple Silicon Macs running Sonoma. We lost significant time before routing all firmware flashing through a teammate's Windows machine.
Dataset mismatch. PADS records wrist-worn accelerometer data. Our sensor sits on the back of the hand. The biomechanics differ — hand tremor has higher amplitude and slightly different spectral characteristics than wrist tremor. The model's AUC of 0.887 reflects this gap, and closing it would require collecting matched glove-based data from PD patients.
What we learned
Frequency domain thinking is underused in applied ML. The moment we stopped asking "what does the raw signal look like?" and started asking "what frequency is this oscillating at?", the problem became tractable. Parkinson's tremor has a clinical definition — 3 to 7 Hz rhythmic oscillation — and the right features encode that definition directly. A model built on domain knowledge generalizes better than one that learns from data alone.
We also learned that end-to-end browser inference is more powerful than it first appears. Running a GBT forward pass in JavaScript with no server, no cloud API, and no quantization constraints meant we could iterate on the model in minutes and demo it anywhere with a Chrome tab.
Built With
- cpp
- embedded
- ml