Inspiration

We were inspired by the idea that vehicle control does not need to rely on hands, feet, or conventional input devices. The Muse 2 EEG headband allowed us to work directly with neural and head-movement signals, and BeamNG.tech provided a physics-accurate driving environment where we could safely test control systems. Bringing together neurotechnology, signal processing, computer vision, and control theory into one integrated project motivated us to build a fully BCI-enabled autonomous driving prototype.

How We Built It

Brain–Computer Interface Layer

We used the Muse 2 EEG headband as the primary input device. Through the BrainFlow SDK, we streamed EEG and IMU data at 200 Hz, and we processed the EEG signals with an FFT followed by power-spectral-density (PSD) estimation:

$$X[k] = \sum_{n=0}^{N-1} x[n] \cdot e^{-j 2\pi kn / N}, \qquad PSD(f) = \frac{|X(f)|^2}{N}$$

IMU readings were calibrated to remove gyroscope drift:

$$\omega_{cal} = \omega_{raw} - \omega_{bias}$$

A movement-classification state machine interpreted the calibrated IMU signals (tilts, nods, rotations) as steering, cruise-control adjustments, and lane-change triggers. A PyQt5 GUI displayed real-time EEG waveforms and frequency-band power (Delta, Theta, Alpha, Beta, Gamma).
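As a minimal sketch of this stage in plain NumPy (BrainFlow's own `DataFilter` helpers could be used instead; the function names here are our own), the periodogram PSD, band-power extraction, and gyro bias calibration might look like:

```python
import numpy as np

def psd_from_fft(x: np.ndarray) -> np.ndarray:
    """Periodogram estimate PSD(f) = |X(f)|^2 / N over a window of N samples."""
    N = len(x)
    return np.abs(np.fft.rfft(x)) ** 2 / N

def band_power(psd: np.ndarray, fs: float, lo: float, hi: float) -> float:
    """Sum the PSD bins inside [lo, hi) Hz, e.g. Alpha = 8-13 Hz."""
    freqs = np.fft.rfftfreq(2 * (len(psd) - 1), d=1.0 / fs)
    return float(psd[(freqs >= lo) & (freqs < hi)].sum())

def calibrate_gyro(stationary: np.ndarray) -> np.ndarray:
    """Estimate omega_bias as the mean of a stationary capture window."""
    return stationary.mean(axis=0)

# Usage: one second of 200 Hz data containing a 10 Hz (Alpha-band) rhythm.
fs = 200
t = np.arange(fs) / fs
psd = psd_from_fft(np.sin(2 * np.pi * 10 * t))
alpha = band_power(psd, fs, 8, 13)   # dominated by the 10 Hz component
```

The same `band_power` call, repeated per band, yields the Delta/Theta/Alpha/Beta/Gamma values shown in the GUI.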

Advanced Driver Assistance Systems (ACC, LKAS, LCAS)

  • Adaptive Cruise Control (ACC) used a proportional–derivative controller on the gap to the lead vehicle: $$u_{follow} = K_p \cdot e_d + K_d \cdot (-\dot{d})$$ The time gap scaled dynamically with speed: $$T_{gap}(v) = T_{base} \cdot [\alpha + (1 - \alpha) \cdot (v / v_{ref})]$$

  • Lane Keeping Assist System (LKAS) combined color segmentation, contour grouping, line fitting, and PD steering. The proportional term normalizes a temporally smoothed lateral deviation $\bar{e}_{lat}$: $$\delta = -\frac{\bar{e}_{lat}}{e_{max}}$$

  • Lane Change Assist System (LCAS) triggered lane changes from IMU tilt thresholds:

$$LC_{trigger} = \begin{cases} \text{Left} & \text{if } \omega_x > \theta_{trigger} \\ \text{Right} & \text{if } \omega_x < -\theta_{trigger} \\ \text{None} & \text{otherwise} \end{cases}$$ A finite-state machine then executed the lane change until the lateral-offset threshold was reached.
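The three controllers above can be sketched as follows; every gain and threshold (`kp`, `kd`, `t_base`, `alpha`, `v_ref`, `e_max`, `theta_trigger`) is an illustrative placeholder, not one of our tuned values:

```python
from enum import Enum

class LaneChange(Enum):
    NONE = 0
    LEFT = 1
    RIGHT = 2

def acc_command(d: float, d_dot: float, v: float, kp: float = 0.4,
                kd: float = 0.8, t_base: float = 1.8, alpha: float = 0.5,
                v_ref: float = 30.0) -> float:
    """ACC: u_follow = Kp * e_d + Kd * (-d_dot), with a speed-scaled time gap."""
    t_gap = t_base * (alpha + (1 - alpha) * (v / v_ref))  # T_gap(v)
    e_d = d - t_gap * v          # gap error relative to the desired gap
    return kp * e_d + kd * (-d_dot)

class LaneKeeper:
    """LKAS: delta = -(smoothed lateral deviation / e_max), clamped to [-1, 1]."""

    def __init__(self, e_max: float = 1.5, alpha: float = 0.3):
        self.e_max, self.alpha = e_max, alpha
        self.e_bar = 0.0         # temporally smoothed lateral deviation

    def step(self, e_lat: float) -> float:
        self.e_bar = self.alpha * e_lat + (1 - self.alpha) * self.e_bar
        return max(-1.0, min(1.0, -self.e_bar / self.e_max))

def lc_trigger(omega_x: float, theta_trigger: float = 30.0) -> LaneChange:
    """LCAS: map the calibrated head-tilt rate to a lane-change request."""
    if omega_x > theta_trigger:
        return LaneChange.LEFT
    if omega_x < -theta_trigger:
        return LaneChange.RIGHT
    return LaneChange.NONE
```

The outputs feed BeamNGpy's throttle and steering inputs; clamping `delta` to [-1, 1] matches the normalized control range the simulator expects.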

Computer Vision Pipeline

BeamNG’s annotation camera provides semantic color masks. Using them, we performed:

  • lane-color segmentation
  • contour extraction
  • grouping of dashed-lane segments
  • least-squares line fitting
  • ROI smoothing and tracking

Line fitting equation:

$$\min_{\theta, \rho} \sum_{i=1}^{N} ( x_i \cos\theta + y_i \sin\theta - \rho )^2$$

ROI smoothing:

$$x_{ROI}(t) = \alpha_{ROI} \cdot x_{center}(t) + (1 - \alpha_{ROI}) \cdot x_{ROI}(t-1)$$

This produced robust lane boundaries even with fragmented or partially visible markings.
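The normal-form least-squares problem above has a closed-form solution: the line's unit normal is the eigenvector of the point covariance with the smallest eigenvalue. A NumPy sketch of the fit and the ROI update (function names and `alpha_roi` are illustrative):

```python
import numpy as np

def fit_line(points: np.ndarray) -> tuple[float, float]:
    """Fit x*cos(theta) + y*sin(theta) = rho by total least squares.

    The optimal unit normal is the eigenvector of the point covariance with
    the smallest eigenvalue; rho places the line through the centroid.
    """
    c = points.mean(axis=0)
    cov = np.cov((points - c).T)
    _, eigvecs = np.linalg.eigh(cov)       # eigenvalues in ascending order
    n = eigvecs[:, 0]                      # smallest-eigenvalue eigenvector
    theta = float(np.arctan2(n[1], n[0]))
    rho = float(c @ n)
    return theta, rho

def smooth_roi(x_center: float, x_prev: float, alpha_roi: float = 0.2) -> float:
    """Exponentially smooth the ROI centre between frames."""
    return alpha_roi * x_center + (1 - alpha_roi) * x_prev

# Usage: collinear points on the vertical line x = 3 recover that line exactly.
pts = np.array([[3.0, 0.0], [3.0, 1.0], [3.0, 2.0], [3.0, 5.0]])
theta, rho = fit_line(pts)
```

Because the fit minimizes perpendicular distance rather than vertical distance, it stays well-behaved for the near-vertical lane lines an ordinary `y = mx + b` regression handles poorly.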

System Integration

The EEG, IMU, CV, and control modules ran in parallel as multithreaded pipelines with lock-free buffers. BeamNGpy handled real-time steering, braking, throttle, and sensor polling.
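A minimal sketch of the producer/consumer pattern between sensor threads and the control loop, using a single-slot `collections.deque` as a stand-in for a true lock-free buffer (its `append` is atomic under CPython's GIL); the 200 Hz rate matches the Muse stream, while `time.time` is a dummy data source:

```python
import threading
import time
from collections import deque

class LatestValue:
    """Single-slot buffer: writers overwrite, readers take the newest sample.

    With maxlen=1, deque.append is atomic under CPython's GIL, so a slow
    consumer never blocks a 200 Hz producer and never reads a stale backlog.
    """

    def __init__(self):
        self._slot = deque(maxlen=1)

    def put(self, item):
        self._slot.append(item)

    def get(self, default=None):
        return self._slot[-1] if self._slot else default

def sensor_loop(buf, rate_hz, read_sample, stop):
    """Generic producer: poll a sensor at rate_hz until asked to stop."""
    period = 1.0 / rate_hz
    while not stop.is_set():
        buf.put(read_sample())
        time.sleep(period)

# Usage: a 200 Hz producer feeding an IMU mailbox for ~50 ms.
imu_buf = LatestValue()
stop = threading.Event()
t = threading.Thread(target=sensor_loop, args=(imu_buf, 200, time.time, stop),
                     daemon=True)
t.start()
time.sleep(0.05)
stop.set()
t.join()
latest = imu_buf.get()
```

One such mailbox per stream (EEG, IMU, CV) lets the 100 Hz control loop always act on the freshest sample without queueing latency.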

Challenges We Faced

  • Dashed Lane Reconstruction Dashed lanes appear as multiple fragments. Without grouping, lane lines were misidentified. We fixed this by clustering contour centroids and fitting unified lines across grouped segments.
  • ROI Drift and Lane Loss Early ROI logic tracked noise and drifted off the lane. Anchoring ROI to the vehicle frame and smoothing updates resolved this.
  • Head Movement Noise IMU signals contained drift, jitter, and spikes. Calibration + hysteresis thresholds + temporal filtering were required for reliable head-gesture recognition.
  • Real-Time Performance Running EEG (200 Hz), IMU (200 Hz), computer vision (10 Hz), and control loops (100 Hz) required a carefully optimized, multithreaded architecture.
  • Lane Width Variability Certain maps had inconsistent lane widths. We implemented dynamic lane-width estimation and fallback rules to avoid mis-centering.
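The hysteresis thresholds mentioned above work like a Schmitt trigger: a gesture fires only when the rate magnitude exceeds an upper threshold, and must fall back below a lower one before it can fire again, so jitter around a single threshold cannot retrigger it. A sketch with illustrative threshold values:

```python
class HysteresisGate:
    """Edge-triggered gesture gate with separate fire/re-arm thresholds."""

    def __init__(self, theta_on: float = 30.0, theta_off: float = 10.0):
        self.theta_on = theta_on    # magnitude needed to fire
        self.theta_off = theta_off  # magnitude to drop below before re-arming
        self.active = False

    def update(self, value: float) -> bool:
        """Return True only on the rising edge of a new activation."""
        if not self.active and abs(value) > self.theta_on:
            self.active = True
            return True
        if self.active and abs(value) < self.theta_off:
            self.active = False
        return False

# Usage: noisy samples hovering near the fire threshold trigger exactly twice.
gate = HysteresisGate()
fires = [gate.update(v) for v in [0.0, 35.0, 32.0, 20.0, 33.0, 5.0, 40.0]]
```

Combined with bias calibration and temporal filtering, this kept a single head nod from registering as several commands.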

What We Learned

  • Brain–Computer Interfaces We learned how to process EEG signals, compute FFT/PSD, design real-time visualizations, and handle sensor artifacts and calibration.
  • Computer Vision We built a complete lane-detection pipeline using segmentation masks, contour grouping, least-squares line fitting, ROI tracking, and temporal smoothing.
  • Control Theory We applied PD controllers for both longitudinal and lateral control, tuned gain values, and implemented dead-zones and controller arbitration.
  • Real-Time Systems Engineering We gained experience designing multithreaded, latency-sensitive systems that integrate hardware, computer vision, and autonomous-driving logic in real time.
