# Deep Learning based Reach-and-Grasp Decoder from EEG Signals
This repository contains code for decoding reach-and-grasp actions from EEG signals using deep learning. The work explores different neural network architectures (Vanilla 1D CNN, EEGNet, HTNet) and training strategies (within-subject, inter-subject, transfer learning) for classifying three grasp types: palmar grasp, lateral grasp, and rest.
Decoding reach-and-grasp actions from electroencephalogram (EEG) recordings is crucial for the rehabilitation of hand functions in patients with motor disorders. Despite the high degrees of freedom in human hand movements, most daily activities can be executed using palmar, lateral, and precision grasps.
## Features

- Multi-class classification: palmar grasp, lateral grasp, and rest
- Multiple architectures: Vanilla 1D CNN, EEGNet, HTNet
- Training strategies: within-subject, inter-subject, and transfer learning
- Three recording modalities: gel-based (58 channels), water-based (32 channels), dry electrodes (11 channels)
- Data augmentation: on-the-fly frequency band filtering
## Dataset

The dataset is publicly available from BNCI Horizon 2020.

- Participants: 45 right-handed, healthy individuals
- Trials per condition: 80 trials, distributed over 4 runs
- Movement conditions: palmar grasp, lateral grasp, and rest
- Recording duration: ~7-minute runs per participant
### Recording Modalities

| Modality | System | # Channels | Coverage |
|---|---|---|---|
| Gel-based | Standard EEG | 58 | Frontal, central, parietal areas |
| Water-based | EEG-Versatileβ’ | 32 | Full scalp coverage |
| Dry electrodes | EEG-Heroβ’ | 11 | Sensorimotor cortex |
### Preprocessing

- Filtering: zero-phase 4th-order Butterworth filter (0.3 Hz cutoff)
- Resampling: 128 Hz
- Segmentation: window of interest [-2, 3] s relative to movement onset
- Rest trials: 81 trials extracted from 5 s inactivity periods
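Assuming the 0.3 Hz cutoff is applied as a high-pass (the usual choice for removing slow drifts), the preprocessing steps above can be sketched with SciPy. `preprocess_trial` is a hypothetical helper for illustration, not the repository's loader (which lives in `braindecode/datautil/`):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, resample_poly

def preprocess_trial(eeg, fs, onset_s, fs_new=128):
    """Filter, resample, and epoch one continuous recording.

    eeg: (n_channels, n_samples) array; onset_s: movement onset in seconds.
    """
    # Zero-phase 4th-order Butterworth high-pass at 0.3 Hz (assumption: high-pass)
    sos = butter(4, 0.3, btype="highpass", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, eeg, axis=-1)
    # Resample to 128 Hz
    resampled = resample_poly(filtered, fs_new, fs, axis=-1)
    # Window of interest [-2, 3] s relative to movement onset
    onset = int(onset_s * fs_new)
    return resampled[:, onset - 2 * fs_new : onset + 3 * fs_new]

# Example: a 10 s gel recording at 256 Hz with onset at 5 s
# yields a (58, 640) epoch, i.e. 5 s at 128 Hz
epoch = preprocess_trial(np.random.randn(58, 2560), fs=256, onset_s=5.0)
```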
## Models

### Vanilla 1D CNN

A compact CNN with 1D temporal convolutions designed to extract EEG features:

- Input: multi-channel EEG time series
- Architecture: 1D convolution → temporal pooling → feature extraction → dense layers
- Purpose: encapsulate traditional EEG feature extraction in a learnable framework
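A minimal PyTorch sketch of such a network; filter counts and kernel sizes here are illustrative assumptions, not the repository's exact configuration in `braindecode/models/vanilla1d.py`:

```python
import torch
import torch.nn as nn

class Vanilla1DCNN(nn.Module):
    """Compact 1D CNN: temporal convolution -> pooling -> dense classifier."""

    def __init__(self, n_channels=58, n_classes=3, n_filters=32):
        super().__init__()
        self.features = nn.Sequential(
            # 1D convolution over time, mixing all EEG channels at once
            nn.Conv1d(n_channels, n_filters, kernel_size=25, padding=12),
            nn.BatchNorm1d(n_filters),
            nn.ELU(),
            # Temporal pooling shrinks the time axis
            nn.AvgPool1d(kernel_size=4),
            nn.Dropout(0.5),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool1d(1),  # collapse the remaining time dimension
            nn.Flatten(),
            nn.Linear(n_filters, n_classes),
        )

    def forward(self, x):  # x: (batch, channels, time)
        return self.classifier(self.features(x))

# A batch of eight 5 s gel epochs at 128 Hz -> three class scores each
logits = Vanilla1DCNN()(torch.randn(8, 58, 640))
```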
### EEGNet

A compact CNN for BCI applications (Lawhern et al., 2018):

- Temporal convolution (band-pass filtering)
- Depthwise convolution (spatial filtering)
- Separable convolution (temporal pattern identification)
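The three blocks map onto PyTorch layers roughly as follows, using the paper's default hyperparameters (F1=8, D=2, F2=16). This is an illustrative re-implementation, not the repository's `eegnet.py`:

```python
import torch
import torch.nn as nn

class EEGNet(nn.Module):
    """Sketch of EEGNet's three blocks (Lawhern et al., 2018)."""

    def __init__(self, n_channels=58, n_samples=640, n_classes=3,
                 F1=8, D=2, F2=16):
        super().__init__()
        self.net = nn.Sequential(
            # Block 1: temporal convolution acts like a learned band-pass filter
            nn.Conv2d(1, F1, (1, 64), padding=(0, 32), bias=False),
            nn.BatchNorm2d(F1),
            # Depthwise convolution across electrodes = learned spatial filter
            nn.Conv2d(F1, F1 * D, (n_channels, 1), groups=F1, bias=False),
            nn.BatchNorm2d(F1 * D),
            nn.ELU(),
            nn.AvgPool2d((1, 4)),
            nn.Dropout(0.5),
            # Block 2: separable convolution = depthwise temporal + pointwise mix
            nn.Conv2d(F1 * D, F1 * D, (1, 16), padding=(0, 8),
                      groups=F1 * D, bias=False),
            nn.Conv2d(F1 * D, F2, 1, bias=False),
            nn.BatchNorm2d(F2),
            nn.ELU(),
            nn.AvgPool2d((1, 8)),
            nn.Dropout(0.5),
            nn.Flatten(),
        )
        with torch.no_grad():  # probe the flattened size for the dense layer
            n_feat = self.net(torch.zeros(1, 1, n_channels, n_samples)).shape[1]
        self.classify = nn.Linear(n_feat, n_classes)

    def forward(self, x):  # x: (batch, channels, time)
        return self.classify(self.net(x.unsqueeze(1)))

logits = EEGNet()(torch.randn(4, 58, 640))
```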
### HTNet

An enhanced version of EEGNet (Peterson et al., 2021):

- Adds a Hilbert transform layer for spectral power features
- Data-driven filter-Hilbert approach
- Projects features to regions of interest
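The Hilbert transform layer is HTNet's distinctive piece: after the learned temporal filters, it converts each filtered signal into its instantaneous amplitude. A differentiable sketch via the FFT-based analytic signal (an illustration of the idea, not the repository's `htnet.py`):

```python
import torch
import torch.nn as nn

class HilbertEnvelope(nn.Module):
    """Differentiable Hilbert-transform layer (sketch of HTNet's key idea).

    Computes the analytic signal via the FFT and returns its magnitude,
    i.e. the instantaneous amplitude (spectral power) of each channel.
    """

    def forward(self, x):  # x: (..., time), real-valued
        n = x.shape[-1]
        Xf = torch.fft.fft(x, dim=-1)
        # Analytic-signal multiplier: keep DC/Nyquist, double positive freqs
        h = torch.zeros(n, dtype=Xf.dtype, device=x.device)
        h[0] = 1
        if n % 2 == 0:
            h[n // 2] = 1
            h[1:n // 2] = 2
        else:
            h[1:(n + 1) // 2] = 2
        analytic = torch.fft.ifft(Xf * h, dim=-1)
        return analytic.abs()  # envelope = |analytic signal|

# For a pure sinusoid the envelope is ~1 away from the edges
t = torch.linspace(0, 1, 512)
env = HilbertEnvelope()(torch.sin(2 * torch.pi * 20 * t))
```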
## Training Strategies

| Strategy | Description | Use Case |
|---|---|---|
| Within-subject | Train and test on same subject | Personalized models |
| Inter-subject | Leave-one-subject-out cross-validation | Generalization across participants |
| Transfer learning | Pre-train on one modality, fine-tune on another | Cross-modality adaptation |
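The inter-subject (leave-one-subject-out) splits can be generated with scikit-learn's `LeaveOneGroupOut`; the toy arrays below stand in for the real trials:

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut

# Toy stand-in for the real data: 6 trials from 3 subjects (2 each)
X = np.random.randn(6, 58, 640)          # (trials, channels, samples)
y = np.array([0, 1, 2, 0, 1, 2])         # palmar / lateral / rest
subjects = np.array([1, 1, 2, 2, 3, 3])  # subject ID per trial

logo = LeaveOneGroupOut()
for train_idx, test_idx in logo.split(X, y, groups=subjects):
    held_out = np.unique(subjects[test_idx])[0]
    # Train on all other subjects, evaluate on the held-out one
    print(f"held-out subject: {held_out}, "
          f"train trials: {len(train_idx)}, test trials: {len(test_idx)}")
```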
### Data Augmentation

On-the-fly frequency band filtering during training:

- Forces the model to learn discriminative patterns across different frequency bands
- Improves model robustness
- Prevents overfitting to specific spectral features
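A sketch of what such on-the-fly augmentation could look like; the particular band set and helper name are assumptions, not the repository's exact implementation:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

# Candidate EEG bands in Hz (assumed set; the repo's choice may differ)
BANDS = [(0.3, 3.0), (4.0, 7.0), (8.0, 13.0), (13.0, 30.0)]

def random_band_filter(epoch, fs=128, rng=None):
    """Band-pass one training epoch (channels x samples) in a random band.

    Applied on the fly, this forces the model to classify from whichever
    band survives, discouraging overfitting to one spectral feature.
    """
    if rng is None:
        rng = np.random.default_rng()
    low, high = BANDS[rng.integers(len(BANDS))]
    sos = butter(4, [low, high], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, epoch, axis=-1)

# Shape is preserved, so the augmented epoch drops straight into training
augmented = random_band_filter(np.random.randn(58, 640),
                               rng=np.random.default_rng(0))
```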
## Results

Table 1: Inter-subject classification accuracy across recording modalities

Key finding: longer signal windows yield better performance.

- Optimal window: T = [0, 1] s after movement onset
- Stride: overlapping windows every 250 ms
- Best performance: ~1 s after movement onset
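The 250 ms-stride windowing can be sketched as follows (`sliding_windows` is a hypothetical helper at the 128 Hz sampling rate used here):

```python
import numpy as np

def sliding_windows(epoch, fs=128, win_s=1.0, stride_s=0.25):
    """Crop overlapping windows from one epoch (channels x samples).

    With fs=128, win_s=1.0 and stride_s=0.25 this yields 1 s crops
    every 250 ms, matching the windowing described above.
    """
    win, stride = int(win_s * fs), int(stride_s * fs)
    starts = range(0, epoch.shape[-1] - win + 1, stride)
    return np.stack([epoch[:, s:s + win] for s in starts])

# A full [-2, 3] s epoch (640 samples) yields 17 one-second crops
windows = sliding_windows(np.random.randn(58, 640))
```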

Important: despite having only 11 channels (vs. 32-58), dry electrode recordings achieved comparable performance:

- Gel: 65.3%
- Water: 62.8%
- Dry: 58.4% (an ~11% relative drop despite 81% fewer channels)
### Transfer Learning

| Pre-training | Target | Accuracy Improvement |
|---|---|---|
| Gel → Water | Water | +5.2% |
| Gel → Dry | Dry | +7.8% |
| Water → Dry | Dry | +6.1% |
Table 2: Transfer learning improves performance when adapting across recording modalities
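One common way to realize this cross-modality adaptation is to swap the channel-dependent input layer for the new montage and freeze the pretrained backbone. The sketch below illustrates that idea on a stand-in model; it is not the repository's exact transfer-learning code:

```python
import torch
import torch.nn as nn

# Minimal stand-in for a model pre-trained on gel data (58 channels);
# the real architectures live in braindecode/models/.
pretrained = nn.Sequential(
    nn.Conv1d(58, 32, kernel_size=25, padding=12),  # input conv tied to 58 channels
    nn.ELU(),
    nn.AdaptiveAvgPool1d(1),
    nn.Flatten(),
    nn.Linear(32, 3),
)

def adapt_to_modality(model, n_new_channels, freeze_backbone=True):
    """Swap the input layer for a new montage; optionally freeze the rest.

    Hypothetical helper illustrating the pre-train / fine-tune idea.
    """
    if freeze_backbone:
        for p in model.parameters():
            p.requires_grad = False  # keep pre-trained weights fixed
    old = model[0]
    # Fresh (trainable) input conv sized for the new electrode count
    model[0] = nn.Conv1d(n_new_channels, old.out_channels,
                         kernel_size=old.kernel_size, padding=old.padding)
    return model

# Gel (58 ch) -> dry (11 ch): only the new input layer is fine-tuned
dry_model = adapt_to_modality(pretrained, n_new_channels=11)
out = dry_model(torch.randn(2, 11, 640))
```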
## Requirements

- Python 3.7+
- PyTorch 1.7+
- NumPy, SciPy, scikit-learn
- MNE (for EEG processing)
## Installation

```bash
# Clone the repository
git clone https://github.com/mariamonzon/EEG-Grasp-DL-Decoding.git
cd EEG-Grasp-DL-Decoding

# Create a virtual environment (optional)
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt
```

## Usage

### Training

```bash
# Train Vanilla 1D CNN on gel electrode data
python train.py --model vanilla1d --modality gel --subject 1

# Train with inter-subject cross-validation
python train.py --model eegnet --modality water --cross_validation inter_subject

# Train with transfer learning
python train.py --model htnet --pretrain_modality gel --target_modality dry
```

### Evaluation

```bash
# Evaluate trained model
python main_bci.py --model_path checkpoints/best_model.pth --modality gel
```

### Data Loading

```python
from braindecode.datasets import load_eeg_data
from braindecode.datautil import preprocess_eeg

# Load and preprocess data
raw_data = load_eeg_data('path/to/dataset')
processed_data = preprocess_eeg(raw_data,
                                low_cutoff=0.3,
                                resample_freq=128,
                                window=[-2, 3])
```

## Project Structure

```
EEG-Grasp-DL-Decoding/
│
├── braindecode/
│   ├── datasets/           # Dataset loading and preprocessing
│   ├── datautil/           # Data utilities and augmentation
│   ├── models/             # Neural network architectures
│   │   ├── vanilla1d.py    # Vanilla 1D CNN
│   │   ├── eegnet.py       # EEGNet implementation
│   │   └── htnet.py        # HTNet implementation
│   ├── training/           # Training loops and strategies
│   ├── samplers/           # Data samplers
│   ├── visualization/      # Plotting and visualization tools
│   ├── classifier.py       # Main classifier wrapper
│   └── util.py             # Utility functions
│
├── main_bci.py             # Main training script
├── train.py                # Training pipeline
├── LICENSE                 # MIT License
└── README.md               # This file
```
## Citation

If you use this code in your research, please cite:

- Schwarz et al. (2018): Decoding natural reach-and-grasp actions from human EEG
- Lawhern et al. (2018): EEGNet: a compact convolutional neural network for EEG-based BCIs
## Acknowledgments

- Dataset provided by BNCI Horizon 2020
- Experimental setup adapted from Schwarz et al., 2020
- Built with PyTorch and MNE-Python

## License

This project is licensed under the MIT License - see the LICENSE file for details.

