Conversation
@teonbrooks have you looked at this by @alexandrebarachant: https://github.com/alexandrebarachant/muse-lsl ? Let's try to finish the refactoring tomorrow morning and then go for the OpenBCI client later in the day ...
mne/realtime/client.py
# Martin Luessi <mluessi@nmr.mgh.harvard.edu>
# Matti Hamalainen <msh@nmr.mgh.harvard.edu>
# Authors: Teon Brooks <teon.brooks@gmail.com>
#          Mainak Jas <mainak@neuro.hut.fi>
I think the diff is really unreadable if we rename the files. I'd prefer keeping the filenames as is for now ...
Force-pushed bb3d809 to cd0e135
Codecov Report

@@           Coverage Diff            @@
##           master    #4110    +/-  ##
==========================================
- Coverage   86.18%   85.95%   -0.24%
==========================================
  Files         354      355       +1
  Lines       63729    63755      +26
  Branches     9709     9709
==========================================
- Hits        54927    54801     -126
- Misses       6127     6272     +145
- Partials     2675     2682       +7

Continue to review the full report at Codecov.
I'll try to participate in this WIP (+1).
Great! Feel free to review it, pull the changes, and try them on your computer.
rodrigohubner left a comment
Two questions:
- To get the whole project, should I pull from Teon's repository?
- How can I contribute commits to this same PR?
# License: BSD (3-clause)

import copy
import threading
I think it is better to work with multiprocessing because of known problems with multithreading in Python (the GIL, for example).

If you have a better solution, I welcome it. Here we are just abstracting away from our existing code and wanted to have a lightweight base class object that would work across our existing clients. I wasn't aware of the problems with multithreading.

I used threads for a similar function and never had any issues. Yes, Python has a problem with the GIL, but that does not mean threads should never be used. What you want to avoid is a thread that never releases the GIL (i.e., one that runs continuously). For example, LSL's pull_sample() or socket.recv will block until you receive data, but the call goes into a C extension or a shared library that releases the GIL while it waits.
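The point about blocking calls can be sketched with a plain socket: `socket.recv` blocks inside a C call that releases the GIL, so a receive-worker thread does not starve the main thread. This is a minimal illustration only; the `_recv_worker` name and the socketpair standing in for an acquisition stream are invented for the sketch, not taken from the PR.

```python
import socket
import threading

def _recv_worker(sock, buffers):
    """Receive loop: sock.recv blocks in C code and releases the GIL,
    so the main thread keeps running while this thread waits for data."""
    while True:
        data = sock.recv(4096)  # blocking call; GIL released while waiting
        if not data:            # empty bytes means the peer closed the stream
            break
        buffers.append(data)

# A connected socket pair stands in for a realtime acquisition stream.
a, b = socket.socketpair()
buffers = []
t = threading.Thread(target=_recv_worker, args=(b, buffers), daemon=True)
t.start()

a.sendall(b"sample-chunk")
a.close()        # closing makes recv return b"" so the worker exits cleanly
t.join(timeout=5)
print(buffers)   # the received chunk(s)
```

The same pattern applies to `pylsl.StreamInlet.pull_sample`: while the worker is parked in the blocking call, other Python threads run freely.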
Thanks @alexandrebarachant . I'd say let's stick to threading for now, get the API right, add an example, so people can pull and test. Then we can smooth out other details. Make it work, then make it nice :)
One option would be to take over and make another PR with changes on top of the ones made here (pull this branch, make changes, make another PR).

@rodrigohubner I can give you write privs to my branch if you would like, let me know. For the LSLClient we are working on, I know that @aj-ptw has been working on making a compatible LSL interface with the npm module he's worked on.

Keep in mind that PRs can be made from any branch to any other branch -- we just usually make them from …

Yeah, I agree, a PR to my branch would work, and would be less hassle.
@teonbrooks how close are we here? I am going to add a TODO at the top of the PR so we don't lose focus.
from mne.io.meas import create_info

def _buffer_recv_worker(client):
it's the same for all clients, no? can we put it in one file and import it?
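The suggestion above (one shared `_buffer_recv_worker` imported by every client) could look roughly like this. The `iter_raw_buffers` / `_push_raw_buffer` method names and the dummy client are hypothetical stand-ins for this sketch, not the PR's actual API.

```python
import threading

def _buffer_recv_worker(client):
    """Generic receive worker usable by any realtime client.

    Assumes the client exposes iter_raw_buffers() (a generator of
    incoming buffers) and _push_raw_buffer(buf) -- hypothetical names.
    """
    try:
        for raw_buffer in client.iter_raw_buffers():
            client._push_raw_buffer(raw_buffer)
    except RuntimeError as err:
        client._recv_thread = None  # worker stopped; allow a restart
        print('Buffer receive thread stopped: %s' % err)

class _DummyClient:
    """Minimal stand-in client used only to exercise the shared worker."""

    def __init__(self, chunks):
        self._chunks = chunks
        self.received = []
        self._recv_thread = None

    def iter_raw_buffers(self):
        yield from self._chunks

    def _push_raw_buffer(self, buf):
        self.received.append(buf)

client = _DummyClient([[1, 2], [3, 4]])
t = threading.Thread(target=_buffer_recv_worker, args=(client,))
t.start()
t.join()
print(client.received)  # the two chunks, in order
```

Because the worker only touches the client through those two methods, each concrete client (RtClient, LSLClient, an OpenBCI client) can import it from one shared module instead of duplicating it.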
pass

def _enter_extra():
    """For system-specific loading and initializing during the enter
system-specific -> client-specific
    """
    inlet = pylsl.StreamInlet(self.client)

## add tmin and tmax to this logic

while True:
    samples, timestamps = inlet.pull_chunk(max_samples=self.buffer_size)
samples, times to conform to MNE nomenclature?
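The renaming suggested above is a one-line change in the acquisition loop. A minimal sketch, with an invented `_FakeInlet` standing in for `pylsl.StreamInlet` (only the `(chunk, timestamps)` return shape of `pull_chunk` is taken from pylsl):

```python
class _FakeInlet:
    """Stand-in for pylsl.StreamInlet, used only to illustrate the naming."""

    def __init__(self, data, sfreq=1000.0):
        self._data = data
        self._dt = 1.0 / sfreq

    def pull_chunk(self, max_samples=1024):
        # Return up to max_samples samples plus one timestamp per sample,
        # mirroring the (chunk, timestamps) shape of pylsl's pull_chunk.
        chunk, self._data = self._data[:max_samples], self._data[max_samples:]
        times = [i * self._dt for i in range(len(chunk))]
        return chunk, times

inlet = _FakeInlet([[0.1], [0.2], [0.3]])
# MNE-style names: `samples` and `times`, matching e.g. the
# (data, times) pair returned by raw.get_data(return_times=True).
samples, times = inlet.pull_chunk(max_samples=2)
print(samples)  # first two samples
```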
    return self

def create_info(self):
    self.buffer_size = buffer_size
    self.verbose = verbose

def connect(self):
@teonbrooks let's try to finish it while the momentum is still strong :)

yes @teonbrooks?

hey guys, agreed. I was flying back to SF yesterday so I didn't see any of this. I will address these comments this weekend :) Also @aj-ptw, is it straightforward to establish an LSL stream with the npm package? Maybe we can chat tomorrow to discuss what it would take to make a simple example.
Hi all, I have questions. I am able to stream EEG data from OpenBCI Cyton V3 hardware into the MATLAB environment using BCILAB via LSL, and I would like to clean the incoming data in real time prior to component analysis. I know BCILAB can perform data cleaning using filters like clean_rawdata, the PREP pipeline, ASR, Infomax, etc. Does anyone have an idea how to perform those processes online?

You can probably use autoreject with the …

@teonbrooks @jasmainak what is the plan here? Any time to work on it?
I'll be seeing @teonbrooks at the end of this month for the CRN coding sprint. @teonbrooks do you think we could earmark a couple of days after the sprint to work on this?

yeah, I think this is something we can knock out while @jasmainak is in town :)

@teonbrooks this is about a year old -- still planning to work on this, or should we close?

let's close this for now. maybe when I get a weekend free...
TODOs