Bermuda is a custom solution for real-time, integrated recording and analysis of EEG, eye-tracking, and body-tracking data. Each of these three elements is being developed by a subgroup of the Bermuda team, and all are based on relatively inexpensive, widely accessible components. By taking full advantage of consumer hardware, Bermuda will offer a complete system capable of real-time analysis at a fraction of the cost of comparable systems developed in the past. Additionally, the communication and integration of these three data streams are being built from the ground up.
The primary near-term goal for Bermuda is a system that researchers can use to investigate novel research questions in areas such as prosthetics and movement disorder research. Ease of adoption and replication are central to the ethos of Bermuda. For this reason, the team places a strong emphasis on communicating ideas and on enabling the whole team to benefit from every individual's work, which supports cross-team idea-sharing and easy onboarding of new members.
Bermuda is currently in the early prototyping stage. For body-tracking, the team has recently achieved real-time tracking of body positions using the PoseNet deep learning model running locally on a Raspberry Pi. Specifically, two simultaneous images are captured from a stereo camera module attached to the Raspberry Pi, and PoseNet estimates the 2D locations of body parts in each image independently. The two estimated 2D poses are then combined into a 3D skeleton using OpenCV, which will be sent to a central computer. An important next step for this group is to evaluate tracking accuracy by comparing predicted values against real-world measurements. Additionally, the first version of the system does not yet meet the required performance standards and will need to be optimized to track rapid movements smoothly. Possible approaches include external processing modules (e.g. the Coral Edge TPU), cloud processing, smaller and lighter machine learning models, or a combination of these.
At the center of Bermuda will be a Python-based graphical user interface (GUI) that streams data wirelessly from all three components using parallel processing. This GUI will display eye-gaze, body position, and EEG data in real time as they are received. For eye-tracking, the team is building on the Pupil Labs open-source eye-tracking platform, although there will be significant alterations to the design of the device and its communication protocols. Last, but certainly not least, the EEG team has been working on interfacing an ADS1299 development board with a Raspberry Pi, which will form the core of the EEG device. This is far from a trivial task and requires solving many low-level electrical engineering problems through the joint effort of many talented team members.
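One common way to structure the kind of parallel streaming the GUI needs is a producer/consumer pattern with Python's `multiprocessing` module: one reader process per device pushes timestamped samples onto a shared queue that the GUI process drains. This is only a minimal sketch of that pattern; the source names and integer payloads are placeholders, not Bermuda's actual data formats or network protocol.

```python
import multiprocessing as mp
import time

def device_reader(name, queue, n_samples):
    """Stand-in for a process that receives samples from one device.

    In a real system this would read from a network socket; here it just
    emits a few numbered samples tagged with a source name and timestamp.
    """
    for i in range(n_samples):
        queue.put((name, time.time(), i))  # (source, timestamp, payload)

def main(samples_per_source=5):
    queue = mp.Queue()
    sources = ["eeg", "eye", "body"]  # placeholder stream names
    readers = [mp.Process(target=device_reader, args=(s, queue, samples_per_source))
               for s in sources]
    for r in readers:
        r.start()

    # The GUI's main loop: pull whichever sample arrives next, regardless
    # of which device produced it, and (in a real app) update the plots.
    received = [queue.get() for _ in range(len(sources) * samples_per_source)]

    for r in readers:
        r.join()
    return received

if __name__ == "__main__":
    samples = main()
```

Because each reader runs in its own process, a slow or blocked device cannot stall the others, and the consumer sees one merged, timestamped stream to display.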