The big news this week is the launch of a National Science Foundation-funded study aimed at “developing the NeuroPhone system, the first Brain-Mobile phone Interface (BMI) that enables neural signals from consumer-level wireless electroencephalography (EEG) headsets worn by people as they go about their everyday lives to be interfaced to mobile phones and combined with existing sensor streams on the phone (e.g., accelerometers, gyroscopes, GPS) to enable new forms of interaction, communications and human behavior modeling.”
More precisely, the research, being conducted at Dartmouth College, is intended to accomplish several goals, including developing “new energy-efficient techniques and algorithms for low-cost wireless EEG headsets and mobile phones for robust sensing, processing and duty cycling of neural signals using consumer devices,” inventing “new learning and classifications algorithms for the mobile phone to extract and infer cognitively informative signals from EEG headsets in noisy mobile environments,” and actually deploying “networked NeuroPhone systems with a focus on real-time multi-party neural synchrony and the networking, privacy and sharing of neural signals between networked NeuroPhones.”
I’ve always thought that the big problem with existing realtime social networking systems, such as Facebook and Twitter, is that they require active and deliberate participation on the part of individual human nodes (or “beings”) – i.e., typing out messages on keypads or other input devices – which not only introduces systemic delays incompatible with true realtime communication but also entails the possibility of the subjective distortion of status updates. NeuroPhones promise, by obviating the need for conscious human agency in the processing and transmission of updates, to bring us much closer to fulfilling the true realtime ideal, opening up enormous new opportunities not only in “human behavior modeling” but also in marketing.
Plus, “real-time multi-party neural synchrony” sounds like a lot of fun. I personally can’t wait to switch my NeuroPhone into vibration mode.
This post is an installment in Rough Type’s ongoing series “The Realtime Chronicles,” which began here.
UPDATE: Here is a picture of a prototype of the NeuroPhone:
And here is a paper in which the researchers describe the project. They note, at the end, that “sniffing packets could take on a very new meaning if brain-mobile phone interfaces become widely used. Anyone could simply sniff the packets out of the air and potentially reconstruct the ‘thoughts’ of the user. Spying on a user and detecting something as simple as them thinking yes or no could have profound effects. Thus, securing brain signals over the air is an important challenge.”