The Annals of Neurotech Pub: What We’ve Got Here Is Failure to Communicate

Neurotech Pub
44 min readJun 21, 2021

Matt Angle:

Welcome to Neurotech Pub, I’m your host Matt Angle. I’m also the CEO of Paradomics, a high data rate brain computer interface company. In this series, I’ll be introducing you to some of my friends and colleagues from academia and industry. We’ll share a few drinks and talk about neuroscience and neurotechnology.

In each episode, I choose a particular topic and I’ll invite guests who are experts in that area. The discussions will be informal. I want to give you a sense of what it would be like to go out for beers with this group of eminent neuro technologists and listen as they talk about their opinions, their experiences and what they think is important in neurotechnology. I’m going to be inviting the people who inspire me personally. And I hope that throughout this podcast, some of that inspiration can come home with you as well.

Today, we’re going to talk about how brain computer interfaces can be used as assistive communication devices used to restore communication in patients with tetraplegia. We talk about the different ways that brain signals can be recorded, decoded, and transformed into speech or text or to directly operate a computer.

Our guests today are Beata Jarosiewicz who is a Senior Researcher at NeuroPace, Vikash Gilja, an Assistant Professor of Neuroscience at UC San Diego, and Sergey Stravisky and Frank Willett, Postdocs at the Neural Prosthetics Translational Lab and Stanford University.

Matt Angle:

All of my guests are either involved with or have been involved with the BrainGate clinical trial program. And you’ll hear a lot about that program during this episode, because in my mind, BrainGate represents the tip of the spear with respect to BCI based assistive communication. We have a really great lineup today. So I’ll jump right to the guests.

Beata Jarosiewicz:

I am currently a Senior Research Scientist at NeuroPace, which is a company in Mountain View that makes brain stimulation devices for people with epilepsy. Let’s see, I’ve been there for about two and a half years. Before that I was at Stanford in Krishna Shenoy’s group with Jamie Henderson working on the BrainGate team. And before that at BrainGate on the east coast with Leigh Hochberg.

My PhD is in neuroscience. I did rat hippocampus cell stuff for my graduate work and then went to Andy Schwartz’s lab at the University of Pittsburgh and did some non-human primate brain computer interface work, taking advantage of the system to look at plasticity and dynamics in neural coding. And then after that went to Andy Schwartz’s lab at MIT, where I looked more at like cellular molecular levels of plasticity and also looked at … Tried to understand mechanisms by which the brain self organizes its higher order cognitive functions. And then after that I went to Brown-

Matt Angle:

Vikash, you’re up next.

Vikash Gilja:

Vikash Gilja, I’m an Associate Professor in the Department of Electrical and Computer Engineering at UC San Diego. I run a translational neural engineering lab.

Sergey Stavisky:

Vikash just got tenure which is why I clapped my hands.

Matt Angle:

Oh, congratulations. I didn’t know that. That’s awesome.

Vikash Gilja:

Yeah. It’s pretty fresh.

Beata Jarosiewicz:

Congratulations. Well deserved.

Matt Angle:

Will you still answer my emails Vikash?

Vikash Gilja:

I have been. I’ve had tenure for a few weeks.

Matt Angle:

Oh, okay.

Vikash Gilja:

So before UC San Diego, I also worked with Krishna Shenoy and Jamie Henderson, Cynthia Chestek and I set up the translational lab after we both finished our PhDs with Krishna.

Matt Angle:

And I understand you worked for a small startup based out of San Francisco for a while.

Vikash Gilja:

Oh yeah. I took a leave from my faculty position for a year and was at Neuralink.

Matt Angle:

Has anyone heard of them? Are they in our space?

Vikash Gilja:

And I also currently consult for you guys.

Matt Angle:

Sergey.

Sergey Stavisky:

I’m Sergey Stavisky, I’m a post-doc in Krishna Chenoy and Jamie Henderson’s group at Stanford’s neuroprosthetics translation lab, which is part of the BrainGate trial which is also where I worked with Beata and Vikash actively work with Frank now. Before that I did my PhD in Krishna’s lab also at Stanford. Before that I was a research engineer at BrainGate, the east coast at Brown University working on the human BCI clinical trial there.

Matt Angle:

That was good. That was good. Frank‒

Frank Willett:

Sure. I’m Frank Willet, I’m a post-doc also working with Sergey in Krishna and Jamie’s laboratory. I did my PhD also as a part of BrainGate, doing human nature cortical work, where I worked on functional electrical stimulation so someone could move his own arm and hand with BCI.

Matt Angle:

Maybe we can jump into it. I wanted to start with Beata. I wanted to ask a little bit one‒ of the things that people are really interested in who don’t know a lot about BCI is they want to know kind of what it’s like and what the training is like. Really sort of a human level. What is it like for the participants to connect to a BCI and then use it to communicate? Can you give us an intro into that maybe.

Beata Jarosiewicz:

Sure. So the training being the decoded calibration stuff you mean, or the training for the [00:06:06].

Matt Angle:

Uma Thurman in Kill Bill is trying to wiggle her left toe, that kind of training, going from having no mobility to suddenly being able to do something.

Beata Jarosiewicz:

Sure. So our goal is generally to try to make it as intuitive and natural for our participants as possible. There’s of course exceptions when we’re trying to optimize control a little bit better and asking them to use particular movements for their visual imagery that will maximally fire the neurons that we happen to be recording from. But in general, we will ask the participant to imagine that they’re moving a mouse on a mouse pad or their finger on a track pad or whatever input device they’re most used to with whatever computers they used to use before.

Beata Jarosiewicz:

And then we record the neural activity that happens naturally when the person is imagining moving up for example, and we map that neural activity to upward movements, same thing for rightward movements and so forth. So by the time we’re done with this decoded calibration step at the beginning, the person really only has to think about which direction they want to move their mouse or their finger in order to move the cursor.

Beata Jarosiewicz:

I’ve heard several anecdotal reports from our participants that eventually they don’t even think about that movement anymore. That all they’re really thinking about is moving the cursor directly on the screen. So it’s almost as though they’ve embodied that cursor or the robotic arm or whatever they are controlling at that moment.

Matt Angle:

would they start out trying to mimic the kind of movement that you’re going for like a cursor, or do you ever have someone imagine swinging a tennis racket or sleeping.

Beata Jarosiewicz:

We’ve had some kind of rogue episodes with some of our participants where we started out by saying imagine doing whatever is most natural to you to control a computer cursor. And then they’ll report to us afterwards that they were imagining dropping a rubber ball for the proper target, something random like that, that doesn’t actually really map neural activity to intended cursor movements very well. So we’ve had to have them start over sometimes.

Vikash Gilja:

I think the most interesting one I’ve heard is imagine where I want to go as a planet with gravitational force. And the cursor has its own gravitational force and I’m trying to get their gravitational waves to align. That was a comment from the participant.

Beata Jarosiewicz:

Excellent.

Sergey Stavisky:

Did it work?

Vikash Gilja:

They described that as inferior control and then we’d get two different algorithm. And the comment was, “I don’t know how to explain how this works, it just works.”

Sergey Stavisky:

It’s a solid nil result.

Matt Angle:

But during process, maybe anyone who wants to jump in here, what are the most useful forms of feedback to participants that are learning, let’s start with just cursor control and how does getting feedback change their learning? After 10 plus years of BrainGate to clinical trials, what have we learned about the feedback that people are getting when they’re first learning to control a cursor?

Frank Willett:

I guess my experience has been that it’s pretty natural and easy for them to pick up and we have demonstrated that even the first time someone uses it, we can get really good control within minutes. So I’m not sure of how important it necessarily is, exactly what types of feedback you’d give them. I mean, we always show the cursor moving but in terms of exactly how you train them up, I think our experience has been it’s pretty natural.

Sergey Stavisky:

On the kind of the human side of it, providing rapid immediate feedback though is really helpful because sort of typically where we recalibrate it we’ll first show let’s say a cursor moving around on its own. And we say pretend you are moving this. Imagine moving your arm, imagine moving your hand. And we call this open loop but there the participant’s not getting any confirmation that their brain is … We’re recording this brain activity.

Sergey Stavisky:

So a lot of times they’ll say, “I’m really kind of … Not worried about if I’m doing it right.” I think the most common question we get is, is it working? Am I doing it right? And so as soon as we close the loop and they can actually see that cursor moving under their brain control, immediately there’s this relief that the participant … Pretty clear report saying, “Great I feel confident now that I can control it. And I can see that it’s going where I want it to go.”

Beata Jarosiewicz:

Yeah. As Sergey was saying, it’s generally a lot better to actually train and test or calibrate and then use the decoder that was calibrated in the same setting as much as possible. There are context differences in neural tuning. And one context that appears to be important, at least for some participants is the open loop versus closed loop settings. So some neurons will actually change the way that they fire to particular movements between this open loop and this closed loop context.

Beata Jarosiewicz:

So, and then also having … as Sergey was saying, having that feedback in real time of their brain activity, actually having an effect on the world then being able to error, correct and stuff also allows the person to be a little bit more engaged in the task. So there’s a little bit more modulation during closed loop.

Vikash Gilja:

That’s a great example of science feeding engineering in this work. Quite a few of these insights, they didn’t just magically come to the field. There’s a long history of work in non-human primate as well as our collective experience working with people that have taken us from earlier stages where it would take us longer to take a new individual and to get the quality control to more recently we can get to decent control very quickly as Frank described. That’s really engagement across animal model, clinical work and just a very long collective experience.

Matt Angle:

I’m curious. So to my understanding, cursor based typing in BCI based cursor based typing is … the fastest is currently 10 words per minute. And that’s from Krishna’s lab. Is that right? If I compare that to swipe based typing on a smartphone that can get 30 to 40 words per minute, what do we think is preventing us from using the same kind of soft keyboard interface to go faster? Is it information limited? Is it model limited? I’m really curious for your take on that.

Frank Willett:

I mean, sure. I’ve done as part of my graduate work, several papers on really trying to model what’s going on during two-dimensional cursor control and what goes into the performance you see, and what’s popped out in my opinion is it’s really us and our limited. Depends on the signal to noise ratio you’re getting from whatever particular where you’re recording from, and that usually limits how quickly you can go.

Frank Willett:

I think the speeds are usually limited by that because you turn the speed up to be as fast as you can go while still being able to click on things and whatever that speed is really comes down to the signals noise you’re getting, how good your arrays are.

Matt Angle:

Why are we clicking though? Because the fastest soft keyboard interfaces don’t click, they don’t have a dwell they just swipe. Is there a reason why BCI based interaction is click-based?

Sergey Stavisky:

I think there’s no concrete reason. I think something you’re kind of getting to those is the distinction between where academic research excels and where maybe industry excels. So within BrainGate, we typically focus on sort of getting as much of the raw neuro horsepower out. So maximum this for a second in terms of point and click, and then things like optimizing the keyboard, we’ve done some of that and papers from our group have looked at rearranging where the keys are or figuring out what the right dwell time versus speed or movement time are, but then sort of that next step of well instead of a point and click keyboard, what about a swipe keyboard or something where it was constantly … I forget the name of it.

Sergey Stavisky:

There’s one where the screen is moving up and down, you’re like falling through it. There are clearly improvements to be had there, and that’s just not necessarily something that we have the human bandwidth to work on in parallel multiple different types of user interfaces and things like that. That’s sort of an end. You can imagine if and when, or when this gets closer to deployment and commercialization, you could have whole teams of UI engineers working on that and getting that last couple of percent of performance out of it.

Beata Jarosiewicz:

To add onto that, there’s an important advantage of the point and click type interface, which is that if you can do point and click, you can use basically any computer application that able-bodied people can use once you add, click and drag and other subtle things like that. That’s the original motivation behind using just a keyboard, whether it’s cordy or in a more optimized arrangement as Vikash has worked on.

Beata Jarosiewicz:

But Frank is being … What’s the word? Reticent about this for some reason, but he has done this amazing work recently showing that if you use something more like a swipe strategy where you have a spatial temporal kind of pattern over time, you can distinguish those patterns better from one another than if you’re just making straight line movements. So you can take advantage of that kind of thing very much like swipe or graffiti and those old palm pilots or handsprings to better distinguish certain letters from each other. And then you can do a lot better. And I’ll let Frank elaborate on that more.

Frank Willett:

Sure, you want me to talk about the handwriting stuff?

Matt Angle:

Yeah, I mean I had a question about that. One obvious question is that handwriting is slower than typing for most able-bodied people, but your recent result that you can decode imagined handwritten characters is actually setting the world record for BCI based communication. And it’d be really interesting to hear your take on why that is.

Frank Willett:

I think it’s kind of what I was trying to say before, in that we’re really limited by the signal to noise we’re getting. We only have 100 or 200 micro electronic channels and we get the handful of neurons that we get. And in the case where you have a lot of noise from trial to trial and you’re trying to do typing, where if you’re trying to go to two keys that are right next to each other let’s say then the pattern of neural activity needed to go to one key versus the other is very similar. So it’s hard to tell apart if there’s a lot of noise.

Matt Angle:

I see.

Frank Willett:

So the idea with the handwriting was if we can make every character very distinct that has its own unique pattern of activity associated with is very different, then maybe we can beat the limit a little bit and maximize our performance by spreading apart these classes. So even if there is a lot of noise, we can still go faster and just as accurate.

Matt Angle:

So why anchor it on handwriting? Why not take a more data science-y approach where you just say, “I want to take unsupervised learning approach to the neural state space and find the sort of axes of highest variation, and then just forget handwriting and mouse and sort of cursor control. I want to directly map this sort of abstract space onto my effectors.” What would speak against an approach like that?

Frank Willett:

I guess a couple of things. So one is we did look into optimizing a set of different trajectories to make them more distinguishable that just say the Latin alphabet. So it’s definitely … It seems like in principle, you could do this and I agree you can move beyond the hand and map all the different body parts and we showed that you can find neural activity for all the different parts of the body and the area that we’re in.

Frank Willett:

So I think that definitely makes sense. Obviously, one big drawback of those types of ways of doing things is that the participant would have to learn to be able to do it really rapidly. So when you’re learning to make a rapid sequence of behaviors that you’ve never done before, and it’s a novel mapping between characters and behaviors you’re not used to, my intuition was that would be quite hard.

Vikash Gilja:

Co-optimization is a key theme. There’s training of the individual and there’s design and training of the system that interprets the activity that we’re getting from the participant. I think as we’ve all transition, many of us have transitioned from non-human primate to working with people. And as you start working with people, that second consideration comes more into like because you can-

Matt Angle:

It’s interesting though because some of the anecdotes that you and Beata highlighted earlier suggest that there might be a way … There are a lot of different ways to learn that neural state space, some people start off with handwriting or thinking about reaching out in different directions, but someone else thought about planets, gravitational pull, another person thought about dropping a ball. Do you think that … How explored do you think that is?

Matt Angle:

That sort of training aspect? Is there a lot of unpublished law from the BrainGate trials or is this a place where we just need to do more work? I’d be really interested in your takes on that.

Sergey Stavisky:

I think one important aspect of this is where in the brain we’re recorded. So we are in the hand knob of motor cortex, and this is a part of the brain that is most active for people trying to move their arms and hands. So what Vikash is alluding to that, think of planets and gravitational laws didn’t really work. One can speculate that if we were in somewhere else in the brain, which has more activity for abstract concepts, who knows, but if you’re in a motor area, which is concerned with moving the body, really the things that seem to drive this area is first trying to move their body.

Sergey Stavisky:

So that’s why handwriting or typing or try and move your arm a little bit with mouse works and it makes sense. It’s easy to tell the person, “Hey, try to do these movements.” You know what that feels like, getting at ease and then we see a lot of activity for it. To try to sort of more abstractly explore the full neural space like what you were hinting at Matt, I think certainly there’s a line of research to be had there, but it might not work as well in motor. That’s been our experience so far.

Vikash Gilja:

Yeah, Sergey that story is pretty complex as you’ve shown firsthand.

Matt Angle:

Actually I’d like either Vikash or Beata to explain what the homonculus is. And then I would like Sergey and Frank to explain why that is wrong.

Sergey Stavisky:

You guys were on the pro homonculus team.

Beata Jarosiewicz:

Go ahead Vikash.

Vikash Gilja:

I was just going to nominate you. I mean, there’s a long history in understanding sensory motor cortex. So primary sensory … Sorry, primary somatosensory cortex and primary motor cortex. And this goes back to kind of the early days of a neurosurgeon researcher known as Penrose where he had all of these awake brain surgery experiments where he was stimulating brain as a part of a functional mapping exercise and identified specific regions that seem to be quite repeatable across patients that corresponded to different parts of the body.

Vikash Gilja:

If you map from the midline down towards the temporal lobe, the motor cortex, in most people you’ll see this very repeatable mapping. So if you stimulate those areas in motor cortex, you’re going to cause muscle twitches. Close to the midline and it would be down in the feet, as you go more laterally you’ll get to that hand knob area. And as you continue, you’re going to see facial movements and the articulators.

Vikash Gilja:

And there’s a similar story on somatosensory cortex where if you stimulate folks will self-report feeling some sense of touch in those areas as well. So that was the first order approximation. That’s pretty dominant, still taught and-

Matt Angle:

And Vikash, we’re not a high budget operation. If we were a high budget operation, we’d put up a picture of the [monkey lists 00:23:01] right now.

Vikash Gilja:

So if we look in motor cortex, you would see this cartoon like man or woman whose hands and feet the more … I’d say the more dexterous areas of the body would have larger representations in motor cortex. So imagine that figurine mapped spatially to the motor cortex, similarly on this somatosensory cortex we’d see that sort of mapping where areas of the body that are more sensitive like the hands, the feet, areas that I won’t mention are going to be larger in that map. So that’s the rough visual. I’m really hoping Matt you can embed an image.

Matt Angle:

I lied. I just wanted to hear you describe it Vikash.

Sergey Stavisky:

We could have you lying on a yoga ball and then sort of illustrating with your body the homunculus.

Matt Angle:

So can you explain to us Frank and Sergey, why that model is too simplistic and how we should think about the maps especially in the motor area.

Frank Willett:

Well, we showed recently that when recording from hand knob area, so our arrays are in hand knob area, unfortunately, we can’t sample the entire motor strip. But looking at hand knob, we show that when you ask these participants to make movements throughout their body, either right arm left arm, right leg left leg, face and Sergey’s looked a lot at speech, all of these different behaviors produce robust and repeatable modulation in that area. So that suggests that the simplistic idea that hand knob only deals with control of the hand is not true.

Matt Angle:

From tracing studies, is it fair to say that these areas don’t project to control … Are their outputs controlling muscles associated with those non-canonical areas or are they representing something else? Is there any evidence that the output of those areas controls?

Frank Willett:

I guess if you were going to say is there evidence that the projections are arm focus, there’s a lot of evidence for that. And that’s part of why we thought that it was … we wouldn’t see the entire body being represented-

Matt Angle:

Is there any evidence that the hand knob kind of projects directly to areas that control speech for instance.

Sergey Stavisky:

To my knowledge, there is no evidence of that. In fact, I mean most of the cortical projections go to the lower areas and into the spine, sort of the most … Well, there’s no direct, direct connection to muscles. The most direct connections are from hand knob to the distal digits, so the fingers basically.

Sergey Stavisky:

And that’s something that’s often primates, humans, which may explain why we have very precise hand control. I think we could also look at the lesion studies where you have people who’ve had strokes or injuries damage their hand knob that’s traditionally not associated with weakness or paralysis of other body parts besides their arm. So kind of at this course approximation, it’s not that the homunculus is wrong. I think it’s a matter of what’s the resolution you care about.

Sergey Stavisky:

So with Penfield studies, he was reporting and sort of more moderate versions of it. What’s the first thing that twitches or that a person feels if you increase how much electrical current you’re applying, and there’ve been other work by for example, [Caziana 00:26:28] where if you apply more current for longer, you see more complex movements. And presumably if you went to higher currents, maybe you would feel or move wider areas of the bodies. Similarly-

Matt Angle:

I guess I’m thinking about this Sergey in context of some of the work that you’ve done looking at what you call output null activity and output positive activity. Do you imagine that … Let’s take for instance, the area that normally represents the hand has activity that correlates with speech. Do you imagine that that’s output null activity, or do you imagine that’s output positive activity? I think that that’s very useful and maybe you could also explain for people what the distinction is.

Sergey Stavisky:

Sure. So this idea of this output positive output null neural dimensions is that if we have a bunch of neurons and we have sort of fewer readouts than there are neurons, let’s say we have 100 neurons, but we’re controlling a 2D] simplification. Really, we have millions of neurons and hundreds of muscles, but the point is you have more neurons than there are readouts.

Sergey Stavisky:

So there are different patterns of activity amongst all these neurons that as far as a downstream observer, for example, the spinal cord or another part of cortex. As far as they’re concerned, there’s a range of different patterns into a cortex, that to them would look the same. We think this is a really important principle of how the brain works, because it lets you have internal computations.

Sergey Stavisky:

So to use a more computer analogy, you can have internal memory and things that are being processed locally but that are being transmitted, let’s say over a network to a printer or over the internet somewhere else. So to your original question of do I think this speech activity is output null, probably yes. I don’t think this is going out towards the spine to make movements of the hands. And why is that activity there? That’s actually a really a good question that we don’t have an answer to yet.

Sergey Stavisky:

One could speculate that it has something to do with coordination signals. So movements typically are not made in isolation. It’s pretty rare that you would just wiggle a finger or just move your mouth. When we sulk, when we talk and move our hands, as I’m doing now, I don’t know if you could see it, and most movements of the body we have to coordinate everything.

Sergey Stavisky:

So for example, the first thing that moves when you actually move your arm is your core and your butt cheeks, because you need to stabilize yourself before you put your arm forward. It may not be super obvious but everything’s connected. So it would make sense that even a part of the brain, that’s mostly concerned with the arm and the hand still needs to be aware of and get copy signals of what’s going on with other types of movements. Maybe that’s why we see the legs and the other arm and speech and the face and the neck.

Matt Angle:

Sergey, so obviously these studies are done with individuals with paralysis. Does that have any consequence in our ability to interpret the results? Any thoughts there?

Sergey Stavisky:

It’s definitely a big asterisk. So if we had only seen that now have speech and face activity in this part of the brain, then I would be more concerned that this is really related to personal injury because I would think, “Well, we know, or at least we think there’s lots of plasticity. We know this person can still talk and move their head, but they’re not moving their arm. So maybe cortex says we mapped. And all of a sudden this isn’t the arm cortex, it’s the base cortex.

Sergey Stavisky:

But there are a couple of things that argue against that. First, we still see a lot of arm and hard activity and that’s really one of the pivotal enabling things from BrainGate and other studies like it is that initially when people started doing this, they worried, are we going to go in, put arrays there and find that if this person’s been paralyzed for 10 years and they try to move their arm, there’s no activity in this part of the brain.

Sergey Stavisky:

And that would be a really major problem for restoring function, but that’s not the case. And Frank’s shown that the arm related activity is the dominant thing in this era, even a decade after injury. And then the second point is if it’s remapping, why would we see foot activity here? It’s not that the person’s moving their foot, their is foot paralyzed as well. So with these two things, at least I would say this is probably not due to remapping. But whether it’s having some effects certainly possible. And that’s going to be a hard question to answer because we don’t put arrays into healthy people.

Frank Willett:

So I just want to bring up a third point that I think also speaks to this maybe not being remapping, which is also interesting is that the structure we found was quite interesting between how all the different arms and legs are represented. So when you look at how the left arm and the right arm are related and how let’s say the right leg and the right arm are related, you see that there’s actually correlated, structure across all of these limbs. So let’s say … if you’re grasping with your fingers or grasping with your toes, but neural activities in this area is similar across those contexts. So it doesn’t seem like there’s just separate arbitrary commands leaking in, but rather that there’s some structured representation of the whole body in this area. So I would be surprised if that came about just because of remapping.

Sergey Stavisky:

Then I would also add that once we’ve seen these results, knowing what we know now, when we go back and look at old … or not even old, but recent FMRI studies and even some ECOG studies, if you squint, you can see a little bright voxels in an FRI study or maybe one electrode in an ECOG study that was over this area and seemed to modulate and in some previous papers, it just wasn’t in the analysis region of interest. So they ignored it, or it’s a weak blip compared to the bigger books for speech. So, there’s certainly evidence in people who aren’t paralyzed that there may be activity here. There’s also some behavioral evidence where there’s interactions between the speech … like the hand and speech systems. There’s some transcranial magnetic stimulation studies where if you sort of perturb in otherwise healthy people, either the speech cortex or the hand cortex, you’ll see some deficits in the other type of movement. So there’s quite a bit of evidence that it’s not just due to remapping.

Matt Angle:

I wanted to ask you a little bit about neural activity that’s not related to output. You probably don’t have this number off hand, but if you had to sort of make a guess, what percentage of the neural state space do you think is devoted to output and driving versus internal processing?

Frank Willett:

5% output, 95% in turn. At least.

Matt Angle:

Frank, what would yours be?

Frank Willett:

Depends on the area, but if you’re talking about M1PMD, I would say at least 50% of is output, but I don’t know, it’s hard to say.

Vikash Gilja:

Yeah. These are really tough questions … As a systems neuroscientists, we tend to focus most heavily on the input and the output, because we can control stimuli, we can observe behavior, when it comes to those internal states, perception, decision-making, I mean, there’s great work in those areas. It’s just trying to quantify how much information there is. Now we’re asking about these internal states that we have to make guesses about.

Matt Angle:

Is there some data on that? For instance, if you’re looking at the state space, there’s some manifold of activity that’s related to the output. And can you based on that make a more quantitative-

Sergey Stavisky:

Well, for example … Yeah. I mean, one interesting observation that Matt Coffman from Krishna’s lab made is that when you’re trying to start to make a movement, most of the neuro variants, at least what we could record is just related to the initiation. So it’s the timing, it’s basically going from not moving to moving, but it’s not about the details of exactly where you’re moving, which muscles you’re using. So basically he found that cross monkeys making arm reaches, no matter what reach they’re making, this biggest condition and variant signal look the same. And we’ve now seen similar data in our human participants when they try to speak. So I think that’s one point of evidence that much of this is sort of this internal dynamo crankshaft, whatever analogy you want to use, but it’s something to get the dynamics to change, but not directly related to other movements strategy made.

Matt Angle:

How should engineers respond to the idea that this motor cortex isn’t just a driver, but is actually a processor?

Vikash Gilja:

I would like to maybe get us started thinking from two concrete starting points. One is both are fully on the readout side, where were we reading neural activity. We started off this conversation talking about closed loop, cursor control, where we’re moment by moment identifying the intended movement of a cursor. We were introduced the Frank’s work, which is looking at a more structured movement. And so if we can compare and contrast those two, when we’re looking at this continuous, closed loop control, we’re dependent on a fast feedback cycle. So, Frank was talking about us being S&R limited, signal to noise limited. And a lot of that has to do with the users issuing an intent. They don’t know if their intent got interpreted correctly. They get visual feedback to see if that was correct.

Vikash Gilja:

And so, that limits, that update rate. Vision is slow, it takes some time. If we look at the handwriting work that Frank has done. Frank you’re asking people to attempt, I assume the instruction is attempt these more forward made movements over time. And so, they’re doing that open loop. They’re writing that character and open loop, and they get feedback later, but because the movement is so structured, you’re able to beat down the noise and get a more reliable output. And so if we look at those and compare and contrast like this, the second piece of work is taking into account a little bit more knowledge about the temporal dynamics at play. And if we think about motor cortex, not just as this puppet string controller, but as a brain area that is coordinating movement and we dissect movement more specifically, there may be engineering opportunities there, right? If it’s not just a yoked controller.

Sergey Stavisky:

Vikash. This reminds me a great anecdote, which actually I think comes from your work. Some of the earlier cursor BCIs have quite a long latency. I think they use 100 millisecond bins and maybe the system wasn’t coded up to be quite as real-time and fast. And so there was this phenomenon, which was believed to be a scientific one, which within BrainGrade we had called the forcefield effect. And that was that … when the user brought the cursor close to the target, it’s sometimes seems like it would orbit around the target. It couldn’t quite get it ontothe target. And there was some thought … well, was this some neural effects? Maybe the brain representation is changing because they’re so close. Maybe it’s switching from like a ballistic movement into a precision mode and our decoders good at that.

Sergey Stavisky:

And then Vikash when you and Paul built a much faster system with your 2012 work and got that latency down to more like 20, 50 milliseconds, it turned out … the problem went away. This forcefield went away and the users could just click on the target right away. And it turns out it was just the latency. It was like if you’re using a mouse that has a 200 millisecond delay, you’re often going to overshoot your target and you’re going to correct, but then that’s going to be delayed and you’re going to overshoot again and you’re going to end up circling around the drain. And really it’s just … from not thinking about engineering perspective, yeah it’s feedback controller. It’s not just a machine learning reconstruction problem.

Beata Jarosiewicz:

Mark Churchland had put it a really interesting way that always stuck with me. He was also in Krishna Shoney’s lab for a while before starting his own lab. But he found that a really big component of the neural dynamics seems to be common across different movements, especially straight line movements or just simple curved movements. And it’s almost as though the … Most of the neural dynamics can be explained by visualizing a kind of record going around in circles. So a lot of it is going around in circles and then all of the information is just like the little tiny groups … the little vibrations along the grooves that are actually playing the music. So a huge component of it is common across different movements. And very tiny part of is actually specifying the information on top of that, that sort of overall dynamics.

Sergey Stavisky:

And our ability to detect that is so dependent on our tools. I mean, Matt, for you and Paradromics, this is very relevant because I think a prediction I have in the next five, 10 years is we’re going to see way more specific information in those groups that maybe we just couldn’t detect as well before. Again, going back to the Vikash is saying that the homunculus over here just using lesions or a fickle stimulation or FMRI you see one thing. And when you go in with 100 electrodes, maybe We’ll see another thing. And if we go into with 1,000 electrodes, maybe we’ll see a lot more rich information.

Beata Jarosiewicz:

But the engineering implications of this, I think are not as horrible as they might seem at first because fortunately a lot of the machine learning methods that we use to do the decoding on these neural signals take into account this common variance across neural signals and figure out which individual neurons have information about which particular movements or whatever that we’re trying to decode and they’re able to just extract that information on their own without us having to worry about all these common dynamics.

Frank Willett:

Exactly. I was going to say that I think it really comes down to the definition of what you mean by output related. I think if you’re talking about output related just being things that are linearly related to what you want to decode like linearly related to velocity then yeah. Maybe there’s a lot of things that aren’t like that. But even if you take just the time in signal we’ve been talking about, for example, it’s definitely output in the sense that it’s related to the timing of the output. So while I think almost all the signals that we look at in motor cortex are related in some way to the motor behavior and may be non linear though. So yeah. That’s just an engineering challenge event of how to incorporate that and a lot of new machine learning type methods tend to be good at that.

Vikash Gilja:

There’s also the question of like, what can be co-opted. It’s a long history of studies done by a vet at U-Dub where, in training single neurons to control … He’s training animals to control single neurons and sensory areas for example, to control cursors. And there are examples, where we can take primary auditory cortex and have individuals imagine, things that they’re hearing and they can control outputs there. I think the way we leverage neural activity as we engineer these systems, there’s flexibility there as well.

Matt Angle:

So we’ve talked today a lot about text-based interfaces that are driven by motor cortex. But when people think about BCI for communication, a lot of people will have seen the popular headlines about speech decoding, either from Eddie Chang’s group at UCSF or from Robert Night’s group at UC Berkeley. Could you maybe help someone understand who’s on the periphery of BCI, the difference between providing someone a text-based communication, versus speech decoding from a functional level practically, how to compare them and also what brain areas are being used?

Sergey Stavisky:

Sure. I can take a stab at that. I mean, I think it’s really more of a continuum, so first of all, yeah what is the type of actual movement that you’re trying to decode? So if you’re trying to decode our movements or hand movements that lends itself to texts through point and clicking, or typing or handwriting. If you’re trying to decode language or speech there, there’s a really broad range of things. So it’s not really either/or. I mean, on the closer to what we do, there’s motor decoding and speech the same way that a person who’s paralyzed could try to their arm and we could decode firing rates that are saying, “Oh, trying to move here. These muscles are trying to move their arm forward with their arm left.”

Sergey Stavisky:

We can record the neural correlates of, “Oh, they’re trying to open their lips. They’re trying to activate their larynx and voice something. They’re trying to move their tongue.” And so there’s been a lot of work by Eddie Chang and others doing that mostly with electric corticography, but then kind of what Vikash is alluding there’s … also, you could have someone imagine hearing words or maybe an internal monologue, and there’s been work showing that you can construct what people are hearing from auditory cortex and more higher level language areas and some more preliminary work, but still work showing that if people then imagine hearing or speech, you can still to some degree decode that from the same areas.

Sergey Stavisky:

So there’s this continuum between motor decoding, between auditory decoding, there’s some efforts towards more of an abstract sort of linguistics decoding, which as you’ve said is everywhere in the brain, not everywhere, but almost everywhere. It seems that way.

Matt Angle:

And practically for patients let’s say, we’ve given a pretty clear picture of what text-based interfaces can do, which is 10 to 20 words per minute right now, how could we compare what’s going on right now with … let’s first start with motor decoding of intended speech? Where is that?

Vikash Gilja:

Yeah, so I think that comparison is a little bit apples and oranges. And I want to highlight that first. So the type of work that we’ve been mostly focused on is using Utah arrays one or more Utah arrays, which give us access to hundreds of neurons and we’re recording their action potentials versus I believe the work that you’re referring to, primarily you’re talking about work from Eddie Chang’s group, where they’re using electrocorticography, surface grids that are recording aggregate activity for many neurons per electorate, but over larger spatial areas.

Vikash Gilja:

And so in those examples in theory, if we go back to the homonculus, that first order of approximation, with those larger grids, they have access to more of cortex. But a coarser view of that cortex not as refined, but as a consequence, they have access to more muscle groups, right. And speech articulation, like our ability to generate speeches is very dextrous that has a very large facial representation on the brain. So you may need that coverage to do a high fidelity decoding, Sergey. You might’ve seen, otherwise can you tell us‒

Sergey Stavisky:

Believe it or not. Yeah.

Vikash Gilja:

Yeah. That’s what the homonculus would suggest.

Matt Angle:

Okay. So let’s say, what do we know right now based on ECOG recordings that exist, is it possible to decode speech and that’s probably caveated by how large is the dictionary of words that you’re trying to interpret. So how many words can you have … Maybe someone can explain the difference between sort of open set and closed set in these kinds of trials.

Vikash Gilja:

Yeah. So, if you look at most of the work happening for ECOG language decoding or vocalization decoding, they’re either outputting specific set of phonemes. So the English language is composed of phonemes. There’s a limited number of phonemes. So that limited set you can decode one for one, or there’s limited repertoire of words or sentences, so larger structures. We could do a direct comparison, I believe Frank, looking to you here, but I think we could try to do a direct comparison as engineers to your handwriting work. There are a lot of similarities. So if we look at what many of these ECOG speech decoders are doing, they’re looking at structured muscle movements. So if we’re looking at sensory motor cortex and the speech particular areas, there’s coordinated movement that we expect. So there’s coordinated activity that we’d expect similar to the characters that you’re having-

Matt Angle:

How many phonemes are there Vikash?

Vikash Gilja:

So it depends on the language.

Matt Angle:

I’m sorry yeah. English was biased.

Sergey Stavisky:

39 phonemes by one common definition is enough to get most of American English. I mean, there’s more levels of nuance you can take into, but with 39 you can do a lot.

Matt Angle:

Okay. So let’s think trying to make us apples to apples comparison as we can, let’s say that there are about 40 phonemes. Frank, you had 20, some written characters that were distinguishable. Is that right?

Frank Willett:

Yeah. I had 31. So similar.

Matt Angle:

Okay. Okay. And so they’re, they’re quite a similar number. Now, if we were to plot those out and in your favorite law dimensional representation and look at them on a board, I seem to remember Frank from your paper that the handwritten characters are pretty well separated. You have certain things like B and D maybe look quite similar to one another, but it was pretty well separated. If I take an ECOG recording and put the phonemes up on the whiteboard, I don’t know, TSNE. Are they as well separated as the handwritten characters?

Frank Willett:

I don’t think so.

Vikash Gilja:

Yeah.There is a lot of error in decoding single phonemes with an ECOG grid. Here, we should be specific, these are clinical ECOG grids, either pediatric or adult. So the electrodes are spaced about half centimeter to a centimeter apart over large areas. If we compare that to the Utah array, right, the size of the Utah array, where a hundred electrodes live is about the size of one of those electrodes.

Sergey Stavisky:

Is it?

Vikash Gilja:

That’s about right.

Matt Angle:

And I’m sorry for not knowing this, but I think in Krishna’s lab you did some … Did you do phoneme level decoding when you looked at speech representations in hand knob, or how did you look at that?

Sergey Stavisky:

Yeah. In our published work, we were decoding amongst 10 words or syllables. And then in our more recent preprint on Bio Archive, we looked at all 39 phonemes.

Matt Angle:

And how separated are they compared to Frank’s handwritten characters?

Sergey Stavisky:

Far worse. So we’re getting about 33% accuracy amongst these 39 phonemes. Whereas I think Frank was getting 95 plus percent.

Matt Angle:

Is it possible for us to make a comparison between Utah array recordings and hand knob and ECOG recordings over large areas of brain? Do we have a sense of, are they about the same ? Is one a little bit better than the other? From an engineering perspective that’s very interesting to people.

Sergey Stavisky:

Comparison on the what axis?

Matt Angle:

Let’s say… Yeah. You could say bit rate, you could say like phonemes, given sort of rate of arrival, their sort of accuracy.

Sergey Stavisky:

I think for phonemes because we’ve only put the HR arrays in the hand area it’s comparable to ECOG is what we’ve shown. But that’s from a very sub-optimal recordings.

Matt Angle:

Yeah, but that’s very … But that in and of itself is pretty interesting that you put a Utah array into the basically wrong area. And it’s just as good as using an ECOG array over the entire hemisphere.

Beata Jarosiewicz:

So maybe the right solution is taking a bunch of Utah probes and putting them all in the right area or even better electrode arrays.

Matt Angle:

Yeah. And what if we talk about … if we’re trying to think about now comparison to auditory representation of speech?

Vikash Gilja:

ECOG studies are similar performance to the readout that you get from sensory motor cortex during production, they are on par.

Sergey Stavisky:

One Utah array study from Nurmikko’s group, where they put Utah arrays in non-human primate auditory cortex, and they could reconstruct what thing was hearing quite well. So that’s quite exciting. But again, that’s a sensory test, wasn’t imagining speech. It was just literally listening to recordings.

Matt Angle:

Can you reconstruct intelligible human speech just based on the auditory decoding? Do they use tunnel stimuli?

Sergey Stavisky:

From the ECOG work with people actually speaking, there’s work by Michael Paula and Eddie Chang’s lab. You can reconstruct speech, I would say it’s borderline intelligible. Sometimes. You would not be able to hold a conversation with an open book yet, but certainly it’s some chance and there are some words that actually decodable.

Matt Angle:

Interesting.

Sergey Stavisky:

We’ve also done some similar work with Utah arrays and it’s again on par with ECOG.

Vikash Gilja:

Okay. I was going to also add to that. Even though those outputs are imperfect, both for auditory cortex and for sensory motor cortex, both input and output, if you limit the corpus, if you limit the number of sentences, then those systems can be quite intelligible. They can provide quite good text outputs, and Eddie’s group has shown that. So again, the caveat it’s not generalized language decoding, but once you get to a certain number of possible hindrances, you can start to imagine the system as being useful.

Matt Angle:

I guess that’s the classic way that assistive communication devices are designed now where, if you have 10 words that you want to say quite quickly, you can communicate at a very high words per minute, but of course you sacrifice complexity in that case.

Sergey Stavisky:

I think that’s where the speech BCI field is right now. There’s the Moses et al., 2019 study where it’s real time. So it’s actually happening fast and there’s a limited set. I forget the exact number, but maybe a dozen words or sentences. And then there’s the making it all decoding study where it’s sort of an end-to-end text generation. But there is limited to 50 different sentences, about 250 different words. So depending on what the use case is, yeah that could be really useful if it was reliable, worked all the time.

Vikash Gilja:

Yeah. And kind of alluding back to Sergey’s comment about UI design, hopefully this gets some folks on the industry side thinking creatively. There’s all of these different control options and input output options, that academic labs are developing and demonstrating.

Matt Angle:

I’m curious, and I want to ask each of you. Let’s pose it that the technology exists to have 10,000 neurons, readability from 10,000 neurons in any brain area of your choice, what’s the assistive communication device that you would build in that case? What’s the brain area you’d want. What’s the modality that you would choose for communication.

Vikash Gilja:

Read out only, or read write?

Matt Angle:

Let’s say to read only.

Sergey Stavisky:

I’d start by asking the end user, it’s going to depend what they want and what you need. I think people with different types of injuries are going to ask for different things. I think people who can’t talk or communicate really at all, communication is going to be the top need. Someone who can perhaps talk, but it’s paralyzed from the neck down probably wants reach and grasp and things like that. So there is not going to be a single application.

Matt Angle:

Let’s say for communication for someone who can’t speak can’t type very easily.

Vikash Gilja:

I think it’s a Utah array style resolution from what we know applied in the areas where ECOG language decoding has worked well. So I don’t want to put those electrodes all up and down the motor strip, I think all of us here would we have a pretty clear idea of what we’ve wanted to do with it.

Sergey Stavisky:

I’d be right there with you Vikash. I would be even more confident in basically doing what Frank’s done with handwriting. I think given that his system almost is good enough or perhaps is good enough as is with 200 electrodes, Matt or someone else, if you gave me 10,000 and gave Frank 10,000, I would be very shocked if it did not allow someone to type quite fast.

Matt Angle:

When you say Frank system, do you mean the idea of decoding complex trajectories through a neural state space? Or do you mean handwriting?

Sergey Stavisky:

I mean you’re decoding handwriting or something like it.

Matt Angle:

Because I noted that Frank you have something like 66 characters per minute, and the average person’s handwriting speed is 68 characters per minute. So someone might argue that you’ve basically already nailed it with respect to handwriting, and that may be 10,000, you could do 80 characters per minute.

Frank Willett:

Yeah, I don’t know. I mean yeah, the number was 90 characters a minute, but …

Matt Angle:

well, I’m sorry. I apologize.

Frank Willett:

It’s a hard trade off. I think if we were to just do the kinds of things that we’ve already proven out with more electrodes, I think they would just be an order of magnitude more robust and more accurate and maybe they would be really clinically useful in that sense. I think if you’re talking about more unsolved difficult problems like speech determining where we haven’t … how to prove a principle where you can do open language, high accuracy decoding, I think it’s a little more uncertain. And I think speech in particular, since it’s so fast, I don’t think we know what the S&R we need to get really high accuracy on something that’s 10 times as fast as handwriting. I would definitely be very curious to see, I think that’s definitely where the field is going.

Matt Angle:

Why haven’t we seen any BCI based stenotype keyboards? So stenographers can type I think 200 words per minute, sometimes faster. It’s very hard to learn though.

Frank Willett:

I think you just answered it. It’s the rehab aspect, that’s my intuition which is … Brain control does require some training and add on top of that a complex UI. I mean, it may be possible but it would take a lot of dedication and time to get that operational.

Sergey Stavisky:

I think it’s worth also describing for the listeners what these clinical trials or clinic day to day. For example, the current systems are wired. They need a technician present physically or the participants basically literally plug them in and run all these things. So for example, in our trial, we run two to three times a week, or at least we did pre-COVID believe it or not, for a few hours each. So we’re talking about just a couple or several hours per week of research time, which needs to be spent very carefully. This is very precious time that our participants are donating to us. And so that doesn’t necessarily leave a lot of room for something that would take 1,000 hours to master. I don’t know how long it takes master stenography but I imagine it’s quite difficult.

Matt Angle:

Are you very conscientious about that when you’re designing your experiments or is that a worry of yours that you have something you think would work, but you’re like, “Oh shit, am I really going to use a session on this? No.” Does that factor in?

Vikash Gilja:

It’s also What’s also respect for the participants time and efforts. I think most of us think of them sort of like astronauts in terms of their dedication to what we’re doing and we want to be respectful of that effort and it’s hard work for them and we want to present opportunities where we can make progress together. And so that does create a risk benefit trade-off equation that we’re constantly engaged in.

Beata Jarosiewicz:

Yeah. Just to chime in on your previous question about optimal decoding and what to decode to get maximum speed: I think I would probably try to combine decoding of phonemes with decoding of semantics with a Paradromics-style array, where you have tens of thousands of electrodes and you can put them wherever you want, and combine those two in a Bayesian way, along with maybe a language model, to try to accelerate the decoding so that it’s at the speed of, or even faster than, human speech. So the person only really has to think about what they want to say, the sentence they want to say, and we decode that automatically.
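
[Editor’s sketch] To make the Bayesian fusion Beata describes a bit more concrete, here is a toy Python illustration: two hypothetical neural decoders (one over phonemes, one over coarse semantics) each score a few candidate words, and a language-model prior is added in log space. Every decoder, word, and probability below is invented for illustration; this is not the BrainGate pipeline.

```python
import numpy as np

# Toy sketch of Bayesian fusion across decoders. All names and numbers are
# made up; the two "decoders" and the language model are stand-ins.

candidates = ["hello", "yellow", "fellow"]

# log P(phoneme-related neural features | word), from a hypothetical phoneme decoder
log_lik_phoneme = np.array([-2.1, -2.3, -3.0])

# log P(semantic-related neural features | word), from a hypothetical semantic decoder
log_lik_semantic = np.array([-1.5, -2.8, -2.6])

# log P(word | preceding context), from a hypothetical language model
log_prior_lm = np.array([-1.0, -4.0, -3.5])

# Naive-Bayes-style fusion: assume the two neural feature streams are
# conditionally independent given the word, then add the language-model prior.
log_posterior = log_lik_phoneme + log_lik_semantic + log_prior_lm
best = candidates[int(np.argmax(log_posterior))]
print(best)  # -> "hello" under these invented numbers
```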

Matt Angle:

Actually, Beata, I’m glad you raised the language model, because Frank, looking at your work, the language model is really important.

Frank Willett:

Obviously language is a hugely useful constraint, because there are so many statistical relationships between all the different words. For handwriting, we were able to get about 95% accuracy without a language model, which is not bad, but that’s still a 5% error rate. With a language model, we were able to get above 99% accuracy. So it’s definitely very useful. And I think we’ve all experienced that: when typing on a smartphone, we make a few errors here and there, and as long as there aren’t too many, language is very constraining. So usually, not always, but usually, it corrects it back to the right thing.
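
[Editor’s sketch] A toy illustration of why the language model helps: treat the character decoder as a noisy channel that gets each character right about 95% of the time, and rescore candidate words by combining that channel model with word frequencies standing in for a language model. The vocabulary, probabilities, and decoded string below are all invented, not values from the handwriting study.

```python
import math

# Invented word frequencies standing in for a language-model prior.
vocab_log_prior = {"there": math.log(0.6), "three": math.log(0.3), "threw": math.log(0.1)}

P_CORRECT = 0.95        # assume the decoder gets a character right ~95% of the time
P_ERROR = 0.05 / 25     # remaining error mass spread over the other 25 letters

def channel_log_lik(decoded: str, candidate: str) -> float:
    """log P(decoded characters | intended word), assuming independent per-character errors."""
    if len(decoded) != len(candidate):
        return float("-inf")  # keep the toy model length-preserving
    return sum(
        math.log(P_CORRECT if d == c else P_ERROR)
        for d, c in zip(decoded, candidate)
    )

decoded = "tfree"  # raw decoder output with one character error
best = max(vocab_log_prior, key=lambda w: channel_log_lik(decoded, w) + vocab_log_prior[w])
print(best)  # -> "three": the prior plus channel model pulls the output back to a real word
```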

Matt Angle:

I think Sergey and Beata have given me their 10,000 neuron interfaces. But Vikash and Frank, have you?

Sergey Stavisky:

Vikash had the same one as I did.

Matt Angle:

You can’t copy-

Frank Willett:

I think I may have. I think I would just go with the classic, boring answer: put them all in precentral gyrus, where we are, and just have a really high-functioning, low-error, really useful computer cursor, plus maybe handwriting.

Matt Angle:

Okay, well, only Beata’s answer was interesting, so we’ll only talk about that. How would we think about it if we wanted to target some of these other non-motor areas? What would the cost-benefit look like? An advantage, I would say, of going into motor cortex in a patient who is paralyzed is that the risk is quite low; the adverse effects are essentially nothing. You could lesion the area and, from a neurosurgical and functional neurology standpoint, it wouldn’t make a difference. But when you start going into these other areas, you really have to think about it, and I’d be curious how you would think about that.

Vikash Gilja:

I think one way to think about it …

Matt Angle:

No, no, not Vikash. I don’t care how you think about it. I care how Beata thinks about it.

Beata Jarosiewicz:

Oh my goodness. I care about how Vikash thinks about it.

Matt Angle:

You have a lot more patience than I do.

Beata Jarosiewicz:

Now I forgot the question because that had so much emotional salience. [crosstalk]

Matt Angle:

Oh, I was just saying, how would you think about which areas you might want to target? There’s an advantage to getting the information out, but then there’s also a clinical risk associated with it.

Beata Jarosiewicz:

Yeah. I mean, if we’re talking about just speech, and people that don’t have speech, I think it’s okay to go into, say, Broca’s area, which is a speech production area, and Wernicke’s area, a speech understanding area, it seems, and some of the language-related, phoneme-production-type motor areas of motor cortex. Those wouldn’t necessarily destroy anything the person’s actually using right now. Actually, speech understanding, that’s important, isn’t it? Okay, first prove that the device has no negative effect on anyone it’s ever implanted in-

Matt Angle:

Oh, this is a Paradromics device, so that’s a given.

Beata Jarosiewicz:

Right. It’ll be fine.

Matt Angle:

Now Vikash, what were you going to say?

Vikash Gilja:

What I was going to say is, we can look at common neurosurgeries as a way to come up with strategies. We know that in the epilepsy monitoring unit there are individuals with tens of these stereo EEG probes, sEEG probes, and they’re placed and, by clinical standards, there isn’t a measurable deficit due to the placement. So that at least gives us an existence proof. That’s a fairly well-trodden device design that could give us a lower-risk strategy for recording over a broader set of critical areas. And that could allow us to connect to some of the ideas that Beata is presenting.

Vikash Gilja:

In that, if we can access more of the network involved in the overall task, we could use that information to refine control. So even in the case of motor control, if I were accessing areas related to reward and error, that information could be used online to refine control, or to decide when somebody wants to control the system. It could also hypothetically be used to improve the system over time. As you’re engaged with this prosthetic, if you can measure the user’s self-report or internal view of how well the system is working, that could be a really important refinement signal.

Sergey Stavisky:

We’ve actually done that with our cursor BCI: we’ve shown that we can detect an error signal and use it as an automatic delete key, basically, or undo key. So presumably more of that would be very good.
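
[Editor’s sketch] A minimal sketch of the error-signal-as-undo idea Sergey mentions, assuming a hypothetical classifier that outputs the probability of an error-related neural signal after each selection; a high-confidence detection deletes the last character. The threshold, probabilities, and function names are made up for illustration.

```python
from collections import deque

ERROR_THRESHOLD = 0.9  # assumed confidence required before auto-deleting

def run_typing_loop(selections, error_probs):
    """selections: decoded characters; error_probs: P(error signal) observed after each one."""
    typed = deque()
    for char, p_error in zip(selections, error_probs):
        typed.append(char)
        if p_error > ERROR_THRESHOLD and typed:
            typed.pop()  # treat a confident error-signal detection as an undo key
    return "".join(typed)

# The 'q' is followed by a strong putative error signal, so it gets removed.
print(run_typing_loop(list("caqt"), [0.05, 0.10, 0.97, 0.02]))  # -> "cat"
```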

Vikash Gilja:

Yeah. And in that case, it was pretty cool: you could see that error signal in a place you might not expect it, in motor cortex. Now let’s say we go into areas of the brain where you see reward expectation, where you would expect it.

Matt Angle:

Is there anything that you think we should talk about related to BCI based communication that we didn’t talk about today? Did I miss anything?

Sergey Stavisky:

I think one aspect that I want to highlight, building on Frank’s work, is that the type of feedback is important. And it’s important to remember that not all users will be able to see well, or even at all. So certain paradigms, like point-and-click typing, won’t work for someone who’s blind, while others, like handwriting or ten-finger typing, might. And similarly, there are efforts to develop auditory feedback systems, because some people can’t see.

Vikash Gilja:

And it would also shorten that feedback loop; if we go to auditory, it’s lower latency. I think further than that is writing information into the brain, which we haven’t really talked about at all today, and which is a whole area of research, another set of strategies for modifying the feedback.

Beata Jarosiewicz:

Just to amplify both of the previous comments: having something that’s sort of general purpose, where you can pick and choose for each particular patient, or whoever will be using the device, what would be most useful to them, I think would be super helpful.

Matt Angle:

It’s very challenging practically, in terms of the way that therapeutic devices are approved. Of course we would like to have a sort of bespoke BCI for every patient based on their condition. But unfortunately it’s going to be the case that we’re going to have to make our best guess at what will help as many patients as possible.

Beata Jarosiewicz:

Sure. I mean, there can be a set of tools: you can have a point-and-click interface that can be used for basically any application, and then, on top of that, a communication system where the array goes into whichever part of the brain is most relevant for that person. So there could be sort of a scope laid out that’s all FDA approved, and then within that scope, for each of those dimensions, you choose where that person belongs.

Sergey Stavisky:

And the good news is that for almost everything we’ve talked about today, except perhaps speech, you could put your electrodes in the same area: typing and handwriting and robot arms, anything that’s related to arm and hand movements, you can read out from the same area and apply it in different ways. We could provide feedback in different ways, all with the same hardware; it’s just changing the software layer.
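
[Editor’s sketch] One way to picture “same hardware, just a different software layer” is a thin routing abstraction: a fixed decoded output stream feeds interchangeable application layers. The class names and interfaces below are invented for illustration, not any group’s actual software.

```python
from typing import Protocol, Sequence

class Application(Protocol):
    # Any application layer consumes the same decoded signal and produces an action.
    def consume(self, decoded_velocity: Sequence[float]) -> str: ...

class CursorControl:
    def consume(self, decoded_velocity):
        return f"move cursor by {decoded_velocity}"

class HandwritingOutput:
    def consume(self, decoded_velocity):
        return "append next decoded character"

class RoboticArm:
    def consume(self, decoded_velocity):
        return f"command end-effector velocity {decoded_velocity}"

def route(decoded_velocity: Sequence[float], app: Application) -> str:
    # The recording hardware and decoder stay fixed; only `app` changes.
    return app.consume(decoded_velocity)

for app in (CursorControl(), HandwritingOutput(), RoboticArm()):
    print(route([0.1, -0.3], app))
```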

Matt Angle:

Well, thank you all for spending the time with me. I thought this was a really interesting conversation, so I hope other people find it interesting as well.

Beata Jarosiewicz:

Thank you so much. That was really fun.

Frank Willett:

Yeah. Thanks, folks. Thanks for inviting me, it was fun.

Sergey Stavisky:

Cheers, this is fun.

Matt Angle:

Cheers.
