The Annals of Neurotech Pub: Neurotechnology Startups and the E Word

Neurotech Pub
61 min read · Jun 23, 2021

Matt Angle:

For those of you who’ve been following along, we recently recorded an episode on the ethical considerations of brain-computer interfaces with a special emphasis on implanted medical devices. I was so intrigued by that conversation that I wanted to further explore the topic of neuroethics with other voices from the neurotech community. In particular, I was interested in looking at the ethical implications of wearable technologies. Once the conversation gets going, however, we home in on a much more central issue: what is the role of ethicists in the development of new technologies? And what could this look like for neurotechnology? Joining me today are Karen Rommelfanger, professor and neuroethicist at Emory University; Anna Wexler, professor of medical ethics at the University of Pennsylvania; Ana Maiques, CEO of Neuroelectrics; and Stephanie Naufel Thacker, technical program manager at Facebook, formerly at DARPA. I hope you enjoy the discussion as much as I did.

Matt Angle:

I was hoping to have everyone introduce themselves. And then I think it’d be really interesting if you could share with us the first moment or the first experience that got you thinking about neuroethics. In particular, when did you start thinking about neuroethics as distinct from just being a good, ethical person? When did you start cluing into the idea that this neuroethics thing is something you need to be mindful of, or maybe even something you want to make your career investigating?

Karen Rommelfanger:

I’m Karen Rommelfanger and I’m a neurotech ethicist and strategist. I am housed at Emory University in the Center for Ethics where I run a neuroethics program. And I also run the Neuroethics and Neurotech Innovation Collaboratory. That all came from a long background and training in neuroscience. And before that, I think, a longstanding interest in what the mind was. I think I talked a little bit about this with you guys offline, but I had this dual interest in art and science, and I’d kind of been pursuing those two tracks at a parallel level. I was interested in music and I was interested in drawing and painting. And I thought that this was kind of a representation of the power of the mind. And I thought that I want to study this at a different level that’s currently inaccessible to me, I want to study the brain.

Karen Rommelfanger:

That was what oriented me in my neuroscience research. And then at some point in neuroscience, I was doing a live recording of a single cell. What we do when we record those cells, to make sure we’re in the right place in the brain, is transform the electrical activity into audio signals. And I was listening to those audio signals as there was this live activity, and I was thinking, oh my God, this is not where the mind is. I almost had this moment of, what have I been doing with my life? So anyway, I went into this other spin where I pursued neuroscience for a long time, but then started thinking, well, what do I really care about with neuroscience?

Karen Rommelfanger:

So these broader questions about what it means, what does the science mean? And that’s when I found through a bunch of reading that there’s this thing called neuroethics, even though it still remains ill-defined, but it seemed to be the common word around all these interests I had: how the brain is related to emotion, how it’s related to identity, how one sees the world. And now I’ve just been going down that rabbit hole and trying to meet anybody else who has a shared interest and enthusiasm for it.

Stephanie Naufel Thacker:

I’m Stephanie Naufel, I’m program manager at Facebook Reality Labs. We are developing noninvasive neural interfaces as an input to augmented reality glasses or virtual reality systems in the future. So in that team, I manage our scientific collaborations. I also lead our neuroethics program. I’ve been at Facebook for about a year. Prior to that, I was at DARPA as a contractor and scientific advisor. Super cool job. We got to work with these wonderful PIs and CEOs on everything from sensory neuroprosthetics to invasive high bandwidth cortical interfaces to spinal cord injury. So it was really the full breadth of neurotechnology. My training’s actually biomedical engineering through and through, all BCI work. My undergrad and master’s were in somatosensory neuroprosthetics, and for my PhD I jumped across the sulcus to motor cortex and worked on motor neuroprosthetics.

Stephanie Naufel Thacker:

How I got into ethics is actually by nature of being a college student. I started getting involved in human rights issues and these biomedical engineering questions of what happens when you’re comatose. What sort of rights do you have? When does life begin? And at some point, I decided maybe I should make this more a part of my day job. And so I kind of shifted from general ethics and human rights to neuroethics. My first publication was on neuroethics and personhood. And from there I just kept finding very interesting paths to go down. Every one of my theses, undergrad, master’s and PhD, has an ethics chapter. One became a publication and the others kind of became a modus operandi of sorts.

Anna Wexler:

How did I get into neuroethics? Since I was an undergrad, I was very interested in the brain. So I came to college knowing that I wanted to major in brain and cognitive science. And that’s what I did. I thought I wanted to be a neuroscientist. And it’s interesting to hear these similar kinds of trajectories in other people’s stories. I spent a lot of time working in neuroscience, mostly cognitive neuroscience labs throughout college. I worked in some very interesting ones. Did that, and also worked as a research assistant for another year after college in a few labs. At the same time, I was really interested in science communication and thinking about how to engage the public, and just thinking about how science generally remains, or often remains, in its own little bubble. And so I actually ended up doing a dual major in both brain and cognitive science and writing.

Anna Wexler:

After college, I worked as a science writer and editor for some time, with a specialty in neuroscience, and actually moved to Tel Aviv, Israel, and worked with a number of really great neuroscientists there. And I was writing about other people’s work. And it was fascinating. I mean, I love speaking to people about their own research, sort of like what we’re doing here in this podcast, right. Just being able to talk to people, talk to experts, right, and ask them anything. I genuinely loved that, but I decided that I did want to do my own research. And the thing that fascinated me the most at the time was thinking about brain-computer interfaces. So I actually applied to do a PhD to study the social and ethical implications of BCIs at MIT, in their interdisciplinary program in history, anthropology, and science, technology, and society. I only applied to that one program because that was the only thing I wanted to do.

Anna Wexler:

I didn’t want to go do a PhD just for the sake of doing a PhD. I just wanted to do this specific thing. Got in, moved back to the States, started the program, and realized that what I wanted to do felt a little too speculative for a PhD, that there wasn’t a lot to sink my teeth into at that time. But I’d heard about this do-it-yourself brain stimulation movement, maybe in my first or second year of the PhD, this was maybe 2012, 2013. And so I started studying it really from a sociological perspective. And I found that the place where other people were talking about the things I found interesting about do-it-yourself brain stimulation was neuroethics. So that’s how I got into neuroethics. That was kind of my first pathway in. And then it just kind of expanded from there, both in terms of thinking about neuroethics, but also in thinking about do-it-yourself endeavors and direct-to-consumer health technology and the social and ethical challenges that those are creating.

Anna Wexler:

And I realize I actually never introduced myself. So I’ll do that now. My name is Anna Wexler. I’m an assistant professor in the department of medical ethics and health policy at the University of Pennsylvania Perelman School of Medicine. And I run the Wexler Lab, I have a great little group, and we look mostly at issues related to neuroethics, with a focus on emerging technologies. There are a lot of other areas of neuroethics, but we also look more broadly at issues around do-it-yourself medicine and direct-to-consumer health technology.

Ana Maiques:

Hi everyone. I’m Ana Maiques, I’m the CEO and co-founder of Neuroelectrics, a company that is developing non-invasive brain stimulation treatments for different indications. We are currently in a pivotal-phase FDA trial for epilepsy, to reduce seizures in children who don’t respond to medication, and also a fully at-home FDA IDE pilot in depression. From the moment I co-founded the company with my co-founder, who is a physicist and mathematician, we always looked at the brain as the physics and electrical part of the body, right. And our obsession was to decode the brain, looking at EEG, but also to learn how to stimulate. And it was very clear to us when we incorporated Neuroelectrics in 2011 that we wanted to take this to the market, right, and that we had to go through the clinical route. So that was like an obsession as an entrepreneur, to have a social impact.

Ana Maiques:

And that’s why I didn’t go into the consumer space. It wasn’t because I was a good person; it was because I thought that the impact would be in the medical field. So I moved to the US, and then I was invited, as a newcomer into the neurotech space, to a conference in Washington at the Institute of Medicine. And I think there, it’s a bit sad to say, but Jeff Knight was the person that invited me. He was at Johnson & Johnson and was passionate about neuromodulation. And he said, it would be good to have the big company and the newcomer. And they said, Ana, you’re the newcomer, you’re coming with me. Right. And there, I met Hank Greely and all the people, and I think that maybe Karen or Anna were there as well.

Ana Maiques:

And I promise you, I sat down in Washington at this conference and I heard them speaking about neuroethics, and it had never crossed my mind. I just sat there, I listened. And I said, oh my God, I never asked myself these questions. This is pretty fascinating. I should be thinking about it. So it’s through meeting Anna and Karen and a lot of people involved in neuroethics that I started to think that, as an entrepreneur, I should really be concerned and learn how to incorporate these questions into my life as an entrepreneur. But honestly, it’s just because I listened to experts and to the conversation that I got involved in neuroethics.

Karen Rommelfanger:

You said it wasn’t because you were deciding to be a good person, that doing ethics is something other than this kind of interest in self-cultivation and your own goodness. I really would not have gotten into neuroethics if I hadn’t found all these ideas first and then seen that neuroethics was put on as a label. If I had seen the neuroethics label first, I think I would not have gone for it. And the reason is that as a scientist, I had a long history of bad, required, compliance-driven checklist stuff that I had to do. These modules of training where I felt my intelligence was almost insulted by having to do them. It slowed down my work. It made me have to answer what I felt were obvious questions that I already knew how to handle. And I just felt like it was this thing, this red tape, and I was not taught that ethics is, as I see it now, a problem-solving tool that enhances innovation and creativity.

Karen Rommelfanger:

I didn’t see it as something that was expansive. I saw it as something that was constraining in the worst ways, almost the antithesis of science. So I fully appreciate many people’s disdain for hearing what, in some of my circles, we’ve been calling the E word. We’ve wondered, should we even say the word ethics, because it has such a terrible connotation? I will say AI ethics, and the proliferation of AI ethics entities within larger companies, has helped restore some of that in the conversation around the E word. So you can say it freely and people can get it a little better. But I think in our neuroscience communities, we still have a lot of outreach to do in that space. There’s a lot of damage that was done by scientists’ training in ethics, and I’m continually frustrated by that actually.

Ana Maiques:

Yeah. And one of the things that I wanted to say, Stephanie and Anna, as well, is that one of the reasons for me to get involved is that I thought I was a good person, because I said, I’m not going into the consumer world, I’m going through the clinical route, so I’m doing the good thing, and the consumer guys are the bad guys. Right. And then when Karen and Anna started to ask the questions, I’m like, shit, I cannot answer that. I’m not sure I’m compliant with that. So I have all these blind spots and it’s like, oh my God, I’m not really responding. So maybe it’s not about being good or bad. It’s about, are you really answering the hard questions and doing things in the right way?

Ana Maiques:

So it doesn’t have anything to do with being good or bad. So Stephanie, what’s your take?

Stephanie Naufel Thacker:

It’s interesting to hear you talk about the consumer people as the bad guys. I certainly don’t see it that way. It’s interesting, because I started in this field being very passionate about neuro rehabilitation. You can talk to many of my colleagues at Facebook, they’re all so passionate. But I’ll be very clear: we are developing a consumer product, not for medical use. What’s interesting about us pursuing this is that we’re a huge company, we’re global. We serve the world. And I think there’s an avenue where we can make neurotech more accessible and maybe normalize it in a sense, so that for people who do want to use neurotechnology for rehabilitation, it’s more of a common thing. So maybe you can wear your device for interacting with a virtual reality system.

Stephanie Naufel Thacker:

There’ll be other use cases that will really open up for the rehabilitation community. There are people in the company thinking about privacy measures. There are people in the company thinking about diversity and inclusion. So we get to attack a lot of these ethical questions in a cool way.

Ana Maiques:

I mean, it’s not that the consumer people are the bad guys. For example, I adore Ariel from InteraXon. She has the Muse headset, right. But I’m doing brain stimulation, right. So when you’re stimulating the brain with electrical current, I have always been more cautious than on the EEG reading side, right, because I do believe that with stimulating a brain, the science is not strong enough to know the long-term effects. Right. So that’s where I’m more cautious on the consumer side, more than with something like EEG or brain-computer interfaces, right, which I believe are another league. Right?

Stephanie Naufel Thacker:

Absolutely. Yeah. And we are not approaching neurostimulation; we’re purely doing neural recordings.

Anna Wexler:

I’ll just jump in and add that I think both Steph and Ana, you’re pretty unique in this space, given that you’re coming from companies and both of you have really proactively engaged with neuroethics in a pretty significant way. Ana, I was going to give a plug to your piece. I just pulled it up here, the one you wrote, ‘To be or not to be involved in neuroethics: an entrepreneurial perspective’. So you’ve actually written in the forum that neuroethicists read, this is our forum, right? And Steph, you’ve been involved very intimately in helping organize and participate in events. And I don’t know the total extent of your involvement with the Facebook neuroethics internship, but I know it’s significant. So I think just for the listeners, right, this is not just two random people from different neurotech companies. You guys really have been significantly involved in neuroethics.

Karen Rommelfanger:

I would agree with that and say that I think one thing they both point out is that they’re not just using… Companies have a compliance officer; they do their thing, they have a certain skillset, a legal skillset. And they have their engineers who do certain things. But an ethicist really has to sit in the gray zone, and so you have a different set of tools for that. And I think they both demonstrate curiosity and awareness in looking for those blind spots and figuring out how to integrate that into their day-to-day and their creativity. And one of the things I’ve been working on with Jacob Robinson at IEEE is a paper about re-imagining engineering grand challenges as socio-technical ones, meaning that engineering feats don’t have to be just about fixing something or cleverly solving an engineering problem around size or what materials one can use. Those same problem-solving rubrics can be used to think about: can I make this out of affordable technologies so that more people have access?

Karen Rommelfanger:

Can I engineer wireless technologies so that they’re more secure? And even more concretely, I think: how can I help empower people so they have a greater sense of agency, which is also a word that feels very abstract, but how can I design something that feels safe, so that the patient or consumer will feel more secure and empowered in how they’re using these things? So I don’t think that they should be seen as separate. Scientific excellence and engineering excellence really shouldn’t be seen as separate from ethical excellence, or as a rather black and white matter of being a good person. Because really, my job as an ethicist, if there is a clear good or bad, black or white, then you really don’t need me there. But if there’s a choice between a bad option and a less bad one, that’s usually where I come in. It’s not always a clear winner. So I think that these are complex problems that we’re addressing, and they have to be thought about throughout the whole enterprise.

Stephanie Naufel Thacker:

One thing I’ve tried to do through my career, which has only been a dozen or so years, is to find avenues to get scientists and ethicists speaking to each other and working very closely together. So actually, when I was an undergrad, I was in a biomedical engineering lab and we had an ethics professor who would come and participate in experiments, watch experiments, and was really invested in learning about the system. And we started thinking about different paradigms for getting the two groups to work together. And this was at Arizona State, so there were a bunch of people thinking about this already. There’s a McGregor and Whitmore paper from, I think, 2009, where they talk about staging these round table discussions where ethicists show up at maybe a weekly lab meeting or a monthly lab meeting. So that was one approach.

Stephanie Naufel Thacker:

The other end of the spectrum was this paradigm of having an ethicist embed himself or herself in the lab. Eric Fisher at ASU is someone who had done this at Boulder. And so we started thinking about these paradigms, and what I ended up putting together as part of my master’s thesis was this idea of a collaborative traineeship, which was kind of the best of a few worlds, where an ethics trainee, as part of their undergraduate or graduate studies, would do a rotation in the lab. They would watch experiments, maybe participate if it made sense. And what this would do is encourage organic conversations between the two groups. Why are we doing this? Why did you choose that animal model versus this animal model? It’s something we were able to pilot at Arizona State, and I’ve tried to carry it through at DARPA in the broad agency announcements that we put together.

Stephanie Naufel Thacker:

The very last one I did was Bridging the Gap, the spinal cord injury program, where we encouraged teams to embed ethics trainees in their groups when they went to submit a proposal. And this is something we’re trying to do at Facebook as well. We have these internships where we can bring ethics trainees onboard for a few months. They’ll learn how we do business, we’ll learn from them, and then they go back out into the field. So we’re starting to try this. We also work with ethicists who are already on staff, but it’s interesting to think about this sort of embedded ethics trainee paradigm.

Anna Wexler:

That’s so interesting because when I saw the call for Facebook, I was just continuously wondering why it was a trainee and not somebody later. And now you’ve just explained it.

Stephanie Naufel Thacker:

Yeah, that was all the motivation there. So this has been a very long-term project and seeing if it catches on and how it works.

Matt Angle:

Are you looking for people to apply right now?

Stephanie Naufel Thacker:

We’re looking in the fall and have identified our first trainee, so to speak.

Ana Maiques:

But see, that’s great. I think that after being in a lot of neuroethics forums and developing guidelines and writing articles, what Stephanie is describing is really about getting into action, right? How can you really get both cultures working on the day-to-day of the company, right? And I love the trainee idea. I also think that in my company we have investors, right, and we have a board, right? So I think at some point, in some boards, it would be great to have these neuroethicists be on the board and go through some of the decisions, because what I love the most is thinking about what the implications of your technology are 10, 20 years from now. Those are the super good questions, right? How do you see these 10, 20 years, and how could it come back on the company in different scenarios, right? I would like to see one of these neuroethics trainees or neuroethicists involved in the board from time to time.

Ana Maiques:

I think it would be awesome, because it’s not the same if I tell my board as if somebody, some expert out there, right, asks those questions. I don’t think it’s about judging. It’s about asking the right questions. As Karen says, I don’t think there is a right or wrong answer, but are you asking the right questions of yourself?

Karen Rommelfanger:

I love that idea of integration. And I’ve been working on trying to understand the practical workflow piece of this for a while, because I’ve been a part, and I know you both have too, of a number of neuroethics guidelines and recommendations, at a high level, through a national lens, a transnational lens. In fact, there are so many that people don’t even realize they’re there. And I think every person who creates them thinks they’re the first person to create them, and then we move on and don’t read the earlier ones anymore either. So I’ve really shifted my work to thinking about practical levers that one can integrate into a company. And I’ve actually been talking to a lot of people in AI who already have embedded AI ethicists, or people who are independently consulting as AI ethicists. And what I’ve learned is that there seem to be good ways to employ an ethicist in a company, and there are some bad ways. And the bad way, right off the bat, is to not have that person in a position of legitimate power.

Karen Rommelfanger:

If you have that person be seen as subservient to the science, or if you don’t see that person as a thought partner, or that person is answering to someone or constrained in what they can truly offer as a critical eye, then that’s a problem. If they’re junior, that’s a problem, because they’re not really a partner. That’s why I like Ana’s idea about the board as the interface. If they’re in their own department, that’s a problem. If they’re in the public engagement department, which is part of the marketing department, there’s very low impact and trickle-down. It’s a different function. It’s not a problem-solving function. It’s a market research function.

Karen Rommelfanger:

If they’re in the compliance office, it’s a problem, because those are probably hired lawyers and they don’t have the skillset. So I’ve been surveying a lot of these problems and the frustrations of people who’ve been brought in that way. There are not a lot of places that have a sophisticated infrastructure for a skilled ethicist at a high level. And there are things that people want, and there are things that they will pay for. A lot of people, if you can get them to have an affinity for ethicists, will say, yeah, I’d totally take that, and you’re a good person, right, so you’ll do that for free. And so that’s one of the things I’ve been working on too: there has to be a clear value proposition on both sides for the ethicist. And part of that starts with the ethicist realizing that their work has a certain value, and it has a value that hits the bottom line of companies.

Karen Rommelfanger:

So what I’ve been trying to help people understand is that neuroethics isn’t just about feeling good or raising awareness. It helps you set up a trajectory of growth for a company, so that you can anticipate, so that you can have a wider scope for your audience, so that you can figure out who the diverse stakeholders are going to be in broadening your revenue streams. I try to tell people that the people who are hired into companies like Facebook and Google, et cetera, are largely a younger generation of talent and very ethically minded. If you want to keep good engineers, they need to believe in the mission, and they need to believe that you’re really going to walk that talk and that leadership cares. And I think the other thing is that there is a kind of sustainability and readiness in the long term for the types of policy applications or policies that might be drawn up in the future.

Karen Rommelfanger:

And I think if you’re really a leader in the entrepreneurial space, you should be involved in those conversations. You should be a thought partner in thinking about what guidelines are going to be useful. And many of the guidelines, I think, have failed in that regard because they’ve been generated by academics and policy makers. Academics, in my experience, and I’m starting to form this opinion, are even less amenable to thinking about neuroethics than private sector partners, because companies really get that their tech has to hit the ground, and there needs to be this applied nature to it and a certain kind of group they’re talking to. An academic can be far more isolated in their lab, publishing within their peer-reviewed community. And so I think that there’s a different opportunity for people like Ana, people like Steph, to be setting the stage for even academics to do their work more effectively and have better impact with their work.

Karen Rommelfanger:

And the bottom line with most of these academics… part of this inspiration came from working with the global brain initiatives. I co-chair a group integrating neuroethics into all these national-level brain research projects. And one of the things that we’ve talked about is, gosh, in the end, all of these governments have invested a significant amount of money into brain projects, and we want to make sure that the products and deliverables of that end up being scaled and disseminated. And it’s really going to be critical to have a private sector partner who can help with that piece of the rollout and thinking in different ways about it. But there’s been this gap in conversation between those groups. And so I’ve also had a similar issue trying to figure out, how do we fit neuroethics into this broken flow where even the science isn’t connecting?

Anna Wexler:

Actually, with Laura Specker Sullivan, I have a paper under review now thinking about how neuroethics can have a greater impact and how we can shift to focusing on implementation. So, criticizing sort of the same things that Karen was saying. Neuroethicists have been really good at putting out guidelines, but a lot of those guidelines don’t get seen. They just stay behind paywalls. It’s not clear who reads them, if anyone reads them, and to Ana’s point, right, they’re not necessarily practical. So if you outline a principle, that’s great. But then what is somebody working at a neurotech company supposed to do with that principle? Right. So I think there needs to be a greater focus on, okay, we have these guidelines, we have these principles, what’s the next step? What’s worked in other spaces? What can we look to to figure out how to do that? And I think AI ethics is a really good place to start. And Karen, I’d be interested in hearing more about what you found. I also have somebody in my lab who’s actually been researching the issue similarly.

Anna Wexler:

I think another challenge to add to that, and it’s something we talked about at the neuroethics conference when we discussed the challenges of ethicist-industry collaboration, right, is that these are not always easy interactions. And I think, especially when you have ethicists in academia, but even when you have ethicists within the companies, right, as we saw with Google and Timnit Gebru and what happened there, even there it can be fraught. But especially when you’re working with outside ethicists, right? So there are concerns about costs. Where’s the money flowing? Is there money flowing? I write about the space, and I want to feel free to critique companies and critique the space, but I also want to have a practical impact and work with the companies to help think about ethical development of their products and what ethical considerations can come into play. So the financial part is a really big thing, but even without that, right, when you develop relationships, you’re less free to criticize.

Anna Wexler:

So I’ve been very conflicted about how to proceed, but I think there’s a way to do it. And I’ve been thinking about ways that it can be done, but it’s tricky and it’s bumpy. And there’s a lot of issues that I think really haven’t been totally worked out yet.

Matt Angle:

Can I ask a question on the financial side? Because I think for startups and for early stage entrepreneurs, this is a really big thing. Is there an equivalent of a Small Business Innovation Research grant for ethics? Let’s say that I have an eight-person company that’s building a new wearable, and I have concerns about the implications of that technology. At that size of company, insourcing an ethicist is probably not possible, and there may not be enough cash even to just pay a contractor or someone to come in. Is there a source of government funding that I could apply to?

Karen Rommelfanger:

Not to my knowledge. And one of the things I’ve been brainstorming with a couple of partners is something like that, like a service that could be offered, because you’re right, those startups can’t afford it. I don’t want to say too much about it yet, but the discussion is about an entity that would have access to a set of ethicists who could come and help and weigh in, whether at the investment stage, so that it helps the investor, or for the entrepreneur, and then there are a couple of ways to also build that into the agreement. So maybe what we do is shift the investor’s perspective as well, as far as what they require as benchmarks of success, thinking about the longevity of the company and how neurotech especially is going to have to be sensitive to certain ethical issues.

Matt Angle:

Have you spoken with many venture investors about their view on the role of neuroethics in shaping business plans for small businesses?

Karen Rommelfanger:

I haven’t spoken to a lot, but I have a lot scheduled and I’ll keep you posted. But basically, I think there is an interest from impact investors. People are drawn to investing in neurotechnology because there are usually a lot of ethical issues they’re thinking about. In fact, even endowments at universities for neuroscience and brain research are usually not about the neuroscience or brain research itself; they’re about an ethical issue associated with it. So I think, to make sure that investors are really getting what they want in the end, there also needs to be help with clarifying the types of things they want to address. And there are different types of investors as well. So then I think it starts to break down into different types of questions: who those investors are responsible or beholden to, and the ethical obligation of some investors who think that basically their ethical obligation is to increase the investment.

Matt Angle:

To some extent, that’s their legal obligation. Their fiduciary responsibility is often just to grow their fund. Similarly, Ana was talking about the board of directors. Most startups are incorporated in Delaware and the law there is pretty clear about what the scope of fiduciary duty is within a company. And so I think it’s great when you have win-win situations where being ethically cognizant helps you build a better business plan. But I think we don’t have to look very far to see that those two things often come to loggerheads.

Karen Rommelfanger:

I’ll just add, and then I’d love to hear what others think about it: when they feel that that is their ethical obligation, they’re conflating their ethical obligations with their legal obligations. And that’s because a lot of people don’t understand what an ethical problem is versus a legal problem, where those converge, and how you tease them apart. So this is a good case in point of why a compliance officer isn’t going to be able to untangle a lot of the issues that come into play there. I think I’ll leave it at that, because I’d love to hear what others think.

Ana Maiques:

Yeah. I think that we’ve come from very far away in terms of neuroethics in industry. So I really applaud what Stephanie and Facebook are doing. I mean, just sending the message that you have to think about neuroethics and bring somebody on board. I don’t care at what level; you’re just sending a message that this is something to understand, to look into. And I agree with Matt, most entrepreneurs are really worried about survival, right? So I think that, just as you’d never hire a CFO or a CMO the first day you’re creating a company, as you grow, you incorporate different roles. We should have some soft guidelines or some soft consulting when you’re at a very early stage. But then, if Facebook creates some success stories with the neuroethicists, it will become a reality that after a certain size of revenue you’ll have somebody on board. Right.

Ana Maiques:

So I just think that it’s very hard to do it at a very early stage. So I don’t know what you think.

Stephanie Naufel Thacker:

We’re trying a few things. I want to address something that both Karen and Ana brought up about who is giving the ethical guidance, what restrictions they may feel, and whether there are perceived conflicts of interest. I’ve been thinking a lot about this, and we have a bunch of different efforts. So the traineeship is just one small component. We have a responsible innovation team. And I’m trying to figure out how we work with academic partners to address these questions in a way that allows for freedom of discussion. Matt, maybe I’ll announce this here, but in May we are putting together a workshop with the Columbia Neuro Rights Initiative. I’m pretty excited about it. We’re going to address three specific questions. One is, we’re investing in optical BCI that uses light. What can you decode or record using these types of approaches, and what are the ethical considerations?

Stephanie Naufel Thacker:

So we’re going to talk about that. Our company is also investing in EMG as a control input. So we’re going to all get together and talk about what you can record and decode from muscle signals and what the ethical considerations are.

Matt Angle:

And for those people who might be familiar, this is the CTRL-labs group that Facebook acquired.

Stephanie Naufel Thacker:

Exactly. We’ll all be there talking about this. And the third session is on this neural data sharing ecosystem that I don’t think exists yet, but we need to start talking about it. I keep seeing research kits pop up, and it would be great to get together as a community, academics, small businesses, large businesses, and really discuss what types of science we’re hoping to accomplish with these research kits. How do we deal with the data sharing? Is it going to be like other wearables, where you upload your data to your phone and then it goes somewhere? That’ll be very interesting, and I’m looking forward to seeing how the community comes together to discuss those issues. But at the end of the day, when you’re at a company, you’re sitting where the rubber meets the road. At the end of the day, ethics will have to turn into policy. And so the stakes are much higher for these sorts of discussions.

Anna Wexler:

I just wanted to throw out one other model that we’ve been thinking about in terms of industry-ethicist collaborations, and that is Arthur Caplan, an ethicist at NYU, who has a lot of expertise, and actually his whole team there has a lot of expertise, in the ethics of compassionate use requests. So when a company gets asked to provide, let’s say, remdesivir, right, before it’s been FDA approved, how does the company make that decision? That’s a really ethically complex problem that ethicists have thought a lot about, and here too, right, there’s a situation where you have companies that really could benefit from the expertise of ethicists, and how do you foster that kind of collaboration? And so what he’s done is created a sort of committee at NYU, where basically, I believe it’s funded by Johnson & Johnson, or maybe there are two companies that fund this overall committee, and then they can bring requests to that committee and the committee responds to them.

Anna Wexler:

So it’s not necessarily a one-on-one, it sort of goes through this overall committee, but-

Matt Angle:

There’s something interesting about using an external committee for these decisions. By analogy, think about when a startup needs to determine what the strike price for their employees’ options will be. There’s an incentive for the company to set the number really low, because it essentially dodges taxes; it’s a way of incentivizing your employees without having to pay taxes. The IRS obviously doesn’t want the company to underprice the options, so they want to make sure that the value is the real market value. One of the ways that companies deal with this is through what’s called a 409A valuation, where they have an external group come in and value the company and what they think the option price is. That provides, in a legal framework, a safe harbor provision for the company, so that if later that decision is challenged, they can say, well, someone independent looked at it and came up with that number.

Matt Angle:

I think it’s an interesting concept for really challenging ethical discussions to have a sort of safe harbor, where there’d be an incentive, perhaps even legally or just from a civil liability standpoint, to have these groups come in from outside and make those tough decisions where it might be a kind of lose-lose. There could be some benefit to the company for having that, especially if it’s perceived as an independent decision.

Karen Rommelfanger:

I think that sounds like a great point, and having an external body that could periodically come in is helpful. One of the things I toil with is thinking about sustainability for ethics. A lot of companies, and a lot of academics, scientists, researchers, whether they are in industry or in science, have historically used ethics as a pennant: look at me, I’m doing something good. They engage in a kind of ethics washing. And so it’s as if just having a person who can call themselves an ethicist is enough. So it’s hard to manage and balance that relationship, getting the foot in the door and having some touch points, but then also having something that maintains itself with integrity over time. And the thing I’m more deeply investigating now, because it’s continually being brought up, and if it’s not done well, it’s going to be disastrous, and I’m worried about it, is the idea of a ranking system for companies, for neurotech companies.

Karen Rommelfanger:

There is something called the Good Pharma Scorecard, which already exists for pharma companies. It was developed by Jennifer Miller; she based that scorecard on actual guidelines that were out there, specifically about particular issues around consent and transparency, and then it offers a score. And I think she has recurring evaluation. And so I’ve heard this from some companies: if I could just have a stamp of approval, if I could just have a checklist that I do, then I can make sure I’m good. Some people even make up their own awards for themselves, saying that they met a certain benchmark. That’s another interesting thing. So how can you have an independent entity that might be able to offer such a ranking, that could celebrate people who are doing it well, but also do it with a level of integrity, where you have to maintain a certain distance? You could damage relationships if you’re not protected. As I was saying, do you have a separate committee that is involved, and how are they associated with the entity? What benchmarks are there going to be? And how legitimate are people going to be with this?

Karen Rommelfanger:

So if we say, do you have an ethicist, and they’re like, yeah, I have a student in here, is that going to be enough? Or is it going to be, they have policies and protocols in place, and how do we evaluate those? So there are a lot of pieces to think about, and questions of how to align with incentive structures that already exist within companies. How do you give people what they want as far as acknowledgement of doing the right thing, and also still keep them in touch with the latest of what’s happening in the world of neuroethics, so that it’s nimble and doesn’t stay stale? I guess the same is true with the science. And we’ve had lots of discussions about different neuroethical issues for a long time.

Karen Rommelfanger:

There are 20-some-odd recommendations and guidelines and principles. And then we’ve got conferences all the time trying to keep up to date with different topics. I think the big one that’s interesting, the one that Steph mentioned, is this data sharing piece that people haven’t really been able to get a handle on, but there are lots of people working on that right now. And I’m wondering if, in part of these conversations, we need to be getting wisdom and foresight from adjacent scholars or adjacent communities that you wouldn’t typically involve in this space. So I had this interesting conversation with a group that works on designing protocols for distributed data networks. And I co-chair a global neuroethics working group working with the International Brain Initiative, and all the brain initiatives have a very robust and multilayered neuroscience data sharing working group; actually, there are like four levels of it.

Karen Rommelfanger:

And they’ve been working on this for a while. And I think one of the pieces that might be missing is future-proofing: thinking about how you are going to get these data to really connect in a secure way in the future. How are you going to have secure points? Because I think being re-identified from one’s data is almost a certainty. So the way to get around that is not actually going to be the GDPR, which says that you have the right to be forgotten; that is an almost impossible ask, because your footprint will never be forgotten from the internet, for instance. But can we change the flow of information so that one effectively has a certain kind of privacy? Well, I don’t think that engineers and neuroscientists and ethicists typically have the kind of skillset that this type of entity does.

Karen Rommelfanger:

So I’m wondering if maybe that’s how we address some of these problems. That’s how we become a little bit more practical. That’s how we also have new creative problem solving. And we demonstrate how bringing these different groups together helps us to solve complex problems in better ways.

Stephanie Naufel Thacker:

It’s really cool to hear you say that. I feel hopeful about data privacy because there’s a lot that data scientists and people who do machine learning can bring to the table here in a very concrete way. There’s federated learning for example. Federated learning is where instead of uploading all your data to a centralized server and building models on the server itself, you keep everything decentralized. So you might have local data and you build parts of your models there. And then all you’re sending back to the server are parameters like decoder weights and such. That’s a very concrete machine learning way to protect your data.
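For readers who want to see the mechanics Stephanie is describing, here is a minimal sketch of federated averaging in Python. The function names and the toy linear decoder are hypothetical illustrations, not anything from Facebook's systems; the point is only that raw data stays on each device and only model weights travel to the server.

```python
# A minimal sketch of the federated averaging idea described above:
# raw data never leaves each device; only locally computed model weights
# (e.g., decoder weights) are sent to the server and averaged.
# Hypothetical example using plain NumPy; real systems add secure
# aggregation, weighting by sample count, and many training rounds.
import numpy as np

def local_update(weights, X, y, lr=0.01, epochs=5):
    """Train a simple linear decoder on data that stays on the device."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # gradient of squared error
        w -= lr * grad
    return w                                # only the weights leave the device

def federated_round(global_weights, client_datasets):
    """Server sends weights out, gets updated weights back, and averages them."""
    client_weights = [local_update(global_weights, X, y) for X, y in client_datasets]
    return np.mean(client_weights, axis=0)  # federated averaging

# Toy usage: three "devices", each with private (features, target) data.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(50, 4)), rng.normal(size=50)) for _ in range(3)]
w_global = np.zeros(4)
for _ in range(10):                          # several communication rounds
    w_global = federated_round(w_global, clients)
print(w_global)
```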

Stephanie Naufel Thacker:

And it’s being talked about in neuroethics circles. I believe it’s in the four ethical priorities for neurotechnologies and AI from the Morningside Group. There’s also differential privacy, which is this idea of injecting noise into your datasets so you never really find out specifics about one person. And I think there are pretty big avenues to continue to identify these engineering solutions to the privacy considerations that come up when we talk about neuroethics.
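As a companion to the federated learning sketch above, here is a minimal illustration of the noise-injection idea Stephanie mentions, using the standard Laplace mechanism from differential privacy. The bounds and the epsilon value are illustrative assumptions, not recommendations for any real dataset.

```python
# A minimal sketch of the Laplace mechanism behind differential privacy:
# noise calibrated to the query's sensitivity is added so that any one
# person's presence in the dataset changes the released answer very little.
# Parameter values here are illustrative, not recommendations.
import numpy as np

def dp_mean(values, lower, upper, epsilon, rng=np.random.default_rng()):
    """Release a differentially private mean of bounded values."""
    clipped = np.clip(values, lower, upper)       # bound each person's contribution
    sensitivity = (upper - lower) / len(clipped)  # max influence of one person on the mean
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return clipped.mean() + noise

# Toy usage: a private average of per-user session lengths (minutes).
sessions = np.array([12.0, 45.0, 30.0, 7.5, 60.0, 22.0])
print(dp_mean(sessions, lower=0.0, upper=60.0, epsilon=1.0))
```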

Karen Rommelfanger:

Yeah. I think even those that you mentioned, I just want to add, I think those are embedded in the community and those are good conversations happening. I think this is, I would say this is a different layer and different level from that. And so I’m kind of thinking about different players that are not normally in the neuroethics conversation, not in the neuro engineering community.

Ana Maiques:

Yeah. I kind of like the approach that Anna and Karen were describing, together with what Stephanie described. When we attended the neurorights symposium by Rafa maybe a couple of months ago, right, I thought it would be very helpful for other companies in neurotech to say, okay, let’s take Ana’s technology, right. Let’s take the at-home current stimulation for epilepsy. And then go through the mental privacy, right, the data protection. Okay, as Stephanie is saying, we are advanced in this area. But how are we measuring identity and agency? I mean, these things about self and free will are not so easy to identify, especially if you’re operating remotely at scale, right. And then maybe through that workshop that you’re doing, Stephanie, we come to ways in which we can really identify whether we are doing well on the self and free will side.

Ana Maiques:

And then there are other issues that I think go beyond the use of the technology and are more societal. For example, for my epilepsy trials, every patient signs an informed consent, right, and I go through the FDA and through Boston Children’s. But then, as a group, are patients being offered that therapy, or are they being driven into surgery because doctors make more money, right? That’s an ethical issue in which I, as a company, don’t have a lot to say, right? The other is access: is this kind of technology going to be super expensive, or CMS, Medicare, whatever, reimbursed? So there are other societal questions that, as a company, I find so hard. It’s a group of companies and a community that has to say to government, you have to regulate this, because we just cannot do anything about it on our own. And it needs to be regulated, right, as Rafael was proposing. So I think we need to get super practical in order to help others.

Matt Angle:

Stephanie, we’ve been talking about data in the abstract, and I’m curious: not everyone who’s tuned into this comes from a deep neurotech background and knows the entire field. Could you give us a little bit of an overview of the technologies that exist right now, or the ones that may be coming down the pipeline in the next one to five years? What kinds of data will these devices have access to? If you could maybe break it down a little bit by modality and give people notionally a sense of what they’ll have access to.

Stephanie Naufel Thacker:

Yeah, that sounds great. Maybe I’ll talk about optical technologies and ask Ana to talk about EEG and electrical-based technologies. For those who are unfamiliar with what we’re doing at Facebook, we’re pursuing a fully optical approach. So optical being light, and we’re looking at a few different approaches. One is classic fNIRS, functional near-infrared spectroscopy. And I think in the age of COVID, the best analogy for this is a pulse oximeter. We all know what a pulse oximeter is. You can imagine what we’re building as a pulse oximeter for your head.

Stephanie Naufel Thacker:

So you send near-infrared light into the tissue; hemoglobin, which is a blood protein, is a very strong absorber of light. So you can look at the photons that come back out into your detector, look at changes in light intensity, and understand something about blood oxygenation. So to Matt’s question, we’re actually looking at hemodynamic signals and then indirectly inferring what the neurons are doing. And we’re leveraging this idea of neurovascular coupling, where when neurons fire, they need oxygen, so the blood brings hemoglobin to the neurons, and we can record and decode that. It’s very interesting because now we can know something about neural activity, but maybe we can also learn something about cardiovascular activity or health, and we need to be aware of that too. So that’s a very interesting avenue, and we’re going to talk more about that at the workshop in May.
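For readers who want to see how intensity changes turn into a hemodynamic signal, here is a small worked sketch based on the modified Beer-Lambert law that is commonly used in fNIRS analysis. The wavelengths, extinction coefficients, and pathlength factor below are illustrative placeholders; they are not taken from the conversation or from any particular device.

```python
# A worked sketch of the standard modified Beer-Lambert law often used in
# fNIRS analysis: intensity changes at two wavelengths -> changes in oxy-
# and deoxy-hemoglobin concentration. All numbers here are illustrative
# placeholders, not parameters of any particular device.
import numpy as np

# Molar extinction coefficients [HbO2, HbR] at two wavelengths (illustrative).
E = np.array([[1.49, 3.84],    # shorter wavelength: [eps_HbO2, eps_HbR]
              [2.53, 1.80]])   # longer wavelength
d = 3.0                        # source-detector separation (cm, illustrative)
dpf = 6.0                      # differential pathlength factor (illustrative)

def hemoglobin_changes(I_baseline, I_now):
    """Convert detected intensities at the two wavelengths to d[HbO2], d[HbR]."""
    delta_od = -np.log(np.asarray(I_now) / np.asarray(I_baseline))  # optical density change
    return np.linalg.solve(E * d * dpf, delta_od)                   # solve the 2x2 system

# Toy usage: detected light at both wavelengths dims slightly as oxygenation shifts.
print(hemoglobin_changes(I_baseline=[1.00, 1.00], I_now=[0.98, 0.97]))
```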

Matt Angle:

Would someone know that I had cheated on my taxes if they scanned my brain with an fNIRS system? What are fNIRS systems doing right now? And if you had to prognosticate a little bit, what do you think they might be doing in the next couple of years?

Stephanie Naufel Thacker:

I think with all of these questions you need to ask, what is the field of view? Where is this system on your head? For us, we’re interested in speech, so we’re targeting the speech motor cortex. Am I going to be able to tell what you’re looking at? No, because I’m not in front of the visual cortex; I’m over your temporal lobe. These are the critical questions we should be asking. Classic experiments in fNIRS to really test the system include finger tapping. So maybe you would progress from tapping your thumb all the way to your pinky, and the system can tell you, okay, was your right hand moving? Was your left hand moving? And we’re trying to get to higher spatial resolution, so being able to resolve the thumb from the index finger would be great.

Ana Maiques:

I’m seeing it already. I think that the story of EEG or EMG is not that different from what Stephanie is saying. I mean, people think that because you have an EEG headset on your head, you can do mind reading or read their thoughts. BCIs are done just by activating the motor cortex: you just tell the user, think about moving your hand, think about moving your feet, and that’s decoded, but it’s not brain reading, right? And I think that we are far from there. With EEG right now, you can diagnose epileptic seizures and sleep disorders. That’s a valid diagnostic tool. But there are a lot of biomarkers under development, and we are not yet clear on what kind of meaningful data on the disease spectrum you can get from EEG besides epileptic seizures and your REM sleep state, right. With EMG, with CTRL-labs, one of the cool things is that you can also decode intention.

Ana Maiques:

That’s kind of a fun concept, right? That you don’t actually need to move your finger, right, that you just decode the intention, right? So I thought that was pretty interesting as well. Stephanie, maybe you want to comment on that. When I saw the demos from Thomas at CTRL-labs, I was very shocked by the concept that you don’t even need to move your finger, you just think about it, right, which is like a standard EEG brain-computer interface. But what are the ethical implications of that? You don’t know the sex, you don’t know the gender, you don’t know the disease. So why should you be concerned about that, Stephanie?

Stephanie Naufel Thacker:

Yeah. I won’t talk too much about their technology except to say, if they’re recording motor units, they’re recording from the peripheral nervous system. And so we have to talk about what you mean by intention. Is this something in the brain or?

Ana Maiques:

No.

Stephanie Naufel Thacker:

Exactly. I think we have to have a conversation about this, but I think the periphery is more at the level of execution. Maybe if I could drop a reference here, Stephen Rainey wrote a paper April last year which is pretty interesting on whether brain technologies are actually reading your mind. And he goes into what does it mean to mind read, are thoughts just our words because certainly we think of images and other things like that. So I would point listeners to that. It’s pretty interesting. I think these are the types of discussions that we should be having when we say intention versus execution or something else.

Ana Maiques:

But it’s not mind reading in any case, right? I mean, with a BCI headset you have to form the intention of moving your hands or your feet, right, to create an evoked potential. But you deliberately choose to think that; there’s no brain reading going on. So I think that people are scared, but the technology is not there, and I think we are years away from reading minds. That’s my thought. On the invasive side, I cannot speak for Matt and Paradromics, right. But from a noninvasive perspective, we are miles from reading minds.

Karen Rommelfanger:

I think one important point about that is that we always have to say, in every conversation, that we can’t read minds yet, because someone’s going to hear that we said that. So I always have to be very clear that we can’t read minds yet. We don’t even know what a mind is, honestly; no one agrees on what that is, and I don’t trust anyone, especially a scientist, who says they know what it is. Functionally, behaviorally, there are some interesting points. I’ve actually heard Thomas Reardon say that within 30 seconds of information from his technology, just from myographic information, he can identify anyone forever. I just say that because that isn’t mind reading, it isn’t. But if the creator of it expresses concerns about the ability to re-identify individuals, that’s not mind reading, but it is a different level of concern to address. And granted, if you sat down with them for another couple of hours, I’m sure we would tease that apart and figure out which kinds of data and how and all of that stuff.

Karen Rommelfanger:

But the point I’m trying to make is that maybe it’s not mind reading, but maybe there’s still something there to be carefully thinking through.

Matt Angle:

I think there’s an interesting analogy to genetic data here, in the sense that we don’t need to sequence your entire genome to identify you as a person; generating essentially a hash, a unique identifier for a person, can be done with much less data. It’s interesting in the sense that even though these systems may be functionally limited right now, it doesn’t necessarily mean that the data they’re collecting is limited in the scope of what it can do from a privacy standpoint. And I really want to bring Anna in here, because Anna, one of the things that you’ve been very interested in has been direct-to-consumer wellness apps, things that maybe from a legal perspective are not healthcare applications and are marketed as everyday consumer applications, but the type of data that they collect falls into what many of us would consider private healthcare data, and the overlap there.

Matt Angle:

And just a little bit about how maybe privacy laws that were written a long time ago don’t cover the new state of the field and what that might mean when brain computer interfaces start collecting even more data.
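
As a rough, hypothetical illustration of the hashing point above, the sketch below (not anything the guests have built; the feature names and numbers are made up) shows why a small handful of stable attributes can behave like a unique identifier for a person:

```python
import hashlib
import math

# How many independent yes/no attributes does it take before a profile
# is likely to be unique across the entire world population?
world_population = 8_000_000_000
features_needed = math.ceil(math.log2(world_population))
print(f"~{features_needed} independent binary features already give more "
      f"combinations than {world_population:,} people")  # prints ~33

# A small, stable subset of attributes can then act like a hash of a person.
# The feature names here are placeholders; the same logic would apply to a
# handful of SNPs, gait statistics, or spectral features of a biosignal.
profile = {"feature_01": 1, "feature_02": 0, "feature_03": 1}  # ...and so on
fingerprint = hashlib.sha256(
    "".join(str(v) for v in profile.values()).encode()
).hexdigest()[:16]
print("pseudo-identifier:", fingerprint)
```

In practice real measurements are noisy, so re-identification typically works by nearest-neighbor matching of feature vectors rather than exact hashing, but the scaling intuition is the same.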

Anna Wexler:

So I'm not sure that they're collecting what we would count as private healthcare information. I think that's a question, and I think it depends on where we want to draw the line on what sorts of things this data can reveal. Eye-tracking devices can reveal a lot if you look at them in certain contexts. We have devices that measure galvanic skin response, and that can reveal a lot. We have Fitbit, right? All these things are collecting data from the body, but just because it's data collected from the body, or data we're generating maybe unconsciously, doesn't automatically mean we should have greater protection for it. So when I think about what privacy concerns or privacy interests are at play, the first thing I think about is: what can this data reveal now? What can it reveal at the moment, and what is it likely to reveal? That's how I approach this question. And right now with EEG, I think Ana gave a really excellent overview, right?

Anna Wexler:

And I should say that the research-grade devices, not the consumer-grade devices, can tell us about epilepsy and sleep disorders. The consumer-grade devices can tell us a little bit about drowsiness, perhaps attention, but I think the problem with those devices at the moment is that they're oversold, or sold with certain misleading claims. So for me, the question at the moment isn't actually as much about privacy for those devices as about what the companies are claiming for them. But coming back to the original question, I think it is really interesting to think about what is considered protected health information and what's not. And even if you are working with protected health information, at least in the US, the fact that it's protected or private health information doesn't mean it's necessarily protected by HIPAA.

Anna Wexler:

HIPAA only covers what are called covered entities, which are certain specific kinds of entities, right? So even if you are working with health information, that doesn't necessarily mean the information comes with specific protections, because you might not be a covered entity under US law. So it's a really complicated question, I think.

Karen Rommelfanger:

Matt, one thing that makes me think of is that we're in a very weird time where, not just for neuro data but for data period, there is data out there that's publicly available, that is non-health data, that becomes health data. And that's kind of a weird switch: being able to mine social media accounts for mental health status, and even predictively. It's a really weird world, because entities like YouTube and Google and Facebook didn't set out to create a health product, but it became one. People are searching for suicidal ideation and how to manage it, people are committing suicide on Facebook Live and YouTube, and think of the types of things people are sharing on YouTube and Google. I'm on the World Economic Forum's Global Future Council on disruptive technology and mental health.

Karen Rommelfanger:

And as I understand it, many of those companies, and this is part of our thinking in conversations with those entities, are thinking about how to manage non-health information that yields health information, and about what our ethical responsibilities are in doing that. As I understand it, it has also been difficult for them to feel safe speaking to outside experts about it, people outside their companies, because of how secure they need to keep that information. I think that is new territory. And frankly, if your data is really available out there in the public domain, it's not considered protected in other ways either; you don't have to get approvals to do that kind of research. It's a tricky space, and our existing IRB guidelines on data usage and data protection don't assume a model like big data, where data is infinitely analyzable and becomes something different than it was initially through that analysis. So there are a lot of unmet issues in that space.

Matt Angle:

I think a lot of rules and norms were written at a time when the practicalities of data collection and data analysis were very different. If I think of a picture in National Geographic in the '70s of Times Square filled with people, no one in that picture had any privacy concerns, because it was going to be printed, distributed, and then put on a shelf or in a box, never to be analyzed again. But today I could take a high-resolution picture of 100,000 people in the street, and that's very easily analyzable and exists for the rest of time. It's just a different situation. And I think that's the case with all of our data, and it's about to be the case with our mental data. Do you think we're ready for that?

Karen Rommelfanger:

I don’t think we’re fully ready because we don’t fully understand the possibility yet. And I really don’t think we have all the people in the room who need to be talking to each other.

Matt Angle:

Who’s most prominently not present in these conversations that would have solutions or ask better questions?

Karen Rommelfanger:

In the brain space, the consumer.

Ana Maiques:

I mean, regardless of how Stephanie and I talk about what cool companies we have, the reality, and I just checked, Anna, is that today there are 13.9 thousand Reddit users in the tDCS community. So my kind of device is being do-it-yourselfed by a community of almost 14,000. And some of the posts I just read are like: I'm using this for depression, is it working for you? What's your advice? It's inducing euphoria, or whatever. To your point, Karen, this is already happening out there. While we are discussing, there are people doing this to themselves. So I think we need to engage them in the conversation and at least, I don't know, educate them or tell them whether it's safe or not. We cannot look the other way and assume everyone is acting within the regulated market. And this community is growing: the last time I checked, it was 11,000, and now it's 14,000.

Karen Rommelfanger:

Yeah. People define scientific excellence in different ways, but if you think about it in the sense of having high societal impact, improving wellbeing, those types of things, then you can't keep ethics and public engagement separate from it. To achieve scientific excellence, you have to have some ethics thinking in there, and you have to have public engagement. And by engagement, I don't mean a one-directional broadcasting of what's going on; there needs to be a space where you're also listening to the responses of the community, and where you're actually willing to change the direction you're going if the community gives you feedback that suggests the science would benefit from doing otherwise.

Karen Rommelfanger:

So I don't think we're very good at that. Actually, in many ways I think companies are better than academic institutions in this regard, because companies generally have a market research arm that at least has some dedication to, and a model for, pivoting their work on the technologies they put out there. But I'm not sure we're asking enough of those questions with a lot of the emerging tech we have.

Matt Angle:

Do you think it’s enough to survey customers given that customers often don’t understand the nature of the data that they’re sharing?

Karen Rommelfanger:

That's not engagement. Engagement by definition is bi-directional, so that alone doesn't work. A lot of scientists fall into this; I know very prominent scientific entities that operate around a broadcasting model of scientific information. But you can't really have a good conversation about science with someone if you don't at least give them an opportunity to get up to speed, so they can ask legitimate questions that are meaningful and get at the points they care about. A lot of people miss that at all levels of research, and it's a really significant problem that public engagement specialists identified long ago. For example, even with vaccines or the use of animals in research, you're never going to get someone to change their view just by expecting them to become enlightened like you, the scientist. It's never going to happen. You have to actually be willing to meet that person where they are and understand where they're coming from. But most of our science outreach activities are not oriented that way. So true engagement is pretty important.

Anna Wexler:

Can we come back to the privacy issue for a minute? I feel like there's a lot of conversation about what neural data can reveal, but think about the genetic context, the genetic parallel, right? There were efforts there to think about a specific privacy harm or implication in a specific context. So if you can reveal a propensity for a certain disease from genetic information, what does that mean in the context of employment? What I don't feel has been well specified in the neuroscience privacy debate is: what is it that we're afraid of? Is there something we think neural data is going to reveal in the near future that we are concerned about? Because if we can specify that, then we can take action, but I haven't heard a lot of very specific, unique claims, right? There are issues with regard to re-identification and data privacy generally, but that's a concern I have for everything I do.

Anna Wexler:

And I certainly don't think that's well addressed. I don't think it's well addressed for my data now, and to me that's the big gap. It's not that I'm not concerned about privacy for those things; it's that it's not being done well now. And so I haven't really heard very clearly what we are concerned neural data can reveal, and in which contexts.

Matt Angle:

I think one of the concerns people have is that we live in a framework that's implicitly built around mind-body duality and assumes a certain privilege for people's thoughts: that those thoughts will only be known if they choose to express them, and that their mind is essentially a privileged entity that exists outside of the material world. That's not the practical viewpoint neuroscientists operate under when they're physically working with the brain and trying to decode things. A lot of neuroscientists probably have one code they live by every day and another they use to scientifically interrogate the brain. But certainly when people are developing devices for the brain, they're doing it from a very materialist perspective. And so I think some of the anxiety that I can express myself, and that I've certainly heard elsewhere, is that neural data breaks that a little bit, and there aren't enough existing structures in place for when you can break through that barrier.

Anna Wexler:

What timeframe do you think we’ll be able to break through that?

Matt Angle:

I think we're already there, to be totally honest with you. I don't know whether it's correct that you can fingerprint people based on their EMG signal, but certainly there will be data that is perhaps not interesting enough to drive the cyborg applications of the future but is enough to identify whether someone is predisposed to Parkinson's or other diseases. I think you're going to find biomarkers, and I don't think you're going to need new technology to do that. That's something that drives my personal curiosity in BCI ethics and-

Anna Wexler:

So that's really helpful, right? Because there are three different things to unpack there: a biomarker of disease, thought reading, and something like a fingerprint, so unique identification.
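
On the fingerprint piece, here is a minimal sketch of what signal-based identification can look like, using entirely synthetic recordings and crude band-power features. This is not how any of the companies mentioned here identify users; it only illustrates why even a "boring" physiological signal can single a person out:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 250.0                      # assumed sampling rate in Hz (made up)
t = np.arange(0, 10, 1 / fs)    # ten seconds per recording

def band_power_features(signal: np.ndarray) -> np.ndarray:
    """Crude spectral 'fingerprint': relative power in a few frequency bands."""
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    bands = [(1, 4), (4, 8), (8, 13), (13, 30), (30, 80)]
    feats = np.array([power[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in bands])
    return feats / feats.sum()  # normalize so overall amplitude doesn't dominate

def simulate_person(dominant_hz: float) -> np.ndarray:
    """Stand-in for a real recording: noise plus a person-specific rhythm."""
    return np.sin(2 * np.pi * dominant_hz * t) + 0.5 * rng.standard_normal(t.size)

# "Enrollment": one template per simulated person.
people = {"person_a": 6.0, "person_b": 11.0, "person_c": 22.0}
templates = {name: band_power_features(simulate_person(hz)) for name, hz in people.items()}

# A fresh recording from person_b (new noise) is identified by nearest template.
probe = band_power_features(simulate_person(people["person_b"]))
match = min(templates, key=lambda name: np.linalg.norm(templates[name] - probe))
print("best match:", match)  # expected: person_b
```

Real systems use far richer, often learned features, but the template-matching idea is the same, which is why identification does not require anything close to mind reading.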

Matt Angle:

And I think thought reading isn't necessarily thought reading in the sense of grabbing a complete snapshot of one's consciousness; perhaps it's simply being able to bin someone's state of mind into a convenient category.

Anna Wexler:

Which arguably if you believe what EEG measures are supposed to be correlated with, you can do now.

Matt Angle:

For instance, you might be able to tell if an employee is not paying attention during a meeting. That’s not exactly thought reading, but it’s also data that that employee might want to keep to themselves.
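
To make that concrete, here is a purely illustrative sketch of the kind of single-number inference being described, using the theta/beta band-power ratio, one metric that appears in the attention literature. The EEG here is synthetic, and whether a number like this validly captures "paying attention in a meeting" is exactly the validity concern raised later in the conversation:

```python
import numpy as np

fs = 256.0                      # assumed EEG sampling rate in Hz (made up)
t = np.arange(0, 10, 1 / fs)    # ten-second synthetic segments
rng = np.random.default_rng(1)

def band_power(eeg: np.ndarray, lo: float, hi: float) -> float:
    """Summed power of a single-channel segment within [lo, hi) Hz."""
    freqs = np.fft.rfftfreq(eeg.size, d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2
    return float(spectrum[(freqs >= lo) & (freqs < hi)].sum())

def crude_attention_score(eeg: np.ndarray) -> float:
    """Theta/beta ratio: higher values are often read as lower engagement.
    Treating this one number as 'is the employee paying attention' is
    exactly the leap whose validity is in question."""
    return band_power(eeg, 4, 8) / band_power(eeg, 13, 30)

# Two synthetic segments: one dominated by beta activity, one by theta.
beta_heavy = np.sin(2 * np.pi * 20 * t) + 0.5 * rng.standard_normal(t.size)
theta_heavy = np.sin(2 * np.pi * 6 * t) + 0.5 * rng.standard_normal(t.size)
print(f"theta/beta, beta-dominant segment:  {crude_attention_score(beta_heavy):.2f}")
print(f"theta/beta, theta-dominant segment: {crude_attention_score(theta_heavy):.2f}")
```

Consumer products layer proprietary processing on top of something like this, which is part of why the claims are hard to evaluate from the outside.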

Anna Wexler:

So for me, I'll say it's super helpful to actually break this down, because privacy is such a big word, and thoughts, these are all such big concepts. I think the considerations for each of the things you mentioned are worth talking about, and they're worth talking about in the context of the technologies being developed, because different technologies are going to be able to get at each of those things in different ways, right? Are we coming at it from the medical space or the consumer space, what are we revealing, what are the protections for those things, and where do they live? When we go big into thought protection and the mind, I feel like that's not as helpful as getting concrete: what are the concerns, in what context, and using which technology?

Karen Rommelfanger:

Anna, that's actually what I'm really concerned about in current conversations advocating for the uniqueness of neural data and for special regulations: it usually rests on hyped arguments oriented toward mind reading, and I'm really worried about the damaging effects of that. I wish Nita Farahany were here. I've been waiting for a paper of hers to be published, and maybe it has and I just haven't found it, but she actually did research asking American populations about their perceptions of their data, how careful they thought they should be with it, and which kinds of data they cared about. And frankly, they didn't really seem to care uniquely about the neural data at all. They didn't explain why; it might be that they just didn't have a sense of what it was or what could be interpreted from it. But they cared a lot about breaches that would let people know their social security numbers, their credit card numbers, their text messages. Those are the types of things they were worried about.

Karen Rommelfanger:

So I think it's important, like Anna was saying, to link the capability of those technologies with the types of things that are tangibly out there in the world that people are concerned about, and to circumscribe that within what those technologies can actually do now. The reason I still think we need to take extra care and consideration with neural technologies even early on, though, is that this is a technology apprehending a unique kind of information, and it will likely get better, not worse. If you get into the habit of not worrying about it and just giving away your information, all kinds of information, all the time, because you think it's fun and don't really think about the other implications, then I think it can be damaging later, because we won't have anticipated what could happen when you have more detailed information.

Karen Rommelfanger:

And part of that is related to biosensing in the real world, around predictive brain health and a number of things: being able to use sensors and accelerometers and all kinds of data that people are actually freely sharing now through Apple Health agreements. Apple is turning into a health company, but to users it's just a fun thing, tracking their steps and helping them lose weight or whatever else they think it does. I feel there's not enough awareness and communication about what those technologies could become. And it may be the case that nobody cares. I also suspect this is going to differ culturally; in fact, that's what I study. And to your point about mind-brain-body dualism, my research has shown that scientists and doctors in neurology and psychiatry are really strong dualists in their practice, in the way they see patients, how they stigmatize them, and how they make assumptions about their care.

Karen Rommelfanger:

So scientists aren't really different, even though they work on the nuts and bolts; even those who explicitly profess that they are not dualists, and have the vocabulary to say so, will still manifest that way in their work. One of the things I was trying to do, for a particular disease I was working on, was to interview colleagues in East Asia who have a baseline philosophical and historical orientation around oneness: the idea that the mind, brain, and spirit are one entity and inseparable. In fact, there isn't even the vocabulary to suggest that they're separate. Well, I ended up landing mainly with professionals working as neurologists, psychiatrists, or scientists; I really didn't get to move into talking to the community. And what I learned was that there is this kind of allegiance to Western ideology and training in medicine and science that is even eclipsing some of the traditional medicine offerings a lot of these countries had.

Karen Rommelfanger:

So that kind of bias was filtering in there as well. It's interesting, but as far as privacy goes, attitudes seem to be oriented not necessarily around mind-body dualism or collectivism versus individualism, but around the history of the word itself. The original translation of privacy in Chinese and in Japanese is "shameful secret", so why would you need to have one? Whereas the Western trajectory of the definition has been more about celebrating your ability to present yourself as you want, something that was later called a right. I think as technology evolves on a global scale, those kinds of considerations will have to be taken into account.

Matt Angle:

You brought up the idea of diverse perspectives on this. I'm curious if we can just quickly go around; I'd really like to know what type of data you feel particularly sensitive about that maybe a lot of your peers and other people you meet don't feel as sensitive about. I'll go first just to break the ice. Not to target any particular company, but on certain social media sites I am uncomfortable posting my daughters' faces, because I don't like the idea of that technology company building a model of their faces and identifying and tagging them. I want them to be able to grow up without that burden, but everyone in my family thinks I'm a complete weirdo, and they're probably right. So I'm curious: do you have some data that you feel is close to you, that's personal?

Anna Wexler:

I would say, even knowing what I know about the potential privacy implications, what scares me the most is how much I trade off my privacy for convenience with all my data. I have an Alexa, and I know what's happening with the data; I mean, I know what the media knows, right? I'm sure there's a lot more going on with data from Alexa. I have a smart home, so I'm concerned about all of my data, but I'm most concerned that I'm trading off that privacy for convenience, and I'm doing it knowingly and totally willingly.

Stephanie Naufel Thacker:

Since I started thinking about these issues, I've started reading EULAs, end user license agreements. I don't know if people read these, but they're actually quite interesting. I bought a $30 activity tracker over the Christmas break and read the EULA, and it was pretty funny and enlightening. It said: if you log into the app, we'll probably share your data with our server; otherwise, no. So I chose not to log in. And then it had this funny line: if you are uncomfortable with our data practices, you should talk to the regulators. It was unclear who the regulators are. But I've been approaching this in terms of, do I know what's going to happen with my data? And then I just make the choice after that.

Karen Rommelfanger:

I wanted to purchase a Roomba, but it was too expensive, so I got a knockoff. There was an option to get the smart version, and I didn't get it, because I thought, I don't want that thing connected to the internet and responding and all of that. But I will say, in general I'm a lot like, gosh, there's this update on my computer, I just need to agree to it, because what else am I going to do? As far as those privacy agreements, Stephen Schueller runs PsyberGuide, and there have been a lot of conversations looking at the privacy agreements in a lot of apps. A lot of them are just cut and paste; they don't even apply to the new apps they're attached to. But he scans a lot of them, and if you're interested in a mental health app, you can go to PsyberGuide and get a sense of how transparent those privacy agreements are and how they line up.

Matt Angle:

You three have so far given very interesting answers, but you've actually not answered the question that I asked, and maybe you don't feel comfortable, and that's fine too.

Ana Maiques:

Matt, I would agree with you. I kind of hesitate every time I upload a picture of my kids to my Instagram or Facebook. I do feel that this is not my life to share, in a way; I'm okay giving my own face and my data away for convenience or whatever, but I'm very protective of my kids, so I don't share a lot of pictures of them.

Anna Wexler:

Matt, to answer you, so your question was what specific kind of data?

Matt Angle:

Is there something that's close to you? There are things that are common to everyone, but is there something close to you where your friends would say, Anna, why are you being weird about this? Who cares, just give them the data. I'm curious whether there's something you feel is tied up with your identity or your situation that you like to hold close, that other people don't care about as much, because I think it's different for different people. Something Karen was saying that really resonates with me is that this concept of privacy, private data, privileged data, is not universal.

Karen Rommelfanger:

I think that in general, I am careful about the possibility, even when that possibility seems remote, because even with agreements around certain kinds of information, it can easily be sold to the next company if the first one folds or different decisions happen. So I think in terms of portals for my information, and sometimes I'm more disciplined about not allowing those portals. So I don't have a Nest, I don't have smart appliances.

Matt Angle:

Would you be comfortable with, just as an example, your voice? Did anyone watch The Little Mermaid? The cost of doing business for Ariel is that she has to give her voice to the sea witch. What about voice recognition that builds a model of your voice, which maybe could be used in a generative model, so that perhaps one day you're riding the bus and the voice telling you the next stop is Charing Cross turns out to be your own? Would that feel personal to you?

Karen Rommelfanger:

There was a woman who recreated a deceased friend through old text messages and emails, and she has an app now that one can use to speak to the dead, in a way. And then there's that Korean VR experience where a mother who had tragically lost her young daughter had images of her, and her voice, recreated from the information they had available. I'm surprised we didn't get more buzz about that in the US; if it had been done here, I think we would have heard a lot more about it. But in Korea it was developed as a way for this woman to cope with the loss of her daughter. And had she not given all that information, she wouldn't have gotten the closure she found in that experience, and it was heartbreaking. I mean, in the video, her daughter tells her not to cry.

Karen Rommelfanger:

It's a birthday party for her. I can't think of how it could have been more heartbreaking: a birthday party for her deceased daughter, who is telling her mom not to cry and to let her go. And I was like, oh my God, would I want to have that experience? I guess I was thinking of your pictures of your kids on the internet and the videos that people share. Some people would really want that and be grateful for something in that light. It's not a marketing tool necessarily at this point, but I think about that, and about the woman who developed a technology that recreates the dead from the text messages and whatever else you upload into it, and I ask: would I feel violated if someone did that with me? I mean, I spend a lot of my days sending email messages; people could get a pretty good picture of at least my professional life from that.

Karen Rommelfanger:

But yeah, I don't stop doing it, even though I know that's out there. Would I be happy if that were done? I'd really want to be involved in the conversation about that maybe happening.

Anna Wexler:

Yeah. I think for me it would depend on what the trade-off was, what sort of convenience you were giving me in exchange for my voice. I would like to say that I wouldn't do it, but if it were a really awesome trade-off, I might, especially considering that if somebody wanted to imitate my voice, there are probably enough videos of my lectures or whatever on YouTube that they could do it now anyway, if they really wanted to. So some of that data is out there already. I think it would depend on the trade-off, but I guess I don't have one specific thing that I keep private that my peers don't. Given that I have a smart home and I'm an early adopter, I'd say that when it comes to certain things, I probably give away more information than most.

Ana Maiques:

But I think that depends on the context, because I'm now totally addicted to Clubhouse, right? And in Clubhouse, your voice is you, right? And there are brands that are really thinking about voices. So I don't know, maybe your perception of voice, with these new social networks based on voice, I think it may change if you start using Clubhouse like crazy.

Matt Angle:

Ana, are you on Clubhouse? I’m not on Clubhouse.

Ana Maiques:

I’m super addicted. It’s just amazing.

Matt Angle:

Can you tell the audience what that is because not everyone’s part of the tech illuminati. And so not everyone will know.

Ana Maiques:

Just to give you a sense: Friday night, somebody opens a room in Clubhouse and you drop in, right? It's like a radio format, but with rooms. Somebody like Matt opens a room; there was a room called "neuromodulation in Alzheimer's" and I'm like, wow. Then you get into the room, and there are like two moderators who opened it, and if you raise your hand or they know you, you become a speaker. So it's like listening to the radio, but with the chance to speak, and we ended up being 450 people from all over the world talking about neuromodulation in Alzheimer's on a Friday night. Just amazing. And you find these really cool people.

Ana Maiques:

And I think it's because it's the beginning; maybe as more people come, the quality goes down. But you just share rooms with amazing people and jump from room to room, and you can be doing your dishes and just listening, or working, or whatever. And voice, Anna, is so important, because when journalists are in Clubhouse and they run a room, you can tell by their voice, right, that they are super professional. So brands are starting to think they should have voices to represent them in Clubhouse.

Karen Rommelfanger:

Yeah, we totally do. I had a colleague who worked with voice data to help predict disease and actually recently sold his company. He talked with us a little bit about how interesting it was that ownership of your voice data varied from state to state. If you were leaving a message on someone else's phone and somebody was in the background, would you own it, or would you have to ask that person for permission? It wasn't uniform across states. So running a company where you're collecting voice data and trying to do that kind of work is not so simple.

Matt Angle:

There's another topic with brain data that I'm curious to get your opinions on, if you have one, and that's the trend of medicalizing the human experience. There's been a greater appreciation recently of cognitive diversity, but I think there's a tension there. If I take something like ADHD, on one hand I think there would be a lot of neurologists and neuroscientists who would be skeptical of that label and see attention as a natural spectrum. And on the other hand, there are advocates for people for whom attention deficit is a kind of disability in the learning environment, who want to make sure there's a label that can be applied to get children, in particular, extra time taking tests. And a whole ecosystem has arisen out of that.

Matt Angle:

All of that is based on a kind of behavioral assessment of people, trying to bin them at a behavioral level. What happens when we start having greater access to brain data, and we start correlating brain data with those labels, and start developing new labels based on diagnostic technology and even just consumer technology? What direction do we think that will push this? Do we have any predictions?

Karen Rommelfanger:

I would say the medicalization piece, or the blurred line between health and wellness, is something I'm really worried about, mainly in the sense that I worry about the way the quantified-self movement has pervaded so many things: this idea that everything about you can be quantified with a metric, which starts diminishing a richer perception of experience. Part of that is going to be thinking about what cognitive diversity looks like, because I feel there are so many ways that's already constrained by social norms, and many people don't benefit from that; many are actually disenfranchised by it. I think technology can be a positive tool in helping predict disease and in thinking about how to intervene early, but it also requires a lot of care, making sure you're not boxing someone in and forecasting a future they don't have a part in.

Anna Wexler:

I would say that my concern in the near term is more about the validity of those metrics, because looking at the direct-to-consumer space, there are a lot of products marketed to consumers that claim to provide some measure of engagement or attention, but how they're actually extrapolating that from the EEG is a definite black box. So I'm concerned that, at least in the near term, they're being sold to consumers but are not actually accurate measures of attention. And then looking a little further ahead, if these tools do get taken up in, say, workplace settings or school settings, I have concerns about how those measures will be used. And underneath it all, there's still the question of whether they're valid and what we're extrapolating from them. Those are my concerns right now with regard to those kinds of measures.

Ana Maiques:

To me as an entrepreneur, I think I'll take it one step at a time, because otherwise it's impossible. So when I develop an EEG biomarker for epilepsy, or for my depression trial, or whatever, I'll be asking myself these ethical questions and making sure the data cannot be used in another way. But honestly, we are not there yet. I think we have to think about it, but there are no such correlations I can extract right now; I need massive amounts of data that I don't have yet, and I need to go through the FDA to validate that diagnostic tool. So honestly, I think we need to apply some of these questions and principles in our daily work, but I cannot get that crazy about it, because I don't see clear signs that I can extract that kind of meaningful data from the patients and subjects in my studies right now. Right.

Ana Maiques:

I just have to make sure that they can send the data, that I make good use of the data, that it's de-identified and so on and so forth, and that I protect the patients; it's their data, right?

Stephanie Naufel Thacker:

Yeah. I think it’s about grounding it in the science of what we can actually understand, maximizing transparency and having a clear vision of what is going to be done with these biomarkers or outcomes or results.

Matt Angle:

Stephanie, Ana, Anna, Karen, thank you so much for your time today.

Karen Rommelfanger:

Thanks Matt. We had a great time.

Ana Maiques:

We still need to run a neuroethics room on Clubhouse. Come on, I invite you all.

Karen Rommelfanger:

I think we should.

Ana Maiques:

I will. I promise.

Karen Rommelfanger:

Okay. I’ll do it.

Stephanie Naufel Thacker:

Thanks Matt.

Karen Rommelfanger:

Okay. Bye.

Ana Maiques:

Thank you everyone.

Matt Angle:

Bye.


Neurotech Pub

Matt Angle, PhD, CEO of Paradromics invites expert guests to the ‘Pub to discuss neurotechnology and brain computer interfaces