Brain Recording, Mind-Reading, and Neurotechnology: Ethical Issues from Consumer Devices to Brain-Based Speech Decoding

  • Original Research/Scholarship
  • Open access
  • Published: 30 April 2020
  • Volume 26, pages 2295–2311 (2020)


  • Stephen Rainey 1 ,
  • Stéphanie Martin 2 ,
  • Andy Christen 2 ,
  • Pierre Mégevand 2 &
  • Eric Fourneret 3  


Brain reading technologies are rapidly being developed in a number of neuroscience fields. These technologies can record, process, and decode neural signals. This has been described as ‘mind reading technology’ in some instances, especially in popular media. Should the public at large be concerned about this kind of technology? Can it really read minds? Concerns about mind-reading might include the thought that, in having one’s mind open to view, the possibility for free deliberation, and for self-conception, is eroded where one isn’t at liberty to privately mull things over. Themes including privacy, cognitive liberty, and self-conception and expression appear to be areas of vital ethical concern. Overall, this article explores whether brain reading technologies are really mind reading technologies. If they are, ethical ways to deal with them must be developed. If they are not, researchers and technology developers need to find ways to describe them more accurately, in order to dispel unwarranted concerns and address appropriately those that are warranted.



Introduction

This paper will explore ethical issues arising from neural technologies in terms of mind-reading. The term ‘mind-reading’ has been used to describe the mechanisms employed by brain–computer interfaces (BCIs), and neural decoding using neurotechnologies. In the philosophy of mind, the mind refers to mental states (imagination, emotions, intentions, perception, decision making, etc.), and with brain interfacing technologies, neuroscience is now able to highlight some correlations between mental states and cerebral activity. There is thus some material basis for the mind.

However, the access to some material basis of mental states remains piecemeal and does not embrace simultaneously all aspects of the mind. In other words, neural correlates remain physical imprints of the expression of the mind, but are not sufficient to be thought of as constituting the whole mind itself. Confusion should be avoided between mind and piecemeal thoughts, and between reading mind and reading some neural imprints of thoughts. In particular, neural prostheses may allow reading neural correlate fragments of mental states but not the whole mind on its global scale. The extent to which all the pieces of thoughts that can be decoded from neural recordings constitute whole thoughts thus remains unclear. First, we address this question generally, and then more specifically with reference to the context of a speech BCI. For the speech BCI we ask: to what extent might speech prostheses allow access to our thoughts?

While people in general are quite reliable in ‘reading the mind’ of one another, according to familiar behavioural and linguistic cues, they are actually mainly able to infer another’s thoughts based on the signs the other person externalises intentionally (subconscious and other such ‘tells’ notwithstanding). Another’s inner thinking, in particular, remains inaccessible; it is possible only to make predictions about it. A technological turn within this familiar practice excites ethical concern, since the type and content of information that one may access with a ‘mind-reading’ device may strongly diverge from human inferences based in more traditional interpersonal methods. Technology, perhaps, makes a tacit claim to be objective in a way that much interpersonal interpretation does not. Putting one’s mind in the realm of objective legibility may, for this reason, appear to involve more jeopardy than putting it in familiar, social, fallible realms. The specific notion of technologically-mediated mind-reading is apparently a particular kind of concern, and so it requires a specific discussion.

In order to structure the enquiry as we go forward, we need to investigate:

What BCI and neural decoding can currently do, and what may be possible soon

Ethical issues in current and future neurotechnology

Speech neuroprosthesis as a mind-reading device

We will then ask, further:

How ought we to treat these ethical issues?

What further analysis is needed?

By getting a handle on the technological capabilities of neurotechnologies, like BCIs, we can realistically frame the ethical concerns that may arise. Likewise, in order to consider how ethicists ought to react to emerging issues, or pre-empt likely future issues, a clear picture of what happens in brain decoding contexts is necessary. Funding bodies, and the researchers they fund, have specific responsibilities here. In constructing calls for research, funders steer efforts in specific directions. In creating these technologies, researchers are relied upon to work responsibly, and to communicate clearly the nature of their work, both in terms of present capability and likely future scenarios. Ethicists need clear pictures of what is happening with funding strategies, and technology development, in order to reflect upon and respond to it. Ultimately, such reflections may go on to seed policy advice, as well as colour public perceptions. This is vital for a clear perspective on the research ecosystem, including how this impinges upon wider socio-political realities. We will explore this generally, as well as going deeper into a speech prosthesis as a special case. This is considered especially interesting given the proximity between much thought, and language.

What BCI and Neural Decoding Can Currently Do, and What May be Possible Soon

At the most general level, neurotechnologies work by recording electrical activity in the brain, and applying various processes to the outputs obtained. Recording can happen within the brain itself via macroscopic or microscopic intracerebral/intracortical probes, on its surface with electrocorticography (ECoG), or from non-invasive electro- or magnetoencephalography (EEG/MEG) recording devices positioned over the head. All types of brain recordings can be correlated with a variety of physical and cognitive activity.

Given the ongoing activity in the field of brain-reading and attempts to correlate this work with mental states, it is important to remain vigilant of social, legal and policy dimensions of primary research and concurrent technology development. The nature of the individual as an agent in their own right, a locus of intentional action, may be challenged in the development of technologies that appear to read minds, whether or not they actually read minds (Mecacci and Haselager 2019 ).

Reading the mind, like reading a book, implies something about mind’s being potentially open to view. This would mark a radical departure from conventional accounts of one’s mind as accessible only to oneself. In a mind-reading context, one person might gain access to another’s ideas, thoughts, intentional, emotional, perceptual states, or their memories. This might be done with or without permission. It could offer the promise of exciting new modes of communication, self-expression, and mutual understanding. Often the stuff of science fiction, this prospect can have alarming dimensions concerning who might have access to the mind, as well as implications for how persons might be judged. In a world of mind-reading ought a person to be judged in terms of what they reveal voluntarily, or what can be read from their thoughts?

For example, since 2013 it has been known that detection of a specific type of signal (the ‘P300’ wave) can play a role in ‘spying’ on brain activity to extract confidential information. This can be done with subliminal cues, perhaps to gain information predicting personal beliefs. Researchers constructed a game and recorded the brain activity of its players. These signals could be processed to elicit details about bank PINs and related private information without the player knowing (Ienca et al. 2018). This was done by recording brain activity during the game, and processing the signal to search for P300 waves in response to hidden cues. Brain data is thus clearly highly sensitive: it can house information that a subject may not wish to externalise, but which may nevertheless become accessible to others, in specific situations, using neurotechnology.
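The logic of a P300-style probe can be sketched in a few lines. The following is a toy illustration on synthetic data, not the protocol from Ienca et al.: it assumes that ‘probe’ stimuli (say, digits of the subject’s own PIN) evoke a larger positivity around 300 ms than irrelevant stimuli, and simply compares averaged responses in that window.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic single-channel EEG: 200 one-second epochs sampled at 250 Hz.
fs = 250
t = np.arange(fs) / fs
n_epochs = 200

# Assumption for illustration: roughly a quarter of stimuli are 'probes'
# that evoke a positive deflection peaking near 300 ms; the rest do not.
is_probe = rng.random(n_epochs) < 0.25
p300 = 5e-6 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05**2))  # ~300 ms bump (volts)
noise = 2e-6 * rng.standard_normal((n_epochs, fs))
epochs = noise + np.where(is_probe[:, None], p300, 0.0)

# Average epochs per condition and compare amplitude in a 250-350 ms window.
window = (t >= 0.25) & (t <= 0.35)
probe_erp = epochs[is_probe].mean(axis=0)
other_erp = epochs[~is_probe].mean(axis=0)
score = probe_erp[window].mean() - other_erp[window].mean()

print(f"probe-minus-irrelevant amplitude: {score:.2e} V")
```

A positive score flags which stimuli the brain ‘recognised’, which is exactly what makes such concealed-information probes ethically fraught: the response is involuntary.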

This point is raised quite acutely where neurotechnology would have applications in the legal sphere. Meegan (2008) discusses law enforcement applications of memory detection. Recognition of a scene, or an object, can be the sort of thing detectable in neural activity regardless of claims overtly made. As far as memory-reading goes, this might be seen as a litmus test—the idea of ‘guilty knowledge’ as a smoking gun in a courtroom. Would memories that are stored, but are not being reinstated at the present moment, be available to the mind-reader? This is a neuroscience question about how memories are stored, and about the difference between a memory that has been stored and one that is being reinstated. It is also an ethical question, however, in that it has ramifications for what limits we ought to apply in treating memories as readable in machine-like terms.

Through recording signals from various regions of the brain, research has suggested that quite fine-grained information can be partly read from brain activity. Motor plans, visual imagery, percepts such as faces (Chang and Tsao 2017), speech (Akbari et al. 2018), decisions and intentions, landmark places, and moods can all be predicted from neural recordings (Haynes et al. 2007; Kay et al. 2008; Roelfsema et al. 2018, p. 13; Sani et al. 2018). Existing research technologies can be used to decode the neural correlates of mental images too, the things seen by a person. In controlled circumstances, identification algorithms operating on fMRI data can pick the image viewed by an experimental participant from a known set of exemplars. Experiments here can achieve over 90% accuracy (Kay et al. 2008). The idea of mental privacy certainly seems to be challenged by these kinds of activities. Such results appear to demonstrate that mental content can be ‘read off’ from brain measurements. This implies that though someone may be certain that they have unique, privileged access to their own thoughts, that certainty can be misplaced (Eickhoff and Langner 2019; Farah et al. 2009; see Mecacci and Haselager 2019).
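The identification paradigm behind results like Kay et al.’s can be caricatured as follows. This is a deliberately simplified sketch on synthetic data, not their actual encoding model: it assumes a model has already produced a predicted voxel pattern for each image in a known candidate set, and identifies the viewed image as the candidate whose prediction best correlates with the observed response.

```python
import numpy as np

rng = np.random.default_rng(1)

n_voxels, n_images = 100, 20

# Assumed setup: an encoding model yields a predicted voxel-response
# pattern for each image in a known candidate set (here: random patterns).
predicted = rng.standard_normal((n_images, n_voxels))

# The participant views image 7; the measured fMRI pattern is its
# predicted pattern plus measurement noise.
true_idx = 7
observed = predicted[true_idx] + 0.5 * rng.standard_normal(n_voxels)

# Identification: choose the candidate whose prediction correlates best
# with the observed pattern.
corrs = [np.corrcoef(observed, p)[0, 1] for p in predicted]
identified = int(np.argmax(corrs))
print("identified image:", identified)
```

Note what this does and does not show: the image is selected from a known, finite list; nothing here reconstructs an arbitrary visual experience from scratch, which is the point made in the following paragraph.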

If we want to focus on mind reading as a point of reference for ethical concerns surrounding neurotechnology, we can ask of the technologies and techniques mentioned here: Is this mind-reading? We would be compelled to answer, not exactly. In terms of the approaches to identifying mental images, for instance, the experimental protocol operates on the basis of a modelled receptive field, and activation data for sets of images. The images decoded from the fMRI data are selected from a known list, and represented as matching patterns of data. This is detailed and interesting work, illuminating much of how visual representations work in the visual system. But it isn’t the case that, in an uncontrolled environment, a device can reconstruct the visual experience of a given individual.

In the legal example, what can be said is that the techniques involve careful attention to specific neural activity in specific contexts. A memory can’t simply be ‘read’ as one could read a sentence on a page. This kind of memory detection exploits associations among known stimuli and evoked neural signals in order to warrant inferences about a subject’s past experiences or perceptions, like recognition of a particular image. If, when shown a crime scene, my brain exhibits a response associated with familiarity it may indicate that I was there.

Clearly, there are risks and potential for false positives with this kind of approach. On the other hand, it seems equally clear that the idea of accessing the real content of memory, or downloading a set of memories, doesn’t come up. But this does not mean that ethical problems do not arise. Where some practice might be taken as mind reading, we ought not to be too complacent in having ruled out ‘real’ mind reading on a technicality. An approach sensitive to ethical and socio-political realities is required in order to deal with the possibilities for pseudo-mind reading, in which people may fall prey to bad practices.

Ethical Issues in Current and Future Neurotechnology

To the extent that neurotechnologies embody somehow a claim that the mind may be open to view, they each raise ethics concerns relating to a range of issues, including mental privacy. Relating to this too, is a concern over the reduction of mental states to sets of neural data. We will get into more detail on these and the related areas of cognitive liberty and self-conception. Before delving into these functional issues arising from the use of neurotechnology, something should be said about the presentation of neurotechnology.

Outside the research lab, a variety of BCIs is already available commercially, including products like Cyberlink, Neural Impulse Actuator, Enobio, EPOC, and Mindset (Gnanayutham and Good 2011). The potential prospects for applications based on these types of technology are interesting (Mégevand 2014). However, the plausibility of technological claims ought to be carefully scrutinised.

While the detection of neural signals is in principle easy, identifying them is difficult (Bashashati et al. 2007 ). A lot of research effort aims at improving detection and recording technology. This should help to improve the prospects for identifying recorded neural signals. Identification is centrally relevant to mind reading in that the signals recorded must be correlated somehow with mental states. It is ethically relevant too, not least owing to the prospects of misidentifying mental states via inappropriately processed brain recordings, or through misrepresenting the nature of the recording taking place.

Brain signals can be sorted into types. Recording sites can be classified in functional ways—visual, motor, memory, language areas, for example. That types of signals in specific areas appear to be ‘behind’ our conscious activity suggests that activity ought to be classifiable in a quite objective way. At least some neurotechnological development paradigms would suggest that this was the case: claims have been made about the kinds of technologies discussed above as ‘accessing thoughts’, ‘identifying images from brain signals’, ‘reading hidden intentions’ (Haynes et al. 2007 ; Kay et al. 2008 ). Attending to the brain signals means getting to the mental content, these claims suggest.

But this may be a case of overclaiming. It seems as if a great deal more information than is captured through measuring brain signals is required if meaningful inferences about thought content are to be drawn from them. For example, Yukiyasu Kamitani carried out experimental work aimed at ‘decoding dreams’ from functional magnetic resonance imaging (fMRI) data. Media reports presented this work as if dreams were simply recorded from sleeping experimental participants (Akst 2013; Revell 2018). But in reality, 30–45 hours of interviews per participant were required in order to classify a small number of objects dreamt of. This is impressive neuroscience experimentation, but it isn’t just a ‘reading of the brain’ to ‘decode a dream’. Interview is an interesting supplement to brain signal recording because it specifically deals in verbal disclosures about the experience of mental states.

When it is reported that Facebook or Microsoft will develop a device to allow users to operate computers with their minds or their thoughts (Forrest 2017; Solon 2017; Sulleyman 2018), this is perhaps too extravagant a claim. While many consumer devices are marketed as ‘neurotechnology’, it is implausible that they actually operate via detecting and recording brain signals (Wexler and Thibault 2018). Far more likely is that such devices respond to electrical activity in the muscles of the face, where the signals are maybe 200 times as strong as those in the brain, and much more closely positioned to the device’s electrodes. In all likelihood, doing something like typing with such a device exploits micro-movements made when thinking carefully about words and phrases. Muscles used in speaking those words are activated as if preparing to speak them, hence corresponding to them in a way that can be operationalised into a typing application. Indeed, this is the stated mode of operation for MIT’s ‘AlterEgo’ device (Kapur et al. 2018; Whyte 2018).

Overclaiming is an ethical issue as it can undermine confidence in neurotechnologies in at least two ways: failing to deliver by misrepresenting technologies, and serving to raise undue hopes and concerns. This builds on a misleading representation of how a device works, and its prospects as an effective technology. There are ethical implications from this in terms of user consent in using a device. There may be varying degrees of deception at work, given this sort of misrepresentation, that could affect how we ought to consider the potential uptake and use of devices, whether by experimental participants or consumers.

Drawing on the dream decoding example, we have reason to think that the objective recording of brain signals is insufficient as an account of a mental state precisely in that it has no experiential dimension. Thoughts occur within an internal model of the world from a particular point of view. This model cannot be straightforwardly generalised from subject to subject based on brain signal observation. Only specific dimensions of this model can be inferred, limited in terms of predictability, and only after large amounts of training in contexts of rigorous research conditions. The objective promise of recording brain signals might be exactly what cuts them off from the mind, which includes a subjective perspective.

The possibility of a too-zealous reduction of the mind to some neural data arises here as an ethical concern. ‘Mental’ concepts can bear discussion without reference to ‘neuroscientific’ concepts (also vice versa). How each might relate to natural kinds is an open question (Churchland 1989 ). There is therefore a ubiquitous question of interpretation to be remembered, as the interplay between mind and brain is considered. The thought-experiment of a ‘cerebroscope’ serves to highlight this.

The cerebroscope is a notional device that records all activity of all neurons in the brain on a millisecond by millisecond basis. With this total representation of neural activity, the question is whether we have a representation of the mind. Steven Rose suggests not—the nature of the brain as an evolving, plastic entity, means that millisecond by millisecond resolution of neural activity is not intelligible without a total map of the genesis of those active neurons and their connections:

…for the cerebroscope to be able to interpret a particular pattern of neural activity as representing my experience of seeing [a] red bus, it needs more than to be able to record the activity of all those neurons at this present moment, over the few seconds of recognition and action. It needs to have been coupled up to my brain and body from conception—or at least from birth, so as to be able to record my entire neural and hormonal life history. Then, and only then, might it be possible for it to decode the neural information. (Choudhury and Slaby 2016 , p. 62ff)

We should be careful in considering these sorts of issues when it comes to thinking of mind-reading. It might be thought that the mind is akin to a space through which a putative mind-reader could walk, examining what is to be found there. But Steven Rose’s point suggests a more situated kind of mind, reliant upon its genesis as well as its state at some moment in time. The point being made is that even were one to perceive the thought of another somehow, it could only be understood as a subjective thought, not as an objective thought had by another.

Relatedly, Mecacci and Haselager (2019) discuss some philosophical ideas that relate to the privacy of ‘the mental’. They describe a perspectivalism from A. J. Ayer regarding mental states, prioritising the privacy of the mind and its contents. Such a view would also appear to rule out mind-reading, since mental states require a particular perspective: they appear not as objects in a mental space potentially open to view, but as private contents of a specific mind.

Misrepresentation of technology, and reductionism, each appear to be dimensions of ethical importance in themselves. But a little more analysis of each shows them to lead to a broader set of ethical issues in neurotechnology. Where mental privacy is threatened, cognitive liberty may suffer. ‘Cognitive liberty’ includes the idea that one ought to be free from brain manipulation in order to think one’s own thoughts (Sententia 2006 ). This concept often arises in the context of neuro-interventions in terms of law or psychiatry, or neuroenhancement (Boire 2001 ). Here, it is most salient in connection with a potential loss of mental privacy.

Where mental privacy is uncertain, it is not clear that someone may feel free to think their own thoughts. Where measurements of brain activity may be taken (rightly or wrongly) to reveal mental contents, neurophysiology itself could be seen as a potential informant on thought itself. This would be to uproot very widely assumed notions about a person’s unique and privileged access to their own thought. If a keen diarist was to become aware that their diary could be read by another, they might begin to write less candid or revealing entries. If anyone became sure that measurements of their brain might reveal any of their mental contents, how might they refrain from having candid and revealing thoughts? This would amount to a deformation of normal ways of thinking, in rather a distressing way.

With this distressing possibility, the very idea of self-conception too is threatened. Where mental privacy concerns lead to inhibition of cognitive liberty, it would not be certain that one might feel free to reflect upon values, decisions, or propositions without threat of consequences. Considering ethically dubious thoughts, even if one considered them only to develop ways to refute them, might become dangerous where the content of the thought might be read from the activity of the brain. Faced with technology that appears to read minds, it seems ethical risks are posed by that technology in representing the mind as open to view.

Part of what it is to have a mind, and to be an agent at all, able to act on one’s reasoned opinions, includes reflection. This might mean that we wish to consider things we wouldn’t do, run through options we may disdain, or otherwise wish to reject. If we were to find ourselves in a context where mental contents were thought of as public, this reflective practice could suffer. Especially where such mental data might be held to be a more genuine, unvarnished, account than one offered in spoken testimony. This might build upon the principle at stake in the ‘guilty knowledge’ example from above. A chilling effect on thinking itself could materialise owing to the possibility of very intimate surveillance via brain recording.

The mediation of thoughts, ideas, deliberations, into actions is part of autonomous agency and self-representation. The potential for indirectly representing such things in one’s action is part of what makes those actions one’s own. Where a mind-reading device could be imagined as ‘cutting through’ the mediation to gain direct access to mental contents, this would not necessarily make for a more accurate representation of a person. Nor might it underwrite a better explanation of their actions than an explanation they might volunteer. At the heart of this is the privacy of mental activity, and the space this allows us to deliberate. Nita Farahany has called this a ‘right to cognitive liberty’ (Farahany 2018 ).

The privacy of deliberation is very important in providing room for autonomy, and substance for agency. As has been mentioned, inner mental life can be characterised to a greater or lesser extent through one’s behavioural cues. The difference between the reluctant carrying out of a task, as opposed to an enthusiastic embracing of the same, is often fairly obvious. But indirect assessment of someone’s state of mind through their activities is a familiar, fallible, and well-established interpersonal practice. The idea that objective data might be used to directly characterise an attitude, once and for all, serves to undermine the role of agency. A decision to act represents a moderation of impulses, reasons, and desires. If a mind-reading device were deployed it would represent a claim on the real state of a person’s mind, certainly. But this could serve to downplay the fact that a person’s action is more complex than simply the outcome of a neural process.

Thinking of the cerebroscope example, this is akin to the decontextualisation of neural recordings discussed there. The nature of the signals represented may make little sense outside of a biographical story. They may be likely, thereby, to misrepresent the person recorded. The fact that extensive testimony played such a large part in the dream reading experiment seems to back up this thought-experimental conclusion.

More broadly, it is important to discuss the purposes to which mind-reading devices are put. For instance, wearing a cast on a broken arm displays some dimensions of a person’s physiological state. However, this is of low concern because no one stands to gain from ‘stealing’ such information. But what is problematic is the potential for the misuse of people’s thoughts, choices, or preferences as inferred from neurotechnology. Even if thoughts are not accessible by a technology, ethical issues arise where they are taken to be so. With a commercialisation of neurotechnology as ‘mind reading’ technology, these potentialities multiply, as technology may be deployed where there is no particular need. This leaves open a question about what purposes the technology may be used for, and by whom. A potential diversity of technologies, actors, purposes, and stakes makes for a complex picture.

The socio-political ramifications of widespread neural recording could be deep. From these recordings, detailed predictions can be made about private, intimate aspects of a person. For those with access to it, this data will be a valuable asset. Facebook’s intended brain–computer interface, permitting seamless user interfaces with their systems would not only record and process brain signals, but associate the data derived from them with detailed social media activity (Robertson 2019 ). This would represent a valuable resource, providing rich links between overt actions and hitherto hidden brain activity. This kind of detailed neuroprofiling will likely be taken to be as unvarnished and intimate an insight into a person as it is possible to acquire. To the extent that this is accurate, new dimensions of understanding people through their brains might be opened. As with the political micro-targeting scandals involving Facebook and Cambridge Analytica, this data can also enable personal manipulation, as well as social and political damage (Cadwalladr and Graham-Harrison 2018 ).

At the personal level, databases that associate not only behavioural but also brain data represent serious risks for privacy and for wider dimensions relating to dignity. The kinds of profiling they would enable would risk marginalising individuals and groups, while eroding solidarities among diverse groups. This happened in the run-up to Brexit, based on covert psychometric profiling, and has caused lasting social damage (Collins et al. 2019; Del Vicario et al. 2017; Howard and Kollanyi 2016). Targeting information at specific individuals or groups based on neural data would represent a new front in data-driven marketing or political campaigning, enabling novel, more sinister, and perhaps harder to deflect, forms of manipulation (Ienca et al. 2018; Kellmeyer 2018).

These examples focus upon how information can be leveraged for specific effects. Where neuroprofiling converges with advancing technology, direct neural-based manipulation also arises as a potential concern. Among the types of neurotechnology already available for research and for consumer purposes are those that use brain data to control software and hardware, those that display data for users’ purposes as neurofeedback, and those that seek to modify brain activity itself. These neurostimulation or neuromodulation devices use data derived from the brain to modulate subsequent brain activity, regulating it according to some desired state (Steinert and Friedrich 2019). This is quite a clear challenge to autonomy. Outside of an ethically regulated context such as that of a university research lab, this ought not to be taken lightly. Market forces are not self-evidently sufficient for ensuring the responsible marketing, and use, of such potentially powerful devices.

The kinds of concerns being discussed here are not based in mind-reading per se, but rather in effects likely to occur in the context of widespread neurotechnology use. Beyond the market context however, in the realm of ongoing research, at least one sort of mind-reading might appear to be technically possible in a limited sense at least. Following analysis of this case, we will be well placed to take a position on the ethical concerns that have arisen across a variety of applications from those where mind-reading is not the central effect to one in which it would be most likely.

Speech Neuroprosthesis as a Mind-Reading Device

A first impression might be that ‘thought’, to the extent that it can be ‘in words’, is substantially linguistic. Not all thought is verbal: images, sounds, smells, and so on can be brought to mind as well. But significant dimensions of thought, such as internal monologue or inner speech, are readily conceivable as thinking in words (Perrone-Bertolotti et al. 2014).

This does not sound so far away from some of the explanation of human consciousness provided by Dennett ( 1993 ). On his account, augmentations upon abilities and instincts evident in many animal species are at least partly realised in human beings through linguistically borne ‘microhabits of thought’. For Dennett, this is what turns a brain into a mind . If language plays these kinds of roles, perhaps even being constitutive of minds as we know them as Dennett appears to suggest, ‘inner’, ‘silent’, or ‘covert’ speech may be very close to mental contents. What’s more, these kinds of non-externalised speech signals can be recorded from the brain. In the recording of covert speech, there is some prima facie possibility of technology-mediated thought-reading.

Whereas in natural speech the vocal cords create a vibration that is modified by the vocal tract to create a word (or phoneme, or syllable), a neural-based speech processor takes neural signals as input, applies a modifying function, and creates a new signal as output. Such systems record the neural signals associated with vividly imagined, but unverbalised, speech, and translate these signals into intelligible speech without any need for peripheral nerve or muscle activation. Currently, several strategies are being investigated to determine the best speech representation to decode for this type of speech interface.

One strategy is to classify the neural activity into a finite number of choices. Several studies have shown that it is feasible to decode discrete units of speech, such as phonemes (Brumberg et al. 2011; Ikeda et al. 2014; Pei et al. 2011) or words (Martin et al. 2016), during covert speech.
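As a toy illustration of this classification strategy (not a reconstruction of any published decoder), the sketch below assigns a covert-speech trial to one of a finite set of words by nearest mean feature vector. All feature values are invented stand-ins for features extracted from neural recordings.

```python
# Toy sketch: classify covert-speech trials into a finite set of words
# by nearest mean feature vector. All data are made-up stand-ins for
# features (e.g. band power) extracted from neural recordings.

def centroid(trials):
    """Mean feature vector of a list of equal-length trials."""
    n = len(trials)
    return [sum(t[i] for t in trials) / n for i in range(len(trials[0]))]

def classify(trial, centroids):
    """Return the label whose centroid is closest (Euclidean) to the trial."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(centroids, key=lambda label: dist(trial, centroids[label]))

# Hypothetical training trials: features recorded while a subject
# covertly produced each of two words.
training = {
    "yes": [[0.9, 0.1, 0.2], [1.1, 0.0, 0.3]],
    "no":  [[0.1, 0.8, 0.9], [0.2, 1.0, 1.1]],
}
centroids = {word: centroid(trials) for word, trials in training.items()}

print(classify([1.0, 0.1, 0.2], centroids))  # closest to the "yes" centroid
```

The finite label set is what makes this tractable: the decoder never produces arbitrary speech, only one of the words it was trained to distinguish.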

If every mental state correlates with, or is realised by, a neural mechanism, then reading signals from the brain ought to allow access to mental states, including covert speech states. Covert speech seems a contentful medium, and one that really could be decoded in a mind-reading scenario. In terms of research-grade neurotechnology, under controlled conditions, devices that are triggered by covert speech activity could be triggered by mentalised speech not intended for externalisation (Bocquelet et al. 2016). Considering further decoding techniques, especially the use of artificial neural nets, this could be further compounded, as neural activity associated with types of covert speech activity might be discerned in a way that bypasses the user’s intentions.

Building software that directly maps neural activity to any speech representation remains difficult, however, due to the lack of any measurable behavioural output during covert speech. An alternative solution exploits the fact that imagined, covert speech has neural correlates like those of overt speech (Bocquelet et al. 2016; Chakrabarti et al. 2015). As such, it becomes possible to build a decoding model from an overt speech condition, and then apply this decoder in the covert speech condition to reconstruct acoustic speech features (Martin et al. 2014). Studies demonstrate the feasibility of decoding basic speech features from neural signals during covert speech, but also emphasize the difficulty of extracting the patterns accurately. This illustrates how far we currently are from developing a sci-fi mind-reading device.
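The transfer strategy just described can be caricatured in a few lines: fit a decoder on paired neural and acoustic data from an overt speech condition, then apply it to neural data alone in the covert condition. The one-dimensional linear model and all numbers below are invented for illustration; real decoders operate on high-dimensional spectrotemporal features.

```python
# Sketch of the overt-to-covert transfer strategy: fit a decoder mapping
# a neural feature to an acoustic feature during *overt* speech, then
# apply the same decoder to neural data recorded during *covert* speech.
# All numbers are invented; this is not any published model.

def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Overt condition: neural feature paired with a measured acoustic feature.
overt_neural = [0.0, 1.0, 2.0, 3.0]
overt_acoustic = [1.0, 3.0, 5.0, 7.0]   # behaves like 2*x + 1
a, b = fit_linear(overt_neural, overt_acoustic)

# Covert condition: only neural data exist; reconstruct the acoustic feature.
covert_neural = [1.5, 2.5]
reconstructed = [a * x + b for x in covert_neural]
print(reconstructed)  # [4.0, 6.0]
```

The key assumption, flagged in the text above, is that the overt mapping still holds when the speech is only imagined; in practice that assumption holds only partially, which is one source of the decoding inaccuracy mentioned.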

In principle, more brain signals than intended could be recorded in the kind of system just outlined. From any recorded signal, features of relevance must be extracted such that they create an appropriate source for the modifying function. Means of determining speech-relevant source signal features might include the use of machine learning, using probability functions for each phoneme in a given language (Amodei et al. 2016; Hinton et al. 2012). This kind of approach would recognise language-relevant neural signals in terms of a mapping between neural signal and likely phonetic correlates.
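A minimal sketch of what such a per-phoneme probability function might look like, assuming entirely hypothetical classifier scores for a tiny phoneme set: a softmax turns raw scores into a probability for each phoneme.

```python
# Sketch of a per-phoneme probability function: hypothetical raw scores
# from some neural-signal model are turned into probabilities via a
# softmax. The scores and the phoneme set are invented for illustration.

import math

def phoneme_probabilities(scores):
    """Map raw per-phoneme scores to probabilities (softmax)."""
    m = max(scores.values())                      # subtract max for stability
    exps = {p: math.exp(s - m) for p, s in scores.items()}
    total = sum(exps.values())
    return {p: e / total for p, e in exps.items()}

scores = {"p": 2.0, "b": 1.0, "m": -1.0}          # hypothetical model outputs
probs = phoneme_probabilities(scores)
best = max(probs, key=probs.get)
print(best)  # "p" receives the highest probability
```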

Recalling the relations between thought and speech, it seems possible that a too-sensitive speech device, based on covert speech, could externalise some parts of a person’s internal monologue. In some sense at least, this could be a case of mind-reading: perhaps not as generally represented in sci-fi, but nonetheless an example of internal monologue being externalised by technical means. One of the main conceptual, technological, and ethical difficulties here is to distinguish the covert speech that should be externalised from that which should not.

What’s more, with the inclusion of machine learning, language models could be integrated such that phonemes in a language could be predicted based on the model. This would mean that, as well as brain signals, a language model adds a predictive dimension to the speech prosthesis system. In principle, the system could ‘guess’ the words to be spoken before the biosignals that would coincide with the phonetic signal are realised. The prediction could be done well but, in being based on neural signals and model-based predictions, nevertheless occur in the absence of a decision to speak out loud. This could be as if the system were speaking on the user’s behalf, perhaps undertaking delegated action without express permission (Rainey 2018).
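To make the predictive dimension concrete, here is a minimal sketch under stated assumptions (a tiny invented phoneme corpus, not a real language model): even a bigram model can rank likely next phonemes before any corresponding neural evidence arrives.

```python
# Illustration of the predictive dimension discussed above: a bigram
# language model over phonemes can rank likely continuations ahead of
# the neural evidence. The tiny corpus is hypothetical.

from collections import Counter, defaultdict

corpus = ["h-e-l-l-o", "h-e-l-p", "h-e-n"]   # words as phoneme strings

bigrams = defaultdict(Counter)
for word in corpus:
    phones = word.split("-")
    for prev, nxt in zip(phones, phones[1:]):
        bigrams[prev][nxt] += 1

def predict_next(prev_phone):
    """Most likely next phoneme given the previous one, with its probability."""
    counts = bigrams[prev_phone]
    total = sum(counts.values())
    phone, n = counts.most_common(1)[0]
    return phone, n / total

print(predict_next("e"))  # "l" follows "e" in two of the three corpus words
```

The ethical point tracks the code: `predict_next` fires on the model alone, with no guarantee that the user has decided to say anything.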

In any case of speech prediction, there could be the problem that the system could externalise something not intended at all by the user, not as thought or as speech. Even where a robust system of retraction was in place, there would be a risk that erroneous speech was taken as that of the user. This could amount to a challenge to their first-person authority.

So, in terms of imagined speech, there is an obvious risk in principle. The nature of the recording and decoding, in being triggered by covert speech, could feasibly result in more speech being externalised than expected or desired. This could be because of the way triggering works, as based in brain signals and predictions from language models, prior to conscious decisions to act (Glannon 2016, p. 11). This raises some prospect of thought-reading, based on covert speech involuntarily captured by a brain signal recording.

How Should These Ethical Issues be Treated?

User control over neurotechnologies would appear to be of great importance in mitigating the potential mind-reading risks to privacy, autonomy, agency, and self-representation. A fine-grained ability for the user to select what exactly is output by such devices would be a good start. Besides this, some ability to retract actions mediated via brain-controlled devices ought to be built in. This ‘veto control’ (Steinert et al. 2018) would allow a practical distinction to be made between brain recording-related disclosures that are deliberate and those that are not. This is most obviously illustrated with reference to a speech device. In terms of a speech neuroprosthesis, speech action and the output of involuntary or other proto-speech act elements (e.g. thinking things through verbally) ought to be strictly user-controllable. Speech that the user intends to broadcast verbally should be clearly distinguishable from inner speech that the user does not want to broadcast. The user ought to have strict control over this distinction.
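A minimal sketch of how veto control might look in software, with an entirely hypothetical interface (no real device exposes this API): decoded output sits in a pending buffer, and nothing is broadcast without explicit user confirmation.

```python
# Hypothetical sketch of 'veto control' for a speech neuroprosthesis:
# decoded text is held pending, and is broadcast only on explicit user
# confirmation; a veto discards it unspoken. The class and example
# utterances are invented for illustration.

class SpeechGate:
    def __init__(self):
        self.pending = None
        self.broadcast_log = []

    def propose(self, decoded_text):
        """Decoder output enters a pending state; it is not yet spoken."""
        self.pending = decoded_text

    def confirm(self):
        """Explicit user confirmation releases the pending utterance."""
        if self.pending is not None:
            self.broadcast_log.append(self.pending)
            self.pending = None

    def veto(self):
        """User veto discards the pending utterance unspoken."""
        self.pending = None

gate = SpeechGate()
gate.propose("I would like some water")   # speech the user intends
gate.confirm()
gate.propose("I hope she leaves soon")    # inner speech caught by the decoder
gate.veto()
print(gate.broadcast_log)  # only the confirmed utterance was broadcast
```

The design choice here mirrors the text: the default is silence, so involuntary captures fail safe rather than being externalised.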

More than this, however, regulatory systems must be put in place to anticipate neurotechnology-specific issues. These will concern not only how neurotechnologies are presented, but also how they work, and what sorts of applications they ought to be circumscribed from. For instance, medical device regulation and data protection regulation are each likely deficient when it comes to consumer neurotechnologies (Allison et al. 2007; McStay and Urquhart 2019). Devices of that sort are not medical, yet they might operate on health-relative neural functions, and record and transmit health-relative data. The developers of brain technologies ought to, as part of their product or application development, maintain active links with policymakers in order that appropriate regulation can be framed.

To illustrate, it is likely that private companies will drive much neurotechnology development, even in assistive applications. To some extent, some users will thereby be relying upon those private companies in order to be able to live a fuller life, whereas others will use devices more recreationally. How assessments may be made of this kind of distinction in action, between those who cannot act but for a device and those who merely choose so to act, represents a novel issue. Policymaking will be required for scene-setting around the introduction of devices that introduce this distinction, highlighted by ethical analysis. This would be a useful, and ethically sensitive, means of anticipating near-future issues in conjunction with technology development.

What Further Analysis is Needed?

The technology to routinely and accurately record all of the brain signals required to reconstruct something like a stream of consciousness is not yet here. Nevertheless, neurotechnology is a burgeoning field, and techniques, materials, technologies, and theories are being refined apace. Anticipation of future developments ought to become a live research ethics focal point in neuroscience and related labs, in order to avoid a ‘delay fallacy’, as discussed in Mecacci and Haselager (2019).

Given the sorts of high-stakes possibilities described here, we might do well, in developing neurotechnologies, to consider the benefits proposed applications will deliver. If we can answer the question why do we want this neurotechnology now, we may have good reason to proceed. If we cannot, we may have good reason to pause. ‘We’ here will include a variety of actors, it should be noted. Asking and answering the question why will likely be a very widespread discussion, drawing upon a variety of expertise and social, political, legal, and ethical resources. That such a discourse is so complex ought in itself to indicate the pressing nature of questions surrounding neurotechnological advance.

Specifically in terms of the thought-reading speech neuroprosthetic case discussed here, and other such assistive neurotechnologies, the question why is most clearly answerable. Where disability or disadvantage can be alleviated well with technology there is a strong case to be made for its development. Ethical issues that do arise cluster around the concept of control, in order to protect the volition of technology users. These concerns can be mitigated by sensitivity constraints within the system, and veto control whereby a user can halt entirely the synthetic speech emanating from their speech device. Conceptual analysis of the nature of responsibility ought to be used to inform technological development in terms of device activation, control, and veto, in order to ensure voluntariness remains central in device use. These relate to device-centred concerns that may emerge. On a wider perspective, how outputs from devices are received by audiences, are dealt with in law and policy, and feature in social perspectives, requires some thought.

In relation to user control over neurotechnologies in general, developers should ensure that any BCI affords the user as much control as possible, with a focus on reliably distinguishing between intentional triggering and neural activity merely sufficiently like it to cause device activation. These kinds of ethical dimensions, even in the more clear-cut case of neurotechnology for virtuous purposes, illustrate likely areas where subsequent problems could arise. Legal ramifications of devices not sufficiently and demonstrably in the control of users are likely to arise where ethical issues surrounding responsibility for technology-mediated action are not treated as the technology develops. Where a user relies upon their device, moreover, it will be vital that this somehow be taken into account in terms of the functioning of the device.

A further area likely to require more ethical, and legal, analysis will be that of data. Neurotechnologies will operate on the basis of a great deal of brain-derived data. This is sensitive material, from which a range of health and other personal information can be inferred. Yet the relations between data and persons require further clarity (Rainey et al. 2019). In some senses we are our data, but to a substantial degree we are not, being merely represented by it in particular ways, relative to the purposes for which it was collected, the means used toward that collection, the mode of storage, and so on. How this works is a matter in need of debate, as illustrated by issues surrounding the use of Big Data (Bollier and Firestone 2010; Boyd and Crawford 2012).

At any rate, we ought not to proceed with neurotechnology developments that will raise data questions and only then try to work them out. Too much potential risk of different kinds would attend that approach. Especially where databases including brain-derived data are already being created, the very existence of such resources is a problem where no clear conceptualisation of their nature is available. Data privacy is emerging as a collective concern (Véliz 2019). As the science advances, it is through interdisciplinary, highly reflexive, and inclusive discussion that policy, legal, and social norms can be kept up to date. These technologies represent challenges to which we ought to respond in constructive ways, in order that researchers and citizens alike be safeguarded.

Akbari, H., Khalighinejad, B., Herrero, J., Mehta, A., & Mesgarani, N. (2018). Towards reconstructing intelligible speech from the human auditory cortex. BioRxiv . https://doi.org/10.1101/350124 .


Akst, J. (2013). Decoding dreams. The Scientist Magazine . https://www.the-scientist.com/notebook/decoding-dreams-39990 . Accessed 10 Oct 2018.

Allison, B. Z., Wolpaw, E. W., & Wolpaw, J. R. (2007). Brain–computer interface systems: Progress and prospects. Expert Review of Medical Devices, 4 (4), 463–474.

Amodei, D., Ananthanarayanan, S., Anubhai, R., Bai, J., Battenberg, E., Case, C., Casper, J., Catanzaro, B., Cheng, Q., Chen, G., Chen, J., Chen, J., Chen, Z., Chrzanowski, M., Coates, A., Diamos, G., Ding, K., Du, N., Elsen, E., et al. (2016). Deep speech 2: End-to-end speech recognition in English and Mandarin. In International conference on machine learning (pp. 173–182). http://proceedings.mlr.press/v48/amodei16.html . Accessed 6 Nov 2018.

Bashashati, A., Fatourechi, M., Ward, R. K., & Birch, G. E. (2007). A survey of signal processing algorithms in brain–computer interfaces based on electrical brain signals. Journal of Neural Engineering, 4 (2), R32. https://doi.org/10.1088/1741-2560/4/2/R03 .

Bocquelet, F., Hueber, T., Girin, L., Savariaux, C., & Yvert, B. (2016). Real-time control of an articulatory-based speech synthesizer for brain computer interfaces. PLoS Computational Biology, 12 (11), e1005119. https://doi.org/10.1371/journal.pcbi.1005119 .

Boire, R. G. (2001). On cognitive liberty. The Journal of Cognitive Liberties, 2 (1), 7–22.


Bollier, D., & Firestone, C. M. (2010). The promise and peril of big data (pp. 1–66). Washington, DC: Aspen Institute, Communications and Society Program.

Boyd, D., & Crawford, K. (2012). Critical questions for big data. Information, Communication & Society, 15 (5), 662–679. https://doi.org/10.1080/1369118X.2012.678878 .

Brumberg, J. S., et al. (2011). Classification of intended phoneme production from chronic intracortical microelectrode recordings in speech-motor cortex. Frontiers in Neuroscience . https://doi.org/10.3389/fnins.2011.00065 .

Cadwalladr, C., & Graham-Harrison, E. (2018). The Cambridge analytica files. The Guardian , 6–7. http://davelevy.info/Downloads/cabridgeananalyticafiles%20-theguardian_20180318.pdf . Accessed 21 Mar 2019.

Chakrabarti, S., Sandberg, H. M., Brumberg, J. S., & Krusienski, D. J. (2015). Progress in speech decoding from the electrocorticogram. Biomedical Engineering Letters, 5 (1), 10–21. https://doi.org/10.1007/s13534-015-0175-1 .

Chang, L., & Tsao, D. Y. (2017). The code for facial identity in the primate brain. Cell, 169 (6), 1013–1028.e14. https://doi.org/10.1016/j.cell.2017.05.011 .

Choudhury, S., & Slaby, J. (2016). Critical neuroscience: A handbook of the social and cultural contexts of neuroscience . New York: Wiley.

Churchland, P. S. (1989). Neurophilosophy toward a unified science of the mind brain . Cambridge: MIT Press.

Collins, D., Efford, C., Elliot, J., Farrelly, P., Hart, S., Knight, J., et al. (2019). Disinformation and ‘fake news’ (Vol. 8, p. 111). London: The Digital, Culture, Media and Sport Committee.

Del Vicario, M., Zollo, F., Caldarelli, G., Scala, A., & Quattrociocchi, W. (2017). Mapping social dynamics on Facebook: The Brexit debate. Social Networks, 50, 6–16. https://doi.org/10.1016/j.socnet.2017.02.002 .

Dennett, D. C. (1993). Consciousness explained (New Ed ed.). London: Penguin.

Eickhoff, S. B., & Langner, R. (2019). Neuroimaging-based prediction of mental traits: Road to utopia or Orwell? PLoS Biology, 17 (11), e3000497. https://doi.org/10.1371/journal.pbio.3000497 .

Farah, M. J., Smith, M. E., Gawuga, C., Lindsell, D., & Foster, D. (2009). Brain imaging and brain privacy: A Realistic Concern? Journal of Cognitive Neuroscience, 21 (1), 119–127. https://doi.org/10.1162/jocn.2009.21010 .

Farahany, N. (2018). When technology can read minds, how will we protect our privacy? https://www.ted.com/talks/nita_farahany_when_technology_can_read_minds_how_will_we_protect_our_privacy . Accessed 28 Nov 2018.

Forrest, C. (2017). Facebook planning brain - to - text interface so you can type with your thoughts . TechRepublic. https://www.techrepublic.com/article/facebook-planning-brain-to-text-interface-so-you-can-type-with-your-thoughts/ .

Glannon, W. (2016). Ethical issues in neuroprosthetics. Journal of Neural Engineering, 13 (2), 021002. https://doi.org/10.1088/1741-2560/13/2/021002 .

Gnanayutham, P., & Good, A. (2011). Disabled users accessing off-the-shelf software using a button interface. In Paper presented at computer science and information systems, 7th annual international conference . Athens.

Haynes, J.-D., Sakai, K., Rees, G., Gilbert, S., Frith, C., & Passingham, R. E. (2007). Reading hidden intentions in the human brain. Current Biology, 17 (4), 323–328.

Hinton, G., Deng, L., Yu, D., Dahl, G., Mohamed, A., Jaitly, N., et al. (2012). Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine, 29 (6), 82–97. https://doi.org/10.1109/MSP.2012.2205597 .

Howard, P. N., & Kollanyi, B. (2016). Bots, #Strongerin, and #Brexit: Computational propaganda during the UK – EU referendum (SSRN Scholarly Paper ID 2798311). Social Science Research Network. https://papers.ssrn.com/abstract=2798311 . Accessed 21 Mar 2019.

Ienca, M., Haselager, P., & Emanuel, E. J. (2018). Brain leaks and consumer neurotechnology. Nature Biotechnology, 36, 805–810. https://doi.org/10.1038/nbt.4240 .

Ikeda, S., Shibata, T., Nakano, N., Okada, R., Tsuyuguchi, N., Ikeda, K., et al. (2014). Neural decoding of single vowels during covert articulation using electrocorticography. Frontiers in Human Neuroscience . https://doi.org/10.3389/fnhum.2014.00125 .

Kapur, A., Kapur, S., & Maes, P. (2018). AlterEgo: A personalized wearable silent speech interface. In 23rd International conference on intelligent user interfaces (pp. 43–53).

Kay, K. N., Naselaris, T., Prenger, R. J., & Gallant, J. L. (2008). Identifying natural images from human brain activity. Nature, 452 (7185), 352–355. https://doi.org/10.1038/nature06713 .

Kellmeyer, P. (2018). Big brain data: On the responsible use of brain data from clinical and consumer-directed neurotechnological devices. Neuroethics . https://doi.org/10.1007/s12152-018-9371-x .

Martin, S., Brunner, P., Holdgraf, C., Heinze, H.-J., Crone, N. E., Rieger, J., et al. (2014). Decoding spectrotemporal features of overt and covert speech from the human cortex. Frontiers in Neuroengineering, 7, 14. https://doi.org/10.3389/fneng.2014.00014/full .

Martin, S., Brunner, P., Iturrate, I., Millán, J. R., Schalk, G., Knight, R. T., et al. (2016). Word pair classification during imagined speech using direct brain recordings. Scientific Reports, 6, srep25803. https://doi.org/10.1038/srep25803 .

McStay, A., & Urquhart, L. (2019). ‘This time with feeling?’ Assessing EU data governance implications of out of home appraisal based emotional AI. First Monday . https://doi.org/10.5210/fm.v24i10.9457 .

Mecacci, G., & Haselager, P. (2019). Identifying criteria for the evaluation of the implications of brain reading for mental privacy. Science and Engineering Ethics, 25 (2), 443–461. https://doi.org/10.1007/s11948-017-0003-3 .

Meegan, D. V. (2008). Neuroimaging techniques for memory detection: Scientific, ethical, and legal issues. The American Journal of Bioethics, 8 (1), 9–20. https://doi.org/10.1080/15265160701842007 .

Mégevand, P. (2014). Telepathy or a painstaking conversation in morse code? Pierre Mégevand goes beyond the media hype. PLOS Neuroscience Community . http://blogs.plos.org/neuro/2014/09/08/telepathy-or-a-painstaking-conversation-in-morse-code-pierre-megevand-goes-beyond-the-media-hype/ . Accessed 16 Aug 2018.

Pei, X., Barbour, D., Leuthardt, E. C., & Schalk, G. (2011). Decoding vowels and consonants in spoken and imagined words using electrocorticographic signals in humans. Journal of Neural Engineering, 8 (4), 046028. https://doi.org/10.1088/1741-2560/8/4/046028 .

Perrone-Bertolotti, M., Rapin, L., Lachaux, J.-P., Baciu, M., & Lœvenbruck, H. (2014). What is that little voice inside my head? Inner speech phenomenology, its role in cognitive performance, and its relation to self-monitoring. Behavioural Brain Research, 261, 220–239. https://doi.org/10.1016/j.bbr.2013.12.034 .

Rainey, S. (2018). ‘A steadying hand’: Ascribing speech acts to users of predictive speech assistive technologies. Journal of Law and Medicine, 26 (1), 44–53.

Rainey, S., Bublitz, J. C., Maslen, H., & Thornton, H. (2019). Data as a cross-cutting dimension of ethical importance in direct-to-consumer neurotechnologies. AJOB Neuroscience, 10 (4), 180–182. https://doi.org/10.1080/21507740.2019.1665134 .

Revell, T. (2018). Mind-reading devices can now access your thoughts and dreams using AI. New Scientist . https://www.newscientist.com/article/mg23931972-500-mind-reading-devices-can-now-access-your-thoughts-and-dreams-using-ai/ . Accessed 16 Oct 2018.

Robertson, A. (2019). Facebook just published an update on its futuristic brain - typing project . The Verge. https://www.theverge.com/2019/7/30/20747483/facebook-ucsf-brain-computer-interface-typing-speech-recognition-experiment . Accessed 13 Aug 2019.

Roelfsema, P. R., Denys, D., & Klink, P. C. (2018). Mind reading and writing: The future of neurotechnology. Trends in Cognitive Sciences, 22, 598–610.

Sani, O. G., Yang, Y., Lee, M. B., Dawes, H. E., Chang, E. F., & Shanechi, M. M. (2018). Mood variations decoded from multi-site intracranial human brain activity. Nature Biotechnology, 36 (10), 954–961. https://doi.org/10.1038/nbt.4200 .

Sententia, W. (2006). Neuroethical considerations: Cognitive liberty and converging technologies for improving human cognition. Annals of the New York Academy of Sciences, 1013 (1), 221–228. https://doi.org/10.1196/annals.1305.014 .

Solon, O. (2017). Facebook has 60 people working on how to read your mind. The Guardian . https://www.theguardian.com/technology/2017/apr/19/facebook-mind-reading-technology-f8 . Accessed 14 Nov 2018.

Steinert, S., Bublitz, C., Jox, R., & Friedrich, O. (2018). Doing things with thoughts: Brain–computer interfaces and disembodied agency. Philosophy & Technology . https://doi.org/10.1007/s13347-018-0308-4 .

Steinert, S., & Friedrich, O. (2019). Wired emotions: Ethical issues of affective brain–computer interfaces. Science and Engineering Ethics . https://doi.org/10.1007/s11948-019-00087-2 .

Sulleyman, A. (2018). Mind-reading headset allowing people to control computers with their thoughts described in Microsoft patent. The Independent . https://www.independent.co.uk/life-style/gadgets-and-tech/news/mind-reading-headset-computer-control-thoughts-microsoft-patent-a8163976.html . Accessed 14 Nov 2018.

Véliz, C. (2019). Privacy is a collective concern . https://www.newstatesman.com/science-tech/privacy/2019/10/privacy-collective-concern . Accessed 23 Oct 2019.

Wexler, A., & Thibault, R. (2018). Mind-reading or misleading? Assessing direct-to-consumer electroencephalography (EEG) devices marketed for wellness and their ethical and regulatory implications. Journal of Cognitive Enhancement . https://doi.org/10.1007/s41465-018-0091-2 .

Whyte, C. (2018). Mind-reading headset lets you Google just with your thoughts. New Scientist . https://www.newscientist.com/article/mg23731723-300-mind-reading-headset-lets-you-google-just-with-your-thoughts/ . Accessed 14 Nov 2018.


Funding was provided by Horizon 2020 Framework Programme (Grant No. 732032), Schweizerischer Nationalfonds zur Förderung der Wissenschaftlichen Forschung (Grant No. #167836).

Author information

Authors and Affiliations

Uehiro Centre for Practical Ethics, University of Oxford, Oxford, UK

Stephen Rainey

Department of Basic Neurosciences, Faculty of Medicine, University of Geneva, Geneva, Switzerland

Stéphanie Martin, Andy Christen & Pierre Mégevand

Braintech Lab (U 1205), Université Grenoble Alpes, Grenoble, France

Eric Fourneret


Corresponding author

Correspondence to Stephen Rainey .

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Rainey, S., Martin, S., Christen, A. et al. Brain Recording, Mind-Reading, and Neurotechnology: Ethical Issues from Consumer Devices to Brain-Based Speech Decoding. Sci Eng Ethics 26 , 2295–2311 (2020). https://doi.org/10.1007/s11948-020-00218-0


Received : 08 July 2019

Accepted : 16 April 2020

Published : 30 April 2020

Issue Date : August 2020

DOI : https://doi.org/10.1007/s11948-020-00218-0


  • Neuroethics
  • Mind reading
  • Neuroprosthetics
  • Neuroscience
  • Neurotechnology


The Science of Mind Reading

By James Somers


One night in October, 2009, a young man lay in an fMRI scanner in Liège, Belgium. Five years earlier, he’d suffered a head trauma in a motorcycle accident, and since then he hadn’t spoken. He was said to be in a “vegetative state.” A neuroscientist named Martin Monti sat in the next room, along with a few other researchers. For years, Monti and his postdoctoral adviser, Adrian Owen, had been studying vegetative patients, and they had developed two controversial hypotheses. First, they believed that someone could lose the ability to move or even blink while still being conscious; second, they thought that they had devised a method for communicating with such “locked-in” people by detecting their unspoken thoughts.

In a sense, their strategy was simple. Neurons use oxygen, which is carried through the bloodstream inside molecules of hemoglobin. Hemoglobin contains iron, and, by tracking the iron, the magnets in fMRI machines can build maps of brain activity. Picking out signs of consciousness amid the swirl seemed nearly impossible. But, through trial and error, Owen’s group had devised a clever protocol. They’d discovered that if a person imagined walking around her house there was a spike of activity in her parahippocampal gyrus—a finger-shaped area buried deep in the temporal lobe. Imagining playing tennis, by contrast, activated the premotor cortex, which sits on a ridge near the skull. The activity was clear enough to be seen in real time with an fMRI machine. In a 2006 study published in the journal Science , the researchers reported that they had asked a locked-in person to think about tennis, and seen, on her brain scan, that she had done so.

With the young man, known as Patient 23, Monti and Owen were taking a further step: attempting to have a conversation. They would pose a question and tell him that he could signal “yes” by imagining playing tennis, or “no” by thinking about walking around his house. In the scanner control room, a monitor displayed a cross-section of Patient 23’s brain. As different areas consumed blood oxygen, they shimmered red, then bright orange. Monti knew where to look to spot the yes and the no signals.

He switched on the intercom and explained the system to Patient 23. Then he asked the first question: “Is your father’s name Alexander?”

The man’s premotor cortex lit up. He was thinking about tennis—yes.

“Is your father’s name Thomas?”

Activity in the parahippocampal gyrus. He was imagining walking around his house—no.

“Do you have any brothers?”

Tennis—yes.

“Do you have any sisters?”

“Before your injury, was your last vacation in the United States?”

The answers were correct. Astonished, Monti called Owen, who was away at a conference. Owen thought that they should ask more questions. The group ran through some possibilities. “Do you like pizza?” was dismissed as being too imprecise. They decided to probe more deeply. Monti turned the intercom back on.

“Do you want to die?” he asked.


For the first time that night, there was no clear answer.

That winter, the results of the study were published in The New England Journal of Medicine . The paper caused a sensation. The Los Angeles Times wrote a story about it, with the headline “ Brains of Vegetative Patients Show Life .” Owen eventually estimated that twenty per cent of patients who were presumed to be vegetative were actually awake. This was a discovery of enormous practical consequence: in subsequent years, through painstaking fMRI sessions, Owen’s group found many patients who could interact with loved ones and answer questions about their own care. The conversations improved their odds of recovery. Still, from a purely scientific perspective, there was something unsatisfying about the method that Monti and Owen had developed with Patient 23. Although they had used the words “tennis” and “house” in communicating with him, they’d had no way of knowing for sure that he was thinking about those specific things. They had been able to say only that, in response to those prompts, thinking was happening in the associated brain areas. “Whether the person was imagining playing tennis, football, hockey, swimming—we don’t know,” Monti told me recently.

During the past few decades, the state of neuroscientific mind reading has advanced substantially. Cognitive psychologists armed with an fMRI machine can tell whether a person is having depressive thoughts; they can see which concepts a student has mastered by comparing his brain patterns with those of his teacher. By analyzing brain scans, a computer system can edit together crude reconstructions of movie clips you’ve watched. One research group has used similar technology to accurately describe the dreams of sleeping subjects. In another lab, scientists have scanned the brains of people who are reading the J. D. Salinger short story “Pretty Mouth and Green My Eyes,” in which it is unclear until the end whether or not a character is having an affair. From brain scans alone, the researchers can tell which interpretation readers are leaning toward, and watch as they change their minds.

I first heard about these studies from Ken Norman, the fifty-year-old chair of the psychology department at Princeton University and an expert on thought decoding. Norman works at the Princeton Neuroscience Institute, which is housed in a glass structure, constructed in 2013, that spills over a low hill on the south side of campus. P.N.I. was conceived as a center where psychologists, neuroscientists, and computer scientists could blend their approaches to studying the mind; M.I.T. and Stanford have invested in similar cross-disciplinary institutes. At P.N.I., undergraduates still participate in old-school psych experiments involving surveys and flash cards. But upstairs, in a lab that studies child development, toddlers wear tiny hats outfitted with infrared brain scanners, and in the basement the skulls of genetically engineered mice are sliced open, allowing individual neurons to be controlled with lasers. A server room with its own high-performance computing cluster analyzes the data generated from these experiments.

Norman, whose jovial intelligence and unruly beard give him the air of a high-school science teacher, occupies an office on the ground floor, with a view of a grassy field. The bookshelves behind his desk contain the intellectual DNA of the institute, with William James next to texts on machine learning. Norman explained that fMRI machines hadn’t advanced that much; instead, artificial intelligence had transformed how scientists read neural data. This had helped shed light on an ancient philosophical mystery. For centuries, scientists had dreamed of locating thought inside the head but had run up against the vexing question of what it means for thoughts to exist in physical space. When Erasistratus, an ancient Greek anatomist, dissected the brain, he suspected that its many folds were the key to intelligence, but he could not say how thoughts were packed into the convoluted mass. In the seventeenth century, Descartes suggested that mental life arose in the pineal gland, but he didn’t have a good theory of what might be found there. Our mental worlds contain everything from the taste of bad wine to the idea of bad taste. How can so many thoughts nestle within a few pounds of tissue?

Now, Norman explained, researchers had developed a mathematical way of understanding thoughts. Drawing on insights from machine learning, they conceived of thoughts as collections of points in a dense “meaning space.” They could see how these points were interrelated and encoded by neurons. By cracking the code, they were beginning to produce an inventory of the mind. “The space of possible thoughts that people can think is big—but it’s not infinitely big,” Norman said. A detailed map of the concepts in our minds might soon be within reach.

Norman invited me to watch an experiment in thought decoding. A postdoctoral student named Manoj Kumar led us into a locked basement lab at P.N.I., where a young woman was lying in the tube of an fMRI scanner. A screen mounted a few inches above her face played a slide show of stock images: an empty beach, a cave, a forest.

“We want to get the brain patterns that are associated with different subclasses of scenes,” Norman said.

As the woman watched the slide show, the scanner tracked patterns of activation among her neurons. These patterns would be analyzed in terms of “voxels”—areas of activation that are roughly a cubic millimetre in size. In some ways, the fMRI data was extremely coarse: each voxel represented the oxygen consumption of about a million neurons, and could be updated only every few seconds, significantly more slowly than neurons fire. But, Norman said, “it turned out that that information was in the data we were collecting—we just weren’t being as smart as we possibly could about how we’d churn through that data.” The breakthrough came when researchers figured out how to track patterns playing out across tens of thousands of voxels at a time, as though each were a key on a piano, and thoughts were chords.
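The piano-chord idea can be sketched in a few lines of code: instead of asking whether any one voxel lights up, a classifier reads the joint pattern across thousands of voxels. Below is a toy version with synthetic data standing in for real fMRI recordings; the category names, voxel counts, and noise levels are all invented for illustration.

```python
# A toy sketch of multivoxel pattern analysis (MVPA): a classifier reads the
# pattern across many voxels at once, rather than any single voxel's response.
# The data here are synthetic stand-ins for fMRI recordings.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_voxels = 500          # each trial is a vector of voxel activations
n_trials = 200

# Two hypothetical scene categories ("beach" = 0, "forest" = 1), each with its
# own faint, distributed activation pattern, buried in measurement noise.
pattern_beach = rng.normal(0, 1, n_voxels)
pattern_forest = rng.normal(0, 1, n_voxels)
labels = rng.integers(0, 2, n_trials)
trials = np.where(labels[:, None] == 0, pattern_beach, pattern_forest)
trials = trials + rng.normal(0, 3, (n_trials, n_voxels))

# Train on half the trials, test on the held-out half.
clf = LogisticRegression(max_iter=1000).fit(trials[:100], labels[:100])
accuracy = clf.score(trials[100:], labels[100:])
print(f"decoding accuracy: {accuracy:.2f}")  # well above the 0.5 chance level
```

No single voxel here is reliable on its own; only the pattern, read as a whole, separates the two categories.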

The origins of this approach, I learned, dated back nearly seventy years, to the work of a psychologist named Charles Osgood. When he was a kid, Osgood received a copy of Roget’s Thesaurus as a gift. Poring over the book, Osgood recalled, he formed a “vivid image of words as clusters of starlike points in an immense space.” In his postgraduate days, when his colleagues were debating how cognition could be shaped by culture, Osgood thought back on this image. He wondered if, using the idea of “semantic space,” it might be possible to map the differences among various styles of thinking.

Osgood conducted an experiment. He asked people to rate twenty concepts on fifty different scales. The concepts ranged widely: BOULDER, ME, TORNADO, MOTHER . So did the scales, which were defined by opposites: fair-unfair, hot-cold, fragrant-foul. Some ratings were difficult: is a TORNADO fragrant or foul? But the idea was that the method would reveal fine and even elusive shades of similarity and difference among concepts. “Most English-speaking Americans feel that there is a difference, somehow, between ‘good’ and ‘nice’ but find it difficult to explain,” Osgood wrote. His surveys found that, at least for nineteen-fifties college students, the two concepts overlapped much of the time. They diverged for nouns that had a male or female slant. MOTHER might be rated nice but not good, and COP vice versa. Osgood concluded that “good” was “somewhat stronger, rougher, more angular, and larger” than “nice.”

Osgood became known not for the results of his surveys but for the method he invented to analyze them. He began by arranging his data in an imaginary space with fifty dimensions—one for fair-unfair, a second for hot-cold, a third for fragrant-foul, and so on. Any given concept, like TORNADO , had a rating on each dimension—and, therefore, was situated in what was known as high-dimensional space. Many concepts had similar locations on multiple axes: kind-cruel and honest-dishonest, for instance. Osgood combined these dimensions. Then he looked for new similarities, and combined dimensions again, in a process called “factor analysis.”

When you reduce a sauce, you meld and deepen the essential flavors. Osgood did something similar with factor analysis. Eventually, he was able to map all the concepts onto a space with just three dimensions. The first dimension was “evaluative”—a blend of scales like good-bad, beautiful-ugly, and kind-cruel. The second had to do with “potency”: it consolidated scales like large-small and strong-weak. The third measured how “active” or “passive” a concept was. Osgood could use these three key factors to locate any concept in an abstract space. Ideas with similar coördinates, he argued, were neighbors in meaning.
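Osgood's reduction can be imitated in a few lines. The sketch below uses principal-component analysis, a close cousin of the factor analysis he actually used, and the concepts, scales, and ratings are invented for illustration; the point is only that correlated scales collapse into a handful of underlying factors.

```python
# A sketch of Osgood-style dimension reduction. Each concept is rated on
# several opposing scales; a PCA-style SVD collapses correlated scales into
# a few underlying factors. Ratings below are invented for illustration.
import numpy as np

# rows: concepts; columns: good-bad, kind-cruel, strong-weak, large-small
ratings = np.array([
    [ 0.9,  0.8, -0.2, -0.3],   # MOTHER
    [-0.7, -0.8,  0.9,  0.8],   # TORNADO
    [ 0.1,  0.0,  0.8,  0.9],   # BOULDER
    [ 0.6,  0.5,  0.1,  0.0],   # ME
], dtype=float)

centered = ratings - ratings.mean(axis=0)
# The principal axes come from the SVD of the centered ratings matrix.
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
coords = centered @ Vt[:2].T    # each concept as a point in a 2-D factor space

for name, (x, y) in zip(["MOTHER", "TORNADO", "BOULDER", "ME"], coords):
    print(f"{name:8s} factor-1={x:+.2f}  factor-2={y:+.2f}")
```

In the reduced space, concepts that were rated similarly across many scales end up as neighbors, just as Osgood argued.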

For decades, Osgood’s technique found modest use in a kind of personality test. Its true potential didn’t emerge until the nineteen-eighties, when researchers at Bell Labs were trying to solve what they called the “vocabulary problem.” People tend to employ lots of names for the same thing. This was an obstacle for computer users, who accessed programs by typing words on a command line. George Furnas, who worked in the organization’s human-computer-interaction group, described using the company’s internal phone book. “You’re in your office, at Bell Labs, and someone has stolen your calculator,” he said. “You start putting in ‘police,’ or ‘support,’ or ‘theft,’ and it doesn’t give you what you want. Finally, you put in ‘security,’ and it gives you that. But it actually gives you two things: something about the Bell Savings and Security Plan, and also the thing you’re looking for.” Furnas’s group wanted to automate the finding of synonyms for commands and search terms.

They updated Osgood’s approach. Instead of surveying undergraduates, they used computers to analyze the words in about two thousand technical reports. The reports themselves—on topics ranging from graph theory to user-interface design—suggested the dimensions of the space; when multiple reports used similar groups of words, their dimensions could be combined. In the end, the Bell Labs researchers made a space that was more complex than Osgood’s. It had a few hundred dimensions. Many of these dimensions described abstract or “latent” qualities that the words had in common—connections that wouldn’t be apparent to most English speakers. The researchers called their technique “latent semantic analysis,” or L.S.A.

At first, Bell Labs used L.S.A. to create a better internal search engine. Then, in 1997, Susan Dumais, one of Furnas’s colleagues, collaborated with a Bell Labs cognitive scientist, Thomas Landauer, to develop an A.I. system based on it. After processing Grolier’s American Academic Encyclopedia, a work intended for young students, the A.I. scored respectably on the multiple-choice Test of English as a Foreign Language. That year, the two researchers co-wrote a paper that addressed the question “How do people know as much as they do with as little information as they get?” They suggested that our minds might use something like L.S.A., making sense of the world by reducing it to its most important differences and similarities, and employing this distilled knowledge to understand new things. Watching a Disney movie, for instance, I immediately identify a character as “the bad guy”: Scar, from “The Lion King,” and Jafar, from “Aladdin,” just seem close together. Perhaps my brain uses factor analysis to distill thousands of attributes—height, fashion sense, tone of voice—into a single point in an abstract space. The perception of bad-guy-ness becomes a matter of proximity.

In the following years, scientists applied L.S.A. to ever-larger data sets. In 2013, researchers at Google unleashed a descendant of it onto the text of the whole World Wide Web. Google’s algorithm turned each word into a “vector,” or point, in high-dimensional space. The vectors generated by the researchers’ program, word2vec, are eerily accurate: if you take the vector for “king” and subtract the vector for “man,” then add the vector for “woman,” the closest nearby vector is “queen.” Word vectors became the basis of a much improved Google Translate, and enabled the auto-completion of sentences in Gmail. Other companies, including Apple and Amazon, built similar systems. Eventually, researchers realized that the “vectorization” made popular by L.S.A. and word2vec could be used to map all sorts of things. Today’s facial-recognition systems have dimensions that represent the length of the nose and the curl of the lips, and faces are described using a string of coördinates in “face space.” Chess A.I.s use a similar trick to “vectorize” positions on the board. The technique has become so central to the field of artificial intelligence that, in 2017, a new, hundred-and-thirty-five-million-dollar A.I. research center in Toronto was named the Vector Institute. Matthew Botvinick, a professor at Princeton whose lab was across the hall from Norman’s, and who is now the head of neuroscience at DeepMind, Alphabet’s A.I. subsidiary, told me that distilling relevant similarities and differences into vectors was “the secret sauce underlying all of these A.I. advances.”
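The "king minus man plus woman" arithmetic is easy to demonstrate with tiny hand-built vectors. Real word2vec embeddings have hundreds of dimensions learned from billions of words; the three dimensions and four words below are invented so the geometry is visible at a glance.

```python
# The famous word-vector arithmetic on tiny hand-built vectors. Dimensions
# (roughly): royalty, maleness, femaleness. All values invented.
import numpy as np

vectors = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.1, 0.8]),
    "man":   np.array([0.1, 0.9, 0.1]),
    "woman": np.array([0.1, 0.1, 0.9]),
}

def nearest(target, exclude):
    def cosine(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max((w for w in vectors if w not in exclude),
               key=lambda w: cosine(vectors[w], target))

# king - man + woman: strip out maleness, add femaleness, keep royalty.
result = vectors["king"] - vectors["man"] + vectors["woman"]
print(nearest(result, exclude={"king", "man", "woman"}))  # queen
```

The convention of excluding the three input words from the answer follows standard practice in word-analogy evaluations.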

In 2001, a scientist named Jim Haxby brought machine learning to brain imaging: he realized that voxels of neural activity could serve as dimensions in a kind of thought space. Haxby went on to work at Princeton, where he collaborated with Norman. The two scientists, together with other researchers, concluded that just a few hundred dimensions were sufficient to capture the shades of similarity and difference in most fMRI data. At the Princeton lab, the young woman watched the slide show in the scanner. With each new image—beach, cave, forest—her neurons fired in a new pattern. These patterns would be recorded as voxels, then processed by software and transformed into vectors. The images had been chosen because their vectors would end up far apart from one another: they were good landmarks for making a map. Watching the images, my mind was taking a trip through thought space, too.

The larger goal of thought decoding is to understand how our brains mirror the world. To this end, researchers have sought to watch as the same experiences affect many people’s minds simultaneously. Norman told me that his Princeton colleague Uri Hasson has found movies especially useful in this regard. They “pull people’s brains through thought space in synch,” Norman said. “What makes Alfred Hitchcock the master of suspense is that all the people who are watching the movie are having their brains yanked in unison. It’s like mind control in the literal sense.”

One afternoon, I sat in on Norman’s undergraduate class “fMRI Decoding: Reading Minds Using Brain Scans.” As students filed into the auditorium, setting their laptops and water bottles on tables, Norman entered wearing tortoiseshell glasses and earphones, his hair dishevelled.

He had the class watch a clip from “Seinfeld” in which George, Susan (an N.B.C. executive he is courting), and Kramer are hanging out with Jerry in his apartment. The phone rings, and Jerry answers: it’s a telemarketer. Jerry hangs up, to cheers from the studio audience.

“Where was the event boundary in the clip?” Norman asked. The students yelled out in chorus, “When the phone rang!” Psychologists have long known that our minds divide experiences into segments; in this case, it was the phone call that caused the division.

Norman showed the class a series of slides. One described a 2017 study by Christopher Baldassano, one of his postdocs, in which people watched an episode of the BBC show “Sherlock” while in an fMRI scanner. Baldassano’s guess going into the study was that some voxel patterns would be in constant flux as the video streamed—for instance, the ones involved in color processing. Others would be more stable, such as those representing a character in the show. The study confirmed these predictions. But Baldassano also found groups of voxels that held a stable pattern throughout each scene, then switched when it was over. He concluded that these constituted the scenes’ voxel “signatures.”

Norman described another study, by Asieh Zadbood, in which subjects were asked to narrate “Sherlock” scenes—which they had watched earlier—aloud. The audio was played to a second group, who’d never seen the show. It turned out that no matter whether someone watched a scene, described it, or heard about it, the same voxel patterns recurred. The scenes existed independently of the show, as concepts in people’s minds.

Through decades of experimental work, Norman told me later, psychologists have established the importance of scripts and scenes to our intelligence. Walking into a room, you might forget why you came in; this happens, researchers say, because passing through the doorway brings one mental scene to a close and opens another. Conversely, while navigating a new airport, a “getting to the plane” script knits different scenes together: first the ticket counter, then the security line, then the gate, then the aisle, then your seat. And yet, until recently, it wasn’t clear what you’d find if you went looking for “scripts” and “scenes” in the brain.

In a recent P.N.I. study, Norman said, people in an fMRI scanner watched various movie clips of characters in airports. No matter the particulars of each clip, the subjects’ brains all shimmered through the same series of events, in keeping with boundary-defining moments that any of us would recognize. The scripts and the scenes were real—it was possible to detect them with a machine. What most interests Norman now is how they are learned in the first place. How do we identify the scenes in a story? When we enter a strange airport, how do we know intuitively where to look for the security line? The extraordinary difficulty of such feats is obscured by how easy they feel—it’s rare to be confused about how to make sense of the world. But at some point everything was new. When I was a toddler, my parents must have taken me to the supermarket for the first time; the fact that, today, all supermarkets are somehow familiar dims the strangeness of that experience. When I was learning to drive, it was overwhelming: each intersection and lane change seemed chaotic in its own way. Now I hardly have to think about them. My mind instantly factors out all but the important differences.

Norman clicked through the last of his slides. Afterward, a few students wandered over to the lectern, hoping for an audience with him. For the rest of us, the scene was over. We packed up, climbed the stairs, and walked into the afternoon sun.

Like Monti and Owen with Patient 23, today’s thought-decoding researchers mostly look for specific thoughts that have been defined in advance. But a “general-purpose thought decoder,” Norman told me, is the next logical step for the research. Such a device could speak aloud a person’s thoughts, even if those thoughts have never been observed in an fMRI machine. In 2018, Botvinick, Norman’s hall mate, helped write a paper in the journal Nature Communications titled “Toward a Universal Decoder of Linguistic Meaning from Brain Activation.” A team of researchers led by Botvinick’s former postdoc, Francisco Pereira, and Evelina Fedorenko, a neuroscientist at M.I.T., had built a primitive form of what Norman described: a system that could decode novel sentences that subjects read silently to themselves. The system learned which brain patterns were evoked by certain words, and used that knowledge to guess which words were implied by the new patterns it encountered.


The work at Princeton was funded by iARPA, an R. & D. organization that’s run by the Office of the Director of National Intelligence. Brandon Minnery, the iARPA project manager for the Knowledge Representation in Neural Systems program at the time, told me that he had some applications in mind. If you knew how knowledge was represented in the brain, you might be able to distinguish between novice and expert intelligence agents. You might learn how to teach languages more effectively by seeing how closely a student’s mental representation of a word matches that of a native speaker. Minnery’s most fanciful idea—“Never an official focus of the program,” he said—was to change how databases are indexed. Instead of labelling items by hand, you could show an item to someone sitting in an fMRI scanner—the person’s brain state could be the label. Later, to query the database, someone else could sit in the scanner and simply think of whatever she wanted. The software could compare the searcher’s brain state with the indexer’s. It would be the ultimate solution to the vocabulary problem.

Jack Gallant, a professor at Berkeley who has used thought decoding to reconstruct video montages from brain scans—as you watch a video in the scanner, the system pulls up frames from similar YouTube clips, based only on your voxel patterns—suggested that one group of people interested in decoding were Silicon Valley investors. “A future technology would be a portable hat—like a thinking hat,” he said. He imagined a company paying people thirty thousand dollars a year to wear the thinking hat, along with video-recording eyeglasses and other sensors, allowing the system to record everything they see, hear, and think, ultimately creating an exhaustive inventory of the mind. Wearing the thinking hat, you could ask your computer a question just by imagining the words. Instantaneous translation might be possible. In theory, a pair of wearers could skip language altogether, conversing directly, mind to mind. Perhaps we could even communicate across species. Among the challenges the designers of such a system would face, of course, is the fact that today’s fMRI machines can weigh more than twenty thousand pounds. There are efforts under way to make powerful miniature imaging devices, using lasers, ultrasound, or even microwaves. “It’s going to require some sort of punctuated-equilibrium technology revolution,” Gallant said. Still, the conceptual foundation, which goes back to the nineteen-fifties, has been laid.

Recently, I asked Owen what the new thought-decoding technology meant for locked-in patients. Were they close to having fluent conversations using something like the general-purpose thought decoder? “Most of that stuff is group studies in healthy participants,” Owen told me. “The really tricky problem is doing it in a single person. Can you get robust enough data?” Their bare-bones protocol—thinking about tennis equals yes; thinking about walking around the house equals no—relied on straightforward signals that were statistically robust. It turns out that the same protocol, combined with a series of yes-or-no questions (“Is the pain in the lower half of your body? On the left side?”), still works best. “Even if you could do it, it would take longer to decode them saying ‘it is in my right foot’ than to go through a simple series of yes-or-no questions,” Owen said. “For the most part, I’m quietly sitting and waiting. I have no doubt that, some point down the line, we will be able to read minds. People will be able to articulate, ‘My name is Adrian, and I’m British,’ and we’ll be able to decode that from their brain. I don’t think it’s going to happen in probably less than twenty years.”

In some ways, the story of thought decoding is reminiscent of the history of our understanding of the gene. For about a hundred years after the publication of Charles Darwin’s “On the Origin of Species,” in 1859, the gene was an abstraction, understood only as something through which traits passed from parent to child. As late as the nineteen-fifties, biologists were still asking what, exactly, a gene was made of. When James Watson and Francis Crick finally found the double helix, in 1953, it became clear how genes took physical form. Fifty years later, we could sequence the human genome; today, we can edit it.

Thoughts have been an abstraction for far longer. But now we know what they really are: patterns of neural activation that correspond to points in meaning space. The mind—the only truly private place—has become inspectable from the outside. In the future, a therapist, wanting to understand how your relationships run awry, might examine the dimensions of the patterns your brain falls into. Some epileptic patients about to undergo surgery have intracranial probes put into their brains; researchers can now use these probes to help steer the patients’ neural patterns away from those associated with depression. With more fine-grained control, a mind could be driven wherever one liked. (The imagination reels at the possibilities, for both good and ill.) Of course, we already do this by thinking, reading, watching, talking—actions that, after I’d learned about thought decoding, struck me as oddly concrete. I could picture the patterns of my thoughts flickering inside my mind. Versions of them are now flickering in yours.

On one of my last visits to Princeton, Norman and I had lunch at a Japanese restaurant called Ajiten. We sat at a counter and went through the familiar script. The menus arrived; we looked them over. Norman noticed a dish he hadn’t seen before—“a new point in ramen space,” he said. Any minute now, a waiter was going to interrupt politely to ask if we were ready to order.

“You have to carve the world at its joints, and figure out: what are the situations that exist, and how do these situations work?” Norman said, while jazz played in the background. “And that’s a very complicated problem. It’s not like you’re instructed that the world has fifteen different ways of being, and here they are!” He laughed. “When you’re out in the world, you have to try to infer what situation you’re in.” We were in the lunch-at-a-Japanese-restaurant situation. I had never been to this particular restaurant, but nothing about it surprised me. This, it turns out, might be one of the highest accomplishments in nature.

Norman told me that a former student of his, Sam Gershman, likes using the terms “lumping” and “splitting” to describe how the mind’s meaning space evolves. When you encounter a new stimulus, do you lump it with a concept that’s familiar, or do you split off a new concept? When navigating a new airport, we lump its metal detector with those we’ve seen before, even if this one is a different model, color, and size. By contrast, the first time we raised our hands inside a millimetre-wave scanner—the device that has replaced the walk-through metal detector—we split off a new category.

Norman turned to how thought decoding fit into the larger story of the study of the mind. “I think we’re at a point in cognitive neuroscience where we understand a lot of the pieces of the puzzle,” he said. The cerebral cortex—a crumply sheet laid atop the rest of the brain—warps and compresses experience, emphasizing what’s important. It’s in constant communication with other brain areas, including the hippocampus, a seahorse-shaped structure in the inner part of the temporal lobe. For years, the hippocampus was known only as the seat of memory; patients who’d had theirs removed lived in a perpetual present. Now we were seeing that the hippocampus stores summaries provided to it by the cortex: the sauce after it’s been reduced. We cope with reality by building a vast library of experience—but experience that has been distilled along the dimensions that matter. Norman’s research group has used fMRI technology to find voxel patterns in the cortex that are reflected in the hippocampus. Perhaps the brain is like a hiker comparing the map with the territory.

In the past few years, Norman told me, artificial neural networks that included basic models of both brain regions had proved surprisingly powerful. There was a feedback loop between the study of A.I. and the study of the real human mind, and it was getting faster. Theories about human memory were informing new designs for A.I. systems, and those systems, in turn, were suggesting ideas about what to look for in real human brains. “It’s kind of amazing to have gotten to this point,” he said.

On the walk back to campus, Norman pointed out the Princeton University Art Museum. It was a treasure, he told me.

“What’s in there?” I asked.

“Great art!” he said.

After we parted ways, I returned to the museum. I went to the downstairs gallery, which contains artifacts from the ancient world. Nothing in particular grabbed me until I saw a West African hunter’s tunic. It was made of cotton dyed the color of dark leather. There were teeth hanging from it, and claws, and a turtle shell—talismans from past kills. It struck me, and I lingered for a moment before moving on.

Six months later, I went with some friends to a small house in upstate New York. On the wall, out of the corner of my eye, I noticed what looked like a blanket—a kind of fringed, hanging decoration made of wool and feathers. It had an odd shape; it seemed to pull toward something I’d seen before. I stared at it blankly. Then came a moment of recognition, along dimensions I couldn’t articulate—more active than passive, partway between alive and dead. There, the chest. There, the shoulders. The blanket and the tunic were distinct in every way, but somehow still neighbors. My mind had split, then lumped. Some voxels had shimmered. In the vast meaning space inside my head, a tiny piece of the world was finding its proper place. ♦

This article has been updated to include two of the lead researchers who built a system that could decode novel sentences that subjects read silently to themselves.


How close are we to reading minds?


The technology to decode our thoughts is drawing ever closer. Neuroscientists at the University of Texas have for the first time decoded data from non-invasive brain scans and used them to reconstruct language and meaning from stories that people hear, see or even imagine.

In a new study published in Nature Neuroscience, Alexander Huth and colleagues successfully recovered the gist of language, and sometimes exact phrases, from functional magnetic resonance imaging (fMRI) brain recordings of three participants.

Technology that can create language from brain signals could be enormously useful for people who cannot speak due to conditions such as  motor neurone disease . At the same time, it raises concerns for the future privacy of our thoughts.

Language decoded

Language decoding models, also called “speech decoders”, aim to use recordings of a person’s brain activity to discover the words they hear, imagine or say.

Until now, speech decoders have only been used with data from devices surgically implanted in the brain, which limits their usefulness. Other decoders, built on non-invasive recordings of brain activity, have been able to decode single words or short phrases, but not continuous language.


Table: The Conversation. Source: Tang et al. / Nature Neuroscience

The new research used the blood-oxygen-level-dependent (BOLD) signal from fMRI scans, which shows changes in blood flow and oxygenation levels in different parts of the brain. By focusing on patterns of activity in brain regions and networks that process language, the researchers found their decoder could be trained to reconstruct continuous language (including some specific words and the general meaning of sentences).

Specifically, the decoder took the brain responses of three participants as they listened to stories, and generated sequences of words that were likely to have produced those brain responses. These word sequences did well at capturing the general gist of the stories, and in some cases included exact words and phrases.

The researchers also had the participants watch silent movies and imagine stories while being scanned. In both cases, the decoder often managed to predict the gist of the stories.

For example, one participant thought “I don’t have my driver’s licence yet”, and the decoder predicted “she has not even started to learn to drive yet”.

Further, when participants actively listened to one story while ignoring another story played simultaneously, the decoder could identify the meaning of the story being actively listened to.

How does it work?


The decoder could also describe the action when participants watched silent movies. Source: Tang et al. / Nature Neuroscience

The researchers started out by having each participant lie inside an fMRI scanner and listen to 16 hours of narrated stories while their brain responses were recorded.

These brain responses were then used to train an encoder – a computational model that tries to predict how the brain will respond to words a user hears. After training, the encoder could quite accurately predict how each participant’s brain signals would respond to hearing a given string of words.
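A minimal stand-in for such an encoder is a regularized linear regression from word features (for instance, language-model embeddings) to voxel responses. The sketch below uses synthetic data; the published encoder is considerably more elaborate, among other things modelling the slow rise and fall of the fMRI signal.

```python
# A minimal stand-in for the encoder: ridge regression mapping word features
# to voxel responses. Data are synthetic; sizes are arbitrary illustrations.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n_samples, n_features, n_voxels = 400, 64, 100

word_features = rng.normal(size=(n_samples, n_features))
true_weights = rng.normal(size=(n_features, n_voxels))
brain = word_features @ true_weights + rng.normal(0, 0.5, (n_samples, n_voxels))

# Fit on the first 300 samples; evaluate on held-out data.
encoder = Ridge(alpha=1.0).fit(word_features[:300], brain[:300])
score = encoder.score(word_features[300:], brain[300:])
print(f"held-out R^2: {score:.2f}")
```

Once such a model is trained, it can score any candidate string of words by how well the predicted brain response matches the one actually recorded.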

However, going in the opposite direction – from recorded brain responses to words – is trickier.

The encoder model is designed to link brain responses with “semantic features”, or the broad meanings of words and sentences. To do this, the system uses the original GPT language model, which is the precursor of today’s GPT-4 model. The decoder then generates sequences of words that might have produced the observed brain responses.

The accuracy of each “guess” is then checked by using it to predict previously recorded brain activity, with the prediction then compared to the actual recorded activity.

During this resource-intensive process, multiple guesses are generated at a time, and ranked in order of accuracy. Poor guesses are discarded and good ones kept. The process continues by guessing the next word in the sequence, and so on until the most accurate sequence is determined.

Words and meanings

The study found that data from multiple specific brain regions – including the speech network, the parietal-temporal-occipital association region, and the prefrontal cortex – were needed for the most accurate predictions.

One key difference between this work and earlier efforts is the data being decoded. Most decoding systems link brain data to motor features or activity recorded from brain regions involved in the last step of speech output, the movement of the mouth and tongue. This decoder works instead at the level of ideas and meanings.

One limitation of using fMRI data is its low “temporal resolution”. The blood oxygen level dependent signal rises and falls over approximately a 10-second period, during which time a person might have heard 20 or more words. As a result, this technique cannot detect individual words, but only the potential meanings of sequences of words.
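The arithmetic behind that "20 or more words" figure is straightforward if we assume a typical narration rate of roughly two words per second (the rate is our assumption, not a number from the study):

```python
# Assumed narration rate; typical conversational English is ~2 words/second.
words_per_second = 2
# Approximate duration of one BOLD rise-and-fall, per the article.
bold_window_seconds = 10

words_per_window = words_per_second * bold_window_seconds
print(words_per_window)  # 20 -- a whole window of words blurs into one signal
```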

No need for privacy panic (yet)

The idea of technology that can “read minds” raises concerns over mental privacy. The researchers conducted additional experiments to address some of these concerns.

These experiments showed we don’t need to worry just yet about having our thoughts decoded while we walk down the street, or indeed without our extensive cooperation.

A decoder trained on one person’s thoughts performed poorly when predicting the semantic detail from another participant’s data. What’s more, participants could disrupt the decoding by diverting their attention to a different task such as naming animals or telling a different story.

Movement in the scanner can also disrupt the decoder as fMRI is highly sensitive to motion, so participant cooperation is essential. Considering these requirements, and the need for high-powered computational resources, it is highly unlikely that someone’s thoughts could be decoded against their will at this stage.

Finally, the decoder does not currently work on data other than fMRI, which is an expensive and often impractical procedure. The group plans to test their approach on other non-invasive brain data in the future.

This article is republished from  The Conversation . It was written by:  Christina Maher ,  University of Sydney .


In a future with more ‘mind reading,’ thanks to neurotech, we may need to rethink freedom of thought


Parker Crutchfield, Professor of Medical Ethics, Humanities, and Law, Western Michigan University

Disclosure statement

Parker Crutchfield does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.


Socrates, the ancient Greek philosopher, never wrote things down. He warned that writing undermines memory – that it is nothing but a reminder of some previous thought. Compared to people who discuss and debate, readers “will be hearers of many things and will have learned nothing; they will appear to be omniscient and will generally know nothing.”

These views may seem peculiar, but his central fear is a timeless one: that technology threatens thought. In the 1950s, Americans panicked about the possibility that advertisers would use subliminal messages hidden in movies to trick consumers into buying things they didn’t really want. Today, the U.S. is in the middle of a similar panic over TikTok, with critics worried about its impact on viewers’ freedom of thought .

To many people, neurotechnologies seem especially threatening, although they are still in their infancy. In January 2024, Elon Musk announced that his company Neuralink had implanted a brain chip in its first human subject – though they accomplished such a feat well after competitors . Fast-forward to March, and that person can already play chess with just his thoughts .

Brain-computer interfaces, called BCIs, have rightfully prompted debate about the appropriate limits of technologies that interact with the nervous system. Looking ahead to the day when wearable and implantable devices may be more widespread, the United Nations has discussed regulations and restrictions on BCIs and related neurotech . Chile has even enshrined neurorights – special protections for brain activity – in its constitution, while other countries are considering doing so.

A cornerstone of neurorights is the idea that all people have a fundamental right to determine what state their brain is in and who is allowed to access that information, the way that people ordinarily have a right to determine what is done with their bodies and property. It’s commonly equated with “freedom of thought.”

Many ethicists and policymakers think this right to mental self-determination is so fundamental that it is never OK to undermine it, and that institutions should impose strict limits on neurotech .

But as my research on neurorights argues, protecting the mind isn’t nearly as easy as protecting bodies and property.

Thoughts vs. things

Creating rules that protect a person’s ability to determine what is done to their body is relatively straightforward. The body has clear boundaries, and things that cross it without permission are not allowed. It is normally obvious when a person violates laws prohibiting assault or battery, for example.

The same is true about regulations that protect a person’s property. Protecting body and property are some of the central reasons people come together to form governments .

Generally, people can enjoy these protections without dramatically limiting how others want to live their lives.

The difficulty with establishing neurorights, on the other hand, is that, unlike bodies and property, brains and minds are under constant influence from outside forces. It’s not possible to fence off a person’s mind such that nothing gets in.

Instead, a person’s thoughts are largely the product of other people’s thoughts and actions. Everything from how a person perceives colors and shapes to their most basic beliefs is influenced by what others say and do . The human mind is like a sponge, soaking up whatever it happens to be immersed in. Regulations might be able to control the types of liquid in the bucket, but they can’t protect the sponge from getting wet.

Even if that were possible – if there were a way to regulate people’s actions so that they don’t influence others’ thoughts at all – the regulations would be so burdensome that no one would be able to do much of anything.

If I’m not allowed to influence others’ thoughts, then I can never leave my house, because just by my doing so I’m causing people to think and act in certain ways. And as the internet further expands a person’s reach, not only would I not be able to leave the house, I also wouldn’t be able to “like” a post on Facebook, leave a product review, or comment on an article.

In other words, protecting one aspect of freedom of thought – someone’s ability to shield themselves from outside influences – can conflict with another aspect of freedom of thought: freedom of speech, or someone’s ability to express ideas.

Neurotech and control

But there’s another concern at play: privacy. People may not be able to completely control what gets into their heads, but they should have significant control over what goes out – and some people believe societies need “neurorights” regulations to ensure that. Neurotech represents a new threat to our ability to control what thoughts people reveal to others.

There are ongoing efforts, for example, to develop wearable neurotech that would read and adjust the customer’s brainwaves to help them improve their mood or get better sleep. Even though such devices can only be used with the consent of the user, they still take information out of the brain, interpret it, store it and use it for other purposes.

In experiments, it is also becoming easier to use technology to gauge someone’s thoughts. Functional magnetic resonance imaging, or fMRI, can be used to measure changes in blood flow in the brain and produce images of that activity. Artificial intelligence can then analyze those images to interpret what a person is thinking .

Neurotechnology critics fear that as the field develops, it will be possible to extract information about brain activity regardless of whether or not someone wants to disclose it. Hypothetically, that information could one day be used in a range of contexts, from research for new devices to courts of law.

Regulation may be necessary to protect people from neurotech taking information out. For example, nations could prohibit companies that make commercial neurotech devices, like those meant to improve the wearer’s sleep, from storing the brainwave data those devices collect.

Yet I would argue that it may not be necessary, or even feasible, to protect against neurotech putting information into our brains – though it is hard to predict what capabilities neurotech will have even a few years from now.

In part, this is because I believe people tend to overestimate the difference between neurotech and other types of external influence. Think about books. Horror novelist Stephen King has said that writing is telepathy : When an author writes a sentence – say, describing a shotgun over the fireplace – they spark a specific thought in the reader.

In addition, there are already strong protections on bodies and property, which I believe could be used to prosecute anyone who forces invasive or wearable neurotech upon another person.

How different societies will navigate these challenges is an open question. But one thing is certain: With or without neurotech, our control over our own minds is already less absolute than many of us like to think.

  • Brain-computer interface
  • Freedom of thought
  • Tech ethics


Frontiers in Psychology (PMC10682168)

How to deal with mind-reading technologies

Roberto Andorno

1 Institute of Biomedical Ethics and History of Medicine, University of Zurich, Zürich, Switzerland

Andrea Lavazza

2 Centro Universitario Internazionale, Arezzo, Italy

3 Department of Brain and Behavioral Sciences, University of Pavia, Pavia, Italy

Introduction

In his famous dystopian novel Nineteen Eighty-Four , published in 1949, George Orwell depicts a totalitarian society where citizens are under constant surveillance by the authorities (Orwell, 1989 ). The two protagonists, Winston and Julia, secretly conspire against the state personified by Big Brother. At some point, Julia tells Winston: “They can make you say anything— anything —but they can't make you believe it. They can't get inside you” (p. 174). Julia and Winston are talking about what might happen to them once the so-called Thought Police has arrested them. Julia believes, and Winston agrees, that although the Thought Police can torture them in different ways, they will always have that ultimate refuge of their freedom—their minds—as no one can have direct access to their thoughts. Winston concludes that “with all their cleverness they had never mastered the secret of finding out what another human being was thinking” (p. 174). However, Winston and Julia are wrong. It is only after being arrested that they realize, to their horror, that their innermost thoughts have indeed been deciphered by the authorities. In the final scenes of the novel, Winston's tormentor O'Brien tells him exactly what he is thinking, sometimes reproducing his internal monolog word for word.

The ability to read thoughts, beyond reach in Orwell's time, is gradually becoming a reality through brain imaging technologies such as functional magnetic resonance imaging (fMRI). This procedure is primarily used to localize and measure brain activity with the goal of diagnosing neurological disorders. In the clinical setting, fMRI serves a variety of purposes, including preoperative risk assessment for brain surgery (Luna et al., 2021 ) and functional mapping of brain areas to detect abnormalities or to monitor patients' post-stroke or post-operative recovery (Crofts et al., 2020 ). In recent years, there has been growing interest in using this technique to decode people's thoughts and intentions in order to enable communication for those who have lost the ability to express themselves verbally due to a variety of neurological conditions. Recent studies under the generic umbrella term of mind-reading include two types of techniques. One is based on detecting the electrical signals sent to the muscles that give rise to phonation, including those of the lips, tongue, and jaw (Metzger et al., 2023 ). The other decodes the brain activity that correlates with the manifestation of thoughts (Tang et al., 2023 ). With expert software trained on the individual being studied, it is even possible to reconstruct verbalizations, images, and the music the subject is hearing (Bellier et al., 2023 ).

Although the possibility of reading thoughts, memories and intentions is still in its very early stages and current results remain inaccurate, the prospect that it may become a reality in the not-too-distant future raises obvious privacy concerns. Indeed, the technique could theoretically be used to reveal people's thoughts without their consent, and even for malicious purposes such as blackmail and discrimination. Certainly, neurotechnologies raise a broad spectrum of ethical and legal issues that go far beyond mental privacy, such as new threats to mental integrity (Lavazza and Giorgi, 2023 ), freedom of thought and personal identity. However, due to limited space, this opinion article focuses only on mental privacy. It is crucial to stress that mental privacy is a value of fundamental importance to individuals and society. Indeed, our personal freedom largely depends on this inner realm of cognition that no one, in principle, is allowed to invade.

The dual-use nature of mind-reading technologies

Mind-reading devices are a paradigmatic example of dual-use technologies, as they can be used both to greatly help neurological patients and to seriously harm individuals and society (Andorno, 2022 ). Two risks in particular can be observed here: first, patients may be induced by their own disabling conditions to accept clinical protocols that lack effective safeguards for the protection of their mental privacy. Think also about predictive neurotechnologies that can detect the onset of epileptic seizures or depressive symptoms by recording deep neural activity, and alert the patient to take necessary precautions. Although these technologies are incredibly useful, they can put patient privacy at risk and may lead to discrimination against people with disabilities (Tacca and Gilbert, 2023 ). This is why it is crucial to incorporate serious privacy protections in the development of medical technologies and to involve in this effort all stakeholders, including technology designers themselves.

Second, there is a subtle risk that mind-reading techniques could quickly become widely used and successful in various fields before adequate legal measures are implemented. This could lead to a culture where privacy violations are gradually tolerated, much as happened over the past two decades with the rapid rise of social media and the tendency of many users to overlook the privacy of their personal data. It is true that current consumer wearable devices in this area are EEG-based and are used to monitor mental states (depression, stress, and level of concentration) rather than, strictly speaking, for “mind-reading” purposes. However, technological advances are driving down the cost and size of brain imaging tools. As a result, it is quite possible that within the next decade or so, wearable mind-reading devices will become as commonplace as social media is today. In that context, there is a genuine risk that users of such devices will not prioritize the confidentiality of their brain data.

It is true that fMRI is not yet advanced enough to be used for widespread and accurate mind-reading, and that it would be difficult to perform it without people's cooperation (Reardon, 2023 ). However, as the technology continues to develop at a rapid pace, the risks of violation of mental privacy may become a reality soon. For instance, although today the neurological correlations to mental activity that can be identified through fMRI are specific to every individual (“brain fingerprint”), a study has shown that the use of AI tools may help to identify similarities in brain activity patterns of different individuals and lead to the development of a kind of universal mind-reading tool (Chen et al., 2017 ).

More recently, researchers from the University of Texas at Austin have reported that while fMRI is at present only able to decode a small set of words or phrases, a new AI tool known as a “semantic decoder” allows the reconstruction of continuous language, that is, longer sequences of words (Tang et al., 2023 ). As for the argument that mind-reading is not to be feared because it cannot be performed without the individual's cooperation, brain data initially collected for clinical or research purposes with the individual's consent could well be misused later for malevolent purposes. It is also possible that such data are collected under some form of coercion, for instance from people in an employment relationship, where the individuals' cooperation may only appear to be voluntary (Muhl and Andorno, 2023 ).

Possible measures to protect mental privacy

What ethical approaches can contribute to safeguarding mental privacy? There are two distinct, complementary models that can be used to achieve this objective. The first model, referred to as “embedded ethics,” involves integrating specific safeguards into the design and production of neurodevices on the initiative of scientists and developers themselves.

The second model can be called “adversarial ethics,” in which external parties, such as lawmakers and civil society, require researchers to comply with certain ethical and legal standards. It is clear that in light of potential threats to mental privacy, the adoption of some legal measures will be necessary in the coming years. In this regard, the formal recognition of a right to mental privacy, as proposed by several authors (Ienca and Andorno, 2017 ; Yuste et al., 2017 ; Lavazza, 2018 ) could contribute to mitigating the misuse of mind-reading technologies.

However, the mere formal recognition of such a right would be largely ineffective without concrete legal measures from civil, criminal, and labor law. To be more precise, legal regulations should require the free and specific informed consent of individuals for the collection and use of their brain data. Simultaneously, it would be beneficial if data protection laws explicitly stated that mental data falls under the category of sensitive personal information. This would ensure that enhanced security measures are put in place to prevent unauthorized third parties from accessing the identity of individuals whose data is being protected.

Of course, there are many particular issues regarding mental privacy that would need to be explored. For instance, we may discuss whether it is acceptable to use mind-reading in forensics to prevent lying by defendants and witnesses. Why not also use the technology in the selection of candidates for important public positions? Should not a presidential candidate, who if elected will have the power to impose new taxes or wage a war, be as transparent as possible with their constituents? These and similar hypothetical scenarios may sound dystopian today, but they are now within the realm of possibility and need to be taken seriously.

In addition to the measures suggested above, it would be advisable to establish a mechanism for the effective judicial protection of mental privacy. In this regard, on the model of habeas corpus and habeas data , it is worth considering the proposals made in recent years for the recognition of a so-called “habeas mentem” or “habeas cogitationem” action (from “cogitation”: thought), which would function as a procedural and urgent tool to enforce the guarantees related to the right to mental privacy as well as other rights related to neurotechnologies (Muñoz and Marinaro, forthcoming; Stanzione, 2021 ).

In recent literature, scenarios have been presented where technology is even more invasive than that available to Big Brother. As a result, it has become urgent to consider ways in which society can deal with mind-reading technologies in a timely and legitimate manner. This is not an attempt to foster prejudice against scientific and technological progress, but rather to safeguard people's right to mental privacy, which is likely to become seriously jeopardized in the coming decades.

Without delving into the ongoing theoretical debate about whether a right to mental privacy would be a novel right or simply an expansion of the already established right to privacy, this opinion piece has aimed to propose some concrete measures that can be taken to reduce potential threats to mental privacy. However, it is important to consider that criminal groups and undemocratic states may still use mind-reading devices to achieve their nefarious goals. This is why it is imperative that, in parallel with domestic measures, effective international standards and procedures be established to promote respect for people's inner life. Fortunately, various international organizations have already taken the first steps in this direction [UNESCO IBC (International Bioethics Committee), 2021 ; UN Human Rights Council, 2022 ].

Author contributions

RA: Writing—original draft, Writing—review & editing. AL: Writing—review & editing.

Funding Statement

The author(s) declare that no financial support was received for the research, authorship, and/or publication of this article.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

  • Andorno, R. (2022). “Human dignity, life sciences technologies and the renewed imperative to preserve human freedom,” in The Cambridge Handbook of Information Technology, Life Sciences and Human Rights, eds M. Ienca, O. Pollicino, L. Liguori, E. Stefanini, and R. Andorno (Cambridge: Cambridge University Press), 273–285.
  • Bellier, L., Llorens, A., Marciano, D., Gunduz, A., Schalk, G., Brunner, P., et al. (2023). Music can be reconstructed from human auditory cortex activity using nonlinear decoding models. PLoS Biol. 21, e3002176. doi: 10.1371/journal.pbio.3002176
  • Chen, J., Leong, Y. C., Honey, C. J., Yong, C. H., Norman, K. A., and Hasson, U. (2017). Shared memories reveal shared structure in neural activity across individuals. Nat. Neurosci. 20, 115–125. doi: 10.1038/nn.4450
  • Crofts, A., Kelly, M. E., and Gibson, C. L. (2020). Imaging functional recovery following ischemic stroke: clinical and preclinical fMRI studies. J. Neuroimaging 30, 5–14. doi: 10.1111/jon.12668
  • Ienca, M., and Andorno, R. (2017). Towards new human rights in the age of neuroscience and neurotechnology. Life Sci. Soc. Policy 13, 5. doi: 10.1186/s40504-017-0050-1
  • Lavazza, A. (2018). Freedom of thought and mental integrity: the moral requirements for any neural prosthesis. Front. Neurosci. 12, 82. doi: 10.3389/fnins.2018.00082
  • Lavazza, A., and Giorgi, R. (2023). Philosophical foundation of the right to mental integrity in the age of neurotechnologies. Neuroethics 16, 10. doi: 10.1007/s12152-023-09517-2
  • Luna, L. P., Sherbaf, F. G., Sair, H. I., Mukherjee, D., Oliveira, I. B., and Köhler, C. A. (2021). Can preoperative mapping with functional MRI reduce morbidity in brain tumor resection? A systematic review and meta-analysis of 68 observational studies. Radiology 300, 338–349. doi: 10.1148/radiol.2021204723
  • Metzger, S. L., Littlejohn, K. T., Silva, A. B., Moses, D. A., Seaton, M. P., Wang, R., et al. (2023). A high-performance neuroprosthesis for speech decoding and avatar control. Nature 620, 1037–1046. doi: 10.1038/s41586-023-06443-4
  • Muhl, E., and Andorno, R. (2023). Neurosurveillance in the workplace: do employers have the right to monitor employees' minds? Front. Hum. Dyn. 5, 1245619. doi: 10.3389/fhumd.2023.1245619
  • Muñoz, J. M., and Marinaro, J. A. (forthcoming). “You shall have the thought”: habeas cogitationem as a new legal remedy to enforce freedom of thinking neurorights. Neuroethics.
  • Orwell, G. (1989). Nineteen Eighty-Four. London: Penguin Books.
  • Reardon, S. (2023). Mind-reading machines are here: is it time to worry? Nature 617, 236. doi: 10.1038/d41586-023-01486-z
  • Stanzione, P. (2021). Introductory Lecture to the Congress “Privacy e Neurodiritti: la Persona al Tempo delle Neuroscienze”, Rome, 28 January 2021. Available online at: https://www.garanteprivacy.it/home/docweb/-/docweb-display/docweb/9527139 (accessed November 4, 2023).
  • Tacca, A., and Gilbert, F. (2023). Why won't you listen to me? Predictive neurotechnology and epistemic authority. Neuroethics 16, 22. doi: 10.1007/s12152-023-09527-0
  • Tang, J., LeBel, A., Jain, S., and Huth, A. G. (2023). Semantic reconstruction of continuous language from non-invasive brain recordings. Nat. Neurosci. 26, 858–866. doi: 10.1038/s41593-023-01304-9
  • UN Human Rights Council (2022). Resolution 51/3 on Neurotechnology and Human Rights. Available online at: https://documents-dds-ny.un.org/doc/UNDOC/GEN/G22/525/01/PDF/G2252501.pdf (accessed November 4, 2023).
  • UNESCO IBC (International Bioethics Committee) (2021). Report of the International Bioethics Committee of UNESCO (IBC) on the Ethical Issues of Neurotechnology. Available online at: https://unesdoc.unesco.org/ark:/48223/pf0000378724 (accessed November 4, 2023).
  • Yuste, R., Goering, S., Arcas, B., Bi, G., Carmena, J. M., Carter, A., et al. (2017). Four ethical priorities for neurotechnologies and AI. Nature 551, 159–163. doi: 10.1038/551159a


How to Read Minds

Tim Bayne (2012), in S. Edwards, S. Richmond & G. Rees (Eds.), I Know What You Are Thinking: Brain Imaging and Mental Privacy.

Related Papers

Journal of Consciousness Studies

Bernard J Baars

research papers read minds

andreas roepstorff

Magnetoencephalography

Peter Walla

The Biological Bulletin

Alexei Samsonovich

Earths International Research Society

Functional brain imaging offers new opportunities for the study of that most pervasive of cognitive conditions, human consciousness. Since consciousness is attendant to so much of human cognitive life, its study requires secondary analysis of multiple experimental datasets. Here, four preprocessed datasets from the National fMRI Data Center are considered: Hazeltine et al., Neural activation during response competi- tion; Ishai et al., The representation of objects in the human occipital and temporal cortex; Mechelli et al., The effects of presentation rate during word and pseudoword reading; and Postle et al., Activity in human frontal cortex associated with spatial working memory and saccadic behavior. The study of consciousness also draws from multiple disciplines. In this article, the philosophical subdiscipline of phenomenology provides initial characterization of phenomenal structures conceptually necessary for an analysis of consciousness. These structures include phenomenal intentionality, phenomenal superposition, and experienced temporality. The empirical predictions arising from these structures require new interpretive methods for their confirmation. These methods begin with single-subject (preprocessed) scan series, and consider the patterns of all voxels as potential multivariate encodings of phenomenal information. Twenty-seven subjects from the four studies were analyzed with multivariate methods, revealing analogues of phenomenal structures, particularly the structures of temporality. In a second interpretive approach, artificial neural networks were used to detect a more explicit prediction from phenomenology, namely, that present experience contains and is inflected by past states of awareness and anticipated events. In all of 21 subjects in this analysis, nets were successfully trained to extract aspects of relative past and future brain states, in comparison with statistically similar controls. 
This exploratory study thus concludes that the proposed methods for ‘‘neurophenomenology’’ warrant further application, includ- ing the exploration of individual differences, multivariate differences between cognitive task conditions, and explora- tion of specific brain regions possibly contributing to the observations. All of these attractive questions, however, must be reserved for future research.

Dragan Marinkovic

The new research methodology of brain imaging has aim to make link between vast complexity of human perceptual, emotional and cognitive processes on one hand, and the human brain on the other side. Numeral brain imaging techniques are nowadays accessible: Computerized Tomography, Positron Emission Tomography, Magnetoencephalography, Magnetic Resonance Imaging etc. The technique most frequently used in order to detect " brain in action " is functional magnetic resonance imaging (fMRI). fMRI detects a hemodynamic response, the reaction of the vascular system, to the enlarged necessity for oxygen of neurons in a activated area. The technique has many potential practical applications including reading of brain states, brain–computer interfaces, communicating with locked-in patients, lie detection, etc. In this paper some of the advances of application of fMRI in mind reading and their potential implication have been discussed.

Geraint Rees

Recent advances in human neuroimaging have shown that it is possible to accurately decode a person's conscious experience based only on non-invasive measurements of their brain activity. Such 'brain reading' has mostly been studied in the domain of visual perception, where it helps reveal the way in which individual experiences are encoded in the human brain. The same approach can also be extended to other types of mental state, such as covert attitudes and lie detection.

Ashok K . Mukhopadhyay

The readers and authors of the journal are all neuroscientists. The purpose of this review is to familiarize them with an emerging worldview in which the brain is not the source of consciousness and cognitive faculties cannot be localised within specific brain regions. Further, the relationships between consciousness, cognition and behavior are not horizontal but vertical in time, involving nature's prequantum and pre-prequantum nests and the decision-making nest of consciousness. This calls for cautious interpretation of neuroimaging data in the context of consciousness for the management of neurological, psychiatric and psychological diseases. Method: The review is based upon the author's extensive publications on the emerging worldview of a neuraxis which, because of ontological reversal, works like an inverted tree with roots up, open to the depths of nature, and branches down as peripheral nerves. This view has been extended to build up the idea that systems psyche wor...



At every moment of every day, our brains meticulously sculpt a wealth of sensory signals into meaningful representations of the world around us. Yet how this continuous process actually works remains poorly understood.

Today, Meta is announcing an important milestone in the pursuit of that fundamental question. Using magnetoencephalography (MEG), a non-invasive neuroimaging technique in which thousands of brain activity measurements are taken per second, we showcase an AI system capable of decoding the unfolding of visual representations in the brain with an unprecedented temporal resolution.


This AI system can be deployed in real time to reconstruct, from brain activity, the images perceived and processed by the brain at each instant. This opens up an important avenue to help the scientific community understand how images are represented in the brain, and then used as foundations of human intelligence. Longer term, it may also provide a stepping stone toward non-invasive brain-computer interfaces in a clinical setting that could help people who, after suffering a brain lesion, have lost their ability to speak.

Leveraging our recent architecture trained to decode speech perception from MEG signals, we develop a three-part system consisting of an image encoder, a brain encoder, and an image decoder. The image encoder builds a rich set of representations of the image independently of the brain. The brain encoder then learns to align MEG signals to these image embeddings. Finally, the image decoder generates a plausible image given these brain representations.
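The three-part structure can be sketched end to end with stand-ins: a fixed random projection plays the image encoder, a least-squares map plays the brain encoder, and, to keep the sketch self-contained, a nearest-neighbour lookup replaces the generative image decoder. Everything below (the shapes, the simulated MEG responses, the linear maps) is an illustrative assumption, not the system described here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy dimensions; illustrative assumptions only.
n_images, img_dim, emb_dim, meg_dim = 100, 16, 32, 64

def image_encoder(images):
    # Stand-in for a pretrained vision model: a fixed random projection
    # producing one embedding per image, independently of the brain.
    proj = rng.normal(size=(images.shape[1], emb_dim)) / np.sqrt(images.shape[1])
    return images @ proj

images = rng.normal(size=(n_images, img_dim))
embeddings = image_encoder(images)

# Simulated MEG responses: a noisy linear mixture of the image embeddings.
meg = embeddings @ rng.normal(size=(emb_dim, meg_dim)) \
      + 0.1 * rng.normal(size=(n_images, meg_dim))

# Brain encoder: learn to align MEG signals with the image embeddings.
W, *_ = np.linalg.lstsq(meg, embeddings, rcond=None)
decoded = meg @ W

def nearest(e):
    # "Image decoder" stand-in: retrieve the closest known image embedding
    # rather than generating pixels.
    return int(np.argmin(((embeddings - e) ** 2).sum(axis=1)))

hits = sum(nearest(decoded[i]) == i for i in range(n_images))
print(f"top-1 retrieval from simulated MEG: {hits}/{n_images}")
```

With low simulated noise, the aligned MEG patterns retrieve the correct image almost every time; the real system replaces the retrieval step with a generative decoder producing a plausible image.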


We train this architecture on a public dataset of MEG recordings acquired from healthy volunteers and released by Things, an international consortium of academic researchers sharing experimental data based on the same image database.

We first compare the decoding performance obtained with a variety of pretrained image modules and show that the brain signals best align with modern computer vision AI systems like DINOv2, a recent self-supervised architecture able to learn rich visual representations without any human annotations. This result confirms that self-supervised learning leads AI systems to learn brain-like representations: The artificial neurons in the algorithm tend to be activated similarly to the physical neurons of the brain in response to the same image.

This functional alignment between such AI systems and the brain can then be used to guide the generation of an image similar to what the participants see in the scanner. While our results show that images are better decoded with functional magnetic resonance imaging (fMRI), our MEG decoder can be used at every instant of time and thus produces a continuous flux of images decoded from brain activity.

While the generated images remain imperfect, the results suggest that the reconstructed image preserves a rich set of high-level features, such as object categories. However, the AI system often generates inaccurate low-level features by misplacing or mis-orienting some objects in the generated images. In particular, using the Natural Scene Dataset, we show that images generated from MEG decoding remain less precise than the decoding obtained with fMRI, a comparably slow-paced but spatially precise neuroimaging technique.

Overall, our results show that MEG can be used to decipher, with millisecond precision, the rise of complex representations generated in the brain. More generally, this research strengthens Meta’s long-term research initiative to understand the foundations of human intelligence, identify its similarities as well as differences compared to current machine learning algorithms, and ultimately guide the development of AI systems designed to learn and reason like humans.


Mind-Reading Computers


MIT News | Massachusetts Institute of Technology


Ultrasound offers a new way to perform deep brain stimulation


Deep brain stimulation, by implanted electrodes that deliver electrical pulses to the brain, is often used to treat Parkinson’s disease and other neurological disorders. However, the electrodes used for this treatment can eventually corrode and accumulate scar tissue, requiring them to be removed.

MIT researchers have now developed an alternative approach that uses ultrasound instead of electricity to perform deep brain stimulation, delivered by a fiber about the thickness of a human hair. In a study of mice, they showed that this stimulation can trigger neurons to release dopamine, in a part of the brain that is often targeted in patients with Parkinson’s disease.

“By using ultrasonography, we can create a new way of stimulating neurons to fire in the deep brain,” says Canan Dagdeviren, an associate professor in the MIT Media Lab and the senior author of the new study. “This device is thinner than a hair fiber, so there will be negligible tissue damage, and it is easy for us to navigate this device in the deep brain.”


In addition to offering a potentially safer way to deliver deep brain stimulation, this approach could also become a valuable tool for researchers seeking to learn more about how the brain works.

MIT graduate student Jason Hou and MIT postdoc Md Osman Goni Nayeem are the lead authors of the paper, along with collaborators from MIT’s McGovern Institute for Brain Research, Boston University, and Caltech. The study appears today in Nature Communications.

Deep in the brain

Dagdeviren’s lab has previously developed wearable ultrasound devices that can be used to deliver drugs through the skin or perform diagnostic imaging on various organs. However, ultrasound cannot penetrate deeply into the brain from a device attached to the head or skull.

“If we want to go into the deep brain, then it cannot be just wearable or attachable anymore. It has to be implantable,” Dagdeviren says. “We carefully customize the device so that it will be minimally invasive and avoid major blood vessels in the deep brain.”

Deep brain stimulation with electrical impulses is FDA-approved to treat symptoms of Parkinson’s disease. This approach uses millimeter-thick electrodes to activate dopamine-producing cells in a brain region called the substantia nigra. However, once implanted in the brain, the devices eventually begin to corrode, and scar tissue that builds up surrounding the implant can interfere with the electrical impulses.

The MIT team set out to see if they could overcome some of those drawbacks by replacing electrical stimulation with ultrasound. Most neurons have ion channels that are responsive to mechanical stimulation, such as the vibrations from sound waves, so ultrasound can be used to elicit activity in those cells. However, existing technologies for delivering ultrasound to the brain through the skull can’t reach deep into the brain with high precision because the skull itself can interfere with the ultrasound waves and cause off-target stimulation.

“To precisely modulate neurons, we must go deeper, leading us to design a new kind of ultrasound-based implant that produces localized ultrasound fields,” Nayeem says. To safely reach those deep brain regions, the researchers designed a hair-thin fiber made from a flexible polymer. The tip of the fiber contains a drum-like ultrasound transducer with a vibrating membrane. When this membrane, which encapsulates a thin piezoelectric film, is driven by a small electrical voltage, it generates ultrasonic waves that can be detected by nearby cells.
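A quick back-of-the-envelope check helps situate the scale of such a device. Assuming a commonly cited sound speed in soft tissue of roughly 1540 m/s (an illustrative figure, not a specification from this study), the ultrasound wavelength at typical transducer frequencies follows from lambda = c / f:

```python
# Illustrative scale check: ultrasound wavelength in soft tissue.
# c_tissue is a commonly cited approximation, not this device's spec.
c_tissue = 1540.0  # speed of sound in soft tissue, m/s (assumed)

for f_mhz in (1.0, 5.0, 10.0):
    f_hz = f_mhz * 1e6
    lam_mm = c_tissue / f_hz * 1e3  # wavelength in millimetres
    print(f"{f_mhz:4.1f} MHz -> wavelength ~ {lam_mm:.3f} mm")
```

At megahertz frequencies the wavelength is on the order of a millimetre or less, which is why a miniature implanted transducer can produce usefully localized fields.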

“It’s tissue-safe, there’s no exposed electrode surface, and it’s very low-power, which bodes well for translation to patient use,” Hou says.

In tests in mice, the researchers showed that this ultrasound device, which they call ImPULS (Implantable Piezoelectric Ultrasound Stimulator), can provoke activity in neurons of the hippocampus. Then, they implanted the fibers into the dopamine-producing substantia nigra and showed that they could stimulate neurons in the dorsal striatum to produce dopamine.

“Brain stimulation has been one of the most effective, yet least understood, methods used to restore health to the brain. ImPULS gives us the ability to stimulate brain cells with exquisite spatial-temporal resolution and in a manner that doesn’t produce the kind of damage or inflammation as other methods. Seeing its effectiveness in areas like the hippocampus opened an entirely new way for us to deliver precise stimulation to targeted circuits in the brain,” says Steve Ramirez, an assistant professor of psychological and brain sciences at Boston University, and a faculty member at B.U.’s Center for Systems Neuroscience, who is also an author of the study.

A customizable device

All of the components of the device are biocompatible, including the piezoelectric layer, which is made of a novel ceramic called potassium sodium niobate, or KNN. The current version of the implant is powered by an external power source, but the researchers envision that future versions could be powered by a small implantable battery and electronics unit.

The researchers developed a microfabrication process that enables them to easily alter the length and thickness of the fiber, as well as the frequency of the sound waves produced by the piezoelectric transducer. This could allow the devices to be customized for different brain regions.

“We cannot say that the device will give the same effect on every region in the brain, but we can easily and very confidently say that the technology is scalable, and not only for mice. We can also make it bigger for eventual use in humans,” Dagdeviren says.

The researchers now plan to investigate how ultrasound stimulation might affect different regions of the brain, and if the devices can remain functional when implanted for year-long timescales. They are also interested in the possibility of incorporating a microfluidic channel, which could allow the device to deliver drugs as well as ultrasound.

In addition to holding promise as a potential therapeutic for Parkinson’s or other diseases, this type of ultrasound device could also be a valuable tool to help researchers learn more about the brain, the researchers say.

“Our goal is to provide this as a research tool for the neuroscience community, because we believe that we don’t have enough effective tools to understand the brain,” Dagdeviren says. “As device engineers, we are trying to provide new tools so that we can learn more about different regions of the brain.”

The research was funded by the MIT Media Lab Consortium and the Brain and Behavior Foundation Research (BBRF) NARSAD Young Investigator Award.


Reading is one of the most important components of college learning, and yet it’s one we often take for granted. Of course, students who come to Harvard know how to read, but many are unaware that there are different ways to read and that the strategies they use while reading can greatly impact memory and comprehension. Furthermore, students may find themselves encountering kinds of texts they haven’t worked with before, like academic articles and books, archival material, and theoretical texts.  

So how should you approach reading in this new environment? And how do you manage the quantity of reading you’re asked to cover in college? 

Start by asking “Why am I reading this?”

To read effectively, it helps to read with a goal. This means understanding before you begin reading what you need to get out of that reading. Having a goal is useful because it helps you focus on relevant information and know when you’re done reading, whether your eyes have seen every word or not. 

Some sample reading goals:

  • To find a paper topic or write a paper; 
  • To have a comment for discussion; 
  • To supplement ideas from lecture; 
  • To understand a particular concept; 
  • To memorize material for an exam; 
  • To research for an assignment; 
  • To enjoy the process (i.e., reading for pleasure!). 

Your goals for reading are often developed in relation to your instructor’s goals in assigning the reading, but sometimes they will diverge. The point is to know what you want to get out of your reading and to make sure you’re approaching the text with that goal in mind. Write down your goal and use it to guide your reading process. 

Next, ask yourself “How should I read this?”  

Not every text you’re assigned in college should be read the same way.  Depending on the type of reading you’re doing and your reading goal, you may find that different reading strategies are most supportive of your learning. Do you need to understand the main idea of your text? Or do you need to pay special attention to its language? Is there data you need to extract? Or are you reading to develop your own unique ideas?  

The key is to choose a reading strategy that will help you achieve your reading goal. Factors to consider might be: 

  • The timing of your reading (e.g., before vs. after class) 
  • What type of text you are reading (e.g., an academic article vs. a novel) 
  • How dense or unfamiliar a text is 
  • How extensively you will be using the text 
  • What type of critical thinking (if any) you are expected to bring to the reading 

Based on your consideration of these factors, you may decide to skim the text or focus your attention on a particular portion of it. You also might choose to find resources that can assist you in understanding the text if it is particularly dense or unfamiliar. For textbooks, you might even use a reading strategy like SQ3R.

Finally, ask yourself “How long will I give this reading?”  

Often, we decide how long we will read a text by estimating our reading speed and calculating an appropriate length of time based on it. But this can lead to long stretches of engaging ineffectually with texts and losing sight of our reading goals. These calculations can also be quite inaccurate, since our reading speed is often determined by the density and familiarity of texts, which varies across assignments. 

For each text you are reading, ask yourself “based on my reading goal, how long does this reading deserve?” Sometimes, your answer will be “This is a super important reading. So, it takes as long as it takes.” In that case, create a time estimate using your best guess for your reading speed. Add some extra time to your estimate as a buffer in case your calculation is a little off. You won’t be sad to finish your reading early, but you’ll struggle if you haven’t given yourself enough time. 

For other readings, once we ask how long the text deserves, we will realize based on our other academic commitments and a text’s importance in the course that we can only afford to give a certain amount of time to it. In that case, you want to create a time limit for your reading. Try to come up with a time limit that is appropriate for your reading goal. For instance, let’s say I am working with an academic article. I need to discuss it in class, but I can only afford to give it thirty minutes of time because we’re reading several articles for that class. In this case, I will set an alarm for thirty minutes and spend that time understanding the thesis/hypothesis and looking through the research to look for something I’d like to discuss in class. In this case, I might not read every word of the article, but I will spend my time focusing on the most important parts of the text based on how I need to use it. 

If you need additional guidance or support, reach out to the course instructor and the ARC.  

If you find yourself struggling through the readings for a course, you can ask the course instructor for guidance. Some ways to ask for help are: “How would you recommend I go about approaching the reading for this course?” or “Is there a way for me to check whether I am getting what I should be out of the readings?” 

If you are looking for more tips on how to read effectively and efficiently, book an appointment with an academic coach at the ARC to discuss your specific assignments and how you can best approach them! 

SQ3R is a form of reading and note taking that is especially suited to working with textbooks and empirical research articles in the sciences and social sciences. It is designed to facilitate your reading process by drawing your attention to the material you don’t know, while building on the pre-existing knowledge you already have. It’s a great first step in any general study plan. Here are the basic components: Survey, Question, Read, Recite, and Review.

When using SQ3R, you don’t start by reading, but by “surveying” the text as a whole. What does that mean? Surveying involves looking at all the components of the text—like its subheadings, figures, review questions, etc.—to get a general sense of what the text is trying to achieve. 

The next step of SQ3R still doesn’t involve reading! Now your job is to create questions around the material you noted in your survey. Make note of the things you already seem to understand even without reading, and then write out questions about the material that seems new or that you don’t fully understand. This list of questions will help guide your reading, allowing you to focus on what you need to learn about the topic. The goal is to be able to answer these questions by the end of your reading (and to use them for active study as well!). 

Now that you’ve surveyed and questioned your text, it’s finally time to read! Read with an eye toward answering your questions, and highlight or make marginal notes to yourself to draw your attention to important parts of the text. 

If you’ve read your text with an eye to your questions, you will now want to practice answering them out loud. You can also take notes on your answers. This will help you know what to focus on as you review. 

As you study, look back at your questions. You might find it helpful to move those questions off the physical text. For example, when you put questions on flashcards, you make it hard to rely on memory cues embedded on the page and, thus, push yourself to depend on your own memory for the answer. (Of course, drawing from your memory is what you’ll need to do for the test!) 

Seeing Textbooks in a New Light

Textbooks can be a fantastic supportive resource for your learning. They supplement the learning you’ll do in the classroom and can provide critical context for the material you cover there. In some courses, the textbook may even have been written by the professor to work in harmony with lectures.  

There are a variety of ways in which professors use textbooks, so you need to assess critically how and when to read the textbook in each course you take.  

Textbooks can provide: 

  • A fresh voice through which to absorb material. For challenging concepts, they can offer new language and details that might fill in gaps in your understanding. 
  • The chance to “preview” lecture material, priming your mind for the big ideas you’ll be exposed to in class. 
  • The chance to review material, making sense of the finer points after class. 
  • A resource that is accessible any time, whether it’s while you are studying for an exam, writing a paper, or completing a homework assignment.

Textbook reading is similar to and different from other kinds of reading. Some things to keep in mind as you experiment with its use: 

Should you read the textbook before or after lecture? The answer is “both” and “it depends.” In general, reading or at least previewing the assigned textbook material before lecture will help you pay attention in class and pull out the more important information from lecture, which also tends to make note-taking easier. If you read the textbook before class, then a quick review after lecture is useful for solidifying the information in memory, filling in details that you missed, and addressing gaps in your understanding. In addition, whether to read before and/or after class also depends on the material, your experience level with it, and the style of the text. It’s a good idea to experiment to find what works best for you!

Just like other kinds of course reading, it is still important to read with a goal. Focus your reading goals on the particular section of the textbook that you are reading: Why is it important to the course I’m taking? What are the big takeaways? Also take note of any questions you may have that are still unresolved.

Reading linearly (left to right and top to bottom) does not always make the most sense. Try to gain a sense of the big ideas within the reading before you start: Survey for structure, ask Questions, and then Read – go back to flesh out the finer points within the most important and detail-rich sections.

Summarizing pushes you to identify the main points of the reading and articulate them succinctly in your own words, making it more likely that you will be able to retrieve this information later. To further strengthen your retrieval abilities, quiz yourself when you are done reading and summarizing. Quizzing yourself allows what you’ve read to enter your memory with more lasting potential, so you’ll be able to recall the information for exams or papers. 

Marking Text

Marking text, which often involves making marginal notes, helps with reading comprehension by keeping you focused. It also helps you find important information when reviewing for an exam or preparing to write an essay. The next time you’re reading, write notes in the margins as you go or, if you prefer, make notes on a separate document. 

Your marginal notes will vary depending on the type of reading. Some possible areas of focus: 

  • What themes do you see in the reading that relate to class discussions? 
  • What themes do you see in the reading that you have seen in other readings? 
  • What questions does the reading raise in your mind? 
  • What does the reading make you want to research more? 
  • Where do you see contradictions within the reading or in relation to other readings for the course? 
  • Can you connect themes or events to your own experiences? 

Your notes don’t have to be long. You can just write two or three words to jog your memory. For example, if you notice that a book has a theme relating to friendship, you can just write, “pp. 52-53 Theme: Friendship.” If you need to remind yourself of the details later in the semester, you can re-read that part of the text more closely.

Reading Workshops

If you are looking for help with developing best practices and using strategies for some of the tips listed above, come to an ARC workshop on reading!

The state of AI in early 2024: Gen AI adoption spikes and starts to generate value

If 2023 was the year the world discovered generative AI (gen AI), 2024 is the year organizations truly began using—and deriving business value from—this new technology. In the latest McKinsey Global Survey on AI, 65 percent of respondents report that their organizations are regularly using gen AI, nearly double the percentage from our previous survey just ten months ago. Respondents’ expectations for gen AI’s impact remain as high as they were last year, with three-quarters predicting that gen AI will lead to significant or disruptive change in their industries in the years ahead.

About the authors

This article is a collaborative effort by Alex Singla, Alexander Sukharevsky, Lareina Yee, and Michael Chui, with Bryce Hall, representing views from QuantumBlack, AI by McKinsey, and McKinsey Digital.

Organizations are already seeing material benefits from gen AI use, reporting both cost decreases and revenue jumps in the business units deploying the technology. The survey also provides insights into the kinds of risks presented by gen AI—most notably, inaccuracy—as well as the emerging practices of top performers to mitigate those challenges and capture value.

AI adoption surges

Interest in generative AI has also brightened the spotlight on a broader set of AI capabilities. For the past six years, AI adoption by respondents’ organizations has hovered at about 50 percent. This year, the survey finds that adoption has jumped to 72 percent (Exhibit 1). And the interest is truly global in scope. Our 2023 survey found that AI adoption did not reach 66 percent in any region; however, this year more than two-thirds of respondents in nearly every region say their organizations are using AI (organizations based in Central and South America are the exception, with 58 percent of respondents there reporting AI adoption). Looking by industry, the biggest increase in adoption can be found in professional services (including organizations focused on human resources, legal services, management consulting, market research, R&D, tax preparation, and training).

Also, responses suggest that companies are now using AI in more parts of the business. Half of respondents say their organizations have adopted AI in two or more business functions, up from less than a third of respondents in 2023 (Exhibit 2).

Gen AI adoption is most common in the functions where it can create the most value

Most respondents now report that their organizations—and they as individuals—are using gen AI. Sixty-five percent of respondents say their organizations are regularly using gen AI in at least one business function, up from one-third last year. The average organization using gen AI is doing so in two functions, most often in marketing and sales and in product and service development—two functions in which previous research determined that gen AI adoption could generate the most value (“The economic potential of generative AI: The next productivity frontier,” McKinsey, June 14, 2023)—as well as in IT (Exhibit 3). The biggest increase from 2023 is found in marketing and sales, where reported adoption has more than doubled. Yet across functions, only two use cases, both within marketing and sales, are reported by 15 percent or more of respondents.

Gen AI also is weaving its way into respondents' personal lives. Compared with 2023, respondents are much more likely to be using gen AI at work and even more likely to be using gen AI both at work and in their personal lives (Exhibit 4). The survey finds upticks in gen AI use across all regions, with the largest increases in Asia–Pacific and Greater China. Respondents at the highest seniority levels, meanwhile, show larger jumps in the use of gen AI tools for work and outside of work compared with their midlevel-management peers. Looking at specific industries, respondents working in energy and materials and in professional services report the largest increase in gen AI use.

Investments in gen AI and analytical AI are beginning to create value

The latest survey also shows how different industries are budgeting for gen AI. Responses suggest that, in many industries, organizations are about equally as likely to be investing more than 5 percent of their digital budgets in gen AI as they are in nongenerative, analytical-AI solutions (Exhibit 5). Yet in most industries, larger shares of respondents report that their organizations spend more than 20 percent on analytical AI than on gen AI. Looking ahead, most respondents—67 percent—expect their organizations to invest more in AI over the next three years.

Where are those investments paying off? For the first time, our latest survey explored the value created by gen AI use by business function. The function in which the largest share of respondents report seeing cost decreases is human resources. Respondents most commonly report meaningful revenue increases (of more than 5 percent) in supply chain and inventory management (Exhibit 6). For analytical AI, respondents most often report seeing cost benefits in service operations—in line with what we found last year—as well as meaningful revenue increases from AI use in marketing and sales.

Inaccuracy: The most recognized and experienced risk of gen AI use

As businesses begin to see the benefits of gen AI, they’re also recognizing the diverse risks associated with the technology. These can range from data management risks such as data privacy, bias, or intellectual property (IP) infringement to model management risks, which tend to focus on inaccurate output or lack of explainability. A third big risk category is security and incorrect use.

Respondents to the latest survey are more likely than they were last year to say their organizations consider inaccuracy and IP infringement to be relevant to their use of gen AI, and about half continue to view cybersecurity as a risk (Exhibit 7).

Conversely, respondents are less likely than they were last year to say their organizations consider workforce and labor displacement to be relevant risks, and they are not increasing efforts to mitigate them.

In fact, inaccuracy—which can affect use cases across the gen AI value chain, ranging from customer journeys and summarization to coding and creative content—is the only risk that respondents are significantly more likely than last year to say their organizations are actively working to mitigate.

Some organizations have already experienced negative consequences from the use of gen AI, with 44 percent of respondents saying their organizations have experienced at least one consequence (Exhibit 8). Respondents most often report inaccuracy as a risk that has affected their organizations, followed by cybersecurity and explainability.

Our previous research has found that there are several elements of governance that can help in scaling gen AI use responsibly, yet few respondents report having these risk-related practices in place (“Implementing generative AI with speed and safety,” McKinsey Quarterly, March 13, 2024). For example, just 18 percent say their organizations have an enterprise-wide council or board with the authority to make decisions involving responsible AI governance, and only one-third say gen AI risk awareness and risk mitigation controls are required skill sets for technical talent.

Bringing gen AI capabilities to bear

The latest survey also sought to understand how, and how quickly, organizations are deploying these new gen AI tools. We have found three archetypes for implementing gen AI solutions: takers use off-the-shelf, publicly available solutions; shapers customize those tools with proprietary data and systems; and makers develop their own foundation models from scratch (“Technology's generational moment with generative AI: A CIO and CTO guide,” McKinsey, July 11, 2023). Across most industries, the survey results suggest that organizations are finding off-the-shelf offerings applicable to their business needs—though many are pursuing opportunities to customize models or even develop their own (Exhibit 9). About half of reported gen AI uses within respondents' business functions are utilizing off-the-shelf, publicly available models or tools, with little or no customization. Respondents in energy and materials, technology, and media and telecommunications are more likely to report significant customization or tuning of publicly available models or developing their own proprietary models to address specific business needs.

Respondents most often report that their organizations required one to four months from the start of a project to put gen AI into production, though the time it takes varies by business function (Exhibit 10). It also depends upon the approach for acquiring those capabilities. Not surprisingly, reported uses of highly customized or proprietary models are 1.5 times more likely than off-the-shelf, publicly available models to take five months or more to implement.

Gen AI high performers are excelling despite facing challenges

Gen AI is a new technology, and organizations are still early in the journey of pursuing its opportunities and scaling it across functions. So it’s little surprise that only a small subset of respondents (46 out of 876) report that a meaningful share of their organizations’ EBIT can be attributed to their deployment of gen AI. Still, these gen AI leaders are worth examining closely. These, after all, are the early movers, who already attribute more than 10 percent of their organizations’ EBIT to their use of gen AI. Forty-two percent of these high performers say more than 20 percent of their EBIT is attributable to their use of nongenerative, analytical AI, and they span industries and regions—though most are at organizations with less than $1 billion in annual revenue. The AI-related practices at these organizations can offer guidance to those looking to create value from gen AI adoption at their own organizations.

To start, gen AI high performers are using gen AI in more business functions—an average of three functions, while others average two. They, like other organizations, are most likely to use gen AI in marketing and sales and product or service development, but they’re much more likely than others to use gen AI solutions in risk, legal, and compliance; in strategy and corporate finance; and in supply chain and inventory management. They’re more than three times as likely as others to be using gen AI in activities ranging from processing of accounting documents and risk assessment to R&D testing and pricing and promotions. While, overall, about half of reported gen AI applications within business functions are utilizing publicly available models or tools, gen AI high performers are less likely to use those off-the-shelf options than to either implement significantly customized versions of those tools or to develop their own proprietary foundation models.

What else are these high performers doing differently? For one thing, they are paying more attention to gen-AI-related risks. Perhaps because they are further along on their journeys, they are more likely than others to say their organizations have experienced every negative consequence from gen AI we asked about, from cybersecurity and personal privacy to explainability and IP infringement. Given that, they are more likely than others to report that their organizations consider those risks, as well as regulatory compliance, environmental impacts, and political stability, to be relevant to their gen AI use, and they say they take steps to mitigate more risks than others do.

Gen AI high performers are also much more likely to say their organizations follow a set of risk-related best practices (Exhibit 11). For example, they are nearly twice as likely as others to involve the legal function and embed risk reviews early on in the development of gen AI solutions—that is, to “ shift left .” They’re also much more likely than others to employ a wide range of other best practices, from strategy-related practices to those related to scaling.

In addition to experiencing the risks of gen AI adoption, high performers have encountered other challenges that can serve as warnings to others (Exhibit 12). Seventy percent say they have experienced difficulties with data, including defining processes for data governance, developing the ability to quickly integrate data into AI models, and an insufficient amount of training data, highlighting the essential role that data play in capturing value. High performers are also more likely than others to report experiencing challenges with their operating models, such as implementing agile ways of working and effective sprint performance management.

About the research

The online survey was in the field from February 22 to March 5, 2024, and garnered responses from 1,363 participants representing the full range of regions, industries, company sizes, functional specialties, and tenures. Of those respondents, 981 said their organizations had adopted AI in at least one business function, and 878 said their organizations were regularly using gen AI in at least one function. To adjust for differences in response rates, the data are weighted by the contribution of each respondent’s nation to global GDP.
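The GDP-based weighting described above can be sketched in a few lines. The sketch below is illustrative only: the countries, GDP shares, and response counts are hypothetical, not figures from the survey.

```python
# Reweight survey responses so each country's contribution matches its share
# of global GDP, then compute a weighted adoption rate.
# All figures below are made up for illustration.

# Hypothetical raw responses: country -> (respondents, respondents reporting AI adoption)
responses = {
    "US": (400, 300),
    "Germany": (200, 140),
    "India": (100, 80),
}

# Hypothetical shares of global GDP for the same countries, normalized to sum to 1.
gdp_share = {"US": 0.55, "Germany": 0.25, "India": 0.20}

total_respondents = sum(n for n, _ in responses.values())

# Weight per respondent: the country's GDP share divided by its share of the sample.
weights = {
    c: gdp_share[c] / (n / total_respondents) for c, (n, _) in responses.items()
}

# Weighted adoption rate across the sample.
weighted_adopters = sum(weights[c] * adopted for c, (_, adopted) in responses.items())
weighted_total = sum(weights[c] * n for c, (n, _) in responses.items())
weighted_rate = weighted_adopters / weighted_total
print(f"Weighted adoption rate: {weighted_rate:.1%}")
```

With these toy numbers, over-sampled countries (relative to their GDP share) are weighted down and under-sampled ones weighted up, while the weighted sample size stays equal to the raw one.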

Alex Singla and Alexander Sukharevsky are global coleaders of QuantumBlack, AI by McKinsey, and senior partners in McKinsey's Chicago and London offices, respectively; Lareina Yee is a senior partner in the Bay Area office, where Michael Chui, a McKinsey Global Institute partner, is a partner; and Bryce Hall is an associate partner in the Washington, DC, office.

They wish to thank Kaitlin Noe, Larry Kanter, Mallika Jhamb, and Shinjini Srivastava for their contributions to this work.

This article was edited by Heather Hanselman, a senior editor in McKinsey’s Atlanta office.


FinancialResearch.gov

Conferences: 2024 Financial Stability Conference – Call for Papers

Published: June 4, 2024


The Federal Reserve Bank of Cleveland and the Office of Financial Research invite the submission of research and policy-oriented papers for the 2024 Financial Stability Conference on November 21–22, 2024. The conference will be held in person in Cleveland, Ohio, and virtually.

Markets and institutions, increasingly interconnected, are being challenged by the dizzying pace of changes in the financial system, accelerating the buildup of risk and threats to solvency. Regulatory adaptations add another layer of complexity to the issue. Increasingly sophisticated algorithms and the rise of generative artificial intelligence may create new vulnerabilities across the system as banks, nonbank financial institutions, and financial markets exploit nascent opportunities. The twelfth annual conference will explore how firms and markets can become resilient or even antifragile and how regulators can encourage and accommodate needed changes.

Conference Format

The conference will bring together policymakers, market participants, and researchers in two types of sessions:

  • Policy Discussions These sessions include keynote addresses and panel discussions in which participants from industry, regulatory agencies, and academia share their insights.
  • Research Forums These forums follow the format of an academic workshop and comprise sessions to discuss submitted papers.

We welcome submissions of research on topics related to potential financial stability risks faced by financial markets and institutions, sources of financial system resilience, and related public policy. Conference topics include but are not limited to the following:

Emerging Risks

As the financial system continues to evolve, new risks emerge along with new businesses, new strategies, and new technologies. Old problems take on new dimensions as fiscal and monetary policies adapt to new economic and political realities, thereby adding new stresses to regulatory frameworks that themselves struggle to adapt. As information technology moves risk out of closely regulated sectors, it also creates new vulnerabilities to cyber-attacks. A rapidly changing physical environment and the prospect of nonhuman intelligences add even more uncertainty.

  • Financial stability concerns related to faster payments and equity transactions, such as the implementation of T+1 settlement
  • The financial stability implications of generative AI and deep learning
  • Cryptocurrencies, smart contracts, and blockchain
  • Cyber-attacks
  • Climate risk
  • Interaction of monetary policy with macroprudential supervision
  • Sources of resilience in the financial sector

Financial Institutions

A riskier macroeconomic environment poses challenges for financial institutions and their supervisors. Risk management tools and strategies will be tested by fluctuations in inflation and output and by new regulations designed to mitigate vulnerabilities. Network effects, including interactions with a rapidly evolving fintech and crypto sector, may lead to further risks at a systemic level. How are institutions adapting to these risks and associated regulatory changes? How prepared are regulators and policymakers? Are existing microprudential and macroprudential toolkits sufficient?

  • Bank lending to nonbank financial institutions (NBFI)
  • Insurance markets
  • Banking as a service (BaaS)
  • Regional banks
  • Interest rate risk
  • Risks of rapid growth
  • Unrealized losses on balance sheets and mark-to-market accounting
  • Impact of reforms to lenders of last resort, deposit insurance, capital rules, and the FHLB system

Financial Markets

Inflation and the associated responses of central banks around the world have contributed to stress to financial markets that has not been seen in the recent past. Financial stability threats may arise from resulting reallocations through volatility spikes, fire sales, and financial contagion. The continued development of algorithms, decentralized finance (DeFi), and complex artificial intelligence has the potential to add novel risks to financial markets. To what extent do investors recognize these risks, and how does recognition affect investors’ allocations? How does opacity resulting from deficiencies in reporting, risk management, and operation standards for these risks affect investor behavior?

  • Risks associated with high levels and issuance of public debt (for example, recent volatility around Treasury funding announcements, concerns about primary dealers and principal trading firms, the SEC’s recent rule about what defines a dealer and what that might mean for Treasury markets)
  • Short-term funding
  • Implications of deficits, central bank balance sheet policies, and financial stability
  • The impact of technological innovation on financial markets

Real Estate Markets

Real estate is often one of the sectors most affected by financial instability, and it can also be a cause of it. Construction and housing play a major role in the transmission of monetary policy, and real estate-based lending remains a major activity of banks, insurance companies, and mortgage companies. A complex and active securities market ties together financial institutions and markets in both residential and commercial real estate.

  • Commercial real estate (CRE)
  • Nonbank originators and servicers
  • International contagion
  • Implications of remote work and the impact of COVID-19
  • Effects of monetary policy on real estate markets

Scientific Committee

  • Vikas Agarwal, Georgia State University
  • Marco Di Maggio, Harvard University
  • Michael Fleming, Federal Reserve Bank of New York
  • Rod Garratt, University of California, Santa Barbara
  • Mariassunta Giannetti, Stockholm School of Economics
  • Arpit Gupta, New York University, Stern School of Business
  • Zhiguo He, Stanford University
  • Zhaogang Song, Johns Hopkins University
  • Russell R. Wermers, Robert H. Smith School of Business, The University of Maryland at College Park

Paper Submission Procedure

The deadline for submissions is Friday, July 5, 2024. Please submit completed papers through Conference Maker . Notification of acceptance will be provided by Friday, September 6, 2024. Final conference papers are due on Friday, November 1, 2024. In-person paper presentations are preferred. Questions should be directed to [email protected] .



2024 Environmental Performance Index: A Surprise Top Ranking, Global Biodiversity Commitment Tested

The Baltic nation of Estonia is No. 1 in the 2024 rankings, while Denmark, one of the top-ranked countries in the 2022 EPI, dropped to 10th place, highlighting the challenges of reducing emissions in hard-to-decarbonize industries. Meanwhile, “paper parks” are proving a global challenge to international biodiversity commitments.


In 2022, at the UN Biodiversity Conference (COP 15) in Montreal, over 190 countries made what has been called “the biggest conservation commitment the world has ever seen.” The Kunming-Montreal Global Biodiversity Framework called for the effective protection and management of 30% of the world's terrestrial, inland water, and coastal and marine areas by the year 2030, commonly known as the 30x30 target. While there has been progress toward reaching this ambitious goal of protecting 30% of land and seas on paper, just ahead of World Environment Day, the 2024 Environmental Performance Index (EPI), an analysis by Yale researchers that provides a data-driven summary of the state of sustainability around the world, shows that in many cases such protections have failed to halt ecosystem loss or curtail environmentally destructive practices.

A new metric that assesses how well countries are protecting important ecosystems indicated that while nations have made progress in protecting land and seas, many of these areas are “paper parks” where commercial activities such as mining and trawling continue to occur — sometimes at a higher rate than in non-protected areas. The EPI analyses show that in 23 countries, more than 10% of the land protected is covered by croplands and buildings, and in 35 countries there is more fishing activity inside marine protected areas than outside. 

“Protected areas are failing to achieve their goals in different ways,” said Sebastián Block Munguía, a postdoctoral associate with the Yale Center for Environmental Law and Policy (YCELP) and the lead author of the report. “In Europe, destructive fishing is allowed inside marine protected areas, and a large fraction of the area protected in land is covered by croplands, not natural ecosystems. In many developing countries, even when destructive activities are not allowed in protected areas, shortages of funding and personnel make it difficult to enforce rules.”

The 2024 EPI, published by the Yale Center for Environmental Law and Policy and Columbia University's Center for International Earth Science Information Network, ranks 180 countries based on 58 performance indicators to track progress on mitigating climate change, promoting environmental health, and safeguarding ecosystem vitality. The index evaluates efforts by the nations to reach U.N. sustainability goals, the 2015 Paris Climate Change Agreement, and the Kunming-Montreal Global Biodiversity Framework. The underlying data for the index's indicators come from a variety of academic institutions and international organizations and cover different periods. Protected area coverage indicators are based on data from March 2024, while greenhouse emissions data are from 2022.


The index found that many countries that were leading in sustainability goals have fallen behind or stalled, illustrating the challenges of reducing emissions in hard-to-decarbonize industries and resistant sectors such as agriculture. In several countries, recent drops in agricultural greenhouse gas emissions (GHG) have been the result of external circumstances, not policy. For example, in Albania, supply chain disruptions led to more expensive animal feed that resulted in a sharp reduction in cows and, consequentially, nitrous oxide and methane emissions.

Estonia leads this year's rankings with a 40% drop in GHG emissions over the last decade, largely attributed to replacing dirty oil shale power plants with cleaner energy sources. The country is drafting a proposal to achieve, by 2040, a CO2-neutral energy sector and a CO2-neutral public transport network in bigger cities.

“Estonia has decreased its GHG emissions by 59% compared to 1990. The energy sector will be the biggest contributor in reducing emissions in the coming years, as we have an aim to produce 100% of our electricity consumption from renewables by 2030,” said Kristi Klaas, Estonia's vice-minister for Green Transition. Klaas discussed some of the policies that led to the country's success, as well as ongoing challenges such as reducing emissions in the agriculture sector, at a webinar hosted by YCELP on June 3. Dr. Abdullah Ali Abdullah Al-Amri, chairman of the Environment Authority of Oman, also joined the webinar to discuss efforts aimed at protecting the country's multiple ecosystems with rare biodiversity and endangered species, such as the Arabian oryx, and subspecies, such as the Arabian leopard.

Satellite image of the New Haven area

Subscribe to “YSE 3”

Biweekly, we highlight three news and research stories about the work we’re doing at Yale School of the Environment.

Denmark, the top-ranked country in the 2022 EPI, dropped to 10th place as its pace of decarbonization slowed, highlighting that early gains from “low-hanging-fruit policies, such as switching to electricity generation from coal to natural gas and expanding renewable power generation,” are themselves insufficient, the index notes. Emissions in the world's largest economies are falling too slowly, as in the U.S. (ranked 34th), or still rising, as in China, Russia, and India (ranked 176th).

Over the last decade, only five countries (Estonia, Finland, Greece, Timor-Leste, and the United Kingdom) have cut their GHG emissions at the rate needed to reach net zero by 2050. Vietnam and other developing countries in Southeast and Southern Asia, such as Pakistan, Laos, Myanmar, and Bangladesh, are ranked the lowest, indicating the urgency of international cooperation to help provide a path for struggling nations to achieve sustainability.

“The 2024 Environmental Performance Index highlights a range of critical sustainability challenges from climate change to biodiversity loss and beyond — and reveals trends suggesting that countries across the world need to redouble their efforts to protect critical ecosystems and the vitality of our planet,” said Daniel Esty, Hillhouse Professor of Environmental Law and Policy and director of YCELP.

Broadband Internet Access, Economic Growth, and Wellbeing

Between 2000 and 2008, access to high-speed, broadband internet grew significantly in the United States, but there is debate on whether access to high-speed internet improves or harms wellbeing. We find that a ten percent increase in the proportion of county residents with access to broadband internet leads to a 1.01 percent reduction in the number of suicides in a county, as well as improvements in self-reported mental and physical health. We further find that this reduction in suicide deaths is likely due to economic improvements in counties that have access to broadband internet. Counties with increased access to broadband internet see reductions in poverty rate and unemployment rate, and zip codes that gain access to broadband internet see increases in the numbers of employees and establishments. Finally, heterogeneity analysis indicates that the positive effects are concentrated in the working-age population, those between 25 and 64 years old. This pattern is precisely what is predicted by the literature linking economic conditions to suicide risk.
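The headline estimate can be read as an elasticity and scaled to other scenarios, assuming the relationship is roughly linear. The sketch below just restates that arithmetic; the county figures are made up, and treating the ten percent increase as a relative (rather than percentage-point) change is an assumption for illustration.

```python
# Reading the abstract's headline estimate as an elasticity: a 10 percent
# (relative) increase in broadband access -> a 1.01 percent drop in suicides.
# Whether the increase is relative or in percentage points is an assumption
# here; the county figures below are purely illustrative.
elasticity = -1.01 / 10.0  # % change in suicides per % change in broadband access

# Hypothetical county: broadband share rises from 50% to 60%,
# a 20% relative increase.
relative_increase_pct = (0.60 - 0.50) / 0.50 * 100  # 20.0
predicted_change_pct = elasticity * relative_increase_pct
print(f"Predicted change in suicides: {predicted_change_pct:+.2f}%")
```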

We are grateful to participants at the Association of Public Policy and Management and the Washington Area Labor Symposium conferences for their helpful comments. Any errors or conclusions are our own. The views expressed herein are those of the authors and do not necessarily reflect the views of the National Bureau of Economic Research.



NEWS EXPLAINER
02 May 2023

Mind-reading machines are here: is it time to worry?

Sara Reardon

Sara Reardon is a freelance journalist based in Bozeman, Montana.


The little voice inside your head can now be decoded by a brain scanner — at least some of the time. Researchers have developed the first non-invasive method of determining the gist of imagined speech, presenting a possible communication outlet for people who cannot talk. But how close is the technology — which is currently only moderately accurate — to achieving true mind-reading? And how can policymakers ensure that such developments are not misused?

Nature 617 , 236 (2023)

doi: https://doi.org/10.1038/d41586-023-01486-z

Tang, J., LeBel, A., Jain, S. & Huth, A. G. Nature Neurosci. https://doi.org/10.1038/s41593-023-01304-9 (2023).


  4. AI researchers experimenting with reading minds

  5. Reading Tips for Research Papers. Read quickly and effectively with a help from AI

  6. Launching scientific writing course 😍 #research #researchmatters #education #course

COMMENTS

  1. Brain Recording, Mind-Reading, and Neurotechnology: Ethical ...

    The kinds of concerns being discussed here are not based in mind-reading per se, but rather in effects likely to occur in the context of widespread neurotechnology use. Beyond the market context however, in the realm of ongoing research, at least one sort of mind-reading might appear to be technically possible in a limited sense at least.

  2. Mind-reading devices are revealing the brain's secrets

    Credit: Silvia Marchesotti. Moving a prosthetic arm. Controlling a speaking avatar. Typing at speed. These are all things that people with paralysis have learnt to do using brain-computer ...

  3. Brain Recording, Mind-Reading, and Neurotechnology: Ethical Issues from

    Covert speech seems a contentful medium, and one that really could be decoded in a mind-reading scenario. In terms of research-grade neurotechnology, in the context of controlled conditions, devices that are triggered by covert speech activity could be triggered by mentalised speech not intended for externalisation (Bocquelet et al. 2016 ...

  4. The Science of Mind Reading

    The Science of Mind Reading. Researchers are pursuing age-old questions about the nature of thoughts—and learning how to read them. By James Somers. November 29, 2021. It isn't so much that ...

  5. How close are we to reading minds? A new study decodes language and

    The new research used the blood oxygen level dependent signal from fMRI scans, which shows changes in blood flow and oxygenation levels in different parts of the brain. By focusing on patterns of ...

  6. The rise of brain-reading technology: what you need to know

    By. Liam Drew. Ann, who was left paralysed by a stroke, uses a brain-computer interface to translate brain signals into the speech and movement of an avatar. Credit: Noah Berger. In a laboratory ...

  7. Can we read minds by imaging brains?

    1. Introduction. In recent years, the term "mind reading" has come to be used to describe a family of brain imaging experiments aimed at decoding patterns in neural data. The term can be found, not only in publications aimed at a popular audience (Intagliata, 2008; Poldrack, 2018; Wilson, 2019 ), but also in mainstream neuroscience journals ...

  8. Mind-reading machines are here: is it time to worry?

    For starters, fMRI scanners are not portable, making it difficult to scan someone's brain without their cooperation. She also doubts that it would be worth the time or cost to train a decoder ...

  9. The cultural evolution of mind reading

    Background. We use "theory of mind" or "mind reading" to understand our own thoughts and feelings and those of other agents. Mind reading has been a focus of philosophical interest for centuries and of intensive scientific inquiry for 35 years. It plays a pivotal role in human social interaction and communication.

  10. How close are we to reading minds?

    The new research used the blood oxygen level dependent signalfrom fMRI scans, which shows changes in blood flow and oxygenation levels in different parts of the brain. By focusing on patterns of activity in brain regions and networks that process language, the researchers found their decoder could be trained to reconstruct continuous language (including some specific words and the general ...

  11. In a future with more 'mind reading,' thanks to neurotech, we may need

    Published: April 9, 2024 8:17am EDT. Our minds are buffeted by all kinds of influences, though some seem more menacing than others. wenjin chen/DigitalVision Vectoria via Getty Images. Brain ...

  12. How to deal with mind-reading technologies

    Recent studies under the generic umbrella term of mind-reading include two types of techniques. One is based on the detection of the electrical signals for muscles that gives rise to phonation, including lips, tongue, and jaw (Metzger et al., 2023 ). The other decodes the brain activity that correlates with the manifestation of thoughts (Tang ...

  13. How the Mind Reads Other Minds

    Mind reading takes shape The psychologists David Premack and Guy Woodruff, who first coined the term "theory of mind," believed that chimpanzees and perhaps other primates could read intentions. Subsequent research has shown that primates are remarkably sophisticated in their relationships: They can deceive, form alliances, and bear grudges ...

  14. (PDF) How to Read Minds

    The technique has many potential practical applications including reading of brain states, brain-computer interfaces, communicating with locked-in patients, lie detection, etc. In this paper some of the advances of application of fMRI in mind reading and their potential implication have been discussed. Download Free PDF.

  15. Toward a real-time decoding of images from brain activity

    We train this architecture on a public dataset of MEG recordings acquired from healthy volunteers and released by Things, an international consortium of academic researchers sharing experimental data based on the same image database.. We first compare the decoding performance obtained with a variety of pretrained image modules and show that the brain signals best align with modern computer ...

  16. Mind Reading Computer

    A computer can, in a very real sense, read human minds, and if the authors could learn to identify brain waves generated by specific thoughts or commands, the machine might even be able to react to those commands by moving a dot across a TV screen. A computer can, in a very real sense, read human minds. Although the dot's gyrations are directed by a computer, the machine was only carrying out ...

  17. Mind reading machines: automated inference of cognitive mental states

    Mind reading encompasses our ability to attribute mental states to others, and is essential for operating in a complex social environment. The goal in building mind reading machines is to enable computer technologies to understand and react to people's emotions and mental states. This paper describes a system, for the automated inference of cognitive mental states from observed facial ...

  18. New 'Mind-Reading' AI Translates Thoughts Directly From Brainwaves

    New 'Mind-Reading' AI Translates Thoughts Directly From Brainwaves - Without Implants. A world-first, non-invasive AI system can turn silent thoughts into text while only requiring users to wear a snug-fitting cap. The Australian researchers who developed the technology, called DeWave, tested the process using data from more than two dozen ...

  19. (PDF) Mind-Reading System

    www.ijacsa.thesai.org. Mind-Reading System - A Cutting-Edge Technology. Farhad Shir, P h.D. McGinn IP Law, PLLC. Vienna, V irginia, U.S.A. Abstract —In this paper, we describe a human-computer ...

  20. Mind-reading computers turn brain activity into speech

    By. Shamini Bundell. For people with paralysis and other conditions, brain-computer-interfaces could provide a way to communicate without needing to be able to speak. The technology to do this has ...

  21. Mind-Reading Computers

    Here the author considers current research and possible scenarios. Published in: Computing in Science & Engineering ( Volume: 14 , Issue: 4 , July-Aug. 2012 ) Page(s): 104 - 104

  22. How to 'Read Minds' to Get Ahead in Business

    Much of Ames' research focuses on the idea of "mind reading," or how people make inferences about others — rightly or wrongly, and often through the lens of their own experiences. "Mind reading is more of a colloquial term. I'm using it in a slightly cheeky way to refer to what we think other people think, want, and feel," he explains.

  23. Ultrasound offers a new way to perform deep brain stimulation

    MIT graduate student Jason Hou and MIT postdoc Md Osman Goni Nayeem are the lead authors of the paper, along with collaborators from MIT's McGovern Institute for Brain Research, Boston University, and Caltech. ... The research was funded by the MIT Media Lab Consortium and the Brain and Behavior Foundation Research (BBRF) NARSAD Young ...

  24. Reading

    Some sample reading goals: To find a paper topic or write a paper; To have a comment for discussion; To supplement ideas from lecture; To understand a particular concept; To memorize material for an exam; To research for an assignment; To enjoy the process (i.e., reading for pleasure!). Your goals for reading are often developed in relation to ...

  25. The state of AI in early 2024: Gen AI adoption spikes and starts to

    About the research. The online survey was in the field from February 22 to March 5, 2024, and garnered responses from 1,363 participants representing the full range of regions, industries, company sizes, functional specialties, and tenures. Of those respondents, 981 said their organizations had adopted AI in at least one business function, and ...

  26. 2024 Financial Stability Conference

    Notification of acceptance will be provided by Friday, September 6, 2024. Final conference papers are due on Friday, November 1, 2024. In-person paper presentations are preferred. Questions should be directed to [email protected]. The 2024 Financial Stability Conference hosted by the OFR and Federal Reserve Bank of ...

  27. 2024 Environmental Performance Index: A Surprise Top Ranking, Global

    203-436-4842. The Baltic nation of Estonia is No. 1 in the 2024 rankings, while Denmark, one of the top ranked countries in the 2022 EPI dropped to 10th place, highlighting the challenges of reducing emissions in hard-to-decarbonize industries. Meanwhile, "paper parks" are proving a global challenge to international biodiversity commitments.

  28. Can we read minds by imaging brains?

    ORIGINAL PAPER Can we read minds by imaging brains? Charles Rathkopf , Jan Hendrik Heinrichs and Bert Heinrichs Institute for Neuroscience and Medicine, Jülich Research Center, Germany ... CONTACT Charles Rathkopf [email protected] Jülich Research Center, Wilhelm-Johnen-Straß, Jülich 52428, Germany PHILOSOPHICAL PSYCHOLOGY

  29. Broadband Internet Access, Economic Growth, and Wellbeing

    Broadband Internet Access, Economic Growth, and Wellbeing. Kathryn R. Johnson & Claudia Persico. Working Paper 32517. DOI 10.3386/w32517. Issue Date May 2024. Between 2000 and 2008, access to high-speed, broadband internet grew significantly in the United States, but there is debate on whether access to high-speed internet improves or harms ...

  30. PDF Mind-reading machines are here: is it time to worry?

    address how mind-reading technologies can and cannot be legally used. Lázaro-Muñoz says that policy action could mirror a US law that stops insurers and