MIT News | Massachusetts Institute of Technology

Neuroscientists find a way to make object-recognition models perform better


Computer vision models known as convolutional neural networks can be trained to recognize objects nearly as accurately as humans do. However, these models have one significant flaw: Very small changes to an image, which would be nearly imperceptible to a human viewer, can trick them into making egregious errors such as classifying a cat as a tree.

A team of neuroscientists from MIT, Harvard University, and IBM has developed a way to alleviate this vulnerability by adding a new layer to these models, one designed to mimic the earliest stage of the brain’s visual processing system. In a new study, they showed that this layer greatly improved the models’ robustness against this type of mistake.

“Just by making the models more similar to the brain’s primary visual cortex, in this single stage of processing, we see quite significant improvements in robustness across many different types of perturbations and corruptions,” says Tiago Marques, an MIT postdoc and one of the lead authors of the study.

Convolutional neural networks are often used in artificial intelligence applications such as self-driving cars, automated assembly lines, and medical diagnostics. Harvard graduate student Joel Dapello, who is also a lead author of the study, adds that “implementing our new approach could potentially make these systems less prone to error and more aligned with human vision.”

“Good scientific hypotheses of how the brain’s visual system works should, by definition, match the brain in both its internal neural patterns and its remarkable robustness. This study shows that achieving those scientific gains directly leads to engineering and application gains,” says James DiCarlo, the head of MIT’s Department of Brain and Cognitive Sciences, an investigator in the Center for Brains, Minds, and Machines and the McGovern Institute for Brain Research, and the senior author of the study.

The study, which is being presented at the NeurIPS conference this month, is also co-authored by MIT graduate student Martin Schrimpf, MIT visiting student Franziska Geiger, and MIT-IBM Watson AI Lab Co-director David Cox.

Mimicking the brain

Recognizing objects is one of the visual system’s primary functions. In just a small fraction of a second, visual information flows through the ventral visual stream to the brain’s inferior temporal cortex, where neurons contain information needed to classify objects. At each stage in the ventral stream, the brain performs different types of processing. The very first stage in the ventral stream, V1, is one of the most well-characterized parts of the brain and contains neurons that respond to simple visual features such as edges.

“It’s thought that V1 detects local edges or contours of objects, and textures, and does some type of segmentation of the images at a very small scale. Then that information is later used to identify the shape and texture of objects downstream,” Marques says. “The visual system is built in this hierarchical way, where in early stages neurons respond to local features such as small, elongated edges.”
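
V1 simple cells are classically modeled as Gabor filters: oriented, localized edge detectors. As a rough illustration of that idea (my sketch, not code from the study), a bank of OpenCV Gabor kernels applied at several orientations yields a crude V1-like representation of an image:

```python
# Illustrative sketch: Gabor filters as a toy model of V1 simple cells.
import cv2
import numpy as np

def v1_like_responses(image_gray, n_orientations=4):
    """Convolve a grayscale image with Gabor filters at several orientations."""
    responses = []
    for i in range(n_orientations):
        theta = i * np.pi / n_orientations   # filter orientation in radians
        kernel = cv2.getGaborKernel(
            ksize=(21, 21),   # spatial extent of the filter
            sigma=4.0,        # width of the Gaussian envelope
            theta=theta,
            lambd=10.0,       # wavelength of the sinusoidal carrier
            gamma=0.5,        # spatial aspect ratio (elongation)
            psi=0.0,          # phase offset
        )
        responses.append(cv2.filter2D(image_gray.astype(np.float32), -1, kernel))
    return responses
```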

For many years, researchers have been trying to build computer models that can identify objects as well as the human visual system. Today’s leading computer vision systems are already loosely guided by our current knowledge of the brain’s visual processing. However, neuroscientists still don’t know enough about how the entire ventral visual stream is connected to build a model that precisely mimics it, so they borrow techniques from the field of machine learning to train convolutional neural networks on a specific set of tasks. Using this process, a model can learn to identify objects after being trained on millions of images.

Many of these convolutional networks perform very well, but in most cases researchers don’t know exactly how the network is solving the object-recognition task. In 2013, researchers from DiCarlo’s lab showed that some of these neural networks could not only accurately identify objects, but could also predict how neurons in the primate brain would respond to the same objects, doing so much better than existing alternative models. However, these neural networks are still not able to perfectly predict responses along the ventral visual stream, particularly at the earliest stages of object recognition, such as V1.

These models are also vulnerable to so-called “adversarial attacks.” This means that small changes to an image, such as changing the colors of a few pixels, can lead the model to completely confuse an object for something different — a type of mistake that a human viewer would not make.
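
One standard recipe for constructing such perturbations is the fast gradient sign method (FGSM). Here is a minimal PyTorch sketch of it (shown for illustration; the study evaluates a battery of attack types, not this exact code):

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=2/255):
    """Shift every pixel by +/-epsilon in the direction that increases the loss.

    `image` is a batched tensor in [0, 1]; `label` holds the true class indices.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()  # keep pixel values in the valid range
```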

As a first step in their study, the researchers analyzed the performance of 30 of these models and found that models whose internal responses better matched the brain’s V1 responses were also less vulnerable to adversarial attacks. That is, having a more brain-like V1 seemed to make the model more robust. To further test and take advantage of that idea, the researchers decided to create their own model of V1, based on existing neuroscientific models, and place it at the front of convolutional neural networks that had already been developed to perform object recognition.

When the researchers added their V1 layer, which is also implemented as a convolutional neural network, to three of these models, they found that these models became about four times more resistant to making mistakes on images perturbed by adversarial attacks. The models were also less vulnerable to misidentifying objects that were blurred or distorted due to other corruptions.
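
Schematically, the approach amounts to prepending a fixed, convolutional "V1 stage" to an off-the-shelf classifier. The sketch below conveys the structure only; the filter size, the frozen weights standing in for Gabor filters, and the 1x1 readout back to three channels are assumptions of this illustration, not the authors' released architecture:

```python
import torch.nn as nn
from torchvision.models import resnet50

class V1FrontEnd(nn.Module):
    """A fixed convolutional stage standing in for primary visual cortex."""
    def __init__(self, n_filters=64):
        super().__init__()
        self.conv = nn.Conv2d(3, n_filters, kernel_size=25, padding=12)
        self.nonlin = nn.ReLU()
        # The published model fixes these weights to neuroscience-derived Gabor
        # filters and includes a stochastic response stage; freezing random
        # weights here is purely a placeholder for that structure.
        self.conv.weight.requires_grad_(False)
        # 1x1 convolution mapping V1 channels back to a 3-channel "image" so an
        # unmodified downstream classifier can consume it (an assumption of this
        # sketch, not the paper's exact interface).
        self.readout = nn.Conv2d(n_filters, 3, kernel_size=1)

    def forward(self, x):
        return self.readout(self.nonlin(self.conv(x)))

model = nn.Sequential(V1FrontEnd(), resnet50(weights=None))
```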

“Adversarial attacks are a big, open problem for the practical deployment of deep neural networks. The fact that adding neuroscience-inspired elements can improve robustness substantially suggests that there is still a lot that AI can learn from neuroscience, and vice versa,” Cox says.

Better defense

Currently, the best defense against adversarial attacks is a computationally expensive process of training models to recognize the altered images. One advantage of the new V1-based model is that it doesn’t require any additional training. It is also better able to handle a wide range of distortions, beyond adversarial attacks.

The researchers are now trying to identify the key features of their V1 model that allow it to better resist adversarial attacks, which could help them make future models even more robust. It could also help them learn more about how the human brain recognizes objects.

“One big advantage of the model is that we can map components of the model to particular neuronal populations in the brain,” Dapello says. “We can use this as a tool for novel neuroscientific discoveries, and also continue developing this model to improve its performance under this challenging task.”

The research was funded by the PhRMA Foundation Postdoctoral Fellowship in Informatics, the Semiconductor Research Corporation, DARPA, the MIT Shoemaker Fellowship, the U.S. Office of Naval Research, the Simons Foundation, and the MIT-IBM Watson AI Lab.


ScienceDaily

Humans and flies employ very similar mechanisms for brain development and function

These findings could help scientists better understand the subtle changes in genes and brain circuits that can lead to mental health conditions such as anxiety and autism spectrum disorders.

Although flies, mice and humans are physically very different, research has found that their brains are similar in how they form and how they function. Data have shown that the genetic mechanisms underlying brain development in insects and mammals are very similar, but this can be interpreted in two ways: some believe it provides evidence of a single common ancestor for both mammals and insects, while others think it supports the theory that brains evolved multiple times independently.

Published in the journal Proceedings of the National Academy of Sciences (PNAS), this collaborative study between King's College London, the University of Arizona, the University of Leuven and the Leibniz Institute DSMZ provides strong evidence that the mechanisms regulating the genetic activity required for the formation of brain areas important for controlling behaviour are the same in insects and mammals.

Most strikingly, the researchers demonstrated that when these regulatory mechanisms are inhibited or impaired in insects and mammals, the animals experience very similar behavioural problems. This indicates that the same building blocks that control the activity of genes are essential both to the formation of brain circuits and to the behaviour-related functions they perform. According to the researchers, this provides evidence that these mechanisms were established in one common ancestor.

Senior author on the study, Dr Frank Hirth from the Institute of Psychiatry, Psychology & Neuroscience (IoPPN), King's College London said: 'To my knowledge this is the first study that provides evidence of the source of similarities between human and fly brains, how they form and how they function. Our research shows that the brain circuits essential for coordinated behaviour are put in place by similar mechanisms in humans, flies and mice. This indicates that the evolution of their very different brains can be traced back to a common ancestral brain more than a half billion years ago.'

Nicholas Strausfeld, Regents Professor of Neuroscience at the University of Arizona and a co-author on the study said: 'The jigsaw puzzle of how the brain evolved still lacks an image on the box, but the pieces currently being added suggest a very early origin of essential circuits that, over an immense span of time have been maintained, albeit with modification, across the great diversity of brains we see today.'

The study focussed on those areas of the brain known as the deutocerebral-tritocerebral boundary (DTB) in flies and the midbrain-hindbrain boundary (MHB) in vertebrates including humans. Using genomic data, researchers identified the genes that play a major role in the formation of the brain circuits that are responsible for basic motion in the DTB in flies and MHB in humans. They then ascertained the parts of the genome that control when and where these genes are expressed, otherwise known as cis-regulatory elements.

The researchers found that these cis-regulatory elements are very similar in flies, mice and humans, indicating that they share the same fundamental genetic mechanism by which these brain areas develop. By manipulating the relevant genomic regions in flies so they no longer regulate the genes appropriately, the researchers showed a subsequent impairment in behaviour. This corresponds to findings from research with people where mutations in gene regulatory sequences or the regulated genes themselves have been associated with behavioural problems including anxiety and autism spectrum disorders.

Dr Hirth commented: 'For many years researchers have been trying to find the mechanistic basis behind behaviour and I would say that we have discovered a crucial part of the jigsaw puzzle by identifying these basic genetic regulatory mechanisms required for midbrain circuit formation and function. If we can understand these very small, very basic building blocks, how they form and function, this will help find answers to what happens when things go wrong at a genetic level to cause these disorders.'

Story Source:

Materials provided by King's College London.

Journal Reference:

  • Jessika C. Bridi, Zoe N. Ludlow, Benjamin Kottler, Beate Hartmann, Lies Vanden Broeck, Jonah Dearlove, Markus Göker, Nicholas J. Strausfeld, Patrick Callaerts, Frank Hirth. Ancestral regulatory mechanisms specify conserved midbrain circuitry in arthropods and vertebrates. Proceedings of the National Academy of Sciences, 2020; 201918797. DOI: 10.1073/pnas.1918797117

Harvard Gazette

‘We need to rethink how we are studying cancer metabolism’

MGH News and Public Affairs

Researchers take a closer look at cancer cells’ ability to rewire, thrive, and survive

A new paper in Nature Communications offers insights into how cancer cells adapt and rewire their metabolism to achieve growth and survival, along with a call for tools to study this at nearly single-cell resolution.

In the 1920s, Otto Warburg observed that cancer cells metabolically adapt their glucose pathway in unusual ways. Normally, glucose — the main nutrient needed for cells to function — is sent to the cell’s mitochondria to be broken down for energy, a process that requires oxygen. However, cancer cells appear to rapidly increase their glucose uptake and directly ferment it into lactate, even in the presence of oxygen and functional mitochondria.

“He called it aerobic glycolysis, but we know it as the Warburg effect,” says author Raul Mostoslavsky, scientific co-director of the Mass General Cancer Center and the Laurel Schwartz Professor of Oncology (Medicine) at Harvard Medical School. For nearly 15 years, researchers have been trying to explain why cancer cells do this.

In this paper, Mostoslavsky’s team studied colon cancer tumors to learn more. They developed a fluorescent reporter that stained only a marker of glycolysis in cells of the tumor. Using this reporter and a mass spectrometry imaging approach developed by collaborator Nathalie Agar of Brigham and Women’s Hospital, the researchers found that not all cells within the colon tumors relied on Warburg glycolysis.

“We found that this metabolic adaptation does not happen in the whole tumor, only in a heterogeneous group of cells that were not dividing,” says Mostoslavsky. His team had published this heterogeneous feature in squamous cell carcinoma, but this is the first time it has been shown in colon cancer, and in non-dividing cells.

“What really surprised us is that when we stained the tumor cells with a marker of cell proliferation, they were mutually exclusive,” adds Mostoslavsky. Within fully transformed colon cancers, the cells that were doing Warburg glycolysis were not dividing.

“This completely challenges the dogma of the Warburg effect,” he adds.  For the past 10 to 15 years, most researchers working in cancer metabolism have held that cancer cells do Warburg glycolysis to send glucose for biomass production, or rapid proliferation. “Instead, we found that the main reason they were doing it was to reduce reactive oxygen species, or ROS.” Reactive oxygen species damage cells during glucose breakdown and energy production: “The cells do Warburg metabolism to protect against accumulation of ROS.”

This research showed that Warburg glycolysis is indeed real and functional in cancer cells as a needed adaptation. “But it’s not for the reason we used to think,” says Mostoslavsky. “This means we need to rethink how we are studying cancer metabolism.” Many of the advances made in the past 10 years of studying cancer metabolism come from mass spectrometry analysis of metabolomics, which requires many cells. The problem is a lack of means for analyzing cellular heterogeneity.

“If metabolic adaptation happens in some cancer cells or not in others, you will not be able to determine that with the current technologies that exist,” he says. “We now know Warburg glycolysis is a heterogeneous feature happening in tumors so we need to develop tools that will allow us to investigate tumors in a single-cell fashion.”

In this paper, the team relied on a novel mass spectrometry imaging tool developed to achieve data almost at a single cell resolution. Says Mostoslavsky: “It is clear that cancer metabolism is highly heterogeneous so we will need new tools like this to study and define these metabolic features in tumors.”

Other authors of the study include Carlos Sebastian, Christina Ferrer, Maria Serra, Jee-Eun Choi, Nadia Ducano, Alessia Mira, Manasvi Shah, Sylwia Stopka, Andrew Perciaccante, Claudio Isella, Daniel Moya-Rull, Marianela Vara-Messler, Silvia Giordano, Elena Maldi, Niyata Desai, Diane Capen, Enzo Medico, Murat Cetinbas, Ruslan Sadreyev, Dennis Brown, Miguel Rivera, Anna Sapino, and David Breault.

This work was supported by grants from the National Institutes of Health, FPRC 5 per mille 2011 MIUR, FPRC 5 per mille 2014 MIUR, RC 2018 Ministero della Salute, and the European Union’s Horizon 2020 Research and Innovation Program.

This was adapted from a Massachusetts General Hospital press release.

UChicago News

How were researchers able to develop COVID-19 vaccines so quickly?

The steps that produced the most rapid vaccine rollout in history.

Vaccine development is typically measured in years, not months. But as the COVID-19 pandemic rages on, scientists are racing the clock—and breaking records—to develop an immunization that provides protection against the virus.

The nation’s scientific community also faces another obstacle: convincing the public that the COVID-19 vaccine is safe, and that getting vaccinated is important in the first place.

“Even the most effective vaccine can’t protect us or our loved ones if people are afraid to take it or will not take it,” said Kathleen Mullane, director of infectious disease clinical trials at University of Chicago Medicine. “We know things are moving faster than ever, but the nation’s scientific community has cooperated and collaborated in ways as never before and we are absolutely committed to making sure whatever is ultimately approved works and is safe. I am going to get vaccinated and am recommending vaccination for my family and friends because I believe in the safety and efficacy of these agents.”

The rapid progress on a COVID-19 vaccine means that data regarding the long-term safety and durability of these vaccines will still be flowing in long after a vaccine has been approved for emergency use. Nevertheless, those wondering about vaccine safety may be encouraged that despite the speed in which these vaccines have been developed, the important regulatory and evaluation checkpoints designed to protect patients were followed. These milestones help to determine how safe and effective a vaccine will be, and whether or not the benefits are worth any potential risks.

Operation Warp Speed

Before the COVID-19 pandemic, getting a new vaccine from concept to approval could take 10 years and billions of dollars. With only one in 10 vaccine candidates making it to market, vaccine development is a risky proposition for pharmaceutical manufacturers.

To those unfamiliar with the methodical nature of clinical research, the process can feel torturously slow. First, researchers must study the structure and infectious behavior of a pathogen. Then they figure out how to get the human body to best produce an immune response to fight against it. Next, the vaccine is tested for safety and efficacy—first using cell, animal and mathematical models, and later in human clinical trials involving thousands of participants. Only then can the federal approval process begin.

Dozens of vaccines against the SARS-CoV-2 virus are being developed by global pharmaceutical companies, but so far only a handful have reached large-scale, phase 3 clinical trials. In phase 3 trials, tens of thousands of volunteers participate to test the safety and effectiveness of the immunization. So far, 11 phase 3 trials have launched globally, although more are expected in the coming months and years as other research efforts move through the pipeline.

They’re getting a boost from Operation Warp Speed, a collaboration between the pharmaceutical industry and the federal government. To offset the cost of developing COVID-19 vaccines and to help mobilize approved vaccines as quickly as possible to the American public, the government committed nearly $10 billion in federal funds. This has greatly accelerated the timeline for vaccine development through clinical trials, FDA review and mass distribution.

All of these factors in turn mean that once a vaccine passed critical safety and efficacy milestones and received emergency use approval from the federal government, healthcare organizations were able to start providing the vaccine to patients in a matter of days. For example, the Pfizer/BioNTech mRNA vaccine was approved for emergency use by the FDA on December 10, 2020; healthcare workers were being vaccinated by December 14.

How vaccines work

It should also be reassuring that nearly 200 years of vaccine development has generated a number of highly effective and safe vaccine platforms, requiring less time and effort to produce new kinds of vaccines. Recycling existing vaccine technology allows researchers to focus their time on identifying the best targets that will produce the strongest immune response with the fewest side effects.

“Really, most of the vaccine platform development work is already done,” said Habibul Ahsan, Director of the Institute for Population and Precision Health at the University of Chicago Medicine. “You just have to do the remaining part, which is adding the right viral antigens to the already-proven platform and making sure it’s safe and effective in humans. Even in just the last five to 10 years, we’ve made big leaps in developing new kinds of vaccine platforms like those being tested for SARS-CoV-2.”

Vaccines work by presenting the body’s immune system with certain proteins from the virus, called antigens, which activate the immune response to generate antibodies that protect against the disease.

The vaccine candidates currently making headlines use mRNA and vector-based platforms. Vector-based vaccines have been developed in the past for diseases including SARS, MERS, and most notably, the deadly Ebola virus; and mRNA vaccines have previously been tested to prevent the Zika virus.

These next-generation immunizations have never been tried at such a large scale before, but there is already evidence that these platforms are safe and effective, with a reduced risk of the side effects generated by previous types of vaccines such as live attenuated or deactivated whole-virus vaccines.

“The mRNA and vector vaccines are a newer technology; the first products were developed in 1999,” said Mullane. “Based on our understanding of human biology, there is no reason to believe that they should pose any greater risk than any of the more traditional types of vaccines. If anything, the biggest concern is how long they are effective. The preliminary efficacy data so far is extremely promising.”

Combining clinical phases

To accelerate development, many COVID-19 vaccine trials are conducted in studies that combine phases 1, 2 and/or 3 where researchers begin by vaccinating a smaller number of healthy volunteers. As the trial continues, if the vaccine appears to be safe, it then opens up to more participants, such as those with preexisting health conditions. Large-scale phase 3 efficacy trials ultimately include tens of thousands of volunteers. The current trial lineup includes a variety of vaccine types—both tried-and-true models as well as next-generation approaches.

Before any vaccine can receive federal approval, even for emergency use, investigators must wait until tens of thousands of volunteers receive their experimental vaccine. Then they wait for enough time to pass for some of those volunteers to be exposed to COVID-19, which tells how effective each vaccine is. Scientists are also studying whether those who received the vaccine—versus a placebo—had less severe forms of illness. Without data that conclusively show vaccines are both safe and effective, they aren’t approved for use in the general public.
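
The arithmetic behind "how effective each vaccine is" is straightforward: efficacy is estimated by comparing attack rates between the vaccinated and placebo arms. A toy calculation (with made-up case counts, not figures from any actual trial):

```python
# Illustrative only: estimating vaccine efficacy from case counts in each arm.
cases_vaccine, n_vaccine = 8, 20_000     # hypothetical COVID-19 cases among vaccinated
cases_placebo, n_placebo = 160, 20_000   # hypothetical cases among placebo recipients

attack_rate_vaccine = cases_vaccine / n_vaccine
attack_rate_placebo = cases_placebo / n_placebo

# Efficacy = 1 - relative risk of disease in the vaccinated arm.
efficacy = 1 - attack_rate_vaccine / attack_rate_placebo
print(f"estimated efficacy: {efficacy:.0%}")   # 95% with these made-up counts
```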

Once a clinical trial collects enough data to show a vaccine is both safe and effective, the pharmaceutical company submits data to the FDA, where it is reviewed and reanalyzed by federal statisticians and an external advisory board of scientific and medical experts. Since most of the COVID-19 vaccines will be submitted to the FDA before any long-term data is captured, it’s likely a vaccine will receive an emergency use authorization rather than full approval. This was the case for the Pfizer/BioNTech mRNA vaccine, which was approved for emergency use by the FDA on December 10, 2020, and the Moderna mRNA vaccine, eight days later.

The speed at which these trials are progressing has raised questions about whether or not they are safe. Scientists are aware of lingering mistrust due to the politicization of the pandemic and historic medical disenfranchisement in certain communities; so they are being careful not to overstate the early results from ongoing clinical trials and to be transparent about the risks involved.

Many are making efforts to get out into the community to speak with potential volunteers, address their concerns and offer an opportunity to participate, particularly in the large-scale phase 3 trials. For example, at UChicago Medicine, clinicians are taking mobile medical units into the surrounding neighborhoods to take the vaccine right to people’s front doors.

“Many centers recruiting for these trials are not readily accessible by minority and low-income populations,” said Ahsan. “But UChicago Medicine has been working hard to build strong relationships with our local community and make sure they know that we’re here to serve them.”

What risks do vaccines pose?

Like most medical treatments, any vaccine is accompanied by some degree of risk. Side effects are usually mild, ranging from soreness at the site of injection to a slight fever and body aches. In one in 100,000 cases, vaccines can trigger severe allergic reactions. Even more rare (the estimate is one in a million) is an increased risk of developing autoimmune conditions that affect the nervous system, such as Guillain-Barré Syndrome.

Two separate studies involving live non-replicative vector virus vaccines—the U.K.-based phase 3 AstraZeneca vaccine trial and the U.S.-based phase 3 Janssen vaccine trial—were briefly paused after a participant experienced an unexplained medical event known as an “adverse reaction” that may have been linked to their participation in the study. Both have since resumed after researchers and regulators determined that there was no clear connection between the vaccine and the medical events and deemed them safe enough to continue. No adverse events have yet been linked to the mRNA vaccine candidates, except for a handful of allergic reactions requiring EpiPens.

To hedge against uncertainty, the FDA added additional rules to provide increased safety by having specified checkpoints for the accelerated COVID-19 trials. That includes requiring researchers to collect at least two months of follow-up data from a majority of each trial’s participants, even if early data shows promising results, and long-term safety and efficacy out to two years after receipt of the vaccines.

Getting “back to normal”

Next comes the challenge of manufacturing and distributing a vaccine. The full rollout may take months to get enough batches for the general public; in the interim, authorities will prioritize distribution to those most at risk of contracting COVID-19 or those who are at highest risk of suffering the most severe effects of the illness, such as health care workers, older adults, adults with pre-existing conditions and essential workers.

As a new vaccine is distributed, the clinical trials will go on and data will continue to flow in about its long-term effectiveness and any potential safety issues. This will allow researchers and healthcare providers to adapt distribution as necessary.

Realistically, the general public likely won’t have access to a vaccine until sometime this summer. That’s far later than Operation Warp Speed’s initial goal of having 300 million doses available by January, but significantly faster than any other vaccine development effort to date.

Individuals who are interested in participating in the ongoing COVID-19 vaccine trials—including potential new vaccine candidates in future trials—may sign up for the UChicago Medicine COVID-19 Vaccine Registry at covidvaccinestudies.uchicago.edu.

—Adapted from a story first published by the University of Chicago Medicine.


Psychology’s Replication Crisis Is Running Out of Excuses

Another big project has found that only half of studies can be repeated. And this time, the usual explanations fall flat.

"The Thinker," by Auguste Rodin

Over the past few years, an international team of almost 200 psychologists has been trying to repeat a set of previously published experiments from its field, to see if it can get the same results. Despite its best efforts, the project, called Many Labs 2, has succeeded in only 14 out of 28 cases. Six years ago, that might have been shocking. Now it comes as expected (if still somewhat disturbing) news.

In recent years, it has become painfully clear that psychology is facing a “reproducibility crisis,” in which even famous, long-established phenomena—the stuff of textbooks and TED Talks—might not be real. There’s social priming, where subliminal exposures can influence our behavior. And ego depletion, the idea that we have a limited supply of willpower that can be exhausted. And the facial-feedback hypothesis, which simply says that smiling makes us feel happier.

One by one, researchers have tried to repeat the classic experiments behind these well-known effects—and failed. And whenever psychologists undertake large projects, like Many Labs 2, in which they replicate past experiments en masse, they typically succeed, on average, half of the time.

Read: A worrying trend for psychology’s “simple little tricks”

Ironically enough, it seems that one of the most reliable findings in psychology is that only half of psychological studies can be successfully repeated.

That failure rate is especially galling, says Simine Vazire from the University of California at Davis, because the Many Labs 2 teams tried to replicate studies that had made a big splash and been highly cited. Psychologists “should admit we haven’t been producing results that are as robust as we’d hoped, or as we’d been advertising them to be in the media or to policy makers,” she says. “That might risk undermining our credibility in the short run, but denying this problem in the face of such strong evidence will do more damage in the long run.”

Many psychologists have blamed these replication failures on sloppy practices. Their peers, they say, are too willing to run small and statistically weak studies that throw up misleading fluke results, to futz around with the data until they get something interesting, or to only publish positive results while hiding negative ones in their file drawers.
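
That mechanism is easy to demonstrate with a toy simulation (my illustration, with assumed effect sizes and sample sizes, not an analysis of the Many Labs 2 data): if small studies are published only when they clear p < .05, a large fraction of the published record consists of flukes that a well-powered replication will not reproduce.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def run_study(true_effect, n):
    """Two-group comparison with n subjects per group; returns the p value."""
    control = rng.normal(0.0, 1.0, n)
    treated = rng.normal(true_effect, 1.0, n)
    return stats.ttest_ind(treated, control).pvalue

published = replicated = 0
for _ in range(20_000):
    # Assume 30% of tested hypotheses are real but weak (d = 0.2), the rest null.
    effect = 0.2 if rng.random() < 0.3 else 0.0
    if run_study(effect, n=20) < 0.05:        # small study, published only if "significant"
        published += 1
        if run_study(effect, n=1200) < 0.05:  # large, Many Labs-style replication
            replicated += 1

print(f"{replicated / published:.0%} of published findings replicated")
# Under these assumptions, roughly half of the published results replicate.
```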

But skeptics have argued that the misleadingly named “crisis” has more mundane explanations. First, the replication attempts themselves might be too small. Second, the researchers involved might be incompetent, or lack the know-how to properly pull off the original experiments. Third, people vary, and two groups of scientists might end up with very different results if they do the same experiment on two different groups of volunteers.

The Many Labs 2 project was specifically designed to address these criticisms. With 15,305 participants in total, the new experiments had, on average, 60 times as many volunteers as the studies they were attempting to replicate. The researchers involved worked with the scientists behind the original studies to vet and check every detail of the experiments beforehand. And they repeated those experiments many times over, with volunteers from 36 different countries, to see if the studies would replicate in some cultures and contexts but not others. “It’s been the biggest bear of a project,” says Brian Nosek from the Center for Open Science, who helped to coordinate it. “It’s 28 papers’ worth of stuff in one.”

Despite the large sample sizes and the blessings of the original teams, the team failed to replicate half of the studies it focused on. It couldn’t, for example, show that people subconsciously exposed to the concept of heat were more likely to believe in global warming, or that moral transgressions create a need for physical cleanliness in the style of Lady Macbeth, or that people who grow up with more siblings are more altruistic. And as in previous big projects, online bettors were surprisingly good at predicting beforehand which studies would ultimately replicate. Somehow, they could intuit which studies were reliable.

Read: Online bettors can sniff out weak psychology studies.

But other intuitions were less accurate. In 12 cases, the scientists behind the original studies suggested traits that the replicators should account for. They might, for example, only find the same results in women rather than men, or in people with certain personality traits. In almost every case, those suggested traits proved to be irrelevant. The results just weren’t that fickle.

Likewise, Many Labs 2 “was explicitly designed to examine how much effects varied from place to place, from culture to culture,” says Katie Corker from Grand Valley State University, who chairs the Society for the Improvement of Psychological Science. “And here’s the surprising result: The results do not show much variability at all.” If one of the participating teams successfully replicated a study, others did, too. If a study failed to replicate, it tended to fail everywhere.

It’s worth dwelling on this because it’s a serious blow to one of the most frequently cited criticisms of the “reproducibility crisis” rhetoric. Surely, skeptics argue, it’s a fantasy to expect studies to replicate everywhere. “There’s a massive deference to the sample,” Nosek says. “Your replication attempt failed? It must be because you did it in Ohio and I did it in Virginia, and people are different. But these results suggest that we can’t just wave those failures away very easily.”

This doesn’t mean that cultural differences in behavior are irrelevant. As Yuri Miyamoto from the University of Wisconsin at Madison notes in an accompanying commentary, “In the age of globalization, psychology has remained largely European [and] American.” Many researchers have noted that volunteers from Western, educated, industrialized, rich, and democratic countries—WEIRD nations—are an unusual slice of humanity who think differently than those from other parts of the world.

In the majority of the Many Labs 2 experiments, the team found very few differences between WEIRD volunteers and those from other countries. But Miyamoto notes that its analysis was a little crude—in considering “non-WEIRD countries” together, it lumps together people from cultures as diverse as Mexico, Japan, and South Africa. “Cross-cultural research,” she writes, “must be informed with thorough analyses of each and all of the cultural contexts involved.”

Read: Psychology’s replication crisis has a silver lining.

Nosek agrees. He’d love to see big replication projects that include more volunteers from non-Western societies, or that try to check phenomena that you’d expect to vary considerably outside the WEIRD bubble. “Do we need to assume that WEIRDness matters as much as we think it does?” he asks. “We don’t have a good evidence base for that.”

Sanjay Srivastava from the University of Oregon says the lack of variation in Many Labs 2 is actually a positive thing. Sure, it suggests that the large number of failed replications really might be due to sloppy science. But it also hints that the fundamental business of psychology—creating careful lab experiments to study the tricky, slippery, complicated world of the human mind—works pretty well. “Outside the lab, real-world phenomena can and probably do vary by context,” he says. “But within our carefully designed studies and experiments, the results are not chaotic or unpredictable. That means we can do valid social-science research.”

The alternative would be much worse. If it turned out that people were so variable that even very close replications threw up entirely different results, “it would mean that we could not interpret our experiments, including the positive results, and could not count on them happening again,” Srivastava says. “That might allow us to dismiss failed replications, but it would require us to dismiss original studies, too. In the long run, Many Labs 2 is a much more hopeful and optimistic result.”

* A mention of the marshmallow test was removed from an early paragraph, since the circumstances there differ from those of other failed replications.

The Conversation

From diagnosing brain disorders to cognitive enhancement, 100 years of EEG have transformed neuroscience

Erika Nyhus, Associate Professor of Psychology and Neuroscience, Bowdoin College

Disclosure statement: Erika Nyhus receives funding from the National Institutes of Health and the National Institute of Mental Health. Bowdoin College provides funding as a member of The Conversation US.

Electroencephalography, or EEG, was invented 100 years ago. In the years since the invention of this device to monitor brain electricity, it has had an incredible impact on how scientists study the human brain.

Since its first use, the EEG has shaped researchers’ understanding of cognition, from perception to memory. It has also been important for diagnosing and guiding treatment of multiple brain disorders, including epilepsy.

I am a cognitive neuroscientist who uses EEG to study how people remember events from their past. The EEG’s 100-year anniversary is an opportunity to reflect on this discovery’s significance in neuroscience and medicine.

Discovery of EEG

On July 6, 1924, psychiatrist Hans Berger performed the first EEG recording on a human, a 17-year-old boy undergoing neurosurgery. At the time, Berger and other researchers were performing electrical recordings on the brains of animals.

What set Berger apart was his obsession with finding the physical basis of what he called psychic energy, or mental effort, in people. Through a series of experiments spanning his early career, Berger measured brain volume and temperature to study changes in mental processes such as intellectual work, attention and desire.

He then turned to recording electrical activity. Though he recorded the first traces of EEG in the human brain in 1924, he did not publish the results until 1929. Those five intervening years were a tortuous phase of self-doubt about the source of the EEG signal in the brain and of refining the experimental setup. Berger recorded hundreds of EEGs on multiple subjects, including his own children, with both experimental successes and setbacks.

This is among the first EEG readings published in Hans Berger's study. The top trace is the EEG, while the bottom is a 10 Hz reference trace.

Finally convinced of his results, he published a series of papers in the journal Archiv für Psychiatrie and had hopes of winning a Nobel Prize. Unfortunately, the research community doubted his results, and years passed before anyone else started using EEG in their own research.

Berger was eventually nominated for a Nobel Prize in 1940. But Nobels were not awarded that year in any category due to World War II and Germany’s occupation of Norway.

Neural oscillations

When many neurons are active at the same time, they produce an electrical signal strong enough to spread instantaneously through the conductive tissue of the brain, skull and scalp. EEG electrodes placed on the head can record these electrical signals.

Since the discovery of EEG, researchers have shown that neural activity oscillates at specific frequencies. In his initial EEG recordings in 1924, Berger noted the predominance of oscillatory activity that cycled eight to 12 times per second, or 8 to 12 hertz, named alpha oscillations. Since the discovery of alpha rhythms, there have been many attempts to understand how and why neurons oscillate.

Neural oscillations are thought to be important for effective communication between specialized brain regions. For example, theta oscillations that cycle at 4 to 8 hertz are important for communication between brain regions involved in memory encoding and retrieval in animals and humans.
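
In practice, oscillations like these are quantified as power within a frequency band of an EEG recording. A minimal sketch of that computation (my illustration, with synthetic data standing in for a real recording) using Welch's method:

```python
import numpy as np
from scipy.signal import welch

fs = 250                                   # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)               # ten seconds of signal
# Synthetic stand-in for one EEG channel: a 10 Hz alpha rhythm plus noise.
eeg = 2.0 * np.sin(2 * np.pi * 10 * t) + np.random.randn(t.size)

freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)         # PSD from 2-second windows
band = (freqs >= 8) & (freqs <= 12)                    # the alpha band
alpha_power = psd[band].sum() * (freqs[1] - freqs[0])  # integrate PSD over the band
print(f"alpha-band power: {alpha_power:.2f} (arbitrary units)")
```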


Researchers then examined whether they could alter neural oscillations and therefore affect how neurons talk to each other. Studies have shown that many behavioral and noninvasive methods can alter neural oscillations and lead to changes in cognitive performance. Engaging in specific mental activities can induce neural oscillations in the frequencies those mental activities use. For example, my team’s research found that mindfulness meditation can increase theta frequency oscillations and improve memory retrieval.

Noninvasive brain stimulation methods can target frequencies of interest. For example, my team’s ongoing research found that brain stimulation at theta frequency can lead to improved memory retrieval.

EEG has also led to major discoveries about how the brain processes information in many other cognitive domains, including how people perceive the world around them, how they focus their attention, how they communicate through language and how they process emotions.

Diagnosing and treating brain disorders

EEG is commonly used today to diagnose sleep disorders and epilepsy and to guide brain disorder treatments.

Scientists are using EEG to see whether memory can be improved with noninvasive brain stimulation. Although the research is still in its infancy, there have been some promising results. For example, one study found that noninvasive brain stimulation at gamma frequency – 25 hertz – improved memory and neurotransmitter transmission in Alzheimer’s disease.


A new type of noninvasive brain stimulation called temporal interference uses two high frequencies to drive neural activity at a frequency equal to the difference between the stimulation frequencies. The high frequencies can better penetrate the brain and reach the targeted area. Researchers recently tested this method in people, using 2,000 hertz and 2,005 hertz to deliver 5 hertz theta-frequency stimulation to a key brain region for memory, the hippocampus. This led to improvements in remembering the name associated with a face.
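
That difference frequency is just the familiar beat phenomenon: summing two nearby sinusoids produces an envelope that waxes and wanes at their frequency difference. A small sketch of the principle (an illustration of the math, not the study's stimulation code):

```python
import numpy as np

fs = 20_000                     # sampling rate in Hz, chosen to resolve the carriers
t = np.arange(0, 1, 1 / fs)     # one second of signal
mix = np.sin(2 * np.pi * 2000 * t) + np.sin(2 * np.pi * 2005 * t)

# By the identity sin(a) + sin(b) = 2 sin((a+b)/2) cos((a-b)/2), the mix is a
# 2002.5 Hz carrier modulated by cos(2*pi*2.5*t). The envelope |2 cos(2*pi*2.5*t)|
# peaks twice per modulation cycle, i.e. five times per second -- the
# 2005 - 2000 = 5 Hz difference frequency the stimulation targets.
envelope = np.abs(2 * np.cos(2 * np.pi * 2.5 * t))
```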

Although these results are promising, more research is needed to understand the exact role neural oscillations play in cognition and whether altering them can lead to long-lasting cognitive enhancement.

The future of EEG

The 100-year anniversary of the EEG provides an opportunity to consider what it has taught us about brain function and what this technique can do in the future.

What will be possible in the next 100 years of EEG?

Some researchers, including me, predict that we’ll use EEG to diagnose and create targeted treatments for brain disorders. Others anticipate that an affordable, wearable EEG will be widely used to enhance cognitive function at home or will be seamlessly integrated into virtual reality applications. The possibilities are vast.

Article updated to remove reference to a survey that has not yet been published.



New Research

Scientists Replicated 100 Psychology Studies, and Fewer Than Half Got the Same Results

The massive project shows that reproducibility problems plague even top scientific journals

Brian Handwerk

Science Correspondent


Academic journals and the press regularly serve up fresh helpings of fascinating psychological research findings. But how many of those experiments would produce the same results a second time around?

According to work presented today in Science, fewer than half of 100 studies published in 2008 in three top psychology journals could be replicated successfully. The international effort included 270 scientists who re-ran other people's studies as part of The Reproducibility Project: Psychology, led by Brian Nosek of the University of Virginia.

The eye-opening results don't necessarily mean that those original findings were incorrect or that the scientific process is flawed. When one study finds an effect that a second study can't replicate, there are several possible reasons, says co-author Cody Christopherson of Southern Oregon University. Study A's result may be false, or Study B's results may be false—or there may be some subtle differences in the way the two studies were conducted that impacted the results.

“This project is not evidence that anything is broken. Rather, it's an example of science doing what science does,” says Christopherson. “It's impossible to be wrong in a final sense in science. You have to be temporarily wrong, perhaps many times, before you are ever right.”

Across the sciences, research is considered reproducible when an independent team can conduct a published experiment, following the original methods as closely as possible, and get the same results. It's one key part of the process for building evidence to support theories. Even today, 100 years after Albert Einstein presented his general theory of relativity, scientists regularly repeat tests of its predictions and look for cases where his famous description of gravity does not apply.

"Scientific evidence does not rely on trusting the authority of the person who made the discovery," team member Angela Attwood , a psychology professor at the University of Bristol, said in a statement "Rather, credibility accumulates through independent replication and elaboration of the ideas and evidence."

The Reproducibility Project, a community-based crowdsourcing effort, kicked off in 2011 to test how well this measure of credibility applies to recent research in psychology. Scientists, some recruited and some volunteers, reviewed a pool of studies and selected one for replication that matched their own interest and expertise. Their data and results were shared online and reviewed and analyzed by other participating scientists for inclusion in the large Science study.

To help improve future research, the project analysis attempted to determine which kinds of studies fared the best, and why. They found that surprising results were the hardest to reproduce, and that the experience or expertise of the scientists who conducted the original experiments had little to do with successful replication.

The findings also offered some support for the oft-criticized statistical tool known as the P value, which measures whether a result is statistically significant or plausibly due to chance. A higher value means the result could easily be a fluke, while a lower value means the result is statistically significant.

The project analysis showed that a low P value was fairly predictive of which psychology studies could be replicated. Twenty of the 32 original studies with a P value of less than 0.001 could be replicated (about 63 percent), for example, while just 2 of the 11 papers with a value greater than 0.04 were successfully replicated (about 18 percent).

But Christopherson suspects that most of his co-authors would not want the study to be taken as a ringing endorsement of P values, because they recognize the tool's limitations. And at least one P value problem was highlighted in the research: The original studies had relatively little variability in P value, because most journals have established a cutoff of 0.05 for publication. The trouble is that value can be reached by being selective about data sets, which means scientists looking to replicate a result should also carefully consider the methods and the data used in the original study.

It's also not yet clear whether psychology might be a particularly difficult field for reproducibility—a similar study is currently underway on cancer biology research. In the meantime, Christopherson hopes that the massive effort will spur more such double-checks and revisitations of past research to aid the scientific process.

“Getting it right means regularly revisiting past assumptions and past results and finding new ways to test them. The only way science is successful and credible is if it is self-critical,” he notes. 

Unfortunately there are disincentives to pursuing this kind of research, he says: “To get hired and promoted in academia, you must publish original research, so direct replications are rarer. I hope going forward that the universities and funding agencies responsible for incentivizing this research—and the media outlets covering them—will realize that they've been part of the problem, and that devaluing replication in this way has created a less stable literature than we'd like.”

Brian Handwerk is a science correspondent based in Amherst, New Hampshire.

New Research in Psychological Science

Compassion Fatigue as a Self-Fulfilling Prophecy: Believing Compassion Is Limited Increases Fatigue and Decreases Compassion, by Izzy Gainsburg and Julia Lee Cunningham

Compassion has health and well-being benefits for the self and others. Unfortunately, people sometimes experience compassion fatigue—a decreased ability to feel compassion—when they are repeatedly exposed to people suffering. Thus, the present research explores a factor that can mitigate compassion fatigue: changing people’s compassion mindsets. Our research suggests that when people believe compassion is fatiguing and a limited resource, they experience more compassion fatigue and provide lower-quality social support; however, when people believe compassion is energizing and not limited, they feel less compassion fatigue and provide higher-quality social support. We also show that people can change their limited-compassion mindsets and become less susceptible to compassion fatigue. Altogether, this research cautions people against assuming they will experience compassion fatigue and encourages them to allow for the possibility that compassion for someone in need can be an energizing experience that motivates people to care about others in need, too.

Different Representational Mechanisms for Imagery and Perception: Modulation Versus Excitation, by Thomas Pace, Roger Koenig-Robert, and Joel Pearson

Imagine trying to describe a favorite memory to a friend. The mental image is not as defined or strong as the original experience, right? Our research delved into this phenomenon, showing that the process of mental imagery and visual perception are quite different. When we imagine something, we create a sort of picture in our mind, but without the sensory input that comes from the eyes. To help create this mental picture, our brain employs a clever strategy: It dims the activity related to elements we do not imagine, rather like turning down the background noise to focus on a conversation. This paradigm shift in our understanding might explain why mental imagery is seldom experienced as richly as perception and may put an upper limit to its strength. 

Gaze-Triggered Communicative Intention Compresses Perceived Temporal Duration, by Yiwen Yu, Li Wang, and Yi Jiang

Our experience of time is not the authentic representation of physical time and can be distorted by the properties of the stimuli. In this research, we report a novel temporal illusion: that eye gaze, being a crucial social cue, can distort subjective time perception of unchanged objects. Specifically, adult participants compared the duration of two objects before and after they had implicitly seen that one object was consistently under gaze whereas the other object was never under gaze. We found that gaze-associated objects were perceived as having a shorter duration than nonassociated ones. This effect was driven by intention processing elicited by social cues, as nonsocial cues (i.e., arrows) and blocked gaze failed to induce such time distortions. Notably, individuals lower in autistic traits showed greater susceptibility to gaze-induced time distortions. This research highlights the role of high-level social function in time perception. Time flies faster when observers are confronted with objects that fell under others’ gaze. 

The Role of Humor Production and Perception in the Daily Life of Couples: An Interest-Indicator Perspective, by Kenneth Tan, Bryan Choy, and Norman Li

Humor has typically been shown to promote attraction and is highly desired by potential mates, but the day-to-day unfolding of how humor affects relationship maintenance has rarely been examined. In this research, we tested whether relationship quality on a daily basis precedes humor or the other way around, using a sample of college students in Singapore. We found consistent evidence that individuals engaged in humorous interactions to the extent that they reported greater relationship quality on the previous day, but not the other way around. These findings enhance our understanding of the role of humor in relationship maintenance and highlight the importance of examining bidirectional processes between relationship quality and humor in interpersonal interactions.  

Listen to the related Under the Cortex episode.

Numerical Representation for Action in Crows Obeys the Weber-Fechner Law, by Maximilian Kirschhock and Andreas Nieder

Whereas the laws governing the judgment of perceived numbers of objects by the “number sense” have been studied in detail, the behavioral principles of equally important number representations for action are largely unexplored. We trained crows to judge numerical values of instruction stimuli from one to five and to flexibly perform a matching number of pecks. Our quantitative behavioral data show an impressive correspondence of number representations found in the motor domain with those described earlier in the sensory system. We report that nonsymbolic number production obeys the psychophysical Weber-Fechner law. Our report helps to resolve a classical debate in psychophysics. It suggests that this way of coding numerical information is not constrained to sensory or memory processes but constitutes a general principle of nonsymbolic number representations. Thus, logarithmic relationships between objective number and subjective numerical representations pervade not only sensation but also motor production.  
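For reference, the law's textbook form (a standard statement of the Weber-Fechner law, not a formula quoted from the paper) says that subjective magnitude grows with the logarithm of objective magnitude:

$$ S = k \ln\left(\frac{I}{I_0}\right) $$

where $S$ is the perceived magnitude, $I$ the objective stimulus intensity (here, the instructed number of pecks), $I_0$ the threshold intensity, and $k$ a constant. One behavioural signature of such logarithmic coding is that telling four from five is harder than telling one from two, even though both pairs differ by exactly one.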

Scientists replicated 100 recent psychology experiments. More than half of them failed.

by Julia Belluz

Replication is one of the foundational ideas behind science. It's when researchers take older studies and reproduce them to see if the findings hold up. Testing, validating, retesting: It's all part of the slow and grinding process to arrive at some semblance of scientific truth.

Yet it seems that way too often, when we hear about researchers trying to replicate studies, they simply flop or flounder. Some have even called this a "crisis of irreproducibility." Consider the newest evidence: a landmark study published today in the journal Science. More than 270 researchers from around the world came together to replicate 100 recent findings from top psychology journals. By one measure, only 36 percent showed results that were consistent with the original findings. In other words, nearly two-thirds of the replications failed.

The results of this study may actually be too generous

"The results are more or less consistent with what we've seen in other fields ," said Ivan Oransky, one of the founders of the blog Retraction Watch , which tracks scientific retractions. Still, he applauded the effort: "Because the authors worked with the original researchers and repeated the experiments, the paper is an example of the gold standard of replication."

But Stanford's John Ioannidis, who famously penned a paper arguing that most published research findings are wrong, explained that precisely because it's the gold standard, the results might be a little too generous; in reality, the replication failure rate might be even higher.

"I say this because the 100 assessed studies were all published in the best journals, so one would expect the quality of the research and the false rates to be higher if studies from all journals were assessed," he said.

The final pool of 100 studies ended up excluding some 50 others for which replication was thought to be too difficult. "Among those that did get attempted, difficult, challenging replication was a strong predictor of replication failure, so the failure rates might have been even higher in the 50 or so papers that no one dared to replicate," Ioannidis said.

Again, the scientists worked closely with the researchers of the original papers to get their data and talk over the details of their methods. This is why this effort is considered top quality — they tried really hard to understand the original research and duplicate it — but that collaboration may have also biased the results, increasing the chances of a successful replication. "In a few cases [the original authors] affected the choice of which exact experiment among many should be attempted to replicate," said Ioannidis.

Just listen to how difficult it was to repeat one experiment

Even with all this buy-in and support, running a replication is an extremely difficult task, explained one of the people on the team, University of Virginia PhD candidate David Reinhard. In fact, after talking to Reinhard, I've come to view the chance of reproducing a study and arriving at the same result — especially in a field like psychology, where local culture and context are so important — as next to nil.

Reinhard had been hearing a lot about the problem of irreproducibility in science recently and wanted to get firsthand experience with replication. He had no idea what he was in for — and his journey tells a lot about how arduous science can be.

To begin with, the original study he wanted to replicate failed during the pretesting stage. That's the first little-appreciated step of any replication (or study, for that matter) when researchers run preliminary tests to make sure their experiment is viable.

The study he finally settled on was originally run in Germany. It looked at how "global versus local processing influenced the way participants used priming information in their judgment of others." In English, that means the researchers were studying how people use concepts they are currently thinking about (in this case, aggression) to make judgments about other people's ambiguous behavior when they were in one of two mindsets: a big-picture (global) mindset versus a more detail-oriented (local) mindset. The original study had found that they were more suggestible when thinking big.

"Fortunately for me, the authors of the study were helpful in terms of getting the materials and communication," Reinhard said. He spent hours on the phone with them — talking over the data, getting information about details that were missing or unclear in the methods section of the paper (where researchers spell out how an experiment was conducted). He also had to translate some of the data from German to English, which took more time and resources.

This cooperation was essential, he said, and it's not necessarily always present. Even still, he added, "There were a lot of difficulties that arose."

Reinhard had to figure out how to translate the social context, bringing a study that ran in Germany to students at the University of Virginia. For example, the original research used maps from Germany. "We decided to use maps of one of the states in the US, so it would be less weird for people in Virginia," he said.

After all that, he couldn't reproduce the original findings

Another factor: Americans' perceptions of aggressive behavior are different from Germans', and the study hinged on participants scoring their perceptions of aggression. The German researchers who ran the original study based it on some previous research that was done in America, but they changed the rating scale because the Germans' threshold for aggressive behavior was much higher. Now Reinhard had to change them back — just one of a number of variables that had to be manipulated.

In the end, he couldn't reproduce their findings, and he doesn't know why his experiment failed. "When you change the materials, a lot of things can become part of the equation," he said. Maybe the cultural context mattered, or using different stimuli (like the new maps) made a difference. Or it could just be that the original finding was wrong.

"I still think replication is an extremely important part of science, and I think that’s one of the really great things about this project," Reinhard said. But he's also come to a more nuanced view of replication, that sometimes the replications themselves can be wrong, too, for any number of reasons.

"The replication is just another sort of data point that there is when it comes to the effect but it’s not the definitive answer," he added. "We need a better understanding of what a replication does and doesn’t say."

Here's how to make replication science easier

After reading the study and talking to Reinhard, I had a much better sense of how replication works. But I also felt pretty sorry about the state of replication science.

It seemed a little too random, unsystematic, and patchwork — not at all the panacea many have made it out to be.

I asked Brian Nosek, the University of Virginia psychologist who led the Science effort, what he learned in the process. He came to a conclusion very similar to Reinhard's:

My main observation here is that reproducibility is hard. That's for many reasons. Scientists are working on hard problems. They're investigating things where we don't know the answer. So the fact that things go wrong in the research process, meaning we don't get to the right answer right away, is no surprise. That should be expected.

To make it easier, he suggested some fixes. For one thing, he said, scientists need to get better at sharing the details — and all the assumptions they may have made — in the methods sections of their papers. "It would be great to have stronger norms about being more detailed with the methods," he said. He also suggested adding supplements at the end of papers that get into the procedural nitty-gritty, to help anyone wanting to repeat an experiment. "If I can rapidly get up to speed, I have a much better chance of approximating the results," he said. (Nosek has detailed other potential fixes in these guidelines for publishing scientific studies, which I wrote about here — all part of his work at the Center for Open Science.)

Ioannidis agreed and added that more transparency and better data sharing are also key. "It is better to do this in an organized fashion with buy-in from all leading investigators in a scientific discipline rather than have to try to find the investigator in each case and ask him or her in detective-work fashion about details, data, and methods that are otherwise unavailable," he said. "Investigators move, quit science, die, lose their data, have their hard drives with all their files destroyed, and so forth."

What both Ioannidis and Nosek are saying is that we need to have a better infrastructure for replication in place. For now, science is slowly lurching along in this direction. And that's good news, because trying to do a replication — even with all the infrastructure of a world-famous experiment behind you, as Reinhard had — is challenging. Trying to do it alone is probably impossible.

Further reading:

  • Science is often flawed. It's time we embraced that.
  • John Ioannidis has dedicated his life to quantifying how science is broken
  • This is why you shouldn’t believe that exciting new medical study
  • How the biggest fraud in political science nearly got missed
  • Science is broken. These academics think they have the answer.

Hidden Brain

Scientific findings often fail to be replicated, researchers say.

Shankar Vedantam

A massive effort to test the validity of 100 psychology experiments finds that more than 50 percent of the studies fail to replicate. This is based on a new study published in the journal "Science."

NATURE PODCAST, 21 February 2024

Why are we nice? Altruism’s origins are put to the test

Benjamin Thompson & Nick Petrić Howe

Download the Nature Podcast 21 February 2024

In this episode:

00:45 Why are humans so helpful?

Humans are notable for their cooperation and display far more altruistic behaviour than other animals, but exactly why this behaviour evolved has been a puzzle. In a new paper, the two leading theories have been put to the test with a model and a real-life experiment. The researchers find that neither theory on its own leads to cooperation; rather, a combination of the two is required for humans to help one another.

Research article: Efferson et al.

News and Views: Why reciprocity is common in humans but rare in other animals

10:55 Research Highlights

The discovery of an ancient stone wall hidden underwater, and the fun that apes have teasing one another.

Research Highlight: Great ‘Stone Age’ wall discovered in Baltic Sea

Research Highlight: What a tease! Great apes pull hair and poke each other for fun

13:14 The DVD makes a comeback

Optical discs, like CDs and DVDs, are an attractive option for long-term data storage, but these discs are limited by their small capacity. Now though, a team has overcome a limitation of conventional disc writing to produce optical discs capable of storing petabits of data, significantly more than the largest available hard disk. The researchers behind the work think their new discs could one day replace the energy-hungry hard disks used in giant data centres, making long-term storage more sustainable.

Research Article: Zhao et al.

20:10 Briefing Chat

The famous fossil that turned out to be a fraud, and why researchers are making hybrid ‘meat-rice’.

Ars Technica: It’s a fake: Mysterious 280 million-year-old fossil is mostly just black paint

Nature News: Introducing meat–rice: grain with added muscles beefs up protein

Benjamin Thompson

Welcome back to the Nature Podcast, this week: putting theories for why humans are so helpful to the test…

…and how to make DVDs with huge storage capacity. I'm Nick Petrić Howe.

And I'm Benjamin Thompson.

First up on the show, reporter Adam Levy is helping us to understand the evolutionary origins of altruism.

Well, before I do that, Nick, can you do me a quick favour and remind me what's coming up later in the show?

Oh, um, sure yes. Later on, I've been looking into ways to make discs with huge data storage capacity.

Now, Nick, why did you help me out there?

Well, it's, you know, it's in the script.

Well, yeah. But I mean, even if it wasn't in the script, you would have helped out, right?

Sure. I guess?

I don't know. It's just what people do. Right?

Exactly. It's just what people do. But why?

You mean why do people help each other out?

Yeah. How did this behaviour evolve? Which is the topic of a study out in this week's Nature. So, thank you, Nick, for your help with that intro.

No worries, you're welcome.

You see, helping each other out is a part of our human nature. Whether that's helping with childcare, sharing information, exchanging goods and food, and…

Sarah Mathew

We help individuals that we know, we help also individuals who we don't know.

That’s evolutionary anthropologist Sarah Mathew of Arizona State University. So why did we evolve to help each other out? After all, in many situations, it would seem to be in our interests to not cooperate: to accept someone's help, but then not help them out in return.

Cooperators somehow had to avoid getting exploited by individuals who just take their help but are not going to do their share. And that problem is so powerful that we get very, very little cooperation evolving in nature.

So how did humans evolve cooperation? Compared to other animals, we're remarkably cooperative with each other. So much so that we often help someone out even when we're unlikely to ever interact with them again, and so won't get to reap any benefits.

People donate blood, we will get up from our seat when somebody enters a bus who looks like they need the seat.

So how did our weird way of helping each other out come about? Well, there are two main explanations. Here’s economist Ernst Fehr of the University of Zurich, explaining the first of these theories: the theory of repeated interactions, which argues…

Because we evolved always under the shadow of the future where there is another future interactions, we have an incentive to cooperate.

So, help someone now so they might help you later. And this theory explains our tendency to help people we won't see again by suggesting that our altruism evolved when our ancestors lived in small groups, knew everyone, and so helped everyone. The other theory, called group competition, imagines that it was competition between groups of people that could have led to cooperation arising as a human norm.

And the key idea in the theory of group competition is that the groups who are more likely to succeed in group competition are more cooperative groups.

So groups in which everyone is nice to each other would be able to outcompete groups where people acted more meanly. Okay, so: two theories to explain human helpfulness. Ernst and his collaborators set out to see which of these holds up.

Well, we set up a very large simulation project, and that cooperation game has a very simple structure.

The cooperation game that the team modelled imagines two players; say they're you and Ernst. To start off with, Ernst is given $10.

And I can keep my $10 or I can send any of my $10 to you.

And as a bonus, the money Ernst sends to you gets doubled on its way to you. Now you can return the favour, sending all, none or some of your money to Ernst.

So we have this sequential game if you like. You observe what I sent and then you can respond by also sending your amount.

And again, any of the money you give to Ernst would be doubled. This game sounds very simple, but the model allows for surprisingly complex behaviour and tactics to arise. Here’s Sarah again, who didn't work on the study.

Now, there have been hundreds and thousands of models done before, and they usually conceive of strategies based on binary kinds of behaviours. So either you give or you don't give. Here in this study, they modelled cooperation and non-cooperation as a continuous trait, which is more realistic, because you could potentially give back a little less than you were given, you could give back a little more than you were given. So that is, in some ways, the really special thing that this model accomplished.
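To make the game's structure concrete, here is a minimal sketch of a single round with continuous strategies, as described above. The function and its parameters are illustrative stand-ins, not the authors' actual model:

```python
# One round of the sequential exchange game: A moves first, B responds.
# Strategies are continuous fractions in [0, 1]; transfers double in transit.

def play_round(endowment: float, send_frac: float, return_frac: float):
    """A sends a fraction of the endowment; B receives double that amount,
    then returns a fraction of it, which is doubled on its way back to A."""
    sent = endowment * send_frac
    received_by_b = 2 * sent
    returned = received_by_b * return_frac
    payoff_a = (endowment - sent) + 2 * returned
    payoff_b = received_by_b - returned
    return payoff_a, payoff_b

print(play_round(10, 1.0, 0.5))  # full trust, half returned -> (20.0, 10.0)
print(play_round(10, 0.0, 0.0))  # no cooperation at all     -> (10.0, 0.0)
# A responder who returns slightly less than a fair share each round is the
# "little bit less" that, in the model, gradually erodes cooperation.
```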

So what does this model find? Well, for repeated interactions, this idea that we help someone so they'll help us.

The person who responds to the partner's previous cooperation level always has an incentive to cooperate a little bit less than what the partner did, and over time, the little bit less accumulates and leads to the breakdown of cooperation. So in contrast to what most people in the evolutionary community believe, repeated interactions cannot explain the evolution of cooperation. So that's the first important finding.

Okay, so according to the team's model, the repeated interactions theory is out. And when simulating the dynamics of group competition, the model finds that this also fails to lead to cooperation.

It flies in the face of two very prominent theories for how cooperation can evolve.

And so we are stuck, so to speak. And then we had the idea that maybe if the two mechanisms, repeated interactions and group competitions, can work together, maybe that leads to a different result. And to our surprise, we found out these two mechanisms of cooperation, when simultaneously active, can explain human cooperation over a wide range of conditions. What the group competitions do is they counteract the individuals’ incentives to cheat a little bit.

So the model suggests that cooperation didn't evolve because of repeated interactions or group competition. It evolved because of repeated interactions and group competition. And to test this theory, the team investigated what the model predicts should happen when real people play this game. Do they play the game in the way you'd expect if cooperation did indeed arise thanks to both group competition and repeated interactions? The question can't be answered so easily by getting people in, say, Zurich to play the game, since there are so many rules and regulations enforcing cooperation in Switzerland. Instead, the study asked people in the western highlands of Papua New Guinea.

And therefore we get much more at evolved behavioural tendencies when you do experiments in Papua New Guinea compared to Switzerland or the US.

But asking people who aren't so well connected to state or scientific institutions to take part in a study can be somewhat fraught. And to avoid exploitation, it's vital that everyone understands what they're agreeing to, and why. But the team had someone on hand who was well positioned to set up the collaboration.

Helen Bernhardt, who is one of our co-authors, grew up in Papua New Guinea. She had intimate knowledge of the local customs and norms in these societies. And that helped us greatly to conduct these experiments, because you have to be sensitive to the local customs and local norms, you have to acquire the trust of the people. And Helen was the ideal person to do that.

The team wanted to see if these participants would back up the predictions their model had made of human cooperation, namely that if two players see themselves as being in the same group, they'll help each other more and more over time. And if players see themselves as being in different groups, they'll gradually help each other less. So they played out the same game again with real participants to see if the model's predictions held up.

These two predictions could be tested in our experiment in Papua New Guinea, and they both turned out to be true.

And so the study, through its model and its work with participants in Papua New Guinea, suggests that cooperation may have evolved not just because humans interact repeatedly with each other, and not just because human groups competed with one another, but thanks to both of these forces. For Sarah, this result is profound not just because it takes us closer to understanding the evolution of cooperation, but because it helps highlight just how many questions there still are to answer to explain why we humans help each other out.

I think one of the most important results in this paper is to really shake people out of this status quo. So I don't think this is case closed, it's more that this is case opened.

That was Sarah Mathew from Arizona State University, in the US. You also heard from Ernst Fehr, from the University of Zurich, in Switzerland. For more on that story, check out the show notes for some links.

Coming up, a method to make discs the go-to data storage system of tomorrow. Right now though, it’s time for the Research Highlights, with Dan Fox.

<music>

Divers have helped to uncover the remnants of a one-kilometre-long Stone Age wall submerged in the Baltic Sea off the coast of Germany. Researchers used camera images, sediment cores and sonar data to characterise a string of boulders located 21 metres down and around 10 kilometres from the shore. The team counted over 1500 rocks in a formation that stretches 971 metres. Most of the rocks weigh less than 100 kilograms and so could have been moved into position by small groups of people. Analysis suggests that the structure ran along the shoreline of a former lake or bog. It was most likely built by hunter-gatherers over 10,000 years ago, possibly as a tool to guide reindeer and other large animals during hunts, before becoming submerged around 8500 years ago as the sea level rose. Take a deeper dive into that research in Proceedings of the National Academy of Sciences of the United States of America.

Young apes get a kick out of teasing each other and joking around when they're relaxed, just like humans do. Researchers recorded videos of five great ape species – orangutans, chimpanzees, bonobos, and western and eastern gorillas – as they played in zoos in the US and Germany. They noted the primates’ interactions, including how often they try to provoke a response from one another rather than simply playing together. Like cheeky siblings, the apes would poke their targets repeatedly, dangle objects in their faces, pull their hair or stare at them until they responded. All five species seemed to tease each other in similar ways, and were most likely to play in this way when relaxed. The researchers say that this kind of play probably evolved at least 13 million years ago, before humans’ ancestors separated from those of these ape species. If you've gone ape for this research, read it in full in Proceedings of the Royal Society B Biological Sciences.

Could the humble disc be the future of data storage?

<disk insert sound>

Well, a new Nature paper might make that a step closer to reality, as the team behind it have made the storage capacity of such discs millions of times greater than those currently available.

Oh, I feel so excited. And over the ten years we have done lots and lots of work for that particular goal.

This is Min Gu, one of the authors of the new paper. Now the reason that this has been a particular goal is that we, as a society, are producing more data than ever… and we need places to store it.

Typically, that’s achieved by hard disks, like the one in your computer, but they may not be up to the task forever.

There are some limitations. The hard disk drives, they are limited capacity. The second issue is regarding the hard disk drive is the lifetime. So, typically the hard disk drive, the lifetime is 3-5 years or 5-10 years.

Now optical discs — your DVDs, blu-rays, CDs and the like — can last up to 50 years, under the right conditions. And they also use less energy than hard disks, potentially making them a greener alternative for long-term, large-scale data storage and retrieval. But for capacity… well they are also limited there.

The biggest kind of disc you can typically buy tops out at around 100 GB of storage. To put that in some nerdy context for you, that’s not quite enough for one of the extended editions of the Lord of the Rings in 4K ultra-HD quality. Whereas hard drives can be up to 100 Terabytes — a thousand times more, and a lot more room to store the adventures of Frodo and the gang.

Optical discs have been limited by how much information you can write on them, which is in turn limited by the resolution of the lasers that do the writing.

There is a physics law, it's called the diffraction limit. So the size of the laser spot on the disk is limited to half of the wavelength, the wavelength of the laser beam we used. So in other words, the smallest bit size on the blu-ray is about 200 nanometres. That is the limitation.
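As a quick sanity check of that figure: the violet laser in a Blu-ray drive has a wavelength of about 405 nanometres (an assumed standard value, not stated in the episode), and half of that lands right at the quoted bit size:

```python
# Back-of-the-envelope diffraction limit for a Blu-ray laser.
wavelength_nm = 405               # standard Blu-ray laser wavelength (assumed)
min_spot_nm = wavelength_nm / 2   # spot size limited to ~half the wavelength
print(min_spot_nm)                # 202.5 -> roughly the 200 nm quoted above
```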

When you write data onto a disc, you write it in dots; the more finely focused the laser, the smaller the dots can be, allowing you to pack more of them, and therefore more data, onto a disc. To overcome the limit on how tightly the lasers can be focused, Min had a plan. Instead of using one laser to write on the disc, how about two?

The wavelength of the second laser beam, or the colour of the second laser beam is slightly different from the first one. So in that case, we also make the second laser beam into a doughnut shape. So imagine that one laser beam is a bright spot in the centre, the second laser beam is a ring structure. So then we use the second laser beam to erase the ring of the first laser beam.

You can imagine this process a bit like this. If you shine a torch — a flashlight — on the wall you will have a bright spot at the centre and diffuse halo of light around it. But if you block this halo, you’ll be left with just a bright spot.

Fundamentally the same thing is happening here, the team used the second laser to erase the diffuse ring created by the first, leaving just a single focused laser spot. This allowed Min and the team to get around that diffraction limit and be able to write more information onto a disc.

Now, this laser-cancelling-laser technique has been around for a while, it has been used for etching tiny details onto things like computer chips. In fact, Min himself proposed its usage for data storage 10 years ago, but the key to getting it to work has been finding the right material. In this paper, Min and the team have created a thin film that can be coated on plastic disks and has the right chemical composition to allow the two lasers to write onto it effectively.

We tried many material, but we never reached as good as results like detailed in this paper.

Their method was able to write far more dots onto a disc than any optical disc before.

In the end, they were able to achieve petabits of storage on a disc the same size as a regular DVD — a petabit being a thousand times as big as a terabit, much bigger than currently available hard disk drives. Which would allow you to store a whole lot of The Lord of the Rings, in the highest quality.
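The units deserve a moment of unpacking; a quick, illustrative calculation in decimal units shows why one such disc outstrips even the largest hard drives mentioned earlier:

```python
# How one petabit compares with the sizes mentioned in this story.
petabit_tb = 1000 / 8        # 1 petabit = 1,000 terabits = 125 terabytes
largest_hdd_tb = 100         # the ~100 TB hard drives mentioned earlier
big_bluray_gb = 100          # a large conventional optical disc (~100 GB)

print(petabit_tb)                           # 125.0 TB on one DVD-sized disc
print(petabit_tb * 1000 / big_bluray_gb)    # 1250.0 -> ~1,250 Blu-rays' worth
print(petabit_tb > largest_hdd_tb)          # True
```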

By enabling massive amounts of data to be stored on a single optical disc, Min’s method could help reduce the space and the energy requirements of the massive data storage centres that are used to store the increasing amount of data we create as a society.

So if you think about the petabytes of data storage currently, if you use a hard disk drive, then basically you need a very large space to store let's say 1000 of disk — stack them together — and a lot of the cooling process because these disk produce heat. So, air conditioning costs 30% of the energy consumption of data centre. Now, you can actually using one disk to store equivalent amount of information.

This is still some way off, though. At the moment the process of writing and reading the discs is pretty slow, and whilst the storage of the discs could be energy saving, the writing process is quite energy intensive. The energy used is similar to that needed to write a normal optical disc… but with 1,000,000 times as much data, so a lot more energy overall.

At the moment as well, Min and the team use a specialist microscope to read information off the disc, so we’d also have to change our devices in order to read them properly. The DVD player you have gathering dust in the attic won’t be up to snuff here.

But if these problems are overcome, maybe discs will make a comeback.

<disk eject sound>

In this podcast piece, you heard from Min Gu, from the University of Shanghai, in China. For more on the future of discs, check out the show notes for a link to the paper.

Finally on the show, it’s time for the Briefing Chat, where we talk about some of the stories that have been featured in the Nature Briefing. And Nick, I think I’ll go first this week and I've got a story from Ars Technica . And it involves a fossil that has really had scientists stumped for almost 100 years because well they couldn't really figure out what it was. And now researchers have taken a fresh look at it, using, you know, cutting edge techniques, and they figured out what it is. Now according to their research, which they've published in the journal Palaeontology – it is a fake.

Oh, so the end of that particular drum roll is it wasn't a fossil after all, at all. What was it then?

Okay, well, we're gonna go back in time here to begin with. So we're going back to 1931, the Italian Alps, where this fossil was discovered. Okay, it was a small creature, lizard-like, maybe sort of 20 centimetres long, estimated to date back to about 280 million years ago. And when it was described in 1959, it was given the name Tridentinosaurus antiquus and apologies to any Latin speakers out there if I've got that wrong. Now, it's quite unusual. Now I'm going to show you a picture, Nick, so you can see it and we'll put a link in the show notes so that listeners can see it as well. And it's a strange looking thing. It looks a bit like a silhouette.

Yeah, it looks kind of like a frog that’s been run over or something like that. It looks very odd.

Yeah. So it has got all its limbs, there has got a tail as well. And for the longest time, this kind of silhouette was thought to be preserved skin. Okay, now, this is very, very rare indeed. I discussed that with Shamini a few weeks ago on the podcast about how difficult it is to find preserved skin in any sort of fossil, you have to have just the right conditions. And it was posited that this fossil is preserved the way that it is, because maybe it was caught in kind of a volcano blast or something like that, which seared, which charred the outside layers of its skin instantly. Okay, to lend weight to this there are also some plants found in the same region, which were preserved in a similar way. Okay. And so this is quite exciting for a lot of people in terms of, you know, figuring out the evolution of lizards and where it's sat on the tree of life and what have you. And it's been discussed for years, you know, researchers have been trying to place it on said tree, but it turns out that all of those efforts might be for naught.

That's very disappointing for all those people who've worked very hard for a long time. I guess, what was it then if it wasn't a fossil?

Well, this is where some detective work comes in, then. So researchers, you know, want to try and answer these questions about what was this animal? Where did it fit on the tree of life? So they took another look at it using cutting edge science techniques, which weren't available previously. And so they did a bunch of things. In one instance, they shined some ultraviolet light on it, and this fossil kind of fluoresced yellow, but the plants found nearby didn't fluoresce, which is kind of an interesting red flag. And it turns out, actually, that this might not have been too unusual, because a lot of old fossils had a layer of varnish put on them to preserve them, okay, which isn't really done very much now. But varnish kind of does fluoresce. Okay, so that wasn't necessarily a bad thing. But the researchers wanted to peer beneath this veneer to try and look at the actual silhouette to see, you know, was this skin? What was it? What can it tell them about the animal? And they did a bunch of sort of chemical techniques. And they realised that the skin is actually black paint.

You’re joking?

Nope, specifically a type of paint made from animal bones called bone black, with some irony.

So–so was this some sort of prank then? Or was this just an accident? And it happened to look like a fossil and people got a bit confused by it?

Well, that's a great question. And I think what's actually going on here historically isn't clear. Now the researchers' conclusion is that someone had just carved kind of a lizard shape in the rock, and then filled it in with black paint. And they've used a bunch of different methods to characterise this. Now, as for who did it, they suggest that this must have happened before 1959 when the species was formally kind of described. But as to who and when, I mean, that's one for somebody else on some sort of true crime podcast to try and figure it out.

Well, I guess, you know, finding out that this fossil isn't a fossil is disappointing for some, but presumably, there's more science to be done.

Well, there's a lot to still learn about this fossil. And I think what's interesting is that some of it actually does seem to be real fossil, like there's a couple of leg bones at the back of the fossil that do appear to be fossilised bones. And the researchers say that they found a few little things that look like tiny scales on the back of this animal. So there is an identification to be had, it's gonna be really, really tough because these bones aren't very well preserved, you know, not necessarily the end of the story for Tridentinosaurus antiquus, okay. But I think what the researchers are saying is stop using this for any sort of phylogenetic analysis because it isn't what it was reported to be. And it speaks to kind of a broader problem – there are fossils that are fakes and that comes up now and again, right. So I think the researchers suggest that when fossils are described, there needs to be maybe really, really good reporting of how it was done of– of what methods were used to characterise it, to try and avoid situations like this in the future.

Well, maybe some lessons there then for future palaeontologists who come across fossils like this. Thanks, Ben. For my story this week, I've been reading about something that may sound a little bit strange as well. It's about meat-rice.

Meat-rice, okay, I've got so many questions. Is this rice made of meat? Is it meat made of rice? Is it just a dish containing meat and rice? I mean, Nick, please define what meat-rice is off the bat.

So kind of yes and no, and yes to those questions. So this is a kind of hybrid food. So this is where researchers have grown muscle cells and fat cells on rice. So they've used rice as a scaffold in order to grow basically meat. And so you've ended up with this sort of strange (and listeners, be sure to check out the link to this) sort of pink looking rice, which is a combination of meat and rice. And this is reported in Nature.

Why have they done this?

Well, that's a very good question. And there are a lot of efforts around the world to try and make lab-grown meat. We've talked about them before on the podcast. But these have some problems. If you try to make the conventional things that people try and go for like steaks or burgers, they're quite hard to form into the right shape, because you just have like a mass of cells that doesn't necessarily grow in the way that you would expect if you wanted to have a meat-like product. So that's part of it. Also, people aren't necessarily familiar with lab-grown meat and stuff, they may not be interested in it, they may not know what to do with it, they may not know how to cook it. So these researchers were trying to address those problems by making a product that people are familiar with, and that they could add a bit of meat to increase its nutritional quality and thus we get meat-rice.

When I think about meat protein, I guess it needs kind of a blood supply and so forth to grow. But are these just small clumps of cells that are attached to the rice grain? What are we talking about here?

Yeah, essentially it's that, it's like a film of cells that has grown onto the rice. So what happens when you're trying to do this is you get your rice, you lay on it a little bit of fish gelatin, and a commonly used food additive which helps those sort of cells stick onto it and then you bathe the combination of cells and rice in growth media. And then the cells just sort of form this layer on top of the rice and you end up with this, as I say, sort of pinky rice looking thing.

So it’s not just a rice plant that grows and it has the meat on the outside of the rice. The meat is added separately by dunking the rice into it.

Yeah basically that. So it's quite different from regular rice. Apparently it tastes a bit nuttier, and it was a bit harder. But it does increase the sort of protein content and the fat content. Not by very much though it has to be said. So this was around 0.01 grams more fat and 0.31 grams more protein. So in the future, the researchers are interested in trying to raise those numbers up. But nonetheless, this could be an easy way to increase nutritional quality of rice. And also growing it in this way is much cheaper than other alternatives. Like, if you, well, if you grow just normal beef, that costs quite a lot, and also has an environmental cost as well. And compared to other lab-grown things, this could be a cheaper way as well, because you're just using the rice as a scaffold rather than trying to make a whole mass of lab-grown meat.

I don't know if this is addressed in the article, but does it talk about how you need to cook it? Because I can imagine if you cook the rice, then you'd overcook the meat. But if you cook the meat so it’s right, then you'd undercook the rice? Is that something that's been addressed?

Yeah, exactly. So this is actually one of the things that they wanted to make easier for people. Like how to use this in their cooking. And you cook it just like normal rice. So it goes a little bit yellow in places. But otherwise, you just cook it just as you would normal rice.

I mean, I guess it's quite an involved process to make this, which suggests that we're not going to see it on supermarket shelves or in supermarket fridges anytime soon.

No, we're probably not. There was still work to be done with this. And also, lab-grown meat has not been approved for sale in most countries; only the US and Singapore have approved the sale of it. And so it seems that lab-grown meat still has a way to go in terms of regulation as well. But the researchers behind this were quite excited, and one researcher who wasn't involved said the idea seems really cool: you can just have one rice and take care of everything in terms of sort of nutritional needs.

Well, that is a neat one, and I think it's starting to make me a little bit hungry. So let's call the Briefing Chat there for the time being, before my stomach starts to rumble. And listeners, for more on both of those stories, head over to the show notes where you can find links to them and a link on where you can sign up to the Briefing to get even more stories like them delivered directly to your inbox.

That’s all for now but check your podcast feed later this week as there’ll be an extra podcast of whale-like proportions. For now, though, you can keep in touch with us on X, we’re @NaturePodcast, or you can send us an email to [email protected]. I’m Nick Petrić Howe.

And I’m Benjamin Thompson. See you next time.

doi: https://doi.org/10.1038/d41586-024-00539-1

The Morning

The science of dogs.

We explore a boom in research into our furry friends.

By Emily Anthes

I cover animal health and science.

My career as a science journalist began with a story on canine genetics. It was the summer of 2004, and a female boxer named Tasha had just become the first dog in the world to have her complete genome sequenced. It was a major advance for an animal that, though beloved by humans, had been overlooked by many scientists.

Over the two decades since, I have seen dogs transform from an academic afterthought to the new “it” animal for scientific research. In the United States alone, tens of thousands of dogs are now enrolled in large, ongoing studies. Canine scientists are investigating topics as varied as cancer, communication, longevity, emotion, retrieving behavior, the gut microbiome, the health effects of pollution and “doggy dementia.”

The research has the potential to give dogs happier, healthier and longer lives — and improve human well-being, too, as I report in a story published this morning . In today’s newsletter, I’ll explain why dogs have become such popular scientific subjects.

Big dog data

First, an important clarification: Dogs have long been the subject of invasive medical experiments, similar to lab rats and monkeys. That’s not the research I’m discussing here. The studies that have exploded in popularity involve pets. They require the enthusiastic participation of owners, who are collecting canine saliva samples, submitting veterinary records and answering survey questions about their furry friends.

One reason these studies have become more common: Scientists realized that dogs were interesting and unique subjects. Our canine companions have social skills that even great apes lack, for instance, and they happen to be the most physically diverse mammal species on the planet. (Consider the difference between a Chihuahua and a Great Dane.) Dogs also share our homes and get many of the same diseases that people do, making them good models for human health.

“Most of the questions that we have in science are not questions about what happens to animals living in sterile environments,” said Evan MacLean, the director of the Arizona Canine Cognition Center at the University of Arizona. “They’re questions about real organisms in the real world shared with humans. And dogs are a really, really good proxy for that in ways that other animals aren’t.”


FiveThirtyEight

Mar. 14, 2018, at 9:29 AM

These Researchers Have Been Trying To Stop School Shootings For 20 Years

By Maggie Koerth

Filed under School Violence


Mary Ellen O’Toole calls the teenagers who murdered 13 people at Columbine High School in 1999 by their first names — Dylan and Eric. O’Toole did not personally know Dylan Klebold and Eric Harris, but she’s thought about them for decades. At the time of the Colorado shootings, O’Toole was a profiler for the FBI and had been tapped to write the bureau’s report on how to prevent mass shootings in schools. What began as a research project has become a life’s work — and a deep source of frustration.

O’Toole is part of a small group of academics, law-enforcement professionals and psychologists who published some of the first research on mass shootings in schools. She and other members of this group began paying attention to the phenomenon in the late 1990s. Two decades later, some of them say not much has changed. The risk factors they identified back then still apply. The recommendations they made are still valid. And, as we saw last month at Marjory Stoneman Douglas High School, students are still dying. “On the news, people are saying we should be concerned about this and that,” O’Toole said, “and I thought, ‘We identified that 20 years ago. Did you not read this stuff 20 years ago?’ … It’s fatiguing. I just feel a sense of fatigue.”

It’s difficult to say definitively how many school shootings have happened in the years since Columbine — or in the years before it. It’s harder still to prove how many would-be shootings were averted, or how many others could have been if additional steps had been taken. But the people who have spent the last two decades trying to understand this phenomenon are still here and are still trying to sell politicians and the public on possible solutions that are complicated, expensive and tough to sum up in a sound bite.

Any research into school shootings is made more difficult by how uncommon such shootings are. In 2016, FiveThirtyEight wrote about the more than 33,000 people killed by guns in America every year. Of those deaths, roughly one-third — about 12,000 — are homicides, but hardly any are due to mass shootings.[1] If you define a mass shooting as an event in which a lone attacker indiscriminately kills four or more people in a public place, unrelated to gang activity or robbery, then mass shootings account for a tiny portion of all gun homicides — probably a fraction of a percent.

There have been many attempts to formally quantify school shootings, but, as with mass shootings, all use different definitions. Our chart is taken from a 2016 paper that defined a school shooting as a premeditated incident of gun violence that took place in an educational setting, killed or wounded at least three victims (not counting the perpetrator), was unrelated to gang activity and was not an act of domestic violence.[2] This data suggests that school shootings, though still extremely rare, are more common today than they were 40 years ago.
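To make the definitional point concrete, here is a minimal sketch, in Python, of how criteria like the 2016 paper's might be applied as a data filter. The field names and incidents are hypothetical, invented for illustration, not drawn from the paper's data set.

from dataclasses import dataclass

@dataclass
class Incident:
    premeditated: bool
    setting: str            # e.g. "school", "street", "home"
    victims_hit: int        # killed or wounded, excluding the perpetrator
    gang_related: bool
    domestic_violence: bool

def counts_as_school_shooting(i: Incident) -> bool:
    # The 2016 paper's criteria: premeditated gun violence in an educational
    # setting, at least three victims killed or wounded (not counting the
    # perpetrator), unrelated to gang activity and not domestic violence.
    return (i.premeditated
            and i.setting == "school"
            and i.victims_hit >= 3
            and not i.gang_related
            and not i.domestic_violence)

incidents = [
    Incident(True, "school", 4, False, False),   # counted
    Incident(True, "school", 2, False, False),   # too few victims
    Incident(True, "street", 5, True, False),    # wrong setting, gang-related
]
print(sum(counts_as_school_shooting(i) for i in incidents))  # prints 1

Loosen or tighten any one of those clauses and the count changes, which is why totals differ so much from one data set to the next.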


But no matter how you define a school shooting, they’re still a subset of a subset — just as mass shootings account for a fraction of all gun homicides, school shootings account for a fraction of all mass shooting deaths. In 1995, when O’Toole began to study school shootings, they seemed like even more of an outlier than they are today. “I couldn’t even call it a phenomenon,” she said. “Prior to Columbine, there was no indication that it was going to become one of those crimes that just becomes part of the culture. It looked like it could have faded away.”

These uncommon but high-profile tragedies had also drawn the attention of Marisa Randazzo. In 1999, she was the chief psychologist for the Secret Service and became a part of a joint effort between the Secret Service and Department of Education to better understand school shooters and how to prevent attacks before they happened. Randazzo had previously worked on the Exceptional Case Study Project — a Secret Service project designed to better understand people who threaten the president and other public figures. Like school shootings, assassinations are extremely rare events that have a huge impact on society. That rarity makes them hard to study — and makes it hard to tell blowhards from real threats. But their impact makes them important to understand.

Randazzo found that the project’s findings echoed what she was learning about school shootings. For instance, the Secret Service had once focused its energy on threats made by people with a history of violent crime or who had a mental illness that caused them to act irrationally. But the Exceptional Case Study Project analysis showed that most people who actually carry out attacks didn’t meet either of those criteria. Instead, a better way to figure out who was really a threat was to talk to friends, family and coworkers — most attackers had discussed their plans with other people.

Randazzo’s and O’Toole’s parallel reports came to remarkably similar conclusions.

First, these studies determined that there wasn’t much point in trying to profile school shooters. Yes, most were (and remain) male and white, but those categories were so broad that they’re essentially useless in identifying potential threats ahead of time, Randazzo said. What’s more, she said, more detailed profiles risked stigmatizing perfectly reasonable behaviors — like wearing black and listening to loud music.

Instead, the reports focused on the behavior and mental state of the young people who chose to kill. While these teens were deeply troubled, that’s not quite the same thing as saying that those who commit school shootings are just irredeemably mentally ill. Nor does it mean those young people suddenly snapped, giving no warning. “School shooters typically do this out of a profound adolescent crisis,” said James Garbarino, a professor of psychology at Loyola University Chicago who specializes in teen violence and began studying school shooters in the late 1990s.

Randazzo described a pattern of young people who were deeply depressed, unable to cope with their lives, who saw no other way out of a bad situation. The stressors they faced wouldn’t necessarily be problems that an adult would see as especially traumatic, but these young people were unable to handle their emotions, sadness and anger, and they started acting in ways that were, essentially, suicidal.

Some of the best data on the mental state of school shooters has come from interviews with those shooters (and would-be shooters) who survived the attack. Randazzo described one such living school shooter,[3] currently serving multiple life sentences, who told her that before the attack he spent weeks vacillating between suicide and homicide. Only after he tried and failed to kill himself did he settle on killing others in hopes that someone would kill him. Garbarino, who has interviewed dozens of people who went to prison for life as teenagers, both for school shootings and other violent crimes, heard many similar stories.

“The reason I emphasize this is that we know so much about how to help someone who is suicidal, and those same resources can be used very effectively with someone who is planning to engage in school violence,” Randazzo said. So how do we spot the ones who are planning an attack at a school? The studies she and O’Toole published years ago showed that, like people planning to attack the president, would-be school shooters don’t keep their plans to themselves. They tell friends or even teachers that they want to kill. They talk about their anger and their suicidality. And as more teens have attacked their schoolmates, that pattern has proved to hold true over time. It was true for Nikolas Cruz, the Parkland shooter. It was true for at least four potential school shootings that were averted in the weeks after Parkland — all stopped because the would-be killers spoke or wrote about their plans and someone told law enforcement.

While all the experts I spoke with said that policies that keep guns out of the hands of teenagers are an important part of preventing mass shootings, they all also said it is crucial to set up systems that spot teens who are struggling and may become dangerous.

But those systems seem to break down over time. Randazzo told me that her team had trained numerous school districts in school shooting prevention back in the early 2000s and, as of this year, many of those districts no longer had prevention systems in place. Thanks to staff turnover and budget reprioritization, that institutional knowledge simply withered away. And ironically, that happens precisely because school shootings are so rare. “It takes time and effort for a school to create a team and get training,” Randazzo said. “And, fortunately, threatening behavior doesn’t happen often enough” to spur schools to action.

CLARIFICATION (June 1, 2022, 2:00 p.m.): This article has been updated to clarify that James Garbarino was a professor of psychology at Loyola University Chicago.

Read more: Mass Shootings Are A Bad Way To Understand Gun Violence

1. Our data on total gun deaths and gun homicides was drawn from the Centers for Disease Control and Prevention and represents the average of the years 2012-2014. There is not a single definition of what constitutes a mass shooting. To count those, we have chosen to use the definition and database put together by Mother Jones, because it was recommended by experts I spoke to and because it does a good job of distinguishing between a mass shooting as we understand it in the colloquial/pop culture sense and other kinds of shootings that, while tragic, don’t fit the narrative we’re trying to focus on here. But there is no one right answer for how to quantify this.

2. Some analyses of school shootings are limited to incidents where the perpetrator is an adolescent or the shooting takes place in a primary or secondary school, criteria that exclude events like the Virginia Tech shooting. This paper includes incidents that occurred at colleges or involved adult perpetrators, resulting in a larger total number of school shootings in its data set than in some similar data sets.

3. Randazzo declined to say who she was referring to.

Maggie Koerth was a senior reporter for FiveThirtyEight. @maggiekb1


Latest Research on Lupus Treatment


Lupus happens because of an immune system misfire. Your immune cells mistakenly attack your organs and tissues, causing damage and inflammation. The main lupus treatments today – immunosuppressants and corticosteroids – work by lowering your immune response and bringing down inflammation.

Managing this autoimmune disease is tricky because lupus affects so many organs – joints, skin, kidneys, heart, brain, and lungs. If you've tried a few treatments without success, help could be on the way.

The search is on for new and more effective lupus treatments. Many of them work by calming the overactive immune response.

Here are just a few of the latest treatment approaches that have been recently approved or are working their way through clinical trials.

B-Cell Therapies

B cells are part of your immune system. They're like little soldiers in your body's army against germs. When they spot a foreign invader like bacteria or a virus, B cells release proteins called antibodies to stop it.

When you have lupus, your B cells turn against you. They make autoantibodies that attack your own organs. B cells also make chemicals that trigger more inflammation. A few new medicines kill or block B cells to treat lupus. Some of these drugs are monoclonal antibodies, which are lab-made proteins that act like the antibodies your immune system makes.

Obinutuzumab (Gazyva) is a monoclonal antibody that destroys B cells. It's already a treatment for blood cancers like chronic lymphocytic leukemia (CLL) and follicular lymphoma. Now researchers are looking at whether it might treat lupus, too. So far, results have been promising.

Rituximab (Rituxan) was first approved to treat lymphoma. Today it's also a treatment for rheumatoid arthritis. For many years, researchers have been trying to find out whether rituximab might treat lupus, too. Studies so far haven't shown much success, but rituximab might still work for more severe forms of lupus, or for lupus nephritis – kidney inflammation that lupus sometimes causes.

A few other medications block B cells instead of killing them. In studies, obexelimab improved lupus symptoms and prevented flares in some people who used it.

Other lupus treatments in development work on BAFF, a protein that sends out signals to activate B cells. Blisibimod and TACI-Ig (Atacicept) block these signals. In one study, blisibimod helped people with lupus lower their steroid dose.

T-Cell Therapies

T cells are another type of immune cell that seeks out and kills germs. When you have lupus, your T cells make chemicals that increase inflammation. T cells also direct your B cells to make more autoantibodies.

In 2021, voclosporin (Lupkynis) was approved by the FDA to treat lupus nephritis. It blocks T cells from triggering an immune response in the kidneys. Tacrolimus (Prograf) interferes with T-cell function and is already approved to prevent rejection after an organ transplant. Research shows it might help with lupus, too.

Plasma Cell Treatments

Plasma cells are immune cells that release autoantibodies in lupus. These cells don't respond well to current lupus treatments that suppress the immune system.

The cancer drug daratumumab (Darzalex) destroys plasma cells directly. In one very small study, daratumumab led to a big improvement in lupus symptoms. Researchers need to do more studies to learn whether this drug might be useful for treating lupus.

Targeting Interferon

Interferons are part of your body's immune defenses. In people with lupus, immune cells called plasmacytoid dendritic cells release too much interferon, which produces a lot of inflammation. A couple of new treatments in studies either kill dendritic cells or reduce the amount of interferons these cells release.

Disease-Modifying Drugs

Leflunomide (Arava) stops the excess production of immune cells to reduce inflammation in the joints. It's already approved to treat rheumatoid arthritis. Small studies have shown that it also helps to relieve lupus joint symptoms. More research is needed to confirm whether leflunomide might be an effective lupus treatment.

Lupus Kidney Disease

Lupus can damage the kidneys to the point where these organs can't filter your blood. Lupus nephritis is the name for lupus kidney disease.

As mentioned above, voclosporin (Lupkynis) targets T cells to reduce inflammation in the kidneys. Another way to reduce kidney damage is to diagnose and treat lupus nephritis early. Researchers have found more than 230 different proteins in the urine, called biomarkers, that might one day help doctors find and treat lupus nephritis more easily.

Stem Cell Transplant

There are many new medicines to treat lupus, but they don't work for everyone. Sometimes lupus progresses to the point where it damages organs. Studies are under way to see whether stem cell transplant might be an option for people with severe lupus that hasn't improved with other treatments.

Stem cells are the very early cells that grow into other cell types, including immune cells. A stem cell transplant replaces your damaged immune cells with healthy ones from your own body or from a donor.

Some studies are looking at transplants of mesenchymal stem cells. These are a special kind of stem cell that may prevent your immune system from turning against your own body.

These are just some of the new lupus treatments researchers are studying. If you're interested in learning more about them, you might join a clinical trial. One of these studies could give you access to a new medicine before it's approved. Ask the doctor who treats your lupus if any studies might be a good fit for you.


March 1, 2024

11 min read

These Cancers Were Beyond Treatment—But Might Not Be Anymore

New drugs called antibody-drug conjugates help patients with cancers that used to be beyond treatment

By Jyoti Madhusoodanan


In the long and often dispiriting quest to cure cancer, the 1998 approval of the drug Herceptin was a tremendously hopeful moment. This drug for breast cancer was the first to use a tumor-specific protein as a homing beacon to find and kill cancer cells. And it worked. Herceptin has benefited nearly three million people since that time, dramatically increasing the 10-year survival rate—and the cancer-free rate—for what was once one of the worst medical diagnoses. “Honestly, it was sort of earth-shattering,” says oncologist Sara M. Tolaney of the Dana-Farber Cancer Institute in Boston.

But the drug has a major limitation. Herceptin's beacon is a protein called HER2, and it works best for people whose tumors are spurred to grow by the HER2 signal—yet that's only about one fifth of breast cancer patients. For the other 80 percent of the approximately 250,000 people diagnosed with the disease every year in the U.S., Herceptin offers no benefits.

The hunt for better treatments led researchers to reimagine targeted therapies. By 2022 they had developed one that linked Herceptin to another cancer-killing drug. This therapy, for the first time, could damage tumors that had vanishingly low levels of HER2. The drug, named Enhertu, extended the lives of people with breast cancer by several months, sometimes longer. And it did so with fewer severe side effects than standard chemotherapies. The U.S. Food and Drug Administration approved its use in that year.


The news got even better in 2023. Researchers reported that Enhertu appeared to work even on tumors with seemingly no HER2 at all. (It's possible the cancers did have the protein but at very low levels that escaped standard detection methods.) “Exciting!” says oncologist Shanu Modi of Memorial Sloan Kettering (MSK) Cancer Center in New York City, who helped to run the study that led to Enhertu's approval. “They did this provocative test and saw this almost 30 percent response rate” in tumors apparently lacking the cancer protein, she notes.

Enhertu belongs to an ingenious and growing class of targeted cancer drugs called antibody-drug conjugates, or ADCs. The compounds are built around a particular antibody, an immune system protein that homes in on molecules that are abundant on cancer cells. The antibody is linked to a toxic payload, a drug that kills those cells. An ADC's affinity for cancer means it spares healthy cells, avoiding many of the side effects of traditional chemotherapy. And each antibody can be paired with several different drugs. This Lego-like assembly opens up a world of mix-and-match possibilities. Researchers can use the same drug to treat many cancers by switching up the antibody, or they can attack one type of tumor with many different ADCs that target several cancer biomarkers on the cells. This ability “changes the way we think about drug development,” Tolaney says.
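As a toy illustration of that Lego-like assembly (not a model of real drug chemistry), the mix-and-match space can be pictured as every combination of antibody, linker and payload on the shelf:

from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class ADC:
    antibody: str   # homes in on a marker abundant on cancer cells
    linker: str     # tether that holds the pieces together
    payload: str    # the cell-killing drug

# Hypothetical component shelves; swapping any one part yields a new design.
antibodies = ["anti-HER2", "anti-TROP2"]
linkers = ["cleavable", "noncleavable"]
payloads = ["topoisomerase-1 inhibitor", "tubulin inhibitor"]

candidates = [ADC(a, l, p) for a, l, p in product(antibodies, linkers, payloads)]
print(len(candidates))   # 2 x 2 x 2 = 8 candidate designs from 6 parts

Even this tiny shelf yields eight designs from six parts; every antibody or payload added to the shelf multiplies the possibilities, which is what makes the approach so attractive to drug developers.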

The idea for ADCs is not entirely new—the first one was cleared for patient use in 2000—but recently scientists have learned intricate chemical construction techniques that make the compounds much more effective, and they have identified new cancer-specific targets. These advances have driven a wave of new development. Fourteen ADCs have been approved for breast, bladder, ovarian, blood, and other cancers. Approximately 100 others are in the preclinical pipeline. One ADC for breast cancer, known as T-DM1, proved much more effective than Herceptin and has now become the standard of care for early stages of disease. “It is pretty cool to see how things have changed so quickly,” Tolaney says. Buoyed by the successes, researchers and pharmaceutical companies are pouring resources into developing more powerful ADCs—perhaps even ones that can work across a wide range of cancer types. Pharma giants such as Gilead, Roche and BioNTech have invested heavily in their ADC programs; in October 2023, for example, Merck put $4 billion into a partnership with Daiichi Sankyo, the biotechnology firm that partnered with AstraZeneca to produce Enhertu.

But the new drugs are still beset by some mysterious problems. Some ADCs have side effects similar to those caused by traditional chemotherapies—which shouldn't happen, because the drugs are supposed to target cancer cells alone. On patient forums, people describe needing to reduce their doses because of intolerable nausea or fatigue. These drawbacks limit ADCs' use, so scientists and pharma companies are urgently trying to figure out what is causing them.

In the clinical trial that led to Enhertu's approval, patients typically had already received different kinds of chemotherapy drugs, such as medications that stop cells from multiplying. But these drugs—and other forms of chemotherapy—do not distinguish between a cancer cell and a healthy one. Any cell trying to make DNA or multiply is vulnerable, and normal tissue as well as tumors can be attacked. Fully 64 percent of people on standard chemotherapy experience nausea, diarrhea, fatigue, and other negative side effects. For many, these can be as debilitating as cancer itself. Such effects limit the dose people can take and the length of treatment, leaving windows of opportunity for tumors to grow resistant and rebound.

For many years researchers have sought less toxic alternatives, envisioning precision drugs that target cancers and spare healthy cells. The idea of ADCs sprang from the exquisite specificity of antibodies. If highly toxic forms of chemotherapy could be strapped onto antibodies, the toxins would reach only the cancer cells and no others. Although the concept was straightforward, attempts at making ADCs faltered for decades.

Some of the earliest attempts used drugs that just weren't strong enough. In the 1950s, for instance, researchers linked a drug named methotrexate to an antibody that targets carcinoembryonic antigen, a common tumor marker, and tested whether the construct could treat advanced colorectal and ovarian cancers in people. The drug bound to its target but had little therapeutic effect. Researchers then swung too far to the other end of the spectrum and tried using much more toxic drugs instead. But these drugs triggered serious side effects.


Greg Thurber, a chemical engineer at the University of Michigan, looked into this conundrum. He began working on ADCs when studying how antibodies spread through the body to bind to their targets. After ADCs infiltrate a tumor through its network of blood vessels, the compounds slip out of these vessels and into cancer cells to kill them, Thurber says. But the ADCs that existed at the time never got past the cells just outside the blood vessels. They bound too tightly. The key to improved effects, it turned out, was tailoring the antibody parts so they zeroed in on cancer cells but had a loose enough grip for some to slip into the interior of the tumor. “A lot of people in the field had a very simple concept—we put a chemotherapy drug on an antibody, it targets it to the cancer cell, and it will avoid healthy tissue,” Thurber says. “That's not at all how they work in reality.”

Tinkering with the drug component of ADCs, as well as the antibody, eventually led to a cancer-killing sweet spot. In 2013 the FDA greenlit T-DM1 for breast cancer. Its antibody is trastuzumab (the “T” in T-DM1), the same antibody used in Herceptin. The drug attached to this antibody is notable because it's too dangerous to be used on its own. Known as emtansine, it was initially discovered in the 1970s but shelved because it was too toxic to too many cells. Tethered together as T-DM1, however, the drug and antibody generally stayed away from healthy cells and proved to be a potent and precise combination.

In the early 2000s Modi helped to conduct a trial of T-DM1—branded Kadcyla by its maker, Genentech—in people who had an especially difficult disease: advanced HER2-positive breast cancer that had spread throughout the body. Only those who had run out of other treatment options were enrolled. “We were taking people who in some cases were really looking to go to hospice,” Modi says. Yet “almost every patient who was enrolled on that drug had benefits. It was really so satisfying.”

In another trial of about 1,500 people with early breast cancer, an interim data analysis, published in 2019, estimated that 88 percent of those who received T-DM1 would be cancer-free three years later, compared with just 77 percent of those who received Herceptin alone. The drug has proved “more active than most of the therapies we were giving to patients, and it was associated with a better safety profile,” Modi says.

Kadcyla's success against difficult-to-treat cancers didn't just transform some patients' lives. It pumped enthusiasm—and, perhaps more important, pharmaceutical industry dollars—into the idea of ADCs. Researchers now knew that when pieced together correctly, it was possible to load an antibody with drugs too toxic to be used otherwise and still produce a medicine that worked better than traditional chemotherapy.

Several similarly designed ADCs have been approved for a range of different cancer types. Many of these carry drugs that inhibit the enzyme topoisomerase 1, which is essential for DNA replication. Like emtansine, the drug used in Kadcyla, newer topoisomerase inhibitors are too toxic to be used as freestanding drugs but are much less harmful when they're largely restricted to tumor cells. And Kadcyla itself, after being shown to slow or stall late-stage breast cancer, is being tested on patients with very early-stage disease to see whether treatment at that point can not only slow cancer down but actually cure it. Its success “was sort of the catalyst for continued exploration,” Modi says. “Can we build on this? Can we do even better?”

Doing better, it turns out, involves designing good linker molecules that tie the antibody to the drug. These tiny structures act like chemical triggers. They must remain perfectly stable until they reach their target, then unclip from the antibody to discharge their payload at the tumor. Some of the earliest attempts at making ADCs failed not because of their antibodies or drugs but as a result of unstable linkers.

Modern ADCs rely on two types of linkers. One kind remains unbroken even when the ADC reaches its target. The other kind, known as cleavable linkers, breaks in response to very specific cues, such as enzymes that are abundant in tumors, in the spaces between individual cancer cells. Once an ADC is within the tumor's boundaries, these enzymes cleave the linker and release the drug payload.

Cleavable linkers are showing impressive advantages, and more than 80 percent of currently approved ADCs now use them. An ADC with a noncleavable linker will kill only the cell it attaches to, but one that splits up could place drug molecules near neighboring tumor cells and destroy them as well. This so-called bystander effect can make the drugs much more effective, Thurber says.

Enhertu, for instance, uses the same antibody as Kadcyla but with a cleavable linker (Kadcyla uses a noncleavable version) and a different drug. Each Enhertu antibody carries approximately eight drug molecules, compared with about three per antibody in Kadcyla. In one recent study, researchers compared the effects of these two drugs in people with HER2-positive breast cancers. Enhertu was the clear winner. It stopped tumor growth for more than two years on average, whereas Kadcyla did so for just six months. “It was a landslide in terms of how much better it was,” Tolaney says. “It's a really nice example of how ADC technology leads to dramatic differences in outcomes.”

The bystander effect also explains, in part, why Enhertu is effective against tumors that have barely any HER2: once the ADC enters a tumor and the drug molecules detach, they can kill neighboring tumor cells even if those bystanders don't carry much HER2 on their surface. This action, along with the use of a diagnostic test that can miss extremely low HER2 levels, could explain the results from the trial where the drug seemed to work on tumors with no HER2. That trial employed an assay known as an IHC test. It is generally used to categorize cancers as HER2 positive or negative, not to measure the amount of the protein present. A negative result typically means 10 percent or fewer of the tumor's cells have HER2 on their surfaces. Yet 10 percent may be enough to attract a few Enhertu particles, and the bystander effect might be sufficient to destroy tumor cells, Modi says.

Enhertu is not the only ADC that appears to work this way. In a 2022 study, researchers found that Trodelvy, an ADC that targets a surface protein known as TROP2, seemed to be more effective than standard chemotherapy for people with metastatic triple-negative breast cancer, a particularly hard-to-treat disease. Trodelvy was better irrespective of how much or how little TROP2 was detected on tumors. “That, to me, is wild,” Tolaney says. “We're excited about it because these cancers are having benefits [apparently] without the target.”

This new generation of ADCs is making a difference in other types of cancers previously thought to be intractable, such as metastatic bladder cancer. In 2021 the FDA approved Trodelvy and another ADC named Padcev to treat this illness. For 30 years the standard of care for this type of bladder cancer was chemotherapy alone, says oncologist David J. Benjamin, who treats genitourinary cancers at Hoag Family Cancer Institute in southern California. “Now we have multiple new treatments, and two of them happen to be antibody-drug conjugates,” Benjamin says. In clinical trials for patients with advanced bladder cancer, Padcev combined with a drug that stimulates the immune system shrank tumors or stalled their growth in more than 60 percent of people. In a whopping 30 percent of those who received the two-drug combination, their cancer completely disappeared—an unprecedented success.

But even newer ADCs aren't without problems. The bystander effect, which makes them so effective, can spread far enough from the tumor to affect healthy cells, causing hair loss, nausea, diarrhea, fatigue, and other side effects that are disturbingly similar to the fallout of old-school chemo. ADCs also have been linked to a variety of eye problems ranging from conjunctivitis to severe vision loss.

Another explanation for these nasty effects is that there are no protein targets that are exclusive to cancer cells. These proteins, also known as antigens, are more abundant in cancers but may appear in normal cells. That makes some binding of ADCs to healthy cells unavoidable. “I can't think of any examples of true tumor-specific antigens,” says Matthew Vander Heiden, a molecular biologist at the Koch Institute at the Massachusetts Institute of Technology. Further, ADCs, like any other medicine or antibody, are eventually ingested and metabolized by noncancerous cells. This process fragments them into smaller pieces, releasing payload drugs from their linkers and triggering reactions.

Still, the ability to take ADCs apart and tweak their components—something that isn't possible with traditional treatments—offers researchers the chance to find versions with fewer side effects and more advantages. At present, most ADCs are used at the maximum dose a person can tolerate. That might not be true with future versions. When developing a medication, whether it's a simple painkiller, a chemotherapy or an ADC, researchers begin by figuring out the lowest dose at which the drug is effective. Then they work out the highest dose that people can receive safely. The space between those two doses, known as a therapeutic window, is usually small. But the ability to swap components offers ADC researchers many routes to widening it. Eventually drugmakers might create ADCs so effective that patients never need to take the highest tolerable dose—a much lower one would eliminate tumors without creating unintended consequences such as nausea or hair loss.
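The arithmetic of that window is simple, and a small sketch makes the design goal clear. The dose numbers below are invented placeholders, not real pharmacology:

def therapeutic_window(min_effective_dose: float, max_tolerated_dose: float) -> float:
    # The usable range between the lowest dose that works and the
    # highest dose a patient can safely receive.
    return max_tolerated_dose - min_effective_dose

# Invented numbers: a narrow window forces dosing near the tolerable maximum,
# while a wide one leaves room to treat well below it.
print(therapeutic_window(2.0, 3.0))   # 1.0, a narrow window
print(therapeutic_window(1.0, 8.0))   # 7.0, a wide window

Swapping ADC components is, in effect, a search for versions that push those two boundaries further apart.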

Shifting away from toxic chemotherapy-based drugs as payloads could also reduce side effects. Some recently approved ADCs, for instance, link antibodies to drugs that can activate the body's own immune system to attack cancer cells rather than relying on cell-poisoning chemicals. In addition, scientists are exploring ways to deliver radiation therapy directly to tumors by tethering antibodies to radioisotopes. Joshua Z. Drago, an oncologist at MSK Cancer Center, says that with the right kind of linkers, ADCs “could theoretically deliver any kind of small-molecule medication.”

Ultimately, recombined and improved components could lead to the type of swap that cancer patients really care about: exchanging their disease for a cure.

Jyoti Madhusoodanan  is a health and science journalist based in Portland, Ore. She has a Ph.D. in microbiology.

Scientific American Magazine Vol 330 Issue 3

To Communicate With Apes, We Must Do It On Their Terms

Scientists have long tried to teach apes to speak or sign in human language. But what if we studied their language?


On August 24, 1661, Samuel Pepys, an administrator in England’s navy and famous diarist, took a break from work to go see a “strange creature” that had just arrived on a ship from West Africa. Most likely, it was a chimpanzee—the first Pepys had ever seen. As he wrote in his diary, the “great baboon” was so human-like that he wondered if it were not the offspring of a man and a “she-baboon.”

“I do believe that it already understands much English,” he continued, “and I am of the mind it might be taught to speak or make signs.”

Humans and apes share nearly 99% of the same DNA, but language is one thing that seems to irreconcilably differentiate our species. Is that by necessity of nature, though, or simply a question of nurture?

“It could be that there’s something biologically different in our genome, something that’s changed since we split from apes, and that’s language,” says Catherine Hobaiter, a primatologist at the University of St. Andrews in Scotland. “But another possibility is that they might have the cognitive capacity for language, but they’re not able to physically express it like we do.”

In the centuries since Pepys’ speculations, scientists and the public alike have only become more enamored with the idea of breaking through communication barriers separating our species. “It’s every scientist’s dream,” Hobaiter says. “Instead of having to do years of tests, we could just sit down and have a chat.”

Hobaiter’s work shows that chimps have their own rich world of communication—it’s sort of like a secret sign language.

This not only would allow us to find out what our closest relatives are thinking, but also to potentially learn something about our own evolution and what it means to be human, says Jared Taglialatela, an associate professor at Kennesaw State University and the director of research at the Ape Cognition and Conservation Initiative. “We shared a common ancestor just 5.5 million years ago with both chimpanzees and bonobos,” he says. “That’s useful for making comparisons and answering questions about human origins.”


Scientists have been trying to teach chimps to speak for decades, with efforts ranging from misguided to tantalizingly promising. Now, however, they are coming to realize that we’ve likely been going about it in the wrong way. Rather than force apes to learn our language, we should be learning theirs.

Researchers have only just begun to understand the rudimentary fundamentals of ape communication, but already the results are exceeding expectations. Apes and humans, they are discovering, are even more similar than we ever imagined.

“Pretty much all of the capacities we thought of as being uniquely human—learning socially, using communication to reach a goal, shifting our communication depending on who we’re communicating with, being able to plan for the future, keeping record of friends and enemies—actually are not,” Hobaiter says. “They’re all present in chimps.”


All in the family.

The story of modern research into ape communication begins in 1931, when Winthrop and Luella Kellogg, a husband-wife psychologist team, decided to raise a chimp named Gua alongside their biological son, Donald. The goal was to see if Gua would pick up language, just as a human child would. The landmark experiment inspired many similar efforts—but research with Gua herself lasted less than a year. She was failing to pick up language. (Donald, on the other hand, was reportedly making chimpanzee sounds.) So the Kelloggs called it quits and gave her to a primate center. Less than a year later, she died from pneumonia.

Similar sorts of real-life Curious George projects continued in the 1940s and 1950s, when another husband-wife team, Keith and Catherine Hayes, tried to teach a chimp named Viki spoken human language. After several years, Viki could only use four words: mama, papa, cup, and up. The experiment was cut short when Viki died of meningitis at the age of seven, but many interpreted her lack of progress to mean that apes were not capable of sophisticated communication.

Shortly after the Viki experiment ended, however, Jane Goodall’s groundbreaking research on chimpanzees began to hit the news. Goodall showed that chimps are highly intelligent, emotional beings with individual personalities and capable of constructing tools—discoveries that challenged assumptions about their limited abilities.

In 1967—the year before Planet of the Apes was released—yet another husband-wife team, Allen and Beatrix Gardner, decided to give communication experiments another try. But they went with a different approach: rather than spoken language, they would teach a chimpanzee named Washoe American sign language. Washoe—who wore clothes, sat at the dinner table, brushed her teeth and played games—quickly began to learn and seemed to understand the meaning of the signs. A few years into the project, the Gardners moved on to other work and gave their adopted chimp daughter to a primate center. But scientists there continued to work with Washoe, and by the end of her life, she had learned around 250 signs and had even taught her son to sign.


Sign language, researchers agreed, seemed the way to go. Nim Chimpsky, another chimp raised by a human family in the 1970s and taught to sign, showed similar progress—as did Koko, the gorilla who understood more than 1,000 signs of “Gorilla Sign Language” (GSL) and was exposed to English at an early age. But the work on Nim was cut short when Nim’s adopted father and experimenter, Herbert Terrace, became convinced that Nim had not learned sign language at all, but had rather been imitating the trainers. Terrace abandoned Nim—who continued to try to sign for the rest of his days—to a life of animal testing, cages, and solitude.

Nim’s sad end also cast doubt on whether Washoe had in fact learned to sign. “Debate went back and forth about Nim Chimpsky and others, and the whole field began to implode,” Hobaiter says. “But that also coincided very much with the animal rights movement, and the idea of growing awareness that apes are extraordinary individuals.”

Indeed, some researchers were not ready to give up, and they decided to try yet another approach. In the mid-1970s, Emory University scientists created a symbol board—essentially, a primitive computer—and taught a chimp named Lana to string together different keys to mean different things. Spinning off from there, primatologists discovered what is likely the most talented ape of them all, a bonobo named Kanzi.


Kanzi, who currently lives at the Ape Cognition and Conservation Initiative in Iowa, learned to communicate by observing scientists trying to teach his mother. “Kanzi basically surprised everyone when he started using the symbols on the board,” Taglialatela says. “Not only that, he seemed to be showing proficiency for understanding spoken language that researchers were just using around the apes while trying to train them.”

By 1993, Kanzi could pass rigorous language comprehension tests, performing at about the level of a 3.5-year-old human child. In controlled tests, he shows proficiency in about 90 symbols, and his keepers say he can use around 250 symbols in more natural environments. He also seems to understand complex sentences: In one experiment, he correctly responded to three-quarters of 660 spoken instructions.

Kanzi is no doubt an exceptionally intelligent individual. But researchers continued to wonder whether his knack for communication was a result of his time with humans—or represented a deeper ability, something that scientists, until now, have overlooked. In trying to force apes to learn our language, might we have blinded ourselves to theirs?

Into the Wild

Researchers continue to work with captive apes to try to answer questions about how they communicate with one another and how that relates to the complexity of their social and emotional lives. But focus is also increasingly turning to directly observing apes in their natural environment. As Hobaiter says, “If we want to know if humans are unique in our language use, we must look at what apes are doing naturally.”

While some researchers attempt experimental studies in the field, Hobaiter prefers a fly-on-the-wall approach. For the past 11 years, she has stalked her subjects—a group of chimpanzees in Uganda—in the most unobtrusive way possible, allowing them to slowly get used to her. She videotapes their interactions and later analyzes the recordings, noting every gesture the chimps make while also trying to parse the wider social context in which individuals are interacting.


Like humans—who regularly communicate with a simultaneous mix of spoken words, tone, facial expressions, and gestures—Hobaiter and other researchers have found that apes seem to use concurrent modalities, or different, overlapping means of getting their point across. It’s a discovery that hints at the complexity of ape communication, but it also makes deciphering seemingly simplistic movements laborious. “If there’s an arm raise gesture, you need to see dozens of cases from dozens of individuals” to fully understand its intent, Hobaiter says. “Like human words with multiple meanings, you need context to know what’s going on in communication.”

So far, she and her colleagues have translated what they believe are the chimps’ basic 60–80 gestures, plus a number of facial expressions and vocalizations. She believes that, together, these things make up the phonics of ape language. Distilling the meaning of those various sounds and gestures when put together, however, will be a much more challenging and drawn-out task.

“We have this whole system of communication that looks very language-like,” Hobaiter says. “Definitely, we are only beginning to scratch the surface.”

By Rachel Nuwer

Psychologists Confront Rash of Invalid Studies


In the wake of several scandals in psychology research, scientists are asking themselves just how much of their research is valid.

In the past 10 years, dozens of studies in the psychology field have been retracted, and several high-profile studies have not stood up to scrutiny when outside researchers tried to replicate the research.

By selectively excluding study subjects or amending the experimental procedure after designing the study, researchers in the field may be subtly biasing studies to get more positive findings. And once research results are published, journals have little incentive to publish replication studies, which try to check the results.

That means the psychology literature may be littered with effects, or conclusions, that aren't real.

The problem isn't unique to psychology, but the field is going through some soul-searching right now. Researchers are creating new initiatives to encourage replication studies, improve research protocols and make data more transparent.

"People have started doing replication studies to figure out, 'OK, how solid, really, is the foundation of the edifice that we're building?'" said Rolf Zwaan, a cognitive psychologist at Erasmus University in the Netherlands. "How solid is the research that we're building our research on?"

Storm brewing


In a 2010 study in the Journal of Personality and Social Psychology, researchers detailed experiments that they said suggested people could predict the future.

Other scientists questioned how the study, which used questionable methodology such as changing the procedure partway through the experiment, got published; the journal editors expressed skepticism about the effect, but said the study followed established rules for doing good research.

That made people wonder, "Maybe there's something wrong with the rules," said University of Virginia psychology professor Brian Nosek.

But an even bigger scandal was brewing. In late 2011, Diederik Stapel, a psychologist in the Netherlands, was fired from Tilburg University for falsifying or fabricating data in dozens of studies, some of which were published in high-profile journals.

And in 2012, a study in PLOS ONE failed to replicate a landmark 1996 psychology study that suggested making people think of words associated with the elderly — such as Florida, gray or retirement — made them walk more slowly.

Motivated reasoning

The high-profile cases are prompting psychologists to do some soul-searching about the incentive structure in their field.

The push to publish can lead to several questionable practices.

Outright fraud is probably rare. But "adventurous research strategies" are probably common, Nosek told LiveScience.

Because psychologists are so motivated to get flashy findings published, they can use reasoning that may seem perfectly logical to them and, say, throw out research subjects who don't fit with their findings. But this subtle self-delusion can result in scientists seeing an effect where none exists, Zwaan told LiveScience.

Another way to skew the results is to change the experimental procedure or research question after the study has already begun. These changes may seem harmless to the researcher, but from a statistical standpoint, they make it much more likely that psychologists see an underlying effect where none exists, Zwaan said.

For instance, if scientists set up an experiment to find out if stress is linked to risk of cancer, and during the study they notice stressed people seem to get less sleep, they might switch their question to study sleep. The problem is the experiment wasn't set up to account for confounding factors associated with sleep, among other things.
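A short simulation shows why this inflates false positives. Under the null hypothesis (no real effect) a p-value is uniformly distributed, so a single pre-specified test comes up falsely positive about 5 percent of the time; report whichever of five outcomes happens to look best and the rate roughly quadruples. The numbers here are purely illustrative:

import random

random.seed(0)

def false_positive_rate(n_outcomes: int, n_studies: int = 100_000) -> float:
    # Simulate studies in which no real effect exists. Each outcome's
    # p-value is uniform on (0, 1); "switching" means reporting whichever
    # outcome happens to look most significant.
    hits = 0
    for _ in range(n_studies):
        p_values = [random.random() for _ in range(n_outcomes)]
        if min(p_values) < 0.05:
            hits += 1
    return hits / n_studies

print(false_positive_rate(1))   # about 0.05, the advertised error rate
print(false_positive_rate(5))   # about 0.23, effects "found" where none exist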

Fight fire with psychology

In response, psychologists are trying to flip the incentives by using their knowledge of transparency, accountability and personal gain.

For instance, right now there's no incentive for researchers to share their data, and a 2006 study found that of 141 researchers who had previously agreed to share their data, only 38 did so when asked.

But Nosek and his colleagues hope to encourage such sharing by making it standard practice. They are developing a project called the Open Science Framework, and one goal is to encourage researchers to publicly post their data and to have journals require such transparency in their published studies. That should make researchers less likely to tweak their data.

"We know that behavior changes as a function of accountability, and the best way to increase accountability is to create transparency," Nosek said.

One journal, Social Psychology, is dangling the lure of guaranteed publication to motivate replication studies. Researchers send proposals for replication studies to the journal, and if they're approved, the authors are guaranteed publication in advance. That would encourage less fiddling with the protocol after the fact.

And the Laura and John Arnold Foundation now offers grant money specifically for replication studies, Nosek said.


Tia is the managing editor and was previously a senior writer for Live Science. Her work has appeared in Scientific American, Wired.com and other outlets. She holds a master's degree in bioengineering from the University of Washington, a graduate certificate in science writing from UC Santa Cruz and a bachelor's degree in mechanical engineering from the University of Texas at Austin. Tia was part of a team at the Milwaukee Journal Sentinel that published the Empty Cradles series on preterm births, which won multiple awards, including the 2012 Casey Medal for Meritorious Journalism.


At least 10% of research may already be co-authored by AI

That might not be a bad thing


“Certainly, here is a possible introduction for your topic...” began a recent article in Surfaces and Interfaces, a scientific journal. Attentive readers might have wondered who exactly that bizarre opening line was addressing. They might also have wondered whether the ensuing article, on the topic of battery technology, was written by a human or a machine.

It is a question ever more readers of scientific papers are asking. Large language models (LLMs) are now more than good enough to help write a scientific paper. They can breathe life into dense scientific prose and speed up the drafting process, especially for non-native English speakers. Such use also comes with risks: LLMs are particularly susceptible to reproducing biases, for example, and can churn out vast amounts of plausible nonsense. Just how widespread an issue this was, though, has been unclear.

In a preprint posted recently on arXiv, researchers based at the University of Tübingen in Germany and Northwestern University in America provide some clarity. Their research, which has not yet been peer-reviewed, suggests that at least one in ten new scientific papers contains material produced by an LLM. That means over 100,000 such papers will be published this year alone. And that is a lower bound. In some fields, such as computer science, over 20% of research abstracts are estimated to contain LLM-generated text. Among papers from Chinese computer scientists, the figure is one in three.

Spotting LLM-generated text is not easy. Researchers have typically relied on one of two methods: detection algorithms trained to identify the tell-tale rhythms of human prose, and a more straightforward hunt for suspicious words disproportionately favoured by LLMs, such as “pivotal” or “realm”. Both approaches rely on “ground truth” data: one pile of texts written by humans and one written by machines. These are surprisingly hard to collect: both human- and machine-generated text change over time, as languages evolve and models update. Moreover, researchers typically collect LLM text by prompting these models themselves, and the way they do so may be different from how scientists behave.
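The suspicious-word hunt, at least, is straightforward enough to sketch. The scan below is a minimal illustration in Python; the word list and the scoring are toy choices, not a validated detector:

import re

# An illustrative sample of words reported as disproportionately favoured by LLMs.
SUSPICIOUS = {"pivotal", "realm", "delve", "delves", "intricate", "meticulously"}

def suspicion_score(text: str) -> float:
    # Share of tokens drawn from the suspicious-word list.
    tokens = re.findall(r"[a-z]+", text.lower())
    return sum(t in SUSPICIOUS for t in tokens) / len(tokens) if tokens else 0.0

abstract = ("This pivotal study delves into the intricate realm of battery "
            "interfaces, meticulously characterising their behaviour.")
print(round(suspicion_score(abstract), 2))   # 0.33, one token in three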


The latest research by Dmitry Kobak, at the University of Tübingen, and his colleagues, shows a third way, bypassing the need for ground-truth data altogether. The team’s method is inspired by demographic work on excess deaths, which allows mortality associated with an event to be ascertained by looking at differences between expected and observed death counts. Just as the excess-deaths method looks for abnormal death rates, their excess-vocabulary method looks for abnormal word use. Specifically, the researchers were looking for words that appeared in scientific abstracts with a significantly greater frequency than predicted by that in the existing literature (see chart 1). The corpus which they chose to analyse consisted of the abstracts of virtually all English-language papers available on PubMed, a search engine for biomedical research, published between January 2010 and March 2024, some 14.2m in all.
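In spirit, the calculation looks something like the sketch below, with invented per-10,000-token frequencies standing in for the PubMed corpus. The real study fits expected frequencies from the preceding years' literature; here they are simply given:

def excess_vocabulary(expected: dict[str, float],
                      observed: dict[str, float],
                      ratio: float = 2.0) -> dict[str, float]:
    # Flag words whose observed frequency exceeds `ratio` times expectation,
    # returning the observed/expected ratio, the analogue of an abnormal
    # death rate in excess-deaths work.
    return {w: obs / expected[w]
            for w, obs in observed.items()
            if w in expected and obs > ratio * expected[w]}

# Toy values: expectations extrapolated from 2013-19 usage versus early 2024.
expected = {"patients": 80.0, "significant": 40.0, "pivotal": 1.0, "delves": 0.1}
observed = {"patients": 85.0, "significant": 75.0, "pivotal": 4.2, "delves": 2.5}

print(excess_vocabulary(expected, observed))
# {'pivotal': 4.2, 'delves': 25.0}: style words spike, subject words do not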

The researchers found that in most years word usage was relatively stable: in no year from 2013 to 2019 did a word’s frequency exceed expectation by more than 1%. That changed in 2020, when “SARS”, “coronavirus”, “pandemic”, “disease”, “patients” and “severe” all exploded. (Covid-related words continued to show abnormally high usage until 2022.)

By early 2024, about a year after LLMs like ChatGPT had become widely available, a different set of words took off. Of the 774 words whose use increased significantly between 2013 and 2024, 329 took off in the first three months of 2024. Fully 280 of these were related to style, rather than subject matter. Notable examples include “delves”, “potential”, “intricate”, “meticulously”, “crucial”, “significant” and “insights” (see chart 2).

The most likely reason for such increases, say the researchers, is help from LLMs. When they estimated the share of abstracts which used at least one of the excess words (omitting words that are widely used anyway), they found that at least 10% probably had LLM input. As PubMed indexes about 1.5m papers annually, that would mean that more than 150,000 papers per year are currently written with LLM assistance.
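
Continuing in the same toy vein, the prevalence estimate then amounts to flagging any abstract that contains at least one excess word and scaling the flagged share by PubMed’s volume. The excess-word list and mini-corpus below are again invented:

```python
# Hypothetical "flag and scale" step; only the logic mirrors the paper.
excess_words = {"delves", "meticulously", "intricate"}

abstracts = [
    "this work delves into battery interfaces",
    "the cohort was followed for five years",
    "we meticulously analyse intricate networks",
]

flagged = sum(any(w in a.split() for w in excess_words) for a in abstracts)
share = flagged / len(abstracts)

papers_per_year = 1_500_000  # PubMed's approximate annual volume, per the article
print(f"flagged share {share:.0%} -> roughly {share * papers_per_year:,.0f} papers a year")
```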

This seems to be more widespread in some fields than others. The researchers found that computer science had the most use, at over 20%, whereas ecology had the least, with a lower bound below 5%. There was also variation by geography: scientists from Taiwan, South Korea, Indonesia and China were the most frequent users, while those from Britain and New Zealand used them least (see chart 3). (Researchers from other English-speaking countries also deployed LLMs infrequently.) Different journals also yielded different results. Those in the Nature family, as well as other prestigious publications like Science and Cell, appear to have a low LLM-assistance rate (below 10%), while Sensors (a journal about, unimaginatively, sensors) exceeded 24%.

The excess-vocabulary method’s results are roughly consistent with those from older detection algorithms, which looked at smaller samples from more limited sources. For instance, in a preprint released in April 2024, a team at Stanford found that 17.5% of sentences in computer-science abstracts were likely to be LLM-generated. They also found a lower prevalence in Nature publications and mathematics papers (LLMs are terrible at maths). The excess vocabulary identified also fits with existing lists of suspicious words.

Such results should not be overly surprising. Researchers routinely acknowledge the use of LLMs to write papers. In one survey of 1,600 researchers conducted in September 2023, over 25% told Nature they used LLMs to write manuscripts. The largest benefit identified by the interviewees, many of whom studied or used AI in their own work, was to help with editing and translation for those who did not have English as their first language. Faster and easier coding came joint second, together with the simplification of administrative tasks; summarising or trawling the scientific literature; and, tellingly, speeding up the writing of research manuscripts.

For all these benefits, using LLMs to write manuscripts is not without risks. Scientific papers rely on the precise communication of uncertainty, for example, which is an area where the capabilities of LLMs remain murky. Hallucination—whereby LLMs confidently assert fantasies—remains common, as does a tendency to regurgitate other people’s words, verbatim and without attribution.

Studies also indicate that LLMs preferentially cite other papers that are highly cited in a field, potentially reinforcing existing biases and limiting creativity. As algorithms, they cannot be listed as authors on a paper or held accountable for the errors they introduce. Perhaps most worrying, the speed at which LLMs can churn out prose risks flooding the scientific world with low-quality publications.

Academic policies on LLM use are in flux. Some journals ban it outright. Others have changed their minds. Until November 2023, Science labelled all LLM text as plagiarism, saying: “Ultimately the product must come from—and be expressed by—the wonderful computers in our heads.” It has since amended its policy: LLM text is now permitted if detailed notes on how the models were used are provided in the methods section of papers, as well as in accompanying cover letters. Nature and Cell also allow LLM use, as long as it is acknowledged clearly.

How enforceable such policies will be is not clear. For now, no reliable method exists to flush out LLM prose. Even the excess-vocabulary method, though useful at spotting large-scale trends, cannot tell if a specific abstract had LLM input. And researchers need only avoid certain words to evade detection altogether. As the new preprint puts it, these are challenges that must be meticulously delved into. ■

This article appeared in the Science & technology section of the print edition under the headline “Scientists, et ai”

Grantham Research Institute on Climate Change and the Environment

Global trends in climate change litigation: 2024 snapshot

This report provides a numerical analysis of how many climate change litigation cases were filed in 2023, where and by whom, and a qualitative assessment of trends and themes in the types of cases filed. It is the sixth report in the series, produced by the Grantham Research Institute in partnership with the Sabin Center for Climate Change Law and drawing on the Sabin Center’s Climate Change Litigation Databases. Each report provides a synthesis of the latest research and developments in the climate change litigation field.

Key messages

  • At least 230 new climate cases were filed in 2023. Many of these are seeking to hold governments and companies accountable for climate action. However, the number of cases expanded less rapidly last year than previously, which may suggest a consolidation and concentration of strategic litigation efforts in areas anticipated to have high impact.
  • Climate cases have continued to spread to new countries, with cases filed for the first time in Panama and Portugal in 2023.
  • 2023 was an important year for international climate change litigation, with major international courts and tribunals being asked to rule and advise on climate change. Just 5% of climate cases have been brought before international courts, but many of these cases have significant potential to influence domestic proceedings.
  • There were significant successes in ‘government framework’ cases in 2023; these challenge the ambition or implementation of a government’s overall climate policy response. The European Court of Human Rights’ decision in April 2024 in the case of KlimaSeniorinnen and ors. v. Switzerland is likely to lead to the filing of further cases.
  • The number of cases concerning ‘climate-washing’ has grown in recent years. 47 such cases were filed in 2023, bringing the recorded total to more than 140. These cases have met with significant success, with more than 70% of completed cases decided in favour of the claimants.
  • There were important developments in ‘polluter pays’ cases: more than 30 cases worldwide are currently seeking to hold companies accountable for climate-related harm allegedly caused by their contributions to greenhouse gas emissions.
  • Litigants continue to file new ‘corporate framework’ cases, which seek to ensure companies align their group-level policies and governance processes with climate goals. The New Zealand Supreme Court allowed one such case to proceed, although cases filed elsewhere have been dismissed. The landmark case of Milieudefensie v. Shell is under appeal.
  • In this year’s analysis a new category of ‘transition risk’ cases was introduced, which includes cases filed against corporate directors and officers for their management of climate risks. Shareholders of Enea approved a decision to bring such a case against former directors for planned investments in a new coal power plant in Poland.
  • Other categories of cases to watch include:
      • ESG backlash cases, which challenge the incorporation of climate risk into financial decision-making.
      • Strategic litigation against public participation (SLAPP) suits against NGOs and shareholder activists, which seek to deter them from pursuing climate agendas.
      • Just transition cases, which challenge the distributional impacts of climate policy or the processes by which policies were developed, normally on human rights grounds.
      • Green v. green cases, which concern potential trade-offs between climate and biodiversity or other environmental aims.

Recent previous reports in the series:

2023 snapshot

2022 snapshot

Frequently Asked Questions

Why am I never asked to take a poll?

You have roughly the same chance of being polled as anyone else living in the United States. This chance, however, is only about 1 in 170,000 for a typical Pew Research Center survey. To obtain that rough estimate, we divide the current adult population of the U.S. (about 255 million) by the typical number of adults we recruit to our survey panel each year (usually around 1,500 people). We draw a random sample of addresses from the U.S. Postal Service’s master residential address file. We recruit one randomly selected adult from each of those households to join our survey panel. This process gives every non-institutionalized adult a known chance of being included. The only people who are not included are those who do not live at a residential address (e.g., adults who are incarcerated, living at a group facility like a rehabilitation center, or living in a remote area without a standard postal address).
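
Using those round figures, the arithmetic is simply:

```python
# Rough recruitment odds, using the figures quoted above.
adult_population = 255_000_000   # approximate U.S. adult population
recruited_per_year = 1_500       # typical annual panel recruitment
print(f"about 1 in {adult_population // recruited_per_year:,}")  # about 1 in 170,000
```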

Can I volunteer to be polled?

While we appreciate people who want to participate, we can’t base our polls on volunteers. The key to survey research is to have a  random sample so that every person has a chance of having their views captured. The kinds of people who might volunteer for our polls are likely to be very different from the average American – at the very least they would probably be more politically interested and engaged, which would not be a true representation of the general population.

Why should I participate in surveys?

Polls are a way for you to express your opinions to the nation’s leaders and the country as a whole. Public officials and other leaders pay attention to the results of polls and often take them into account in their decision-making. If certain kinds of people do not participate in the surveys, then the results won’t represent the full range of opinions in the nation.

What good are polls?

Polls seek to measure public opinion and document the experiences of the public on a range of subjects. The results provide information for academics, researchers and government officials and help to inform the decision-making process for policymakers and others. Much of what the country knows about its media usage, labor and job markets, educational performance, crime victimization and social conditions is based on data collected through polls.

Do pollsters have a code of ethics? If so, what is in the code?

The major professional organizations of survey researchers have very clear codes of ethics for their members. These codes cover the responsibilities of pollsters with respect to the treatment of respondents, their relationships with clients and their responsibilities to the public when reporting on polls.  Some good examples of a pollster’s Code of Ethics include:

American Association for Public Opinion Research (AAPOR)

Council of American Survey Research Organizations (CASRO)

You can read Pew Research Center’s mission and code of ethics  here .

How are your polls different from market research?

One main difference is the subject matter. Market research explores opinions about products and services and measures your buying patterns, awareness of products and services or willingness to buy something. Our polls typically focus on public policy issues, mainly aimed at informing the public. We also try to measure topics like how voters are reacting to candidates in political campaigns and what issues are important to them.

Do you survey Asian Americans?

Yes. Our surveys are representative of the entire adult population of the United States and accurately account for the full population’s diversity by age, gender, race and ethnicity, region, and socioeconomic factors such as education levels, household income and employment status. We do not exclude anyone from our analyses based on his or her demographic characteristics. With the American Trends Panel, the Center releases results specifically for Asian Americans in multiple reports each year.

How are people selected for your polls?

Most of our U.S. surveys are conducted on the American Trends Panel (ATP), the Center’s national survey panel of over 10,000 randomly selected U.S. adults. ATP participants are recruited offline using random sampling from the U.S. Postal Service’s residential address file. Respondents complete the surveys online using smartphones, tablets or desktop devices. We provide tablets and data plans to adults without home internet.

Do people lie to pollsters?

We know that not all survey questions are answered accurately, but it’s impossible to gauge intent and to say that any given inaccurate answer necessarily involves lying. People may simply not remember their behavior accurately.

More people say they voted in a given election than voting records indicate actually cast ballots. In some instances, researchers have actually verified the voting records of people who were interviewed and found that some of them said they voted but did not. Voting is generally considered a socially desirable behavior, like attending church or donating money to charity. Studies suggest these kinds of behaviors are overreported. Similarly, socially undesirable behaviors such as illegal drug use, certain kinds of sexual behavior or driving while intoxicated are underreported.

We take steps to minimize errors related to questions about socially desirable or undesirable activities. For example, questions about voter registration and voting usually acknowledge that not everyone takes part in elections. Pew Research Center’s voter turnout question is worded this way:

“Which of the following statements best describes you? I did not vote in the [YEAR] presidential election; I planned to vote but wasn’t able to; I definitely voted in the [YEAR] presidential election”

Do people really have opinions on all of those questions?

When we poll on a topic that may be unfamiliar, we typically start by asking how much, if anything, people have heard about it. This way we can get some insight into who knows about the subject and who does not. When we release results from the poll, we typically report just the opinions of people who say they had heard about the topic, and we also report what share of the public had not heard about the topic.

How can I tell a high-quality poll from a lower-quality one?

Two key aspects to consider are transparency and representation. Pollsters who provide clear, detailed explanations about how the poll was conducted (and by whom) tend to be more accurate than those who do not. For example, reputable pollsters will report the source from which the sample was selected, the mode(s) used for interviewing, question wording, etc. High-quality polls also have procedures to ensure that the poll represents the public, even though response rates are low and some groups are more likely to participate in polls than others. For example, it helps to sample from a database that includes virtually all Americans (e.g., a master list of addresses or phone numbers). Also, it is critical that the poll uses a statistical adjustment (called “weighting”) to make sure that it aligns with an accurate profile of the public. For example, Pew Research Center polls adjust on variables ranging from age, sex and education to voter registration status and political party affiliation. More general guidelines on high-quality polling are available here.

How can a small sample of 1,000 (or even 10,000) accurately represent the views of 250,000,000+ Americans?

Two main statistical techniques are used to ensure that our surveys are representative of the populations they’re drawn from: random sampling and weighting. Random sampling ensures that each person has the same chance of selection to participate in a survey and that the people selected into a sample are a good mix of various demographics, such as age, race, income and education, just like in the general population. However, sample compositions can differ. For example, one sample drawn from a nationally representative list of residential addresses may have a higher percentage of rural dwellers compared with another sample drawn from the exact same list. To ensure that samples drawn ultimately resemble the population they are meant to represent, we use weighting techniques in addition to random sampling. These weighting techniques adjust for differences between respondents’ demographics in the sample and what we know them to be at population level, based on information obtained through institutions such as the U.S. Census Bureau. For more on this topic, check out our Methods 101 video on random sampling.
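
As a minimal illustration of the weighting step, the sketch below post-stratifies a toy sample on a single variable. The benchmark shares, the respondents and the choice of education as the sole weighting variable are assumptions for illustration; real adjustments use many variables at once.

```python
# A toy post-stratification sketch (an illustration, not Pew's pipeline):
# reweight respondents so the sample's education mix matches assumed
# population benchmarks before computing an estimate.

population_share = {"college": 0.35, "no_college": 0.65}  # assumed benchmarks

# Hypothetical respondents: (education, answered_yes)
sample = [
    ("college", True), ("college", True), ("college", False),
    ("college", True), ("college", False), ("college", True),
    ("no_college", False), ("no_college", True),
    ("no_college", False), ("no_college", False),
]

n = len(sample)
sample_share = {g: sum(e == g for e, _ in sample) / n for g in population_share}

# Each respondent's weight is their group's benchmark share over its sample share.
weight = {g: population_share[g] / sample_share[g] for g in population_share}

unweighted = sum(y for _, y in sample) / n
weighted = sum(weight[e] * y for e, y in sample) / sum(weight[e] for e, _ in sample)

print(f"unweighted 'yes' share: {unweighted:.0%}")  # 50%
print(f"weighted 'yes' share:   {weighted:.0%}")    # 40%
```

Because the toy sample over-represents college graduates, who answered “yes” more often, weighting pulls the estimate from 50% down to about 40%, closer to what a representative sample would show.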

Do your surveys include people who are offline?

Yes. For the online ATP panel to be truly nationally representative, the share of adults who do not use the internet must be represented on the panel. In the past, we did this by providing identified non-internet users with paper questionnaires to complete and mail back. Now, those who don’t have internet access are provided with internet-enabled tablets to take their surveys. In the Center’s analyses, these tablet respondents represent the non-internet population.
