Better, Virtually: the Past, Present, and Future of Virtual Reality Cognitive Behavior Therapy

  • Open access
  • Published: 20 October 2020
  • Volume 14, pages 23–46 (2021)

  • Philip Lindner, ORCID: orcid.org/0000-0002-3061-501X

Abstract

Virtual reality (VR) is an immersive technology capable of creating a powerful, perceptual illusion of being present in a virtual environment. VR technology has been used in cognitive behavior therapy since the 1990s and has accumulated an impressive evidence base, yet with the recent release of consumer VR platforms came a true paradigm shift in the capabilities and scalability of VR for mental health. This narrative review summarizes the past, present, and future of the field, including milestone studies and discussions of the clinical potential of alternative embodiment, gamification, avatar therapists, virtual gatherings, immersive storytelling, and more. Although the future is hard to predict, clinical VR has been and will continue to be inherently intertwined with what are now rapid developments in technology, presenting both challenges and exciting opportunities to do what is not possible in the real world.

Introduction

Since it merged into a coherent therapeutic tradition in the 1980s, cognitive behavior therapy (CBT) has proven a remarkable success in treating a wide range of mental disorders, psychosomatic conditions, and many non-medical issues with which sufferers need help. A distinguishing feature of CBT, common to and prominent in both its behavioral and cognitive roots, is the emphasis on carrying out exercises designed to change behavior and/or cognitions related to some problem area (Mennin et al. 2013). This is often explicitly framed as a multi-stage process: first providing a psychotherapeutic rationale for the exercise, then detailed and concrete planning, controlled execution, reporting of specific outcomes, drawing lessons learned, and progressing to the next exercise. Since exercises are so central in CBT—congruent with its historical emphasis on specific as opposed to common factors (Buchholz and Abramowitz 2020)—this therapeutic tradition is inherently well suited for technology-mediated delivery formats that do not rely on there being a traditional client-therapist relationship. The rapid, paradigm-shifting, and often unpredictable development and dissemination of different consumer information technologies over the last decades (everything from personal computers to portable media players, smartphones, and wearables) have allowed researchers and clinicians to explore new avenues for treatment design and delivery. A prominent success story of the merger of technology and psychotherapy is Internet-administered CBT (iCBT), which enjoys a robust evidence base with demonstrated efficacy equivalent to face-to-face treatments for mental disorders and psychosomatic conditions (Carlbring et al. 2018), and is now implemented in routine care in many countries (Titov et al. 2018). iCBT, as it is typically packaged, is in essence a digital form of bibliotherapy: a virtual self-help book where modules replace chapters, delivered not on paper but via an online platform, with or without support from an online therapist (often through asynchronous messaging). These modules convey in writing what would otherwise be conveyed orally in the face-to-face format, covering both psychoeducation and exercises. The bibliotherapeutic roots of iCBT are apparent in that many trials adapted existing self-help books (Andersson et al. 2016) or have afterwards been published as such (Carlbring et al. 2001).

iCBT was without doubt revolutionary at the time of its first appearance, challenging entrenched preconceptions of what psychotherapy is, offering unlimited dissemination of evidence-based treatment, and raising the scientific standard of psychotherapy research to that of the medical field (Andersson 2016). The novelty of iCBT, however, lay in the format of delivery, not the therapeutic content. With immersive technology like virtual reality (VR), it is possible to revolutionize not only how treatment is delivered but also how change-promoting experiences are designed and evaluated, by making the unrealistic a reality. This narrative review will introduce readers to VR technology and how it can be put to clinical use, and discuss the past, present, and future of VR-CBT for mental disorders. The aim is not to provide a systematic review (Freeman et al. 2017) or a meta-analysis of the field (Botella et al. 2017; Carl et al. 2018; Wechsler et al. 2019) but rather to give a historical overview and context, showing how developments in technology have fueled clinical progress. Perspectives on the future of the field will also be provided. The focus will be on VR-CBT treatments for anxiety disorders since this application dominates the extant literature, although other clinical applications will also be discussed. In addition to being a treatment tool, VR is also seeing increasing use as an experimental platform for studying psychopathology (Juvrud et al. 2018) and treatment mechanisms (Scheveneels et al. 2019); coverage of this exciting application is however beyond the scope of this review.

What Is Virtual Reality?

In essence, VR refers to any technology that creates a simulated experience of being present in a virtual environment that replaces the physical world (Riva et al. 2016). This sense of (virtual) presence is a key concept in VR and is what distinguishes this technology from others along the so-called mixed reality spectrum: someone playing a traditional video game, for example, or even reading a captivating book, may very well become immersed in it but is unlikely to feel physically transported to the locale depicted on their computer screen. Experiencing presence in VR is a powerful perceptual illusion, yet an illusion nonetheless: the environment may indeed prompt overt cognitions like “I know this is not real”—as individuals undergoing VR exposure therapy often think and say aloud as a safety behavior—yet this occurs only after the same user has already acted congruently with the environment, thereby demonstrating that it is nonetheless perceived as real (Slater 2018).

To create this perceptual illusion, special hardware is required. As will be discussed in the coming sections, until only a few years ago, such hardware was inaccessible, expensive, and required trained professionals both to develop software for and to use. While it is possible to transform a (restricted) physical environment into a virtual one by projecting interactive images onto the walls—a so-called CAVE setup (Bouchard et al. 2013)—developments in head-mounted display (HMD) technology have made the HMD approach the dominant one, especially with the release of consumer VR platforms, which are all of the HMD kind. Modern VR HMDs come in two versions: mobile devices, either freestanding or smartphone-based, and tethered devices. Mobile devices offer simplicity of use and do not physically constrain the user, yet are computationally limited and must also be recharged. Tethered devices require a high-end computer or gaming console to run, connected via cable. It should be noted that wireless tethering is being developed and will likely be released in the years to come, and that VR devices like the Oculus Quest can now run in both mobile and tethered modes, offering more computational power in the latter.

Since VR HMDs first appeared in the 1960s, these have relied on the same core principles to create a sense of presence in a virtual world: by including two 2D displays (one covering each eye, thereby also blocking out the real world) showing views with offset angles corresponding to an average (or custom) interocular distance, binocular depth perception can be simulated, termed stereoscopy. In addition, the HMD continuously measures head rotation in all directions (pitch, yaw, and roll, so-called three degrees of freedom or 3DOF) using gyroscopes and adapts the visual presentation accordingly, giving the user the perceptual illusion of being able to look around the virtual environment (Scarfe and Glennerster 2019). Immersion and presence are both mediated by interactivity with the virtual world, and since vision is the dominant sense in humans (Posner et al. 1976), stereoscopy coupled with 3DOF is a simple but powerful setup to create VR. Modern VR platforms typically also include adaptive stereo sound, as well as wireless handheld controllers that can be used both for mouse-type pointing and to control virtual hands. In addition, there are now both tethered and mobile VR platforms that have 6DOF functionality: alongside rotation tracking, these devices also continuously measure movement in X, Y, and Z (positional tracking) using cameras either placed in the physical room (outside-in tracking) or integrated into the HMD (inside-out tracking) and then use these data to update the visual presentation, giving the user the ability to also physically walk around the virtual environment.
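
To make the two principles above concrete, the following minimal sketch (in Python with NumPy; not from the original article, and all function names and values are illustrative assumptions) shows how a pair of virtual cameras can be offset by the interocular distance for stereoscopy, and how a yaw/pitch/roll head rotation (3DOF) updates their placement; 6DOF would additionally update the head position itself.

```python
# Illustrative sketch of stereoscopy + 3DOF head tracking (assumed names and values).
import numpy as np

def rotation_matrix(yaw, pitch, roll):
    """Compose a head rotation from yaw (around Y), pitch (around X), and roll (around Z), in radians."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    Rz = np.array([[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]])
    return Ry @ Rx @ Rz

def eye_positions(head_position, yaw, pitch, roll, ipd=0.063):
    """Offset the left/right virtual cameras by half the interocular distance (~63 mm) along the rotated 'right' axis."""
    R = rotation_matrix(yaw, pitch, roll)
    right_axis = R @ np.array([1.0, 0.0, 0.0])
    half_offset = (ipd / 2.0) * right_axis
    return head_position - half_offset, head_position + half_offset

# 3DOF: only rotation changes each frame, head_position stays fixed.
# 6DOF: positional tracking would also update head_position every frame.
left_eye, right_eye = eye_positions(np.array([0.0, 1.7, 0.0]), yaw=0.3, pitch=-0.1, roll=0.0)
```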

Not unexpectedly, the history of clinical VR is intertwined with the history of VR technology. The sections that follow provide a brief historical overview of these parallel tracks of development, divided into the past (roughly 1968–2013), present (2014–2020), and future.

The Past

Recognizable VR technology has existed since the 1960s, emerging from and taking root in the burgeoning computer science scene (Sutherland 1968) and aerospace industry (Furness 1986), developing alongside advances in computational power and display technology. By the 1990s, a series of failed attempts to mass-commercialize what was still unripe VR technology into gaming products struck a hard blow to the VR field, pushing it back to the peripheries and the niche applications found there (e.g., flight simulation) for which this technology had always been valuable. It would take almost 20 years before anyone made a serious attempt at consumer VR again (see below), yet by the mid-1990s, the inherent capabilities of VR had become apparent to clinical research psychologists around the world. Remarkably, the capabilities and advantages of VR in treating anxiety disorders that were raised already in 1996 (Glantz et al. 1996) are still echoed today (Lindner et al. 2017). Since the virtual environment is built from scratch, it can be made fully controllable, flexible, adaptive, and interactive in ways not practically feasible or even possible in the real world. In treating spider phobia, for example, the therapist or patient could conveniently choose the specific type of spider to be used in exposure, linearly increase how frightening the virtual spider looks, and make its behavior adaptive to how the user behaves. Further, VR exposure will always be safer than the in vivo equivalent and additionally solves practical issues in providing exposure therapy: there is no need to leave the therapy room, either to perform exposure exercises or to prepare stimulus material in advance.

While the mid-to-late 1990s was not an exciting period in terms of VR technology development, the period marks the beginning of using VR for mental health purposes. Early published case reports and trials from well-funded laboratories revealed the feasibility and promise of VR exposure therapy for acrophobia (Rothbaum et al. 1995a, 1995b), aviophobia (Rothbaum et al. 1996), claustrophobia (Botella et al. 1998), spider phobia (Carlin et al. 1997), and PTSD (Rothbaum et al. 1999). In all cases, VR was used to generate virtual equivalents of phobic stimuli to perform otherwise rather traditional exposure therapy, resulting in impressive symptom reductions considering the hardware limitations of the period. However, the unique clinical advantages of VR were put to use already at this early stage: in the claustrophobia exposure paradigm, for example, the user had the ability to move a wall of a virtual room, making it bigger or smaller (Botella et al. 1998). In addition to enabling a convenient, linear version of the fear hierarchy, such features also provide the user with an important sense of control over the exposure scenario (Lindner et al. 2017). Another early example of innovative and unique applications was a small feasibility study on using VR to treat body image disturbances (Perpiñá et al. 1999), an important component of eating disorders that—at least at the time—was a neglected topic in CBT protocols (Rosen 1996). This first VR study included several pioneering clinical components, including the possibility to change the size of different body sections of a human avatar, with the patient’s actual body overlaid to highlight discrepancies that could then be discussed in traditional Socratic dialogue. This example thereby illustrates how well VR experiences can be integrated into a CBT framework, in which mapping and resolving discrepancies are an important generic component.

These first studies used VR hardware like the Division dVisor, a bulky and heavy HMD by today’s standards that included an aft counterweight just to balance the weight of the dual LCDs, which were capable of generating around 146,000 pixels at 10 frames per second (about 1/80th of the pixels per second that a modern tethered HMD is capable of). The system additionally required a high-end computer and several peripherals to run and cost up to a hundred thousand euros depending on the setup. Not surprisingly, the heavy and bulky HMDs of the time, with their low resolutions and framerates, were prone to induce so-called cybersickness (Rebenitsch and Owen 2016): symptoms resembling motion sickness believed to be caused primarily by sensory conflict between the visual and vestibular/proprioceptive systems, although display properties in themselves also play a role (Saredakis et al. 2020). Cybersickness was a well-recognized phenomenon already in the 1990s (McCauley and Sharkey 1992) yet was much less of an issue in clinical VR than in, e.g., the aerospace industry, where VR was typically (and still is) used to simulate flight, i.e., virtual movement (as per the visual system) without physical movement (as per the vestibular/proprioceptive systems). Many of the common principles for the design of VR exposure paradigms emerged at this early stage, including minimizing first-person movement to avoid inducing cybersickness—letting the feared stimuli come to the user and not the other way around, which may even have therapeutic benefits in some cases (Lindner et al. 2017).

The decade or so that followed saw an exponential growth of research on clinical applications of VR, as shown in the graph below on published and accumulated papers on VR and anxiety over time (Fig. 1) and as evidenced by several meta-analytic studies published around this time (Parsons and Rizzo 2008; Powers and Emmelkamp 2008). By 2012, there were k = 21 randomized controlled trials of VR exposure therapy for anxiety disorders available for meta-analysis, showing, e.g., that VR exposure therapy outperformed waitlist control conditions, had similar outcomes to other evidence-based interventions, and had stable long-term effects (Opriş et al. 2012). Regarding the latter, it should be noted that only three studies at the time had follow-up periods of 1 year or more; studies with longer follow-up periods have since been published (Anderson et al. 2016). A landmark meta-analysis published in 2015 showed that previously reported within- and between-group effect sizes remained when only considering trials with in vivo behavioral outcomes, revealing that fear reductions achieved in VR do indeed translate to reduced fear in vivo (Morina et al. 2015).

Fig. 1 VR research published up until the release of consumer VR technology

In addition to establishing VR as an efficacious treatment of many anxiety disorders, the early 2000s also saw a growing research interest in the mechanisms of VR-CBT, in particular the role of presence: does increased presence drive increased distress during VR exposure (and indirectly treatment outcomes), vice versa, or is this association bidirectional? A 2014 meta-analysis reported an overall significant correlation of r = .28 (95% CI: 0.18–0.38) between self-reported distress and presence during exposure, albeit with significant differences across sample clinical characteristics and type of VR equipment used (Ling et al. 2014). Two of the covered studies deserve special mention: a 2004 study on acrophobia exposure experimentally assigned participants to experiencing VR exposure with either a low-presence HMD or a high-presence CAVE, and despite finding the expected difference in presence scores, the groups did not differ on outcomes (although it should be noted that this contrast was underpowered) (Krijn et al. 2004). A 2007 study on VR exposure for aviophobia found that while presence partially mediated the association between pre-existing anxiety and distress during exposure, presence did not moderate outcomes, suggesting that presence is a necessary but insufficient requirement for symptom reduction (Price and Anderson 2007), congruent with the 2004 study. Unfortunately, the complex associations between presence, distress, and outcomes have not become much clearer since then, although recent experimental research has begun to shed more light on this important aspect (Gromer et al. 2019). It is not unlikely that measurement error in self-reported presence ratings contributes to this confusion: using behavioral measures of presence—e.g., a behavioral response consistent with the virtual environment as per the very definition of presence (Slater 2018)—was in fact proposed and showed promise already in the early 2000s (Freeman et al. 2000) yet did not become popular in clinical paradigms due to the more elaborate procedure and data analysis required compared to self-report ratings. With the release of consumer VR platforms, developing appropriate paradigms and analyzing data has become much more convenient, hopefully paving the way for a renaissance of behavioral measures of presence.

Finally, the first extended decade of the new millennium was also a period when researchers first began examining the contextual factors of importance to the field, including views on VR held by both patients and clinicians. One early study reported that nine out of ten spider-fearful individuals would prefer VR exposure over in vivo exposure (Garcia-Palacios et al. 2001); to what degree such a preference serves as a proxy measure for greater baseline severity and functions as a safety behavior remains unknown and has been the subject of surprisingly little research since. Two later survey studies found that clinicians have a generally favorable view of using VR clinically, although they also reported fears about required training, handling the equipment, and financial costs, and reported an overall low degree of acquaintance with the technology (Schwartzman et al. 2012; Segal et al. 2011). Recent survey research, conducted after the advent of consumer VR, indicates that these fears have now decreased, although both professional and even recreational experiences of using VR were still rare, and knowledge of VR exposure therapy still low (Lindner et al. 2019d). Together, the first three studies provided early evidence that there were no substantial human barriers to implementing VR interventions in regular clinical settings—a quest that endured into the period of modern VR.

The Present

The history of modern VR really begins in 2012, when a start-up company named Oculus (later purchased by Facebook) revived the VR field by launching a crowdfunding campaign to finance the development and release of a modern VR HMD, primarily for gaming. This happened at a critical point in time when two related technologies had matured and converged: the rapid development of smartphones, which had begun only a few years prior, meant that high-quality flat screen technology was now readily available along with miniaturized peripheral hardware (e.g., gyroscopes). In parallel, consumer gaming computers had now grown powerful enough to render impressive graphical quality, even with the increased resolution, field of view, and refresh rate required by VR HMDs. A working prototype of the tethered Oculus Rift HMD (Development Kit 2) was shipped in 2014 and was quickly adapted for use in clinical research (Anderson et al. 2017; Peterson et al. 2018). At around the same time, the Samsung Gear VR platform was announced and soon released, making it the first mobile VR device to see widespread consumer adoption. It too quickly saw use in clinical research (Lindner et al. 2019c; Miloff et al. 2016; Spiegel et al. 2019; Tashjian et al. 2017). The Gear VR platform featured a unique, since-abandoned design wherein a compatible smartphone from the same brand snapped into place at the front of a simpler HMD containing only optics, a touchpad, and rotation trackers. The smartphone then served as the display and ran the VR applications. This solution was meant to lower the threshold for mass adoption by offering VR at a lower price to users who already had a powerful smartphone that could be put to use. Google used the same solution for both their simpler Cardboard VR platform (for use with nearly any smartphone) and their Gear-equivalent Daydream platform (for use with only a few compatible smartphone models). Although both the Gear and Cardboard platforms became relatively popular among consumers—the extremely low-cost Cardboard solution also finding some unique clinical applications in that it allowed for unprecedented, low-cost distribution of VR for at-home use (Donker et al. 2019; Lindner et al. 2019b)—these platforms have since been abandoned by both Samsung and Google without replacements. The required hardware matching was simply not cost-effective, and requiring a compatible smartphone was ultimately deemed to exclude more potential adopters than those brought on by the lowered threshold for adoption. The release and ensuing popularity of the affordably priced, mobile Oculus Go device convinced the industry that freestanding mobile VR was the future—that high-quality freestanding HMDs could be developed and released at a cost only negligibly higher than smartphone-dependent solutions. Other influential hardware releases in the last few years include the tethered HTC Vive (a competitor to the Rift platform), the tethered Playstation VR platform, and the mobile Oculus Quest, which provides 6DOF through inside-out tracking, allows interaction through hand gesture mapping (making hand controllers optional in many applications), and can also be run in tethered mode for increased performance.

The account below of the present state of clinical VR is neither chronological nor exhaustive; rather, it touches upon a selection of topics of current interest in the field.

Automated Treatments

Arguably the most exciting recent development in the field of clinical VR is the rise of automated VR exposure therapy applications. Three high-quality randomized trials have been published recently, featuring three different applications: two on acrophobia with comparison against waiting-list (Donker et al. 2019; Freeman et al. 2018) and one on spider phobia examining non-inferiority against gold-standard in vivo exposure (Miloff et al. 2019). Findings from the latter spider phobia trial have since been replicated in a single-subject trial with simulated real-world conditions (Lindner et al. 2020b), and valuable usage data from one of the acrophobia trials has also been reported (Donker et al. 2020). A qualitative study on the experience of undergoing automated VR exposure therapy for spider phobia has also been published (Lindner et al. 2020c). All three trials report impressive symptom reductions, revealing the public health and clinical potential of this innovative approach to treatment. Recent advances in this field include a large, ongoing randomized controlled trial of automated VR-CBT for anxious avoidance of social situations among patients with psychosis (Freeman et al. 2019; Lambe et al. 2020).

Automation in this context means that no human therapist takes part in immediate treatment delivery. Instead, these applications are designed to be freestanding and offer a complete therapeutic experience, including onboarding and psychoeducation, instructions, gamified cognitive-behavioral exercises, a virtual therapist (see below), and more—all packaged in a user-friendly interface, sometimes with an explicit, overarching narrative. The term gamification refers to the application of traditional game components, originally designed for enjoyment, to a non-gaming setting (Koivisto and Hamari 2019). Such components typically include simple game mechanics, earning points by completing tasks, and overt reinforcement of progress through, e.g., unlockables and collecting badges. When combined with onboarding and a cohesive, progressive, and possibly interactive narrative, and with an explicit goal other than pure enjoyment, the experience may be considered a so-called serious game (Fleming et al. 2017; Laamarti et al. 2014). Findings from the first qualitative study on automated VR exposure therapy showed that even aversive experiences like exposure therapy can indeed be framed and viewed by users as a serious game with a psychotherapeutic goal (Lindner et al. 2020c). Being a high-immersion technology, the VR modality is inherently well suited for gamification, with gaming remaining the unique selling point of VR, driving consumer adoption. Gamification has long been assumed to increase compliance and thereby treatment effects by increasing both short- and long-term engagement and by making aversive experiences less so. Congruently, qualitative research has shown that gamification elements are indeed perceived as attractive features by users (Faric et al. 2019a, 2019b; Lindner et al. 2020c; Tobler-Ammann et al. 2017). Empirical evidence for the presumed effects is however surprisingly scarce (Fleming et al. 2017; Johnson et al. 2016). Automated VR interventions distributed as applications on ordinary digital marketplaces have the potential to reach tens or hundreds of thousands of users (Lindner et al. 2019c), providing not only a vector for substantial public health impact but also the necessary sample sizes to use factorial designs (Chakraborty et al. 2009) and randomized A-B testing to disentangle the causal impact of each gamification component, as sketched below.
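
As a purely illustrative sketch (the component names, cell structure, and seeding are my assumptions, not drawn from any cited trial), a large user base would make it straightforward to randomize users across a full factorial design over gamification components and later estimate each component's contribution:

```python
# Hypothetical 2^3 factorial randomization over gamification components.
import itertools
import random

COMPONENTS = ["badges", "narrative", "points"]  # assumed example components
CONDITIONS = list(itertools.product([False, True], repeat=len(COMPONENTS)))  # 8 cells

def assign_condition(user_id: str, seed: int = 42) -> dict:
    """Deterministically assign a user to one factorial cell (stable across app sessions)."""
    rng = random.Random(f"{seed}:{user_id}")
    cell = rng.choice(CONDITIONS)
    return dict(zip(COMPONENTS, cell))

# e.g., assign_condition("user-0001") -> {"badges": True, "narrative": False, "points": True}
```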

Virtual Therapists

Interestingly enough, all three automated VR exposure applications mentioned above opted to include a virtual therapist of some sort (Donker et al. 2019; Freeman et al. 2018; Miloff et al. 2019), either as a voiceover or as an embodied agent. Such a feature serves many purposes: it reminds users of the therapeutic context, is a convenient and familiar way of conveying information (psychoeducation) and reinforcing progress, and adds a pleasant human touch. Little is known, however, about this novel addition to the VR arsenal. Early research examined working alliance towards the virtual environment itself, finding psychometric properties that suggest that the alliance concept can indeed be applied in this way (Miragall et al. 2015). Recently, a novel instrument has been developed specifically to measure working alliance with an embodied virtual therapist, using data from automated VR exposure therapy for spider phobia (Lindner et al. 2020b; Miloff et al. 2019), showing that a relationship similar to a working alliance does seem to form with the virtual therapist and that the quality of this relationship predicted long-term improvements (Miloff et al. 2020). Qualitative interview research on the same VR intervention for spider phobia showed that the virtual therapist was an appreciated feature (Lindner et al. 2020c). These findings are consistent with research on unguided iCBT and bibliotherapy, for example, showing that a relationship similar to a working alliance (but obviously not exactly the same) can develop with the therapeutic material itself (Heim et al. 2018). How best to make therapeutic use of virtual therapists remains an important topic for future research.

Virtual Embodiment

Virtual embodiment entails creating a perceptual illusion of being present in a body other than one’s own physical body. In VR, this is typically achieved by having the user see a virtual body positioned below the camera position (an impression that can be amplified by placing virtual mirrors in the environment) and promoting body ownership by allowing the user to move this body using hand controllers and/or 6DOF positional tracking. Building on early research (Perpina et al. 2003; Perpiñá et al. 1999), a VR full-body illusion has been shown to decrease body image disturbance in anorexia nervosa (Keizer et al. 2016). More recently, a VR body swap illusion has been used to increase self-compassion (Cebolla et al. 2019). In another recent study, participants practiced delivering compassion in one virtual body and then experienced a recorded version of this act embodied as the receiving party, leading to reduced depression and self-criticism and increased self-compassion (Falconer et al. 2016). Another innovative approach involves allowing a single user to alternate between two virtual bodies (one being Sigmund Freud) engaged in a conversation, essentially a form of semi-externalized self-dialogue. Compared to a scripted control condition, participants engaged in embodied self-dialogue reported being helped and changed to a greater degree (Slater et al. 2019). In addition to inspiring a new line of research, this type of paradigm presents a fine example of how a generic CBT technique like perspective-changing can be empowered and amplified using VR (Lindner et al. 2019a).
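
To make the camera-and-mirror setup concrete, here is a minimal, engine-agnostic sketch (my own illustration; the function names, body height, and mirror plane are assumptions) of how an avatar can be anchored beneath the tracked headset and reflected in a virtual mirror to support body ownership:

```python
# Hypothetical geometry helpers for a first-person embodiment setup.
import numpy as np

def body_root_from_hmd(hmd_position, body_height=1.70):
    """Place the avatar's root on the floor directly beneath the tracked headset."""
    x, y, z = hmd_position
    return np.array([x, max(y - body_height, 0.0), z])

def mirrored_point(point, mirror_z=2.0):
    """Reflect a point about a virtual mirror plane at z = mirror_z, so the user sees 'their' body in the mirror."""
    x, y, z = point
    return np.array([x, y, 2.0 * mirror_z - z])

# Each frame: root = body_root_from_hmd(hmd_pos); mirror copy = mirrored_point(joint) for every avatar joint.
```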

VR for Other Disorders

A 2017 systematic review found that at the time, most clinical VR research had been conducted on anxiety disorders (including PTSD and OCD), schizophrenia, substance use disorders, and eating disorders, with only two studies on depression (Freeman et al. 2017). Notably, VR interventions for autism (Didehbani et al. 2016; Maskey et al. 2019) were not covered by the systematic search, nor were those for gambling disorder (Bouchard et al. 2017), stress (Anderson et al. 2017; Serrano et al. 2016), or ADHD (Neguț et al. 2017). A recent survey study on attitudes towards VR among practicing CBT clinicians found that those who worked clinically with neuropsychiatric disorders, personality disorders, and psychosomatic disorders were more inclined to report that VR could be used with the respective disorder (Lindner et al. 2019d), suggesting that novel clinical applications of VR are indeed possible. The VR field is currently expanding rapidly, including new research on innovative VR treatments for disorders that have previously received little attention, such as depression (Migoya-Borja et al. 2020; Schleider et al. 2019), sleep problems (Lee and Kang 2020), and worry (Guitard et al. 2019). Recent work has also studied how VR can be used for modifying cognitions (Silviu Matu 2019) and feared self-perceptions (Wong 2019), in approach-avoidance training for obesity (Kakoschke 2019), and to treat aggressive behavior in children (Alsem 2019), revealing how VR has matured into a flexible, innovative treatment tool.

The Future

The future of VR-CBT will continue to be inherently intertwined with the development of VR technology, yet the latter now develops at a pace so rapid that clinical researchers are struggling to keep up and make full use of the capabilities offered by new technologies. It has proven notoriously difficult to predict advances in technology that could in turn drive novel therapeutic applications: progress is sometimes linear (as with, e.g., display properties like resolution and refresh rate), sometimes discrete and unexpected (as with, e.g., the development of inside-out tracking enabling 6DOF also on mobile VR HMDs), and sometimes a complex combination thereof. Nonetheless, some predictions on the future of clinical VR for mental health can be made based on obvious gaps in the extant research literature, trends in consumer VR, and recent technological advances.

Beyond Efficacy: Demonstrating Effectiveness

Clinical research can be placed along a continuum ranging from basic science to efficacy and effectiveness trials (Wieland et al. 2017). The efficacy-effectiveness continuum is noteworthy since it emphasizes that study design and study aims need to be adapted to the context of the extant literature. In building an evidence base for a new intervention, one would first examine efficacy (“Does the intervention work under optimal conditions?”) and then proceed to examining effectiveness (“Does the intervention work under real-world conditions?”). In the case of VR exposure therapy for anxiety disorders, more than a dozen efficacy trials conducted over 20 years have convincingly shown that this intervention is efficacious (Carl et al. 2019; Fodor et al. 2018; Wechsler et al. 2019) and associated with low rates of deterioration (Fernández-Álvarez et al. 2019). To the author’s knowledge, only a single effectiveness trial of VR exposure therapy has been published to date: although it demonstrated feasibility and replicated the effect size from the preceding efficacy trial, the sample size was relatively small and, for ethical and practical reasons, a single-case design was chosen instead of comparison with treatment-as-usual (Lindner et al. 2020a). The lack of large, multi-arm effectiveness trials presents a substantial gap in the extant literature and should be considered a research priority in the years to come, if VR is ever to become a part of routine clinical care.

Still Awaiting Mass Adoption by Consumers

Despite VR now being an accessible and affordable consumer product (Lindner et al. 2017), mass adoption has yet to occur and growth remains linear rather than exponential, hindering the full public health and clinical potential of VR. The exact number of sold VR devices and active users is difficult to estimate for many reasons, yet publicly released hardware statistics from the Steam gaming platform in spring 2020 revealed that around 2% of the platform’s active user base has access to VR, equivalent to roughly 2 million users. At the end of 2019, Sony confirmed having sold more than five million units of their Playstation VR device. If one includes simpler Cardboard-based VR HMDs that require a smartphone to run and offer only a rudimentary VR experience, there are at least twenty million VR units distributed worldwide, possibly twice that. Usage patterns among device owners will likely vary considerably, from daily to one-time use. By comparison, approximately half of the world’s population is now estimated to have access to a smartphone. Releasing mental health interventions, packaged as applications, on ordinary VR content marketplaces would allow dissemination on an almost unprecedented scale. Even in the early days of consumer VR, a first-generation VR relaxation application reached 40,000 unique users in 2 years (Lindner et al. 2019c), a number that would likely be surpassed rapidly at the time of writing. Still, until VR mass adoption by consumers occurs, the dissemination potential is limited by the overlap of early adopters, those experiencing mental health problems, and those who view this medium as appropriate for help-seeking.

The comparably low adoption rate also hinders some promising clinical applications, e.g., having a patient perform VR exposure tasks in-between in vivo exposure sessions or completely by themselves using an automated intervention. Today, this would likely require the clinic to lend or rent out the specific VR equipment (sending it by mail if necessary). While this approach does indeed offer an innovative solution to a clear clinical need, the lack of interest thus far among ordinary clinics demonstrates that it also comes with potent barriers. Costs in acquiring and maintaining the equipment, as well as those relating to the logistics of distributing it, may simply outweigh the benefits it brings. Until consumer mass adoption, this approach is unlikely to be successful unless applicable health insurance models begin to incorporate and reimburse it. In the first effectiveness trial of VR exposure therapy in routine care, the clinic was reimbursed either through existing occupational healthcare contracts or by patients paying out-of-pocket (Lindner et al. 2020a), in no case with any additional cost included to cover the clinic’s investment in VR. Whether such models are financially sustainable at scale remains to be evaluated.

Novel Uses of Virtual Embodiment

Arguably, clinical researchers have only begun to scratch the surface of the clinical potential of virtual embodiment, especially with regard to how such experiences can be merged with traditional, evidence-based CBT techniques. With regard to exposure therapy, for example, one could imagine allowing patients to experience the very thing they fear by embodying it, e.g., as a feared conversation partner in a virtual social scenario, allowing them to truly experience the situation from both perspectives. Such an experience should be no less tolerable than standard exposure and has the potential to promote rapid fear reduction since one need not fear oneself. Virtual embodiment could also, for example, be used to allow individuals with substance use disorders to interact with their influenced self through embodiment of a concerned significant other, providing a potentially powerful transformative experience of the negative effects of substance misuse, the full extent of which may not otherwise be perceived by the person affected.

Full Immersion Through Innovative User Interfaces and Making Use of This Data

While research has begun to collect and analyze user engagement metrics and self-reported data from self-guided VR interventions (Donker et al. 2020), there has been surprisingly little research on data that offers deeper insight. The entire concept of (HMD) VR relies on continuous head rotation tracking, making rotation a suitable user interface through the use of a crosshair that fixates eye gaze and synchronizes it (at least to some degree) with head rotation. In fact, human-computer interaction research has shown that head rotation provides an adequate proxy measure of eye gaze: gaze tends to focus around a rotation-controlled crosshair, and gaze shifts above 25° are typically accompanied by subsequent head movement with a lag of 30–150 ms (Sidenmark and Gellersen 2020). In general, head rotation data has seen very few published clinical applications beyond its immediate role in graphical presentation, either as a way of adapting the virtual environment (e.g., prompting a “Look up!” message during public speaking exposure when the user stares at the floor as a safety behavior) or as a non-invasive, continuous measure that can be used for further analysis. A rare 2016 study demonstrated the value of this data by showing that horizontal rotation during VR exposure for public speaking anxiety correlated with distress in female participants (Won et al. 2016). Until the average consumer VR HMD includes proper eye tracking technology—already available in some high-grade consumer HMDs—head rotation appears to be a valuable proxy measure that can be put to greater use. This includes the possibility of using VR head rotation data as a proxy measure of other physiological variables like heart rate that indicate emotional distress, proof-of-concept of which has already been demonstrated (Noori et al. 2019) and is possible with related methods (Lomaliza and Park 2019).
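
As an illustration of the kind of simple features such logging makes possible (a hedged sketch of my own; the thresholds, sample counts, and field names are assumptions and do not reproduce the cited study's analysis), per-frame yaw and pitch samples can be reduced to a horizontal-sweep measure or a floor-gazing flag of the sort that could trigger a “Look up!” prompt:

```python
# Hypothetical summary features computed from logged HMD rotation samples (in degrees).
import numpy as np

def horizontal_rotation_range(yaw_deg: np.ndarray) -> float:
    """Total horizontal sweep of the head over a session, unwrapped to handle 359 -> 0 crossings."""
    unwrapped = np.degrees(np.unwrap(np.radians(yaw_deg)))
    return float(np.ptp(unwrapped))  # peak-to-peak range in degrees

def is_floor_gazing(pitch_deg: np.ndarray, threshold_deg: float = -30.0, min_samples: int = 90) -> bool:
    """Flag sustained downward gaze (e.g., ~1 s at 90 Hz below -30 degrees) as a possible safety behavior."""
    return int(np.sum(pitch_deg < threshold_deg)) >= min_samples

# yaw_deg and pitch_deg would be arrays sampled once per rendered frame from the headset.
```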

The future of VR-CBT will likely also see greater use of other user interfaces that are already available technology-wise yet have seen limited use thus far in clinical applications. Embodied conversational agents, capable of instantaneous natural language processing (Provoost et al. 2017), may, for example, be used to include interactive virtual therapists that the user can speak with freely, without preselected options. HMDs like the Oculus Quest can now use their 6DOF cameras to track hand movements directly, bypassing the need for hand controllers to map hand movement and enabling the user to interact with the virtual environment with individual fingers. This could be used clinically for simulating touching phobic stimuli, for example (Hoffman et al. 2003; Tardif et al. 2019). 6DOF technology, although not new but now much more user-friendly, remains underutilized in clinical VR, in particular as a core clinical component. VR exergames, for example, could provide a form of behavioral activation for depression (Lindner et al. 2019a). All these user interfaces rely on continuous, non-invasive measurements that provide large amounts of data. Research on how these data can be used with machine learning (Pfeiffer et al. 2020) to, e.g., predict clinical outcomes will likely be another topic of interest in the years to come. Collecting vast amounts of data on (proxy) gaze, motor actions, and in-virtuo behaviors from VR usage—in addition to the camera mapping of physical surroundings required by inside-out 6DOF tracking—is however not without ethical aspects; privacy concerns have already been raised (Slater et al. 2020; Spiegel 2018) and must continue to be discussed within clinical research, especially since this issue will likely grow more prominent among VR users in general.
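
A purely illustrative sketch of the machine-learning idea mentioned above (the features, labels, and sample size are placeholders of my own, not any cited study's pipeline): per-session motion summaries could be cross-validated against a responder label to estimate how predictive such data actually are.

```python
# Hypothetical outcome prediction from per-session motion summary features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.random((80, 3))     # placeholder: 80 sessions x 3 features (e.g., sweep, floor-gaze time, approach distance)
y = rng.integers(0, 2, 80)  # placeholder responder / non-responder labels

model = make_pipeline(StandardScaler(), LogisticRegression())
auc_scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")  # out-of-sample discrimination
print(round(auc_scores.mean(), 2))  # ~0.5 here, since the placeholder data are pure noise
```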

The Importance of (and Need for) Tailoring and Adaptation

Many previously studied VR paradigms have included features that allowed the user to customize the environment to fit their therapeutic needs, including the “Virtual Iraq/Afghanistan” paradigm developed for PTSD treatment that allowed users to recreate specific traumatic scenarios (Rizzo et al. 2010). To what degree virtual environments need to be tailored to the specific user remains an open question of great importance to the field, since so-called sandbox-type paradigms require additional development resources, which may not be cost-effective in relation to efficacy. This question has however received surprisingly little research attention thus far. One recent study compared exposure to standardized catastrophic scenarios in VR with imaginal exposure to personalized scenarios and found no difference in evoked anxiety (Guitard et al. 2019), suggesting that perfect tailoring is at least not necessary. A study on (360° video) VR relaxation found a strong correlation between average preference ratings of different virtual nature environments and average improvement in positive mood (Gao et al. 2019). A related research question concerns the benefits of including adaptive virtual environments. Research on VR biofeedback paradigms (Fominykh et al. 2017) has demonstrated the feasibility of including such adaptive components, which could easily be combined with other CBT techniques like exposure. Having, e.g., already created a series of spider models with increasingly frightening appearances and behaviors (Miloff et al. 2019), it would certainly be possible with today’s technology to create an exposure task wherein the spider stimulus morphs automatically depending on heart rate or some other continuous measure of emotional distress acquired using off-the-shelf, wearable technology integrated through an API. Whether adaptive and/or tailored virtual scenarios show additional clinical benefits that warrant the extra developmental and practical resources required remains to be examined. Relating to the issue of what works for whom, more individual patient data meta-analytic research (Fernández-Álvarez et al. 2019) with high-resolution variables is needed to establish predictors of treatment response, non-response, and negative effects (see below).
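
A minimal control-loop sketch of that adaptive idea (my own illustration; the thresholds, level count, and update interval are assumptions, and no specific wearable API is implied): a continuous arousal signal such as heart rate is mapped onto the intensity level of the spider stimulus, stepping down when arousal is high and progressing once it settles.

```python
# Hypothetical adaptive exposure controller driven by a wearable heart-rate signal.
def update_spider_level(current_level: int, heart_rate_bpm: float,
                        baseline_bpm: float, n_levels: int = 10) -> int:
    """Morph the spider stimulus: ease off when arousal is strongly elevated, progress once it nears baseline."""
    elevation = heart_rate_bpm - baseline_bpm
    if elevation > 30:                        # strongly elevated: step down one notch
        return max(current_level - 1, 1)
    if elevation < 10:                        # settled near baseline: progress the hierarchy
        return min(current_level + 1, n_levels)
    return current_level                      # moderate arousal: stay put and let habituation work

# e.g., called once per minute with readings pulled from the wearable's API:
# level = update_spider_level(level, latest_heart_rate, resting_heart_rate)
```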

Social VR

Social VR is growing increasingly popular and refers to any application allowing two or more people to meet and directly interact in a virtual environment, typically through embodied avatars. Many VR games already feature or are explicitly built around multiplayer functionality, including virtual tennis and realistic shooter games. More generic virtual meetup applications have also begun to appear, and it is likely only a matter of time before a VR equivalent of Second Life becomes ubiquitous (Sonia Huang 2011)—hence (presumably) Facebook’s interest in the technology. In terms of research, studies on social presence experienced with both avatars and agents (Fox et al. 2015) have a long and extensive history, with a recent systematic review identifying k = 152 studies investigating different factors of importance (Oh et al. 2018). However, clinical applications of social VR have thus far been scarce. There are a number of possible uses of social VR in CBT: social VR could be used as an immersive type of videoconferencing psychotherapy (Tarp et al. 2017); virtual gatherings could be used as a form of behavioral activation in depression (Lindner et al. 2019a); a patient in non-automated VR exposure therapy would likely benefit from observing an embodied avatar therapist modeling non-phobic responses (Olsson and Phelps 2007; Öst 1989); VR could also make it convenient to perform VR exposure therapy for public speaking anxiety (Kahlon et al. 2019) in front of avatars instead of agents; and more.

Therapeutic Storytelling

The fact that modern consumer VR is primarily marketed, and used, as an entertainment platform hints at the potential of using this immersive technology to distribute powerful storytelling experiences that are designed to be therapeutic in themselves, i.e., going beyond the simple overarching narratives that may be included as gamification elements. Using techniques, principles, and lessons learned from the field of VR entertainment, it may be possible to develop interactive or even passive VR experiences that tell stories that have a significant and stable impact on how individuals view themselves and others, e.g., by allowing individuals to experience emotionally charged events and scenarios from different perspectives, conceptually akin to traditional cognitive rescripting exercises for early traumatic memories (Wild and Clark 2011). Although there is some preliminary, indirect research in support of this approach (Shin 2018), therapeutic effects on psychopathology have yet to be demonstrated. Of note, the idea of using storytelling therapeutically is not new to the field of clinical psychology: so-called creative bibliotherapy—patients reading selected works of fiction, as opposed to self-help material—has a long history yet has received very little research attention (Troscianko 2018) and can therefore not currently be considered an evidence-based treatment. Whether VR equivalents can make clinical use of storytelling remains to be evaluated, yet this approach shows prima facie potential as a low-threshold, single-session intervention that could be distributed at scale and would likely be viewed as attractive by users.

Raising Research Quality

Concerns have been raised about the research quality of trials examining VR exposure therapy for anxiety disorders, primarily the reliance on small samples and no or questionable control conditions (Page and Coxon 2016). It should however be noted that meta-analytic research has found no correlation between study quality and observed effect size (McCann et al. 2014). The small sample sizes that characterized early (and to a lesser degree, also current) research are not unexpected given the added practical requirements of providing VR treatment, at least with the previous generation of technology: in addition to all the regular logistics required of any psychotherapy trial, such trials also required acquiring expensive hardware, developing special software, and training therapists in the use of equipment that was often far from user-friendly. However, since the expected effect sizes in these studies were large (as with in vivo exposure therapy), even smaller trials may nonetheless have been well powered; further, the fact that VR allows for a greater, even full degree of standardization should decrease outcome variance and thereby sample size requirements (Lindner et al. 2020b).
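
The power argument above can be illustrated with a back-of-the-envelope calculation (a sketch of my own, assuming a simple two-arm comparison with standardized between-group effects in the range typically reported for exposure therapy):

```python
# Required per-group sample size for a two-sample t-test at 80% power, alpha = .05.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for d in (0.8, 1.2):  # "large" and "very large" standardized between-group effect sizes
    n_per_group = analysis.solve_power(effect_size=d, alpha=0.05, power=0.80, alternative="two-sided")
    print(f"d = {d}: ~{n_per_group:.0f} participants per group")
# d = 0.8 -> ~26 per group; d = 1.2 -> ~12 per group
```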

The advent of modern consumer VR has resolved most of the logistical barriers to running larger clinical trials (but see below), as reflected in the larger sample sizes found in recent studies (Donker et al. 2019; Freeman et al. 2018; Miloff et al. 2019). Hopefully, with technological progress now having made greater sample sizes feasible, future VR research will also address the critique of suboptimal control conditions. The choice of control condition is a long-standing debate and multifaceted issue in psychotherapy research, with placebo interventions remaining the gold standard despite being hard to implement in research on traditional psychotherapy (Gold et al. 2017). VR, however, is inherently well suited for the use of placebo interventions: therapeutic techniques may be packaged in non-traditional ways (e.g., as serious games) and the occurrence and precise extent of each component can easily be modified. Patients, in turn, are also likely to have markedly fewer preconceptions of what constitutes a VR psychological treatment, offering good grounds for (double) blinding. Future research contrasting active VR interventions against VR placebos is of special importance in VR applications for mental health problems where the immersion itself may have an effect. This includes VR pain management (Kenney and Milling 2016), where (sensory) distraction is often explicitly framed as the mediating mechanism (Gupta et al. 2018), as well as VR relaxation (Anderson et al. 2017), which is believed to work by evoking a strong sense of presence in a calming virtual environment (Seabrook et al. 2020). Research contrasting VR interventions with active components against VR placebos without active components would be able to disentangle the specific effect of the presumed therapeutic mechanism from the nonspecific effect of using immersive technology, and would also raise the research quality of the field.

Finally, as the field of clinical VR grows and expands, research must continue to be vigilant to negative effects and aspects. VR has a long history of studying negative effects in the form of cybersickness (Rebenitsch and Owen 2016), meta-analytic research has revealed low rates of deterioration in VR exposure therapy (Fernández-Álvarez et al. 2019), and trials have already begun (Miloff et al. 2019) to include and report results from broader measures of negative effects used elsewhere in psychotherapy research (Rozental et al. 2016). New patient groups, treatment forms, and delivery modalities nonetheless continue to raise new challenges. Recent survey research suggests that ordinary clinicians do continue to see certain risks in using VR in therapy, although positive views outweigh negative ones (Lindner et al. 2019d). Widespread reports of difficulties in implementing exposure therapy for PTSD in clinical settings (Waller and Turner 2016), for example, stress the importance of considering therapist views in efforts to disseminate new treatments. In disseminating automated VR treatments for, e.g., depression (Lindner et al. 2019a) and phobias (Garcia-Palacios et al. 2007), care must also be taken to ensure that engaging with such interventions does not serve as an avoidance behavior that replaces seeking more comprehensive help. One could however certainly imagine automated VR treatments as part of a stepped-care model, and in most cases, any treatment will be preferable to none. Research on consumer smartphone applications for mental health (Larsen et al. 2019; Shen et al. 2015) suggests that few of the consumer VR applications released on ordinary digital marketplaces can be expected to be evidence-based and effective.

Conclusions

It has now been 25 years since the first mental health applications of VR technology appeared, and much has happened since, both in terms of scientific progress and technological advances. The recent release of consumer VR platforms constitutes a true paradigm shift in the development and dissemination of clinical VR, which has inspired and continues to inspire a new generation of interventions grounded in a CBT framework. How the field will develop is as difficult to predict now as in the field’s infancy. A review article from 1996 on “VR psychotherapy” made a number of interesting predictions on the future of the field: while some predictions did turn out to be true, most ultimately did not (Glantz et al. 1996). At the turn of the millennium, few people would have predicted that less than 20 years later, half of the world’s population would own a device that not only lets you speak to nearly anyone on the globe, but also features instant access to the Internet, more computational power than high-end computers of the day, several high-definition digital cameras, and a display good enough to view movies on—all packaged in a device weighing less than 200 g, less than a centimeter thick, and affordable enough that many people switch models every year. VR and other mixed reality technologies are still awaiting mass adoption, but when this does happen, there will already be a firm evidence base to inform the next generation of VR interventions for mental health. The field of VR-CBT is expanding rapidly with new publications every week, but the same constraints on time and funding prevalent elsewhere in academic research apply here as well. Thus, while the field at large will hopefully continue to house an impressive breadth of research interests, individual research groups and researchers will likely find themselves at a crossroads of sorts, whether they like it or not. Do we continue exploring the efficacy of innovative clinical applications of new VR technology under optimal conditions, or is the time ripe to focus on public health dissemination and implementation in routine care? Should we focus on developing automated treatments or user-friendly VR tools for clinicians to solve practical issues and do things not possible in real life? Do we expand to include new interventions for previously neglected mental disorders, or should we aim to improve on existing interventions for disorders for which VR has been shown to work? Should we continue to create virtual equivalents of what we otherwise do in the therapy room, or is it time to leave the constraints of the real world behind and truly think outside the (real-world) box? As in the mid-1990s, such decisions will shape the future of VR-CBT. Given the momentum that consumer VR has already picked up this time around, it certainly looks like consumer VR is now here to stay—and if so, so is VR-CBT.

References

Alsem, S. (2019). Using interactive virtual reality to treat aggressive behavior problems in children. In 9th World Congress of Behavioural and Cognitive Therapies.

Anderson, P. L., Edwards, S. M., & Goodnight, J. R. (2016). Virtual reality and exposure group therapy for social anxiety disorder: results from a 4–6 year follow-up. Cognitive Therapy and Research . https://doi.org/10.1007/s10608-016-9820-y .

Anderson, A. P., Mayer, M. D., Fellows, A. M., Cowan, D. R., Hegel, M. T., & Buckey, J. C. (2017). Relaxation with immersive natural scenes presented using virtual reality.  Aerospace Medicine and Human Performance, 88 , 520–526. https://doi.org/10.3357/AMHP.4747.2017 .

Andersson, G. (2016). Internet-delivered psychological treatments. Annual Review of Clinical Psychology, 12 , 157–179. https://doi.org/10.1146/annurev-clinpsy-021815-093006 .

Andersson, E., Hedman, E., Wadström, O., Boberg, J., Andersson, E. Y., Axelsson, E., et al. (2016). Internet-based extinction therapy for worry: a randomized controlled trial. Behavior Therapy. https://doi.org/10.1016/j.beth.2016.07.003.

Botella, C., Baños, R. M., Perpiñá, C., Villa, H., Alcañiz, M., & Rey, A. (1998). Virtual reality treatment of claustrophobia: a case report. Behaviour Research and Therapy, 36 , 239–246. https://doi.org/10.1016/S0005-7967(97)10006-7 .

Botella, C., Fernández-Álvarez, J., Guillén, V., García-Palacios, A., & Baños, R. (2017). Recent Progress in virtual reality exposure therapy for phobias: a systematic review. Current Psychiatry Reports, 19 , 42. https://doi.org/10.1007/s11920-017-0788-4 .

Bouchard, S., Bernier, F., Boivin, É., Dumoulin, S., Laforest, M., Guitard, T., et al. (2013). Empathy toward virtual humans depicting a known or unknown person expressing pain. Cyberpsychology, Behavior and Social Networking, 16 , 61–71. https://doi.org/10.1089/cyber.2012.1571 .

Bouchard, S., Robillard, G., Giroux, I., Jacques, C., Loranger, C., St-pierre, M., et al. (2017). Using virtual reality in the treatment of gambling disorder : the development of a new tool for cognitive behaviour therapy. Frontiers in Psychology, 8 , 1–10. https://doi.org/10.3389/fpsyt.2017.00027 .

Buchholz, J. L., & Abramowitz, J. S. (2020). The therapeutic alliance in exposure therapy for anxiety-related disorders: a critical review. Journal of Anxiety Disorders, 70 , 102194. https://doi.org/10.1016/j.janxdis.2020.102194 .

Carl, E., Stein, A. T., Levihn-Coon, A., Pogue, J. R., Rothbaum, B., Emmelkamp, P., et al. (2018). Virtual reality exposure therapy for anxiety and related disorders: a meta-analysis of randomized controlled trials. Journal of Anxiety Disorders. https://doi.org/10.1016/j.janxdis.2018.08.003.

Carl, E., Stein, A. T., Levihn-Coon, A., Pogue, J. R., Rothbaum, B., Emmelkamp, P., et al. (2019). Virtual reality exposure therapy for anxiety and related disorders: a meta-analysis of randomized controlled trials. Journal of Anxiety Disorders, 61 , 27–36. https://doi.org/10.1016/j.janxdis.2018.08.003 .

Carlbring, P., Westling, B. E., Ljungstrand, P., Ekselius, L., & Andersson, G. (2001). Treatment of panic disorder via the internet: a randomized trial of a self-help program. Behavior Therapy, 32 , 751–764. https://doi.org/10.1016/S0005-7894(01)80019-8 .

Carlbring, P., Andersson, G., Cuijpers, P., Riper, H., & Hedman-Lagerlöf, E. (2018). Internet-based vs. face-to-face cognitive behavior therapy for psychiatric and somatic disorders: an updated systematic review and meta-analysis. Cognitive Behaviour Therapy, 47 , 1–18. https://doi.org/10.1080/16506073.2017.1401115 .

Carlin, A. S., Hoffman, H. G., & Weghorst, S. (1997). Virtual reality and tactile augmentation in the treatment of spider phobia: a case report. Behaviour Research and Therapy, 35 , 153–158. https://doi.org/10.1016/S0005-7967(96)00085-X .

Cebolla, A., Herrero, R., Ventura, S., Miragall, M., Bellosta-Batalla, M., Llorens, R., et al. (2019). Putting oneself in the body of others: a pilot study on the efficacy of an embodied virtual reality system to generate self-compassion. Frontiers in Psychology, 10 . https://doi.org/10.3389/fpsyg.2019.01521 .

Chakraborty, B., Collins, L. M., Strecher, V. J., & Murphy, S. A. (2009). Developing multicomponent interventions using fractional factorial designs. Statistics in Medicine, 28 , 2687–2708. https://doi.org/10.1002/sim.3643 .

Didehbani, N., Allen, T., Kandalaft, M., Krawczyk, D., & Chapman, S. (2016). Virtual reality social cognition training for children with high functioning autism. Computers in Human Behavior, 62 , 703–711. https://doi.org/10.1016/j.chb.2016.04.033 .

Donker, T., Cornelisz, I., van Klaveren, C., van Straten, A., Carlbring, P., Cuijpers, P., et al. (2019). Effectiveness of self-guided app-based virtual reality cognitive behavior therapy for acrophobia: a randomized clinical trial. JAMA Psychiatry, 76 , 682. https://doi.org/10.1001/jamapsychiatry.2019.0219 .

Donker, T., van Klaveren, C., Cornelisz, I., Kok, R. N., & van Gelder, J.-L. (2020). Analysis of usage data from a self-guided app-based virtual reality cognitive behavior therapy for acrophobia: a randomized controlled trial. Journal of Clinical Medicine, 9, 1614. https://doi.org/10.3390/jcm9061614.

Falconer, C. J., Rovira, A., King, J. A., Gilbert, P., Antley, A., Fearon, P., et al. (2016). Embodying self-compassion within virtual reality and its effects on patients with depression. BJPsych Open, 2 , 74–80. https://doi.org/10.1192/bjpo.bp.115.002147 .

Faric, N., Potts, H. W. W., Hon, A., Smith, L., Newby, K., Steptoe, A., et al. (2019a). What players of virtual reality exercise games want: thematic analysis of web-based reviews. Journal of Medical Internet Research, 21 , 1–13. https://doi.org/10.2196/13833 .

Faric, N., Yorke, E., Varnes, L., Newby, K., Potts, H. W., Smith, L., et al. (2019b). Younger adolescents’ perceptions of physical activity, Exergaming, and virtual reality: qualitative intervention development study. JMIR Serious Games, 7 , e11960. https://doi.org/10.2196/11960 .

Fernández-Álvarez, J., Rozental, A., Carlbring, P., Colombo, D., Riva, G., Anderson, P. L., et al. (2019). Deterioration rates in virtual reality therapy: an individual patient data level meta-analysis. Journal of Anxiety Disorders, 61 , 3–17. https://doi.org/10.1016/j.janxdis.2018.06.005 .

Fleming, T. M., Bavin, L., Stasiak, K., Hermansson-Webb, E., Merry, S. N., Cheek, C., et al. (2017). Serious games and gamification for mental health: current status and promising directions. Frontiers in Psychiatry, 7 . https://doi.org/10.3389/fpsyt.2016.00215 .

Fodor, L. A., Coteț, C. D., Cuijpers, P., Szamoskozi, Ș., David, D., & Cristea, I. A. (2018). The effectiveness of virtual reality based interventions for symptoms of anxiety and depression: a meta-analysis. Scientific Reports, 8 , 10323. https://doi.org/10.1038/s41598-018-28113-6 .

Fominykh, M., Prasolova-Førland, E., Stiles, T. C., Krogh, A. B., & Linde, M. (2017). Conceptual framework for therapeutic training with biofeedback in virtual reality: first evaluation of a relaxation simulator. Journal of Interactive Learning Research, 29 (1), 51–75.

Fox, J., Ahn, S. J. (G.), Janssen, J. H., Yeykelis, L., Segovia, K. Y., & Bailenson, J. N. (2015). Avatars versus agents: a meta-analysis quantifying the effect of agency on social influence. Human Computer Interaction, 30, 401–432. https://doi.org/10.1080/07370024.2014.921494.

Freeman, J., Avons, S. E., Meddis, R., Pearson, D. E., & IJsselsteijn, W. (2000). Using behavioral realism to estimate presence: a study of the utility of postural responses to motion stimuli. Presence Teleoperators and Virtual Environments, 9 , 149–164. https://doi.org/10.1162/105474600566691 .

Freeman, D., Reeve, S., Robinson, A., Ehlers, A., Clark, D., Spanlang, B., et al. (2017). Virtual reality in the assessment, understanding, and treatment of mental health disorders. Psychological Medicine, 47 , 2393–2400. https://doi.org/10.1017/S003329171700040X .

Freeman, D., Haselton, P., Freeman, J., Spanlang, B., Kishore, S., Albery, E., et al. (2018). Automated psychological therapy using immersive virtual reality for treatment of fear of heights: a single-blind, parallel-group, randomised controlled trial. Lancet Psychiatry, 5 , 625–632. https://doi.org/10.1016/S2215-0366(18)30226-8 .

Freeman, D., Yu, L. M., Kabir, T., Martin, J., Craven, M., Leal, J., et al. (2019). Automated virtual reality (VR) cognitive therapy for patients with psychosis: study protocol for a single-blind parallel group randomised controlled trial (gameChange). BMJ Open, 9 , 1–8. https://doi.org/10.1136/bmjopen-2019-031606 .

Furness, T. A. (1986). The super cockpit and its human factors challenges.  Proceedings of the Human Factors Society Annual Meeting, 30 , 48–52. https://doi.org/10.1177/154193128603000112 .

Gao, T., Zhang, T., Zhu, L., Gao, Y., & Qiu, L. (2019). Exploring psychophysiological restoration and individual preference in the different environments based on virtual reality. International Journal of Environmental Research and Public Health, 16 , 1–14. https://doi.org/10.3390/ijerph16173102 .

Garcia-Palacios, A., Hoffman, H. G., See, S. K., Tsai, A., & Botella, C. (2001). Redefining therapeutic success with virtual reality exposure therapy. Cyberpsychology & Behavior, 4 , 341–348. https://doi.org/10.1089/109493101300210231 .

Garcia-Palacios, A., Botella, C., Hoffman, H., & Fabregat, S. (2007). Comparing acceptance and refusal rates of virtual reality exposure vs. in vivo exposure by patients with specific phobias. CyberPsychology and Behaviour, 10 , 722–724. https://doi.org/10.1089/cpb.2007.9962 .

Glantz, K., Durlach, N. I., Barnett, R. C., & Aviles, W. A. (1996). Virtual reality (VR) for psychotherapy: From the physical to the social environment. Psychotherapy: Theory, Research, Practice, Training, 33 , 464–473. https://doi.org/10.1037/0033-3204.33.3.464 .

Gold, S. M., Enck, P., Hasselmann, H., Friede, T., Hegerl, U., Mohr, D. C., et al. (2017). Control conditions for randomised trials of behavioural interventions in psychiatry: a decision framework. Lancet Psychiatry, 4 , 725–732. https://doi.org/10.1016/S2215-0366(17)30153-0 .

Gromer, D., Reinke, M., Christner, I., & Pauli, P. (2019). Causal interactive links between presence and fear in virtual reality height exposure. Frontiers in Psychology, 10 , 1–11. https://doi.org/10.3389/fpsyg.2019.00141 .

Guitard, T., Bouchard, S., Bélanger, C., & Berthiaume, M. (2019). Exposure to a standardized catastrophic scenario in virtual reality or a personalized scenario in imagination for generalized anxiety disorder. Journal of Clinical Medicine, 8 , 309. https://doi.org/10.3390/jcm8030309 .

Gupta, A., Scott, K., & Dukewich, M. (2018). Innovative technology using virtual reality in the treatment of pain: does it reduce pain via distraction, or is there more to it? Pain Medicine (United States), 19 , 151–159. https://doi.org/10.1093/pm/pnx109 .

Heim, E., Rötger, A., Lorenz, N., & Maercker, A. (2018). Working alliance with an avatar: how far can we go with internet interventions? Internet Interventions, 11 , 41–46. https://doi.org/10.1016/j.invent.2018.01.005 .

Hoffman, H. G., Garcia-Palacios, A., Carlin, A., Furness III, T. A., & Botella-Arbona, C. (2003). Interfaces that heal: Coupling real and virtual objects to treat spider phobia. International Journal of Human Computer Interaction, 16 , 283–300. https://doi.org/10.1207/S15327590IJHC1602_08 .

Johnson, D., Deterding, S., Kuhn, K.-A., Staneva, A., Stoyanov, S., & Hides, L. (2016). Gamification for health and wellbeing: a systematic review of the literature. Internet Interventions, 6 , 89–106. https://doi.org/10.1016/j.invent.2016.10.002 .

Juvrud, J., Gredebäck, G., Åhs, F., Lerin, N., Nyström, P., Kastrati, G., et al. (2018). The immersive virtual reality lab: possibilities for remote experimental manipulations of autonomic activity on a large scale. Frontiers in Neuroscience, 12 . https://doi.org/10.3389/fnins.2018.00305 .

Kahlon, S., Lindner, P., & Nordgreen, T. (2019). Virtual reality exposure therapy for adolescents with fear of public speaking: a non-randomized feasibility and pilot study. Child and Adolescent Psychiatry and Mental Health, 13 , 47. https://doi.org/10.1186/s13034-019-0307-y .

Kakoschke, N. (2019). Participatory design of a virtual reality approach-avoidance training intervention for obesity. in 9th World Congress of Behavioural and Cognitive Therapies.

Keizer, A., Van Elburg, A., Helms, R., & Dijkerman, H. C. (2016). A virtual reality full body illusion improves body image disturbance in anorexia nervosa. PLoS One, 11 , 1–21. https://doi.org/10.1371/journal.pone.0163921 .

Kenney, M. P., & Milling, L. S. (2016). The effectiveness of virtual reality distraction for reducing pain: a meta-analysis. Psychology of Consciousness: Theory, Research and Practice, 3 , 199–210. https://doi.org/10.1037/cns0000084 .

Koivisto, J., & Hamari, J. (2019). The rise of motivational information systems: a review of gamification research. International Journal of Information Management, 45 , 191–210. https://doi.org/10.1016/j.ijinfomgt.2018.10.013 .

Krijn, M., Emmelkamp, P. M. G., Biemond, R., De Wilde De Ligny, C., Schuemie, M. J., & Van Der Mast, C. A. P. G. (2004). Treatment of acrophobia in virtual reality: the role of immersion and presence. Behaviour Research and Therapy, 42 , 229–239. https://doi.org/10.1016/S0005-7967(03)00139-6 .

Laamarti, F., Eid, M., & El Saddik, A. (2014). An overview of serious games. International Journal of Computer Games Technology, 2014. https://doi.org/10.1155/2014/358152.

Lambe, S., Knight, I., Kabir, T., West, J., Patel, R., Lister, R., et al. (2020). Developing an automated VR cognitive treatment for psychosis: gameChange VR therapy.  Journal of Behavioral and Cognitive Therapy, 30 , 33–40. https://doi.org/10.1016/j.jbct.2019.12.001 .

Larsen, M. E., Huckvale, K., Nicholas, J., Torous, J., Birrell, L., Li, E., et al. (2019). Using science to sell apps: evaluation of mental health app store quality claims. npj Digital Medicine, 2. https://doi.org/10.1038/s41746-019-0093-1.

Lee, S. Y., & Kang, J. (2020). Effect of virtual reality meditation on sleep quality of intensive care unit patients: a randomised controlled trial. Intensive & Critical Care Nursing, 59 , 102849. https://doi.org/10.1016/j.iccn.2020.102849 .

Lindner, P., Miloff, A., Hamilton, W., Reuterskiöld, L., Andersson, G., Powers, M. B., et al. (2017). Creating state of the art, next-generation virtual reality exposure therapies for anxiety disorders using consumer hardware platforms: design considerations and future directions. Cognitive Behaviour Therapy, 46 , 404–420. https://doi.org/10.1080/16506073.2017.1280843 .

Lindner, P., Hamilton, W., Miloff, A., & Carlbring, P. (2019a). How to treat depression with low-intensity virtual reality interventions: perspectives on translating cognitive behavioral techniques into the virtual reality modality and how to make anti-depressive use of virtual reality–unique experiences. Frontiers in Psychiatry, 10 , 1–6. https://doi.org/10.3389/fpsyt.2019.00792 .

Lindner, P., Miloff, A., Fagernäs, S., Andersen, J., Sigeman, M., Andersson, G., et al. (2019b). Therapist-led and self-led one-session virtual reality exposure therapy for public speaking anxiety with consumer hardware and software: a randomized controlled trial. Journal of Anxiety Disorders, 61 , 45–54. https://doi.org/10.1016/j.janxdis.2018.07.003 .

Lindner, P., Miloff, A., Hamilton, W., & Carlbring, P. (2019c). The potential of consumer-targeted virtual reality relaxation applications: descriptive usage, uptake and application performance statistics for a first-generation application. Frontiers in Psychology, 10 , 1–6. https://doi.org/10.3389/fpsyg.2019.00132 .

Lindner, P., Miloff, A., Zetterlund, E., Reuterskiöld, L., Andersson, G., & Carlbring, P. (2019d). Attitudes toward and familiarity with virtual reality therapy among practicing cognitive behavior therapists: a cross-sectional survey study in the era of consumer VR platforms. Frontiers in Psychology, 10 , 1–10. https://doi.org/10.3389/fpsyg.2019.00176 .

Lindner, P., Dagöö, J., Hamilton, W., Miloff, A., Andersson, G., Schill, A., et al. (2020a). Virtual reality exposure therapy for public speaking anxiety in routine care: a single-subject effectiveness trial. Cognitive Behaviour Therapy, 1–21 . https://doi.org/10.1080/16506073.2020.1795240 .

Lindner, P., Miloff, A., Bergman, C., Andersson, G., Hamilton, W., & Carlbring, P. (2020b). Gamified, automated virtual reality exposure therapy for fear of spiders: a single-subject trial under simulated real-world conditions. Frontiers in Psychiatry . https://doi.org/10.3389/fpsyt.2020.00116 .

Lindner, P., Rozental, A., Jurell, A., Reuterskiöld, L., Andersson, G., Hamilton, W., et al. (2020c). Experiences of gamified and automated virtual reality exposure therapy for spider phobia: a qualitative study. JMIR Serious Games . https://doi.org/10.2196/17807 .

Ling, Y., Nefs, H. T., Morina, N., Heynderickx, I., & Brinkman, W. P. (2014). A meta-analysis on the relationship between self-reported presence and anxiety in virtual reality exposure therapy for anxiety disorders. PLoS One, 9 , 1–12. https://doi.org/10.1371/journal.pone.0096144 .

Lomaliza, J. P., & Park, H. (2019). Improved heart-rate measurement from mobile face videos. Electronics, 8. https://doi.org/10.3390/electronics8060663.

Maskey, M., Rodgers, J., Grahame, V., Glod, M., Honey, E., Kinnear, J., et al. (2019). A randomised controlled feasibility trial of immersive virtual reality treatment with cognitive behaviour therapy for specific phobias in young people with autism spectrum disorder. Journal of Autism and Developmental Disorders, 49 , 1912–1927. https://doi.org/10.1007/s10803-018-3861-x .

McCann, R. A., Armstrong, C. M., Skopp, N. A., Edwards-Stewart, A., Smolenski, D. J., June, J. D., et al. (2014). Virtual reality exposure therapy for the treatment of anxiety disorders: an evaluation of research quality. Journal of Anxiety Disorders, 28 , 625–631. https://doi.org/10.1016/j.janxdis.2014.05.010 .

McCauley, M. E., & Sharkey, T. J. (1992). Cybersickness: perception of self-motion in virtual environments. Presence Teleoperators and Virtual Environments, 1 , 311–318. https://doi.org/10.1162/pres.1992.1.3.311 .

Mennin, D. S., Ellard, K. K., Fresco, D. M., & Gross, J. J. (2013). United we stand: emphasizing commonalities across cognitive-behavioral therapies. Behavior Therapy, 44 , 234–248. https://doi.org/10.1016/j.beth.2013.02.004 .

Migoya-Borja, M., Delgado-Gómez, D., Carmona-Camacho, R., Porras-Segovia, A., López-Moriñigo, J.-D., Sánchez-Alonso, M., et al. (2020). Feasibility of a virtual reality-based psychoeducational tool (VRight) for depressive patients. Cyberpsychology, Behavior and Social Networking, 23 , 246–252. https://doi.org/10.1089/cyber.2019.0497 .

Miloff, A., Lindner, P., Hamilton, W., Reuterskiöld, L., Andersson, G., & Carlbring, P. (2016). Single-session gamified virtual reality exposure therapy for spider phobia vs. traditional exposure therapy: study protocol for a randomized controlled non-inferiority trial. Trials, 17 , 60. https://doi.org/10.1186/s13063-016-1171-1 .

Miloff, A., Lindner, P., Dafgård, P., Deak, S., Garke, M., Hamilton, W., et al. (2019). Automated virtual reality exposure therapy for spider phobia vs. in-vivo one-session treatment: a randomized non-inferiority trial. Behaviour Research and Therapy, 118 , 130–140. https://doi.org/10.1016/j.brat.2019.04.004 .

Miloff, A., Carlbring, P., Hamilton, W., Andersson, G., Reuterskiöld, L., & Lindner, P. (2020). Measuring alliance toward embodied virtual therapists in the era of automated treatments with the virtual therapist alliance scale (VTAS): development and psychometric evaluation. Journal of Medical Internet Research, 22 , e16660. https://doi.org/10.2196/16660 .

Miragall, M., Baños, R. M., Cebolla, A., & Botella, C. (2015). Working alliance inventory applied to virtual and augmented reality (WAI-VAR): psychometrics and therapeutic outcomes. Frontiers in Psychology, 6 . https://doi.org/10.3389/fpsyg.2015.01531 .

Morina, N., Ijntema, H., Meyerbröker, K., & Emmelkamp, P. M. G. (2015). Can virtual reality exposure therapy gains be generalized to real-life? A meta-analysis of studies applying behavioral assessments. Behaviour Research and Therapy, 74 , 18–24. https://doi.org/10.1016/j.brat.2015.08.010 .

Neguț, A., Jurma, A. M., & David, D. (2017). Virtual-reality-based attention assessment of ADHD: ClinicaVR: classroom-CPT versus a traditional continuous performance test. Child Neuropsychology, 23 , 692–712. https://doi.org/10.1080/09297049.2016.1186617 .

Noori, F. M., Kahlon, S., Lindner, P., Nordgreen, T., Torresen, J., & Riegler, M. (2019). Heart rate prediction from head movement during virtual reality treatment for social anxiety. In 2019 International Conference on Content-Based Multimedia Indexing (CBMI) (pp. 1–5). IEEE. https://doi.org/10.1109/CBMI.2019.8877454.

Oh, C. S., Bailenson, J. N., & Welch, G. F. (2018). A systematic review of social presence: definition, antecedents, and implications.  Frontiers in Robotics and AI, 5 , 472–475. https://doi.org/10.3389/frobt.2018.00114 .

Olsson, A., & Phelps, E. A. (2007). Social learning of fear. Nature Neuroscience, 10 , 1095–1102. https://doi.org/10.1038/nn1968 .

Opriş, D., Pintea, S., García-Palacios, A., Botella, C., Szamosközi, Ş., & David, D. (2012). Virtual reality exposure therapy in anxiety disorders: a quantitative meta-analysis. Depression and Anxiety, 29 , 85–93. https://doi.org/10.1002/da.20910 .

Öst, L. G. (1989). One-session treatment for specific phobias. Behaviour Research and Therapy, 27 , 1–7.

Page, S., & Coxon, M. (2016). Virtual reality exposure therapy for anxiety disorders: small samples and no controls? Frontiers in Psychology, 7 , 1–4. https://doi.org/10.3389/fpsyg.2016.00326 .

Parsons, T. D., & Rizzo, A. A. (2008). Affective outcomes of virtual reality exposure therapy for anxiety and specific phobias: a meta-analysis. Journal of Behavior Therapy and Experimental Psychiatry, 39, 250–261. https://doi.org/10.1016/j.jbtep.2007.07.007.

Perpiñá, C., Botella, C., Baños, R., Marco, H., Alcañiz, M., & Quero, S. (1999). Body image and virtual reality in eating disorders: is exposure to virtual reality more effective than the classical body image treatment? Cyberpsychology & Behavior, 2 , 149–155. https://doi.org/10.1089/cpb.1999.2.149 .

Perpiñá, C., Botella, C., & Baños, R. M. (2003). Virtual reality in eating disorders. European Eating Disorders Review, 11, 261–278. https://doi.org/10.1002/erv.520.

Peterson, S. M., Furuichi, E., & Ferris, D. P. (2018). Effects of virtual reality high heights exposure during beam-walking on physiological stress and cognitive loading. PLoS One, 13 , 1–17. https://doi.org/10.1371/journal.pone.0200306 .

Pfeiffer, J., Pfeiffer, T., Meißner, M., & Weiß, E. (2020). Eye-tracking-based classification of information search behavior using machine learning: evidence from experiments in physical shops and virtual reality shopping environments. Information Systems Research. https://doi.org/10.1287/isre.2019.0907.

Posner, M. I., Nissen, M. J., & Klein, R. M. (1976). Visual dominance: an information-processing account of its origins and significance. Psychological Review, 83 , 157–171. https://doi.org/10.1037/0033-295X.83.2.157 .

Powers, M. B., & Emmelkamp, P. M. G. (2008). Virtual reality exposure therapy for anxiety disorders: a meta-analysis. Journal of Anxiety Disorders, 22 , 561–569. https://doi.org/10.1016/j.janxdis.2007.04.006 .

Price, M., & Anderson, P. (2007). The role of presence in virtual reality exposure therapy. Journal of Anxiety Disorders, 21 , 742–751. https://doi.org/10.1016/j.janxdis.2006.11.002 .

Provoost, S., Lau, H. M., Ruwaard, J., & Riper, H. (2017). Embodied conversational agents in clinical psychology: a scoping review. Journal of Medical Internet Research, 19. https://doi.org/10.2196/jmir.6553.

Rebenitsch, L., & Owen, C. (2016). Review on cybersickness in applications and visual displays. Virtual Reality, 20 , 101–125. https://doi.org/10.1007/s10055-016-0285-9 .

Riva, G., Baños, R. M., Botella, C., Mantovani, F., & Gaggioli, A. (2016). Transforming experience: the potential of augmented reality and virtual reality for enhancing personal and clinical change. Frontiers in Psychiatry, 7 , 1–14. https://doi.org/10.3389/fpsyt.2016.00164 .

Rizzo, A., Difede, J., Rothbaum, B. O., Reger, G., Spitalnick, J., Cukor, J., et al. (2010). Development and early evaluation of the virtual Iraq/Afghanistan exposure therapy system for combat-related PTSD. Annals of the New York Academy of Sciences, 1208 , 114–125. https://doi.org/10.1111/j.1749-6632.2010.05755.x .

Rosen, J. C. (1996). Body image assessment and treatment in controlled studies of eating disorders. The International Journal of Eating Disorders, 20 , 331–343. https://doi.org/10.1002/(SICI)1098-108X(199612)20:4<331::AID-EAT1>3.0.CO;2-O .

Rothbaum, B. O., Hodges, L. F., Kooper, R., Opdyke, D., Williford, J. S., & North, M. (1995a). Effectiveness of computer-generated (virtual reality) graded exposure in the treatment of acrophobia. The American Journal of Psychiatry, 152 , 626–628. https://doi.org/10.1176/ajp.152.4.626 .

Rothbaum, B. O., Hodges, L. F., Kooper, R., Opdyke, D., Williford, J. S., & North, M. (1995b). Virtual reality graded exposure in the treatment of acrophobia: a case report. Behavior Therapy, 26 , 547–554. https://doi.org/10.1016/S0005-7894(05)80100-5 .

Rothbaum, B. O., Hodges, L., Watson, B. A., Kessler, G. D., & Opdyke, D. (1996). Virtual reality exposure therapy in the treatment of fear of flying: a case report. Behaviour Research and Therapy, 34 , 477–481. https://doi.org/10.1016/0005-7967(96)00007-1 .

Rothbaum, B. O., Hodges, L., Alarcon, R., Ready, D., Shahar, F., Graap, K., et al. (1999). Virtual reality exposure therapy for PTSD Vietnam veterans: a case study. Journal of Traumatic Stress, 12, 263–271. https://doi.org/10.1023/A:1024772308758.

Rozental, A., Kottorp, A., Boettcher, J., Andersson, G., & Carlbring, P. (2016). Negative effects of psychological treatments: an exploratory factor analysis of the negative effects questionnaire for monitoring and reporting adverse and unwanted events. PLoS One, 11 , e0157503. https://doi.org/10.1371/journal.pone.0157503 .

Saredakis, D., Szpak, A., Birckhead, B., Keage, H. A. D., Rizzo, A., & Loetscher, T. (2020). Factors associated with virtual reality sickness in head-mounted displays: a systematic review and meta-analysis. Frontiers in Human Neuroscience, 14 . https://doi.org/10.3389/fnhum.2020.00096 .

Scarfe, P., & Glennerster, A. (2019). The science behind virtual reality displays. Annual Review of Vision Science, 5, 529–547. https://doi.org/10.1146/annurev-vision-091718-014942.

Scheveneels, S., Boddez, Y., Van Daele, T., & Hermans, D. (2019). Virtually unexpected: no role for expectancy violation in virtual reality exposure for public speaking anxiety. Frontiers in Psychology, 10 , 1–10. https://doi.org/10.3389/fpsyg.2019.02849 .

Schleider, J. L., Mullarkey, M. C., & Weisz, J. R. (2019). Virtual reality and web-based growth mindset interventions for adolescent depression: protocol for a three-arm randomized trial. Journal of Medical Internet Research, 21 , 1–14. https://doi.org/10.2196/13368 .

Schwartzman, D., Segal, R., & Drapeau, M. (2012). Perceptions of virtual reality among therapists who do not apply this technology in clinical practice. Psychological Services, 9 , 310–315. https://doi.org/10.1037/a0026801 .

Seabrook, E., Kelly, R., Foley, F., Theiler, S., Thomas, N., Wadley, G., et al. (2020). Understanding how virtual reality can support mindfulness practice: mixed methods study. Journal of Medical Internet Research, 22 , e16106. https://doi.org/10.2196/16106 .

Segal, R., Bhatia, M., & Drapeau, M. (2011). Therapists’ perception of benefits and costs of using virtual reality treatments. Cyberpsychology, Behavior and Social Networking, 14, 29–34. https://doi.org/10.1089/cyber.2009.0398.

Serrano, B., Baños, R. M., & Botella, C. (2016). Virtual reality and stimulation of touch and smell for inducing relaxation: a randomized controlled trial. Computers in Human Behavior, 55 , 1–8. https://doi.org/10.1016/j.chb.2015.08.007 .

Shen, N., Levitan, M.-J., Johnson, A., Bender, J. L., Hamilton-Page, M., Jadad, A. R., et al. (2015). Finding a depression app: a review and content analysis of the depression app marketplace. JMIR mHealth and uHealth, 3, e16. https://doi.org/10.2196/mhealth.3713.

Shin, D. (2018). Empathy and embodied experience in virtual environment: to what extent can virtual reality stimulate empathy and embodied experience? Computers in Human Behavior, 78 , 64–73. https://doi.org/10.1016/j.chb.2017.09.012 .

Sidenmark, L., & Gellersen, H. (2020). Eye, head and torso coordination during gaze shifts in virtual reality.  ACM Transactions on Computer-Human Interaction, 27 , 1–40. https://doi.org/10.1145/3361218 .

Silviu Matu, D. (2019). Using virtual reality to study and modify cognitions in cognitive-behavior therapy: theoretical rationales and experimental results. in 9th World Congress of Behavioural and Cognitive Therapies.

Slater, M. (2018). Immersion and the illusion of presence in virtual reality. British Journal of Psychology, 109 , 431–433. https://doi.org/10.1111/bjop.12305 .

Slater, M., Neyret, S., Johnston, T., Iruretagoyena, G., Crespo, M. Á. de la C., Alabèrnia-Segura, M., et al. (2019). An experimental study of a virtual reality counselling paradigm using embodied self-dialogue. Scientific Reports, 9, 10903. https://doi.org/10.1038/s41598-019-46877-3.

Slater, M., Gonzalez-Liencres, C., Haggard, P., Vinkers, C., Gregory-Clarke, R., Jelley, S., et al. (2020). The ethics of realism in virtual and augmented reality.  Frontiers in Virtual Reality, 1 , 1–13. https://doi.org/10.3389/frvir.2020.00001 .

Sonia Huang, J. (2011). An examination of the business strategies in the second life virtual market. Journal of Media Business Studies, 8 , 1–17. https://doi.org/10.1080/16522354.2011.11073520 .

Spiegel, J. S. (2018). The ethics of virtual reality technology: social hazards and public policy recommendations. Science and Engineering Ethics, 24 , 1537–1550. https://doi.org/10.1007/s11948-017-9979-y .

Spiegel, B., Fuller, G., Lopez, M., Dupuy, T., Noah, B., Howard, A., et al. (2019). Virtual reality for management of pain in hospitalized patients: a randomized comparative effectiveness trial. PLoS One, 14 , e0219115. https://doi.org/10.1371/journal.pone.0219115 .

Sutherland, I. E. (1968). A head-mounted three dimensional display. In Proceedings of the December 9-11, 1968, fall joint computer conference, part I on - AFIPS ‘68 (Fall, part I) (New York, New York, USA: ACM press), 757. doi: https://doi.org/10.1145/1476589.1476686 .

Tardif, N., Therrien, C.-É., & Bouchard, S. (2019). Re-examining psychological mechanisms underlying virtual reality-based exposure for spider phobia. Cyberpsychology, Behavior and Social Networking, 22, 39–45. https://doi.org/10.1089/cyber.2017.0711.

Tarp, K., Bojesen, A. B., Mejldal, A., & Nielsen, A. S. (2017). Effectiveness of optional videoconferencing-based treatment of alcohol use disorders: randomized controlled trial. JMIR Mental Health, 4 , e38. https://doi.org/10.2196/mental.6713 .

Tashjian, V. C., Mosadeghi, S., Howard, A. R., Lopez, M., Dupuy, T., Reid, M., et al. (2017). Virtual reality for management of pain in hospitalized patients: results of a controlled trial. JMIR Mental Health, 4 , e9. https://doi.org/10.2196/mental.7387 .

Titov, N., Dear, B., Nielssen, O., Staples, L., Hadjistavropoulos, H., Nugent, M., et al. (2018). ICBT in routine care: a descriptive analysis of successful clinics in five countries. Internet Interventions, 13 , 108–115. https://doi.org/10.1016/j.invent.2018.07.006 .

Tobler-Ammann, B. C., Surer, E., Knols, R. H., Borghese, N. A., & de Bruin, E. D. (2017). User perspectives on Exergames designed to explore the hemineglected space for stroke patients with visuospatial neglect: usability study. JMIR Serious Games, 5 , e18. https://doi.org/10.2196/games.8013 .

Troscianko, E. T. (2018). Fiction-reading for good or ill: eating disorders, interpretation and the case for creative bibliotherapy research. Medical Humanities, 44 , 201–211. https://doi.org/10.1136/medhum-2017-011375 .

Waller, G., & Turner, H. (2016). Therapist drift redux: why well-meaning clinicians fail to deliver evidence-based therapy, and how to get back on track. Behaviour Research and Therapy, 77 , 129–137. https://doi.org/10.1016/j.brat.2015.12.005 .

Wechsler, T. F., Kümpers, F., & Mühlberger, A. (2019). Inferiority or even superiority of virtual reality exposure therapy in phobias?—A systematic review and quantitative meta-analysis on randomized controlled trials specifically comparing the efficacy of virtual reality exposure to gold standard in vivo exposure. Frontiers in Psychology, 10. https://doi.org/10.3389/fpsyg.2019.01758.

Wieland, L. S., Berman, B. M., Altman, D. G., Barth, J., Bouter, L. M., D’Adamo, C. R., et al. (2017). Rating of included trials on the efficacy–effectiveness spectrum: development of a new tool for systematic reviews. Journal of Clinical Epidemiology, 84 , 95–104. https://doi.org/10.1016/j.jclinepi.2017.01.010 .

Wild, J., & Clark, D. M. (2011). Imagery rescripting of early traumatic memories in social phobia. Cognitive and Behavioral Practice, 18 , 433–443. https://doi.org/10.1016/j.cbpra.2011.03.002 .

Won, A. S., Perone, B., Friend, M., & Bailenson, J. N. (2016). Identifying anxiety through tracked head movements in a virtual classroom. Cyberpsychology, Behavior and Social Networking, 19, 380–387. https://doi.org/10.1089/cyber.2015.0326.

Wong, S. (2019). Feared self and obsessive-compulsive symptoms: an experimental manipulation using virtual reality. in 9th World Congress of Behavioural and Cognitive Therapies.

Acknowledgments

Open access funding provided by Karolinska Institute. The author wishes to thank Dr. Alexander Miloff and William Hamilton for many fruitful discussions on the topics discussed above. The author is funded by an internal grant from the Centre for Psychiatry Research, Region Stockholm and Karolinska Institutet.

Author information

Authors and affiliations

Centre for Psychiatry Research, Department of Clinical Neuroscience, Karolinska Institutet, & Stockholm Health Care Services, Region Stockholm, Katrinebergsbacken 35A, 117 61, Stockholm, Sweden

Philip Lindner

Corresponding author

Correspondence to Philip Lindner .

Ethics declarations

Conflict of interest

The author reports involvement in several academia-industry collaborations related to the work described, including direct financial reimbursement for consulting work for one private company (Mimerse), but holds no financial stake in any such company.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Lindner, P. Better, Virtually: the Past, Present, and Future of Virtual Reality Cognitive Behavior Therapy. J Cogn Ther 14 , 23–46 (2021). https://doi.org/10.1007/s41811-020-00090-7

Accepted: 05 October 2020

Published: 20 October 2020

Issue Date: March 2021

DOI: https://doi.org/10.1007/s41811-020-00090-7

Share this article

Anyone you share the following link with will be able to read this content:

Sorry, a shareable link is not currently available for this article.

Provided by the Springer Nature SharedIt content-sharing initiative

Keywords

  • Virtual reality
  • Gamification
  • Serious game

Encyclopedia Britannica

virtual reality

virtual reality (VR) , the use of computer modeling and simulation that enables a person to interact with an artificial three-dimensional (3-D) visual or other sensory environment . VR applications immerse the user in a computer-generated environment that simulates reality through the use of interactive devices, which send and receive information and are worn as goggles, headsets, gloves, or body suits. In a typical VR format, a user wearing a helmet with a stereoscopic screen views animated images of a simulated environment. The illusion of “being there” ( telepresence) is effected by motion sensors that pick up the user’s movements and adjust the view on the screen accordingly, usually in real time (the instant the user’s movement takes place). Thus, a user can tour a simulated suite of rooms, experiencing changing viewpoints and perspectives that are convincingly related to his own head turnings and steps. Wearing data gloves equipped with force-feedback devices that provide the sensation of touch, the user can even pick up and manipulate objects that he sees in the virtual environment.
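
The real-time coupling between tracked head movement and the rendered view described above can be sketched in a few lines of code. The snippet below is purely illustrative and not drawn from any system discussed in this document: the view_matrix helper, the fixed 1.7 m head height, and the simulated yaw sweep are assumptions made for demonstration only. Each frame, the tracked head pose is converted into a world-to-camera (view) matrix, and regenerating this matrix as the head moves is what keeps the virtual scene perceptually stable and creates the sense of "being there."

```python
# Illustrative sketch (not from any cited system): head tracking drives the rendered view.
# A world-to-camera matrix is rebuilt from the tracked head pose every frame, so the
# simulated environment stays put while the user's viewpoint moves through it.
import numpy as np

def view_matrix(head_pos, yaw, pitch):
    """Build a 4x4 world-to-camera matrix from a tracked head pose.

    head_pos: (x, y, z) head position in metres; yaw/pitch in radians (roll omitted).
    """
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    forward = np.array([cy * cp, sp, sy * cp])   # gaze direction in world space
    right = np.array([-sy, 0.0, cy])             # horizontal axis perpendicular to gaze
    up = np.cross(right, forward)
    rot = np.stack([right, up, -forward])        # rows = camera basis (camera looks down -z)
    view = np.eye(4)
    view[:3, :3] = rot
    view[:3, 3] = -rot @ np.asarray(head_pos)    # translate the world opposite to the head
    return view

# Simulated per-frame loop: every new head pose immediately yields a new view.
for frame, yaw in enumerate(np.linspace(0.0, np.pi / 2, 5)):
    v = view_matrix(head_pos=(0.0, 1.7, 0.0), yaw=yaw, pitch=0.0)
    print(f"frame {frame}: camera now looks along {np.round(-v[2, :3], 2)}")
```

In a real headset the pose would come from the device's tracking hardware many times per second and the matrix would be handed to a renderer rather than printed, but the principle is the same.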

The term virtual reality was coined in 1987 by Jaron Lanier, whose research and engineering contributed a number of products to the nascent VR industry. A common thread linking early VR research and technology development in the United States was the role of the federal government, particularly the Department of Defense , the National Science Foundation , and the National Aeronautics and Space Administration (NASA). Projects funded by these agencies and pursued at university-based research laboratories yielded an extensive pool of talented personnel in fields such as computer graphics , simulation, and networked environments and established links between academic, military, and commercial work. The history of this technological development, and the social context in which it took place, is the subject of this article.

Artists, performers, and entertainers have always been interested in techniques for creating imaginative worlds, setting narratives in fictional spaces, and deceiving the senses. Numerous precedents for the suspension of disbelief in an artificial world in artistic and entertainment media preceded virtual reality. Illusionary spaces created by paintings or views have been constructed for residences and public spaces since antiquity, culminating in the monumental panoramas of the 18th and 19th centuries. Panoramas blurred the visual boundaries between the two-dimensional images displaying the main scenes and the three-dimensional spaces from which these were viewed, creating an illusion of immersion in the events depicted. This image tradition stimulated the creation of a series of media—from futuristic theatre designs, stereopticons, and 3-D movies to IMAX movie theatres—over the course of the 20th century to achieve similar effects. For example, the Cinerama widescreen film format, originally called Vitarama when invented for the 1939 New York World’s Fair by Fred Waller and Ralph Walker, originated in Waller’s studies of vision and depth perception. Waller’s work led him to focus on the importance of peripheral vision for immersion in an artificial environment, and his goal was to devise a projection technology that could duplicate the entire human field of vision. The Vitarama process used multiple cameras and projectors and an arc-shaped screen to create the illusion of immersion in the space perceived by a viewer. Though Vitarama was not a commercial hit until the mid-1950s (as Cinerama), the Army Air Corps successfully used the system during World War II for anti-aircraft training under the name Waller Flexible Gunnery Trainer—an example of the link between entertainment technology and military simulation that would later advance the development of virtual reality.

Sensory stimulation was a promising method for creating virtual environments before the use of computers. After the release of a promotional film called This Is Cinerama (1952), the cinematographer Morton Heilig became fascinated with Cinerama and 3-D movies. Like Waller, he studied human sensory signals and illusions , hoping to realize a “cinema of the future.” By late 1960, Heilig had built an individual console with a variety of inputs—stereoscopic images, motion chair, audio, temperature changes, odours, and blown air—that he patented in 1962 as the Sensorama Simulator, designed to “stimulate the senses of an individual to simulate an actual experience realistically.” During the work on Sensorama, he also designed the Telesphere Mask, a head-mounted “stereoscopic 3-D TV display” that he patented in 1960. Although Heilig was unsuccessful in his efforts to market Sensorama, in the mid-1960s he extended the idea to a multiviewer theatre concept patented as the Experience Theater and a similar system called Thrillerama for the Walt Disney Company .

computer chip. computer. Hand holding computer chip. Central processing unit (CPU). history and society, science and technology, microchip, microprocessor motherboard computer Circuit Board

The seeds for virtual reality were planted in several computing fields during the 1950s and ’60s, especially in 3-D interactive computer graphics and vehicle/flight simulation. Beginning in the late 1940s, Project Whirlwind , funded by the U.S. Navy, and its successor project, the SAGE ( Semi-Automated Ground Environment ) early-warning radar system, funded by the U.S. Air Force, first utilized cathode-ray tube (CRT) displays and input devices such as light pens (originally called “light guns”). By the time the SAGE system became operational in 1957, air force operators were routinely using these devices to display aircraft positions and manipulate related data.

During the 1950s, the popular cultural image of the computer was that of a calculating machine , an automated electronic brain capable of manipulating data at previously unimaginable speeds. The advent of more affordable second-generation ( transistor ) and third-generation ( integrated circuit ) computers emancipated the machines from this narrow view, and in doing so it shifted attention to ways in which computing could augment human potential rather than simply substituting for it in specialized domains conducive to number crunching. In 1960 Joseph Licklider , a professor at the Massachusetts Institute of Technology (MIT) specializing in psychoacoustics, posited a “man-computer symbiosis” and applied psychological principles to human-computer interactions and interfaces. He argued that a partnership between computers and the human brain would surpass the capabilities of either alone. As founding director of the new Information Processing Techniques Office (IPTO) of the Defense Advanced Research Projects Agency (DARPA), Licklider was able to fund and encourage projects that aligned with his vision of human-computer interaction while also serving priorities for military systems, such as data visualization and command-and-control systems.

Another pioneer was electrical engineer and computer scientist Ivan Sutherland , who began his work in computer graphics at MIT’s Lincoln Laboratory (where Whirlwind and SAGE had been developed). In 1963 Sutherland completed Sketchpad , a system for drawing interactively on a CRT display with a light pen and control board. Sutherland paid careful attention to the structure of data representation, which made his system useful for the interactive manipulation of images. In 1964 he was put in charge of IPTO, and from 1968 to 1976 he led the computer graphics program at the University of Utah , one of DARPA’s premier research centres. In 1965 Sutherland outlined the characteristics of what he called the “ultimate display” and speculated on how computer imagery could construct plausible and richly articulated virtual worlds. His notion of such a world began with visual representation and sensory input, but it did not end there; he also called for multiple modes of sensory input. DARPA sponsored work during the 1960s on output and input devices aligned with this vision, such as the Sketchpad III system by Timothy Johnson, which presented 3-D views of objects; Larry Roberts’s Lincoln Wand, a system for drawing in three dimensions; and Douglas Engelbart ’s invention of a new input device, the computer mouse .

Within a few years, Sutherland contributed the technological artifact most often identified with virtual reality, the head-mounted 3-D computer display. In 1967 Bell Helicopter (now part of Textron Inc. ) carried out tests in which a helicopter pilot wore a head-mounted display (HMD) that showed video from a servo-controlled infrared camera mounted beneath the helicopter. The camera moved with the pilot’s head, both augmenting his night vision and providing a level of immersion sufficient for the pilot to equate his field of vision with the images from the camera. This kind of system would later be called “augmented reality” because it enhanced a human capacity (vision) in the real world. When Sutherland left DARPA for Harvard University in 1966, he began work on a tethered display for computer images ( see photograph ). This was an apparatus shaped to fit over the head, with goggles that displayed computer-generated graphical output. Because the display was too heavy to be borne comfortably, it was held in place by a suspension system. Two small CRT displays were mounted in the device, near the wearer’s ears, and mirrors reflected the images to his eyes, creating a stereo 3-D visual environment that could be viewed comfortably at a short distance. The HMD also tracked where the wearer was looking so that correct images would be generated for his field of vision. The viewer’s immersion in the displayed virtual space was intensified by the visual isolation of the HMD, yet other senses were not isolated to the same degree and the wearer could continue to walk around.

Virtual Human Interaction Lab

Our Mission

Since its founding in 2003, researchers at the Virtual Human Interaction Lab (VHIL) have sought to better understand the psychological and behavioral effects of Virtual Reality (VR) and Augmented Reality (AR). VR is finally widely available for consumers, and every day we are seeing new innovations. It is critical, now more than ever, that we seek answers to these important questions: What psychological processes operate when people use VR and AR? How does this medium fundamentally transform people and society? What happens when anyone can have a perfect experience at the touch of a button? And how can we actively seek to create and consume VR that enhances instead of detracts from the real world around us?

Recent Research

  • Wang, P., & Bailenson, J. (2025). Virtual Reality as a Research Tool (in press). In Reimer, L. Van Swol, & A. Florack (Eds.), The Routledge Handbook of Communication and Social Cognition.
  • Bailenson, J., Beams, B., Brown, J., DeVeaux, C., Han, E., Queiroz, A., Ratan, R., Santoso, M., Srirangarajan, T., Tao, Y., & Wang, P. (2024). Seeing the World through Digital Prisms: Psychological Implications of Passthrough Video Usage in Mixed Reality. Technology, Mind, and Behavior (TMB), 5(2). https://doi.org/10.1037/tmb0000129
  • Han, E., & Bailenson, J. (2024). Social Interaction in VR. Oxford Research Encyclopedia of Communication. https://doi.org/10.1093/acrefore/9780190228613.013.1489

OPINION article

Enhancing Our Lives with Immersive Virtual Reality

Mel Slater 1,2,3,4 *

  • 1 Event Lab, Department of Clinical Psychology and Psychobiology, University of Barcelona, Barcelona, Spain
  • 2 Institució Catalana de Recerca i Estudis Avançats (ICREA), Barcelona, Spain
  • 3 Department of Computer Science, University College London, London, UK
  • 4 Institut d’investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), Barcelona, Spain

Virtual reality (VR) started about 50 years ago in a form we would recognize today [stereo head-mounted display (HMD), head tracking, computer graphics generated images] – although the hardware was completely different. In the 1980s and 1990s, VR emerged again based on a different generation of hardware (e.g., CRT displays rather than vector refresh, electromagnetic tracking instead of mechanical). This reached the attention of the public, and VR was hailed by many engineers, scientists, celebrities, and business people as the beginning of a new era, when VR would soon change the world for the better. Then, VR disappeared from public view and was rumored to be “dead.” In the intervening 25 years a huge amount of research has nevertheless been carried out across a vast range of applications – from medicine to business, from psychotherapy to industry, from sports to travel. Scientists, engineers, and people working in industry carried on with their research and applications using and exploring different forms of VR, not knowing that actually the topic had already passed away.

The purpose of this article is to survey a range of VR applications where there is some evidence for, or at least debate about, its utility, mainly based on publications in peer-reviewed journals. Of course not every type of application has been covered, nor every scientific paper (about 186,000 papers in Google Scholar): in particular, in this review we have not covered applications in psychological or medical rehabilitation. The objective is that the reader becomes aware of what has been accomplished in VR, where the evidence is weaker or stronger, and what can be done. We start in Section 1 with an outline of what VR is and the major conceptual framework used to understand what happens when people experience it – the concept of “presence.” In Section 2, we review some areas where VR has been used in science – mostly psychology and neuroscience, the area of scientific visualization, and some remarks about its use in education and surgical training. In Section 3, we discuss how VR has been used in sports and exercise. In Section 4, we survey applications in social psychology and related areas – how VR has been used to throw light on some social phenomena, and how it can be used to tackle experimentally areas that cannot be studied experimentally in real life. We conclude with how it has been used in the preservation of and access to cultural heritage. In Section 5, we present the domain of moral behavior, including an example of how it might be used to train professionals such as medical doctors when confronting serious dilemmas with patients. In Section 6, we consider how VR has been and might be used in various aspects of travel, collaboration, and industry. In Section 7, we consider mainly the use of VR in news presentation and also discuss different types of VR. In the concluding Section 8, we briefly consider new ideas that have recently emerged – an impossible task since during the short time we have written this page even newer ideas have emerged! And, we conclude with some general considerations and speculations.

Throughout and wherever possible we have stressed novel applications and approaches and how the real power of VR is not necessarily to produce a faithful reproduction of “reality” but rather that it offers the possibility to step outside of the normal bounds of reality and realize goals in a totally new and unexpected way. We hope that our article will provoke readers to think as paradigm changers, and advance VR to realize different worlds that might have a positive impact on the lives of millions of people worldwide, and maybe even help a little in saving the planet.

1. Virtual Reality – Foundations

1.1. Introduction – Now Is the Time

“It’s a very interesting kind of reality. It’s absolutely as shared as the physical world. Some people say that, well, the physical world isn’t all that real. It’s a consensus world. But the thing is, however real the physical world is – which we never can really know – the virtual world is exactly as real, and achieves the same status. But at the same time it also has this infinity of possibility that you don’t have in the physical world: in the physical world, you can’t suddenly turn this building into a tulip; it’s just impossible. But in the virtual world you can …. [Virtual reality] gives us this sense of being able to be who we are without limitation; for our imagination to become objective and shared with other people.” Jaron Lanier, SIGGRAPH Panel 1989, Virtual Environments and Interactivity: Windows to the Future.

Although said more than 25 years ago by the person who coined the term “virtual reality” (VR) this statement about the excitement and potentiality that was apparently just around the corner in the late 1980s really does apply today. The dream at the time was a VR that would be available cheaply on a mass scale worldwide. The expectation and hope was very high. As Timothy Leary said in the following year’s SIGGRAPH Panel, imagining a time when the cost of an HMD and body-tracking equipment would be at low-end consumer level, “… suddenly the barriers of class and linguistics and education and nationality are gone. The kid in the inner city can slip on the telepresence hardware and talk to young people in China or Russia. And have flirtations with kids in Japan. In other words, to me there is something wonderfully democratic about cyberspace. If it’s virtual you can be anyone, you can be anything this time around. We are getting close to a place where that is feasible.” Unfortunately, the feasibility was not there, or at least not realizable at that time or anywhere near it. Now though the possibility is real, and for whatever reason now is the time.

During the past 25 years when VR was supposed to have “died” 1 masses of research into both the development of the technology and its application in a vast array of areas has been continuing. Scott Fisher, one of the VR pioneers in a 1989 essay reported in Packer and Jordan (2002) set out a number of applications: telepresence, where VR provides an interface through which the participant operates in a distant place embodied in a robot located there; data visualization; applications in architectural visualization; medicine including surgical simulation; education and entertainment; remote collaboration. These were all applications that were being worked on at the time. In this article, we set out how VR has been used in these and in a variety of other applications, applications that have already shown results that may be of significant benefit for individuals and society. With VR available on a mass scale, the potential for these benefits to have significant impact is now all the greater. However, as Jaron Lanier also said in the 1990 panel “… there’s really a serious danger of expectations being raised too high.” This remains true today, but we can have slightly less caution since research in the intervening quarter of a century has demonstrated results that stand on a reasonably solid scientific basis.

For an overview of a range of applications of VR (not all considered in this article), see the paper by one of the pioneers of VR, Frederick Brooks (1999) , with an updated discussion by Slater (2014) . What follows is not meant to be a survey of all possible results in all possible applications. We have selected areas that we believe are particularly important for demonstrating how VR has been and might be used to improve the lives of people, and to help overcome some societal problems, or at the very least help in scientific understanding of problems and contribute toward solutions. Readers might find that their favorite topic, research result, or paper has not been mentioned. This is because we have focused on illustrative results and developments rather than attempting to be comprehensive. Indeed, to write comprehensively about every section in this article would require something like the whole article length devoted to it. Even so without trying to be comprehensive, we have found it necessary to cite many references. We have concentrated on scientific papers in peer-reviewed journals. Immersive VR has shown an extremely impressive array of applications over the years, but what is important now, given the lesson of what happened in its first phase, is that we emphasize results that have some level of scientific support. The scope of this article is on the uses of VR; we are not presenting techniques, methods, interfaces, algorithms, or any of the technical side, except where this is relevant to explain a particular application or results.

Our thesis is similar to that presented in the quote from Jaron Lanier above: VR offers us a way to simulate reality. We do not say that it is “exactly as real” as physical reality, but rather that VR operates best in the space just below what might be called the “reality horizon.” If a virtual knife stabs you, you will not be physically injured, but you might nevertheless feel stress, anxiety, and even pain. If a virtual human unexpectedly kisses you, you may blush with embarrassment and your heart may start pounding, but it will be a virtual kiss only. On the other hand, as Lanier said, the real power of VR is to go beyond what is real: it is more than simulation, it is also creation, allowing us to step outside the bounds of reality and experience paradigms that are otherwise impossible.

Virtual reality is “reality” that is “virtual.” This means that, in principle, anything that can happen in reality can be programmed to happen “virtually,” a point that we return to in Section 8, since, for example, this is not currently the case for touch and force feedback. Therefore, writing about the potentialities inherent in VR is a difficult task, since it encompasses much of what can be done in physical reality (for good or evil). But even more than this, since it is VR, we emphasize that we can break out of the bounds of reality and accomplish things that cannot be done in physical reality. Herein lies its real power. With VR we can, for example, augment traditional physiotherapy by changing the patient’s apparent location and activity to something more engaging than what they are actually doing. In reality, a machine might be helping someone move their legs; in VR, they can be given the illusion that, rather than just moving their legs for therapy, they are playing soccer in the World Cup. This type of approach augments current practices. But VR can go well beyond this and introduce radical paradigm shifts.

In VR, we are currently at a stage similar to the transition between theater and movies, as pointed out by Pausch et al. (1996). Movies were originally just another way to show theater. It took a while before moviemakers developed a new grammar, ways of presenting a story unique to the medium. The same will be true of VR. Nowadays, a computer game in VR is typically just a traditional computer game displayed in a different medium. Eventually there will be a paradigm shift, one that we cannot know at the time of writing. Put another way, VR is revolutionary, even though it has taken 50 years to get from the initial idea in the lab to becoming a mass consumer product. How this product might develop and change the world in which we live remains unknown. In this article, we try to set out some of what has been done with VR and, to some extent, what might be done. We address positive uses of VR, while recognizing from the outset that there will be, as with any technology, uses that are morally repugnant. For example, vehicles can do serious damage when used improperly, even though their designed purpose is to transport people or facilitate commercial activity.

1.2. Essential Concepts

The idea of immersive VR in the form that we think of it today was foreshadowed by Ivan Sutherland in 1965 (Sutherland, 1965) and then realized with the “Sword of Damocles” HMD described in a paper published 3 years later (Sutherland, 1968). 2 This was not the first ever HMD – see, for example, a collection of pictures compiled by Stephen R. Ellis of NASA Ames, which includes one dating back to 1613. 3 Nor was it the first ever virtual environment system – see the multisensory Sensorama system by Morton Heilig, 4 Myron Krueger’s pioneering work on Artificial Reality (Krueger et al., 1985; Krueger, 1991), or the years of work on flight simulators (Page, 2000). However, it was the first that, although using almost totally different technology from that available today, introduced (and implemented) the concepts that make up a VR system. An HMD delivers two computer-generated images, one for each eye. The 2D images are computed and rendered with the appropriate perspective with respect to the position of each eye in the three-dimensionally described virtual scene; together, they therefore form a stereo pair. The two small displays are placed in front of the corresponding eyes, with optics that enable the user to see the images. The displays are mounted in a frame, which additionally has a mechanism that continually captures the position and orientation of the user’s head, and therefore the gaze direction (assuming that the eyes are looking straight ahead). Hence, as the head of the user moves, turns, or looks up and down, this information is transmitted to the computer, which recomputes the images and sends the resulting signals to the displays. From the point of view of users, it is as if they are in an alternate life-sized environment, since wherever they look, in whichever direction, they see this surrounding computer-generated world in 3D stereo, with movement and motion parallax. (The same can be done with spatialized sound.) In fact, from this point on we drop the term “user” and refer to the “participant.” VR is different from other forms of human–computer interface since the human participates in the virtual world rather than uses it.
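
As a purely illustrative sketch, and not the code of any particular system described here, the essential per-frame logic just outlined (read the tracked head pose, derive one viewpoint per eye, re-render) can be expressed as follows; the interpupillary distance and the rendering callback are assumptions for illustration only:

```python
# Minimal sketch of the head-tracked stereo loop described above (illustrative only).
import numpy as np

IPD = 0.064  # assumed interpupillary distance in metres

def view_matrix(eye_pos, head_rot):
    """World-to-eye transform for a viewpoint at eye_pos with head rotation head_rot (3x3)."""
    m = np.eye(4)
    m[:3, :3] = head_rot.T
    m[:3, 3] = -head_rot.T @ eye_pos
    return m

def render_frame(head_pos, head_rot, render_eye):
    """Offset each eye from the tracked head pose and render its image."""
    right_axis = head_rot @ np.array([1.0, 0.0, 0.0])   # head's right direction in world space
    for eye, sign in (("left", -0.5), ("right", +0.5)):
        eye_pos = head_pos + sign * IPD * right_axis
        render_eye(eye, view_matrix(eye_pos, head_rot))

# Called every frame with the latest tracker sample, so the imagery follows
# head movement and provides motion parallax.
render_frame(np.zeros(3), np.eye(3), lambda eye, V: print(eye, "\n", V.round(3)))
```

Real systems typically add lens-distortion correction, head-pose prediction to reduce latency, and per-eye projection matrices, but mapping one tracked head pose to two slightly offset viewpoints every frame is the conceptual core.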

In the 1980s, NASA Ames developed the VIEW system (Virtual Interface Environment Workstation) described by Fisher et al. (1987). 5 This was a full VR system with all the components recognizable today: a head-tracked, wide field-of-view, relatively lightweight HMD; audio; body tracking; tracked gloves that allowed participants to interact with virtual objects; tactile and force feedback (haptics); and the possibility of linking the VR to a telerobotics system (Section 6.4).

Also in the 1980s, the company VPL, led by Jaron Lanier, became a driving force in VR development, constructing the EyePhone HMD, tracked data gloves 6 for interaction, whole-body tracking, and a “reality built for two” (Blanchard et al., 1990). 7 They also developed a visual programming language that made it possible to build virtual environments with limited programming. The goal was for people to be able to construct their own virtual realities while in VR and immediately share them with multiple other people. It was probably through the work of VPL that the idea of VR became widely publicized.

The degree of excitement, creativity, speculation, vision of a positive future, and belief in the near-term mass availability of VR at that time cannot be overemphasized. Indeed, the ideas and realizations that were around in the late 1980s and early 1990s can be read anew today with a new freshness, and they are especially important because what was hoped for then (VR for the mass of people at low cost) is now becoming a reality. Readers are urged to read the proceedings of the two panels that took place at the SIGGRAPH conferences of 1989 (Conn et al., 1989) and 1990 (Barlow et al., 1990) to get an idea of the excitement and promise of those heady days of early VR.

Head-mounted display technology puts the displays close to the eyes. Another type of immersive VR system, referred to as a CAVE™ system (Cruz-Neira et al., 1993), was developed by Cruz-Neira et al. (1992). Here, images are back-projected onto the walls of a room approximately 3 m on each side (and front-projected onto the floor by a projector mounted above the open-topped cuboid). Typically, three walls and the floor are screens. The images are projected in alternation at, e.g., 90 frames per second, 45 frames for the left eye and 45 for the right. Lightweight shutter glasses alternately make one eye’s lens opaque and the other transparent, in sync with the projected images, and the brain fuses the two streams into a single 3D stereo scene. Through head tracking mounted on the glasses, the images are computed with the correct perspective for the position and orientation of the participant’s head. More than one person can be in the Cave simultaneously, each wearing stereo shutter glasses, but the perspective is correct only for the person wearing the head-tracked pair. Hence, Cave-like systems, like HMDs, deliver a surrounding 3D world. Of course, such a system has been far more expensive than an HMD system, both in terms of the space required and the cost (high-powered projectors, a multiprocessor computer system, complex software for lock-step stereo rendering across all the displays, equipment maintenance). Moreover, as the promise of HMD-driven VR diminished in the 1990s through the failure to develop high-quality displays at low enough cost and with acceptable ergonomics (such as weight), Cave-like systems came to be used as an alternative. However, unlike HMDs, each Cave was typically tailor-made to order (it depended on the available space, apart from anything else) and never became a mass product. Caves became one of the mainstays of VR research and applications from the late 1990s through the 2000s until recently. The applications we discuss below include both HMD and Cave systems.
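
To make the head-tracked perspective computation concrete, the following is an illustrative sketch (the corner coordinates, function name, and near-plane value are our own assumptions, not taken from the cited systems) of the standard off-axis frustum calculation for a single fixed screen such as a Cave wall:

```python
# Off-axis frustum for a fixed screen and a tracked eye position (illustrative sketch).
import numpy as np

def cave_wall_frustum(pa, pb, pc, pe, near=0.1):
    """pa, pb, pc: lower-left, lower-right, upper-left wall corners; pe: tracked eye,
    all in room coordinates. Returns left/right/bottom/top extents on the near plane."""
    vr = pb - pa; vr /= np.linalg.norm(vr)            # screen right axis
    vu = pc - pa; vu /= np.linalg.norm(vu)            # screen up axis
    vn = np.cross(vr, vu); vn /= np.linalg.norm(vn)   # screen normal, toward the viewer
    va, vb, vc = pa - pe, pb - pe, pc - pe            # eye-to-corner vectors
    d = -np.dot(vn, va)                               # perpendicular eye-to-screen distance
    left   = np.dot(vr, va) * near / d
    right  = np.dot(vr, vb) * near / d
    bottom = np.dot(vu, va) * near / d
    top    = np.dot(vu, vc) * near / d
    return left, right, bottom, top                   # feed to a glFrustum-style projection

# Illustrative 3 m front wall with the tracked eye 1.7 m high, 1 m from the wall:
print(cave_wall_frustum(pa=np.array([-1.5, 0.0, -1.0]),
                        pb=np.array([ 1.5, 0.0, -1.0]),
                        pc=np.array([-1.5, 3.0, -1.0]),
                        pe=np.array([ 0.2, 1.7,  0.0])))
```

Computing this separately for the left- and right-eye positions, for each wall, and alternating the resulting images in sync with the shutter glasses yields the head-tracked stereo described above.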

Conceptually, a minimal VR system places a participant into a surrounding 3D world that is delivered to a display system by a computer. At the very least, the participant’s head is tracked so that visual and auditory updates depend on head position and orientation. The computer graphics system delivers perspective-projected images individually to each eye, and the resulting scenario should be seen with correct parallax. Ideally, there should also be a means by which participants can effect changes in the virtual world. This may be accomplished with 3D tracked data gloves or a handheld device such as a wand (which is like a mouse or joystick but tracked in 3D space). Note that none of this says anything about how the world is rendered. Even with the wire-frame (lines only) images portrayed in Sutherland (1968), Ivan Sutherland noted that “An observer fairly quickly accommodates to the idea of being inside the displayed room and can view whatever portion of the room he wishes by turning his head… Observers capable of stereo vision uniformly remark on the realism of the resulting images.”

1.3. Immersion and Presence

Consciousness of our immediate surroundings necessarily depends on the data picked up by our sensory systems – vision, sound, touch, force, taste, and smell. This is not to say that we simply reproduce the sensory inputs in our brains – far from it; perception is an active process that combines bottom-up processing of the sensory inputs with top-down processing (including prior experience, expectations, and beliefs) based on our previously existing model of the world. A few seconds after walking into a room, we think that we “know” it. In reality, eye-scanning data show that we have foveated on a very small number of key points in the room, and that our eye-scan paths then tend to follow repeated patterns between them (Noton and Stark, 1971). The key points are determined by our prior model of what a room is. We have “seen” a small proportion of what there is to see; yet our perceptual system has inferred a full model of the room in which we are located. In fact, it has been argued that our model of the scene around us tends to drive our eye movements, rather than eye movements leading to our perceptual model of the scene (Chernyak and Stark, 2001). Stark (1995) argued that this is why VR works, despite relatively simplistic or even poor rendering of the surroundings. VR offers enough cues for our perceptual system to hypothesize “this is a room” and then, based on an existing internal model, to infer a model of this particular room using a perceptual fill-in mechanism. Recall the quote from Sutherland above about how people accommodated to, and remarked on the realism of, the wire-frame rendered scene displayed in the “Sword of Damocles” HMD.

The technical goal of VR is to replace real sense perceptions by the computer-generated ones derived from a mathematical database describing a 3D scene, animations of objects within the scene – represented as transformations over sets of mathematical objects – including changes caused by the intervention of the participant. If sensory perceptions are indeed effectively substituted then the brain has no alternative but to infer its perceptual model from its actual stream of sensory data – i.e., the VR. Hence, consciousness is transformed to consciousness of the virtual scenario rather than the real one – in spite of the participant’s sure knowledge that this is not real.

Effective substitution of real sensory data is an ideal. In practice, it depends on several factors, not least of which is which sensory systems are included: typically vision, often audition, more rarely touch, more rarely still force feedback, rarely smell, and almost never taste. 8 The typical VR system is primarily centered on vision, may have sound, and may have some element of tactile feedback. However, even vision alone is often enough for numerous applications, since for most people it is perceptually dominant anyway. So, participants in a VR typically encounter a situation where their visual system places them on, say, a roller coaster, while all other sense perceptions come from the surrounding physical environment. Nevertheless, they may scream and react as if they were on the roller coaster, even while talking to a friend standing nearby in reality.

Factors that are critical for effective sensory substitution have been known for many years (Heeter, 1992; Held and Durlach, 1992; Loomis, 1992; Sheridan, 1992, 1996; Steuer, 1992; Zeltzer, 1992; Barfield and Hendrix, 1995; Ellis, 1996; Slater and Wilbur, 1997): wide field-of-view vision, stereo, head tracking, low latency from head movement to display update, high-resolution displays, and, of course, the more sensory systems that are substituted the better. However, these technical factors (and there are others) serve one purpose: to allow the participant to perceive using natural sensorimotor contingencies (O’Regan and Noë, 2001a, b; Noë, 2004). What this means is that in order to perceive, we use our bodies in a natural way. We turn our heads, move our eyes, bend down, look under, look over, look around, reach out, touch, push, and pull, doing all or some subset of these things simultaneously. Perception is a whole-body action. Hence, the primary technological goal of VR is to support perception through such natural sensorimotor contingencies to the best extent possible, and of course this continually comes up against limitations. For example, if while wearing an HMD or in a Cave we look very closely at an object, eventually we will see pixels. Or, in most existing VR systems, if we touch some arbitrary virtual object we will not feel it.
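
To give a sense of what “low latency from head movement to display update” involves, the following back-of-the-envelope budget simply sums the main pipeline stages; the individual figures are illustrative assumptions, not measurements of any cited system:

```python
# Illustrative motion-to-photon latency budget (assumed figures, not measurements).
stages_ms = {
    "tracker sampling and filtering": 2.0,
    "application and rendering (one frame at 90 Hz)": 11.1,
    "display scan-out and persistence": 11.1,
}
total_ms = sum(stages_ms.values())
print(f"approximate motion-to-photon latency: {total_ms:.1f} ms")
```

The point is only that every stage of the pipeline contributes, so tracking, rendering, and display must all be fast if the visual field is to follow head movement convincingly.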

By an immersive VR system we mean one that delivers the ability to perceive through natural sensorimotor contingencies. This is entirely determined by the technology. Whether you can turn around 360°, all the while seeing a very low-latency, continuous update of your visual field in correspondence with your gaze direction, is completely a function of the extent to which the system can do this. We can classify systems in this way as being more or less immersive: system A is more immersive than system B if A can be used to simulate the perception afforded by B but not vice versa. Hence, in this sense an HMD is “more immersive” than a Cave, since there is something that can be represented in an HMD that cannot be represented in a Cave (even a six-sided Cave): the virtual representation of the participant’s body. In a Cave, when you look down toward yourself you will see your real body. In an HMD with head tracking, you can see a virtual body substituting your own (if this has been programmed). Moreover, the virtual body can be designed to look like the real one, or not, and with body tracking it can be programmed to move with real body movements, and so on. So, in this way an HMD-based system can (in an ideal sense) be set up to simulate a Cave, but not vice versa.

Immersion describes the technical capabilities of a system; it is the physics of the system. A subjective correlate of immersion is presence. If a participant in a VR perceives by using her body in a natural way, then the simplest inference for her brain’s perceptual system to make is that what is being perceived is the participant’s actual surroundings. This gives rise to the subjective illusion referred to in the literature as presence – the illusion of “being there” in the environment depicted by the VR displays – in spite of the fact that you know for sure that you are not actually there. This specific feeling of “being there” has also been referred to as “place illusion” (PI), to distinguish it from the multiple alternative meanings that have been attributed to the term “presence” (Slater, 2009). The term “presence” itself derives from “telepresence,” coined by Marvin Minsky (1980) to describe the similar feeling that can arise when embodying a remote robotic device in a teleoperator system.

Place illusion can occur in a static environment where nothing happens – just looking around a stereo-displayed scenario, for example, where nothing is changing. When there are events in the environment, events that respond to you, that correlate with your actions, and that refer to you personally, then, provided the environment is sufficiently credible (i.e., it meets expectations of how objects and people behave in the type of setting depicted), a further and independent illusion arises that we refer to as “Plausibility” (Psi): the illusion that the events are really happening. Again, this is an illusion in spite of the sure knowledge that nothing real is happening. A virtual human approaches and smiles at you, and you find yourself smiling back, even though too late you may say to yourself: why did I smile back, there is no one there?

The real-time update of sensory perception as a result of movement (e.g., head turning) gives rise to the sense of “being there” – the illusory sensation of being in the computer-generated environment (Sanchez-Vives and Slater, 2005). The dynamic changes following events caused by or happening to the participant can give rise to the illusion that the events are really happening – “plausibility” (Slater, 2009). With a technically good VR system (at a minimum, a wide field-of-view, high-resolution stereo display with low-latency head tracking), the “being there” aspect is essentially established for all but a few moments during an experience (Slater and Steed, 2000). Psi is much harder to attain, often requiring specific domain knowledge (e.g., the virtual representation of a doctor’s surgery used for training had better match doctors’ expectations if they are to accept it). In this article, we use PI to refer to the illusion of being there, whereas presence refers to both PI and Psi. Following Sanchez-Vives and Slater (2005), the behavioral correlate of “presence” is that participants behave in VR as they would in similar circumstances in reality. For a more formal treatment of PI, Psi, and presence, including experimental results, see Slater et al. (2010a). 9 These issues are taken up again in Section 8.

This capacity of VR to deliver experiences that give rise to an illusory sense of place and an illusory sense of reality is what fundamentally distinguishes it from all other types of media. It is true that in response to a fire in a movie scene, viewers’ hearts might race, with feelings of fear and discomfort. But they will not run out of the cinema for fear of the fire. In VR, about 10% of participants did run out when confronted by a virtual fire, even though the fire did not look realistic (Spanlang et al., 2007). In a movie that includes a fight between two strangers in a bar, audience members will not intervene to stop the fight. In VR, they do – under the right circumstances – specifically when the victim shares some social identity with the participant (Slater et al., 2013), which is itself remarkable because obviously there is no one real there with whom to share social identity.

So, VR is a powerful tool for the achievement of authentic experience – even if what is depicted might be wholly imaginary and fantastic. In a scenario with dinosaurs such as that shown in “Back to Dinosaur Island – Jurassic World with Oculus Rift,” 10 participants of course know that the situation is not real. Nevertheless, they will typically have the illusion of being there and the illusory sensation that the dinosaur’s actions are really happening.

Evidence over the past 25–30 years shows that PI and Psi can occur even with quite low-level systems. This is because VR relies on the brain “filling in” detail in response to the apparent situation, so that, just as in physical reality, people find themselves responding with physiological and reflex actions before they consciously reason out the situation – in this case, that nothing real is in fact happening. That reasoning, or high-level cognitive processing, occurs more slowly, after the autonomic bodily responses have already occurred. For example, put someone next to a virtual precipice and their heart will start pounding (Meehan et al., 2002), even though eventually, of course, they can tell themselves that it is not really there. VR effectively relies on this duality between very rapid brain activation that causes the body to respond (where by the body responding we include autonomic responses and thoughts generated in response to an apparent situation) and the slower cognitive process that reasons things out – a duality that is of course a vital mechanism for survival and operates normally in physical reality.

Since VR evokes realistic responses in people, it is fundamentally a “reality simulator.” By this we mean that participants can be placed in a scenario depicting potentially real events, with the likelihood that they will act and respond quite realistically. This can obviously be exploited for many applications, including rehearsal for actual events, planning, training, knowledge dissemination, and so on. However, VR is also an unreality simulator! The events that it depicts may be highly unlikely to happen, or unable to happen because they violate fundamental laws of physics, such as gravity. In VR, the physical laws can be simulated to the limit that computational power supports, or they can be changed or violated. Similarly, social conventions can be violated. A person might one day participate in a world that has never existed, such as Pandora from James Cameron’s movie Avatar. 11 But still, provided some fundamental principles are adhered to, giving rise to the illusions of being in the virtual place where real events are taking place, participants can nevertheless demonstrate realistic responses. At the simplest level, your heart is as likely to race when faced with a realistic depiction of a precipice (something that could happen) as when being chased by otherworldly monsters. In this way, VR dramatically extends the range of human experience well beyond anything that is likely to be encountered in physical reality. Herein lies the remarkable capability of VR: not just a reality simulator but an unreality simulator that can, paradoxically, give rise to realistic behavior.

In this article, we will outline some of the applications that have been developed that show the positive use of VR for the potential benefit of society and individuals – how VR can be used to enhance well-being across a vast range of aspects of life. VR as a reality simulator has its uses in various forms of training, in education, and in travel, some of which are discussed in the sections below. Moreover, VR as an unreality simulator can be used for many different types of entertainment, extending from passive to active. It should also be noted that VR as an unreality simulator can be used to solve “real” problems, as we will indicate later.

In each of the sections below, we will tackle a different domain of application, showing what has been done at the time of writing and giving some indication of the degree to which it has been successful (i.e., its scientific validation). Additionally, where relevant, we will discuss ideas and proposals indicating what could be done in that domain.

2. Science, Education, and Training

2.1. Psychology and Neuroscience

2.1.1. The Virtual Body

In Franz Kafka’s Metamorphosis, 12 Gregor Samsa woke up one morning lying in bed and found himself transformed into a horrible insect-like creature. The body felt like his own, but he had to learn how to move himself in new ways, and of course the transformation had an impact on his attitudes and behaviors and on those of the people who saw him. Using VR, it is possible to actually experiment with these types of body transformation, though rather more pleasant ones; indeed, in the early days of the VPL company, Jaron Lanier experimented with embodiment in a virtual lobster body.

The question of how the brain represents the body is fundamental in cognitive neuroscience. How does the brain distinguish that this object is “my” hand and part of my body, while that object, a cup, is not part of my body, and that other object is your hand and not part of me? Common sense would have us believe that our internal body representation is stable, something that changes only slowly through time, but experiments have shown that it is quite easy to shift the illusion of body ownership to objects that are not part of the body at all, or to a radically transformed body: our body representation is highly malleable.

A classic and very simple experiment demonstrating this is the rubber hand illusion (RHI), presented by Botvinick and Cohen (1998) in a one-page Nature paper that has had an enormous impact on the field (over 1800 citations on Google Scholar at the time of writing). It has led to a vast literature that exploits such illusions to understand how the brain represents the body; recent reviews are provided in Blanke (2012); Ehrsson (2012); and Blanke et al. (2015). In the RHI, the subject sits at a table on which a rubber hand is placed in an anatomically plausible position, approximately parallel to the subject’s corresponding real hand. The real hand is hidden behind a partition. The experimenter, sitting opposite the subject, taps and strokes the visible rubber hand and the hidden real hand synchronously in time and, as far as possible, at the same locations on the two hands. From the subject’s point of view, there is a rubber hand on the table in front of them, arranged so that it could be their own hand, and this hand is seen to be tactilely stimulated. Corresponding to the seen stimulation, there is actually felt stimulation on the real hand. The brain’s perceptual system resolves this conflict by integrating the two separate but synchronous inputs into one, resulting in the perceptual and proprioceptive illusion that the rubber hand is the subject’s hand. 13 , 14 This feeling, just like PI or Psi, is impossible to describe – it has to be experienced. If the visual and tactile stimulation are asynchronous, the illusion does not occur, or occurs to a much lesser extent. To elicit a behavioral measure of the illusion, the idea of “proprioceptive drift” was introduced in Botvinick and Cohen (1998). Before the stimulation, participants with eyes closed had to point, under the table on which their arm was resting, to the felt position of their hidden hand. After the stimulation, participants were asked to repeat the pointing procedure. The difference between the post- and pre-measures is called the proprioceptive drift, where greater values indicate that participants pointed more toward the rubber hand after the stimulation than before. Indeed, it was found that the drift was on average positive for those in the synchronous condition and zero for those in the asynchronous condition.
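
As a purely illustrative sketch of how such a measure can be scored (the measurement axis and example numbers are our own assumptions, not the cited paper’s materials), proprioceptive drift reduces to a signed difference between the post- and pre-stimulation pointing positions along the axis joining the real and rubber hands:

```python
# Illustrative scoring of proprioceptive drift (assumed setup, not from the cited study).
def proprioceptive_drift(pre_cm, post_cm, real_hand_cm=0.0, rubber_hand_cm=15.0):
    """Pointing positions in cm along the axis from the real hand toward the rubber hand;
    positive drift means the post-stimulation judgement moved toward the rubber hand."""
    toward_rubber = 1.0 if rubber_hand_cm > real_hand_cm else -1.0
    return toward_rubber * (post_cm - pre_cm)

# e.g., pointing 2 cm from the real hand before stimulation and 5 cm after:
print(proprioceptive_drift(pre_cm=2.0, post_cm=5.0))  # 3.0 cm of drift toward the rubber hand
```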

Armel and Ramachandran (2003) went on to show that subjects also respond physiologically to a threat to the rubber hand. They argued that our internal body representation is updated moment to moment based on the stimulus contingencies received. Synchronous multisensory stimulation leads to the hypothesis that the rubber hand might be our real hand, and the brain very quickly generates the corresponding illusion as a way to resolve the contradiction between the seen and felt synchronous stimulation. There are limitations – the rubber hand needs to look like a human hand, its position must be plausible, and so on – but the fundamental result, that we can have strong feelings of ownership over an object that we know for certain is not part of our body, is clearly demonstrated by this illusion.

Lenggenhager et al. (2007) 15 and Ehrsson (2007) 16 went on to show how similar multisensory techniques could be used to induce out-of-body illusions. Each of these studies used an HMD through which subjects saw a distant body, the HMD receiving video signals from cameras pointed toward that body. In the case of Lenggenhager et al. (2007), the distant body was a manikin with its back to the subject. The manikin was seen to be stroked on the back, and this was felt on the subject’s own back through synchronous stimulation by the experimenter. Subjects then had the strange illusion of being located at, or drawn toward, the manikin body in front of them. In the case of Ehrsson (2007), the video cameras were pointed at the back of the subject’s own seated body, so that from their perspective subjects saw their own body from behind. The experimenter synchronously stroked the subject’s real chest (out of sight) and visibly made similar strokes under the cameras. From the point of view of the subjects, they saw and felt stroking toward themselves (since their viewpoint was that of the stereo cameras), but they were apparently located behind their real body. Here, the visual and tactile information cohered to generate the illusion of being behind their own body. When the space under the camera was attacked with a hammer, participants responded physiologically (since the hammer would seem to be coming toward the illusory location of their chest). When the visual and tactile stimulation was asynchronous, neither the illusion nor the physiological response occurred to the same extent.

Following this, a form of VR was used by Petkova and Ehrsson (2008) to study body ownership with respect to the whole body (full body ownership): video cameras mounted on top of a manikin fed a stereo HMD worn by the participant, so that when participants looked down toward their real body they saw the manikin body instead of their own. This was accompanied by visuotactile synchrony, induced by applying tactile stimulation to the real body synchronized with corresponding visual stimulation of the manikin body. The result was a subjective illusion of ownership over the manikin body, demonstrated also by a physiological response when a knife threatened that body. The illusion diminished when the visuotactile stimulation was asynchronous.

The use of VR to transform the body was first realized by Jaron Lanier in the late 1980s. The importance of this work for cognitive neuroscience was not appreciated at the time, and it was never published scientifically, although see Lanier (2006); it is also referred to in Lanier (2010). Lanier used the term “homuncular flexibility” to refer to the finding that the brain can adapt to different body configurations and learn how to manipulate such an alien body – for example, manipulating the end-effectors of a lobster body representation by learning to use stomach muscles, or through combinations of different muscle activations. The extreme flexibility of the body representation had been studied in the 1980s by Lackner (1988). It was found that applying vibrations of around 100 Hz to a muscle tendon on the biceps leads the forearm to move in flexion, but if the movement is resisted, there is an illusion of movement of the forearm in the opposite direction (extension). Now suppose that both hands are holding the waist when such muscle spindle vibrations are applied. There is an illusion that both arms are extending, but since the hands are attached to the waist this is impossible; the way the brain resolves this is to generate the illusion of an expanding waistline. By vibrating on the other side of the muscle tendons, the arms can be given the illusion of flexing, resulting in a shrinking-waist illusion. Ehrsson et al. (2005) used these illusions with brain imaging to capture brain activation changes associated with such radical changes in the body. Tidoni et al. (2015) used these vibratory techniques in conjunction with VR as part of a developing program for the rehabilitation of disabled patients. This followed earlier work by Leonardis et al. (2012), who used such vibrations to induce illusory movements in conjunction with a brain–computer interface (BCI) motor-imagery paradigm: the participant imagines moving their arm, feels their arm moving through application of the vibration technique, and then sees the corresponding virtual arm move. This was part of an Embodiment Station (discussed in Section 6.5).

Regarding non-human body configurations, Ehrsson (2009) and Guterstam et al. (2011) showed, for example, that using the multisensory techniques associated with the RHI, it is possible to give participants the illusion of owning additional arms. Regarding body shape, Kilteni et al. (2012) 17 showed that it is possible to have an illusion of ownership over an asymmetric human body in which one arm is three times as long as the other, with participants responding by automatically withdrawing the arm when there is a threat to the distant hand. This illusion had first been implemented and experienced at VPL in the 1980s, although not published. Steptoe et al. (2013) showed how humans could adapt to having a tail, through embodiment using a Cave-like system in which the virtual body was seen from behind. Participants learned how to use the tail in order to avoid harm to the body. More recently, Won et al. (2015a) have continued to study homuncular flexibility, showing that people can learn to control virtual bodies through mappings that differ from the usual ones. Some implications of this across a range of fields are discussed in Won et al. (2015b).

Returning to the RHI, Ijsselsteijn et al. (2006) found that an illusion of ownership can be attained over a 2D projection of an arm on a table top when visuotactile synchronous stimulation is applied as in the RHI. Although the subjective illusion was reported, the proprioceptive drift effect did not occur. Using VR, Slater et al. (2008) showed that a virtual arm could be felt as owned by participants when it was seen to be stroked synchronously with the corresponding hidden real arm. This was achieved by displaying a virtual arm on a powerwall so that it appeared (in stereo) to project out of the participant’s real shoulder. A tracked wand was used to tap and stroke the participant’s hidden real hand, shown on the display as a virtual ball tapping the virtual hand. When this was done synchronously, the full illusion of ownership occurred, including proprioceptive drift; when done asynchronously, the illusion typically did not result.

In the full body illusion setup of Petkova and Ehrsson (2008), there was no head tracking, so participants had to look down in a fixed orientation toward their body in order to see the manikin body as substituting their real body. Slater et al. (2010b) carried out the first study of full body ownership using VR in which participants saw a virtual body that was spatially coincident with their own, viewed through a wide field-of-view, stereo, head-tracked Fakespace Wide5 HMD. 18 Hence, when they looked down toward themselves they saw a virtual body that substituted their actual (hidden) body, seen from the viewpoint of the eyes of that virtual body (coincident with their own). We refer to this as first-person perspective (1PP). The experiment also included visuotactile synchrony (participants felt their arm being stroked in synchrony with seeing their corresponding virtual arm stroked) or visuotactile asynchrony. There was also a condition in which the virtual body was seen from a third-person perspective (3PP), i.e., the virtual body was not spatially coincident with the real body but located to the left of the participant. In this setup, 1PP was clearly the dominant factor, although visuotactile synchrony made some contribution. Remarkably, the illusion occurred in spite of the fact that all the participants were adult males embodied in a young female body. 19 The difference between the results of Petkova and Ehrsson (2008) and Slater et al. (2010b) was taken up by Maselli and Slater (2013). The vital importance of 1PP for body ownership was also emphasized by Petkova et al. (2011) and considered further by Maselli and Slater (2014).

One of the major advantages of VR in this context, compared to using rubber hands or manikin bodies, is that virtual limbs or the whole virtual body can be moved. Sanchez-Vives et al. (2010) exploited this to show that the illusion of ownership over a virtual arm can be induced by synchrony between real and virtual hand movements (visuomotor synchrony). Participants wearing a data glove that tracked the movements of their hand and fingers saw a virtual hand (projected in stereo 3D on a powerwall) move in synchrony or asynchrony with their real hand movements. Synchronous movement resulted in an illusion of ownership, just as with visuotactile stimulation.

The same can be done for the body as a whole. Through real-time motion capture mapped onto the virtual body, when people move their real body they see the virtual body move correspondingly. Participants can see their virtual body moving by looking directly toward themselves and in virtual mirror reflections (and shadows) (Slater et al., 2010a). Kokkinara and Slater (2014) later showed that when there is a 1PP view of the virtual body, visuomotor synchrony is a more powerful inducer of the body ownership illusion than visuotactile synchrony.
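
As an illustrative sketch of how the synchronous and asynchronous conditions in such studies can be implemented (the class, frame rate, and delay value here are assumptions for illustration; asynchrony is also sometimes produced with prerecorded movements), the tracked pose either drives the avatar immediately or passes through a delay buffer first:

```python
# Illustrative visuomotor synchrony/asynchrony driver (assumed design, not from the cited studies).
from collections import deque

class AvatarDriver:
    def __init__(self, delay_frames=0):
        self.buffer = deque()
        self.delay = delay_frames            # 0 => synchronous condition

    def update(self, tracked_pose):
        """tracked_pose: mapping of joint name -> rotation from motion capture.
        Returns the pose to apply to the virtual body this frame, or None while
        the delay buffer is still filling."""
        self.buffer.append(tracked_pose)
        if len(self.buffer) > self.delay:
            return self.buffer.popleft()
        return None

sync_driver = AvatarDriver(delay_frames=0)          # virtual body mirrors the participant
async_driver = AvatarDriver(delay_frames=3 * 90)    # e.g., a 3 s lag at 90 frames per second
```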

We use the term virtual embodiment (or just embodiment) to refer to the process of replacing a person’s body with a virtual one. This requires, at a minimum, a stereo HMD with a wide field of view (so that the person can actually see their virtual body) and head tracking. Additional multisensory correlations, such as visuotactile and visuomotor synchrony, may be included. A technical setup to achieve this is described in Spanlang et al. (2014). Under the right multisensory conditions (such as 1PP, visuotactile, and/or visuomotor synchrony), virtual embodiment may give rise to the illusion of body ownership, a perceptual illusion that the virtual body feels as if it is the person’s own body (even though it may look nothing like their real body).

There has been a lot of work on building virtual embodiment technology ( Spanlang et al., 2013 , 2014 ), studying the conditions that can lead to such body ownership illusions ( Slater et al., 2008 , 2009 , 2010b ; Sanchez-Vives et al., 2010 ; Borland et al., 2013 ; González-Franco et al., 2013 ; Llobera et al., 2013 ; Maselli and Slater, 2013 , 2014 ; Pomes and Slater, 2013 ; Blom et al., 2014 ; Kokkinara and Slater, 2014 ) and exploring the effects of distortions away from the normal form of a person’s actual body ( Slater et al., 2010b ; Normand et al., 2011 ; Kilteni et al., 2012 ; Steptoe et al., 2013 ). There have also been studies on how illusions of body ownership might result in various changes to the real body.

For example, it had previously been shown that the RHI leads to a cooling of the real hand (Moseley et al., 2008) – though see also Rohde et al. (2013) – as well as an increase in its histamine reactivity (Barnsley et al., 2011). Cooling of several points on the body has also been reported in a 3PP full body illusion (Salomon et al., 2013). There is also evidence from VR suggesting that the 1PP full body ownership illusion can result in changes in temperature sensitivity (Llobera et al., 2013). It has further been shown that when the virtual hand is attacked during the full body illusion, there is an electrical brain response (EEG) corresponding to what would be expected when a real hand is attacked (González-Franco et al., 2013). Banakou and Slater (2014) showed that embodiment in a virtual body that is perceived from 1PP and that moves synchronously with the real body can result in illusory agency over an act of speaking. The virtual body was seen directly and in a virtual mirror. Participants spent a few minutes simply moving, with the virtual body moving synchronously with their movements in the experimental condition or asynchronously in the other. At some moment, the virtual body unexpectedly uttered some words (45 in total) with appropriate lip sync. Those in the visuomotor synchronous condition later reported a subjective illusion of agency over the speaking – as if they, rather than only the virtual body, had been the ones speaking. Moreover, when participants were asked to speak after this exposure, the fundamental frequency of their own voice shifted toward the higher frequency of the virtual body’s voice. Thus embodiment resulted in the preparation of a new motor plan for speaking, exhibited by participants in the synchronous condition changing the way they spoke after the experiment compared to before. This did not happen for those in the asynchronous condition.

Thus, VR offers a very powerful tool for the neuroscience of body representation; for a recent review of this field, see Blanke et al. (2015). It can be used to do, effectively and relatively simply, what is impossible by any other means: instantly produce an illusion of change to a person’s body. In the next section, we consider some of the consequences of changing representations of the self.

2.1.2. Changing the Body Can Change the Self

“… one of the fundamental differences between virtual reality and other forms of user-interface is that you’re really present in it, your body is represented and you can react with it as you, … And the fact that you’re in it, and that you define yourself is really fascinating. Oftentimes, being able to change your own definition is actually part of a practical application. Like in the world we did last year, where an architect was designing a day care center and could change himself into a child and use it with a child’s body and run faster and have different proportions and all that.” Jaron Lanier ( Barlow et al., 1990 ).

This quote is another illustration that much of what is being discussed today was already thought of, and even implemented, in the heady days of early VR. If VR can endow someone with a different body, what consequences does this have? We have already mentioned above that ownership over a rubber hand can lead to physiological responses, that there is some evidence pointing to the possibility of (very small) drops in temperature in the real hand, that the same can occur over different parts of the body in a virtual whole-body illusion, and that in the virtual arm illusion there may be a change in temperature sensitivity. But are there higher-level changes to attitudes, behaviors, even cognition?

Yee and Bailenson (2007) introduced a paradigm called the “Proteus Effect,” arguing that the digital self-representation of a person can influence their attitudes and behaviors in online and virtual environments. Essentially, the personality or type of body, or the actions associated with the digital representation, influence the actual behavior of the participant, both in the VR and later outside it. In their 2007 paper, they showed that participants embodied in an avatar with a face judged more attractive than their own moved closer to another person displayed in a collaborative virtual environment than did participants whose avatar face was judged less attractive. Similarly, being embodied in taller avatars led to more aggressive behavior in a negotiation task than being embodied in shorter avatars. These results also carried over to representations in online communities (Yee et al., 2009). Groom et al. (2009) embodied White or Black people in a Black or White virtual body, in the context of a scenario in which they were being interviewed for a job. The embodiment was through an HMD with head tracking, with the body seen in a mirror, and lasted for just over 1 min. Using a racial Implicit Association Test (IAT) (Greenwald et al., 1998), they found that after the exposure there was greater bias in favor of White for those who had been embodied in the Black virtual body. This difference did not occur when participants simply imagined being in a White or Black body. Hershfield et al. (2011) studied the effect on savings behavior of embodying people in aged versions of themselves. They embodied people in a virtual body that had either a representation of their own face or their face aged by about 20 years, with the virtual body shown in a virtual mirror. They found some modest evidence in favor of the hypothesis that being confronted with their future selves shifted participants’ behavior toward greater saving for the future. See also the example concerned with fostering exercise (Fox and Bailenson, 2009) in Section 3.1.2.

The theoretical basis of the Proteus Effect (Yee and Bailenson, 2007) is Self-Perception Theory [e.g., Bem (1972)], which suggests that people infer their attitudes by observing their own behaviors and the context in which these occur, and almost all the examples above do put people into behavioral situations. It has also been argued that attitudinal and behavioral correlates of transformed body ownership can be explained by people behaving according to how others would expect someone with that type of body to behave (Yee and Bailenson, 2007). Essentially, this comes down to stereotyping. For example, in the racial bias study of Groom et al. (2009), participants were put into precisely the kind of situation in which implicit bias against Black people compared to White people is known to operate.

In an experiment by Kilteni et al. (2013), people were embodied either in a dark-skinned, casually dressed (Jimi Hendrix-like) body or in a light-skinned virtual body. The body moved with visuomotor synchrony, and there was also synchronous visuotactile feedback through a drumming task: participants saw their virtual hands hit a virtual hand drum that was coincident in space with a real hand drum, so that when they hit the virtual drum they also felt it. 20 In this experiment, those embodied in the dark-skinned casual body showed significantly greater body movement while drumming than those embodied in the light-skinned body, which wore a formal suit. On the stereotype account, this result occurred because there is a greater expectation that people who look more like Jimi Hendrix will be more bodily expressive. However, self-perception theory and stereotyping cannot account for attitudinal changes observed in experiments where only the body changes and there are no particular behavioral demands within the study. Such results are better explained within the multisensory perception framework based on the research that has stemmed from the RHI.

Peck et al. (2013) carried out a racial bias study in which participants were embodied for 12 min in a Black body, a White body, a purple body, or no body at all. The body moved synchronously with the real body movements of the participants through real-time motion capture and was seen both directly, by looking toward the self with the head-tracked HMD, and in a mirror. 21 Those in the “no body” condition saw a mirror reflection of a Black body that moved asynchronously with their own movements. A racial IAT was administered some days before the experience and again immediately after. Average implicit racial bias significantly decreased only for those who had the Black embodiment. During the 12 min of exposure, participants had no task except to move and to look toward themselves and in the mirror while doing so. The only events were that 12 virtual characters walked by, 6 of them Black and 6 White. The results likely differ from those of Groom et al. (2009) because of the much longer exposure time, the full-body synchronous movement, and the absence of a task, so that the effect was based only on body ownership through multisensory perception. Given the contrary earlier result of Groom et al. (2009), it was hard to believe that just 12 min of this experience could apparently reduce implicit racial bias. However, it was independently shown by Maister et al. (2013) that the RHI over a black rubber hand also leads to a reduction of implicit racial bias in light-skinned people. For a review of this area of research, see Maister et al. (2015). Recent results demonstrate that the decrease in implicit bias lasts for at least 1 week after the exposure (Banakou et al., 2016).

van der Hoort et al. (2011) showed, using the multisensory techniques of Petkova and Ehrsson (2008), that when average-sized adults have an illusion of body ownership over smaller or larger manikin bodies, their perception of object sizes changes (in a small body objects seem larger; in a large body, smaller). Banakou et al. (2013) reproduced this result in immersive VR. 22 They showed that an adult’s illusion of ownership over a small body leads to overestimation of object sizes. However, if the body had the form of a (4-year-old) child, the size overestimation was approximately double that found when the body had an adult form but was shrunk to the same size as the child. Moreover, in the child embodiment case, there were changes in implicit attitudes about the self toward being child-like, substantially beyond the changes induced by the illusion of ownership of the adult-shaped body of the same size. In other words, only the form of the body (child-like compared to adult-like) had this effect.

The child and racial bias studies relied on an IAT – e.g., Greenwald et al. (1998) – a reaction-time measure in which participants have to rapidly classify stimuli belonging to two target concepts (e.g., Black and White faces) and an attribute (e.g., Positive and Negative words). The critical comparison is between blocks in which the response pairings differ: if responses are faster when Black shares a response key with Negative and White with Positive than when Black is paired with Positive and White with Negative, this indicates an implicit racial bias. Such implicit bias is found regardless of people’s explicit attitudes, which may not be discriminatory, there being a dissociation between implicit and explicit bias (Greenwald and Krieger, 2006). Indeed, in the explicit racial attitudes test in Peck et al. (2013) there was no evidence of explicit racial bias, although there was implicit racial bias in the pre-experiment IAT. When it comes to discriminatory behavior, IAT results have better predictive power in social interaction than explicit measures (Greenwald et al., 2009) – for example, with respect to eye contact, proxemics, and hiring practice (Ziegert and Hanges, 2005; Rooth, 2010). Even though the use and interpretation of the IAT may be controversial, there is evidence supporting its explanatory and predictive power (Jost et al., 2009).
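
As a simplified, purely illustrative sketch of how such reaction-time data are commonly summarized (one widespread approach is a D score: the difference in mean latencies between the two critical pairings divided by their pooled standard deviation; details such as error penalties and trial trimming are omitted, and the example latencies are invented):

```python
# Simplified, illustrative IAT-style D score (omits error handling and trial trimming).
import statistics

def iat_d_score(compatible_ms, incompatible_ms):
    """compatible_ms / incompatible_ms: response latencies (ms) from the two critical
    pairing blocks; a larger positive D indicates faster responses in the 'compatible'
    pairing, i.e., a stronger implicit association in that direction."""
    pooled_sd = statistics.stdev(compatible_ms + incompatible_ms)
    return (statistics.mean(incompatible_ms) - statistics.mean(compatible_ms)) / pooled_sd

# Invented example: faster responses when White+Positive/Black+Negative share a key.
print(iat_d_score(compatible_ms=[620, 580, 640, 600],
                  incompatible_ms=[720, 690, 750, 710]))
```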

With respect to embodiment in a child body, it is known that perception from the perspective of a smaller body results in size overestimation (van der Hoort et al., 2011), and indeed this occurred for both the adult and child conditions in Banakou et al. (2013). However, this does not explain why the overestimation in the child condition was almost double that of the adult condition. Since we have all been children, it is possible that the brain relies on autobiographical memory, thus making the world appear larger, and more rapidly finds associations between the self and child-like categories. However, with respect to the racial bias study (Peck et al., 2013), none of the participants had ever had dark skin, and yet 12 min of exposure was enough to significantly shift their IAT scores away from indications of bias. How is this possible? Our answer is that body ownership and agency over the virtual body are more than a superficial illusion, going beyond the perceptual to influence cognitive processing. It was argued in Banakou et al. (2013) and Llobera et al. (2013) that a fundamental mechanism may operate through the postulated “cortical body matrix” (Moseley et al., 2012), which maintains a multisensory representation of the space immediately around the body in a body-centered reference frame. This system is responsible for homeostatic regulation of the body and for dynamically reconstructing the body representation moment to moment based on current multisensory information. It was argued that if, as seems likely, such a system exists, it operates globally in a hierarchical, top-down fashion, so that attribution of the whole body to the self leads to attribution of the body parts to the self. Moreover, it was proposed that it also maintains an overall consistency between the multifaceted aspects of the self (personality, attitudes, and behaviors) and the body representation. We can view the IAT changes as direct evidence of this: changing the body apparently leads to changes in implicit attitudes. Indeed, as well as body ownership over a different body leading to changes in implicit attitudes, the documented changes in implicit attitudes are a very strong signal that there has in fact been a change in body ownership. A further study also hints at the likelihood that a change in body ownership can result in cognitive changes (Osimo et al., 2015): swapping bodies with a (virtual) Sigmund Freud led to an improvement in mood after a self-counseling process. 23

The use of embodiment, and the transformative power that it seems to have, is a fundamental feature separating immersive VR from other types of system, and recent scientific results do back up the statement made by Jaron Lanier, quoted at the head of this section, a quarter of a century ago.

2.1.3. Spatial Representation and Navigation

Virtual reality is especially suitable for the study of spatial representation and spatial navigation. This is at the core of the use of VR: to break down the walls of our room and transport us to another space, a space that we can explore with or without moving (see Section 6). Spatial navigation is useful in a number of areas and for a number of purposes: learning to navigate a model of a particular space, such as a foreign city to be visited; rehabilitation of spatial abilities after a neurological disorder or brain injury affecting this function; neuroscience research (understanding the basis of spatial cognition, memory, and sensory processing); city design; and treating post-traumatic stress disorder (PTSD) associated with a place, among others.

We may want to move around the city of Paris to become oriented before we travel to the real city, or we may not plan to go at all and just want to visit virtual Paris. First of all, how do we move around the city? We can move with a joystick, which allows us to navigate easily from our couch, for example. However, this method may not be optimal if we are trying to internalize, to “learn,” the spatial map of Paris; this is better achieved if we move our bodies, since doing so enhances theta frequencies in the hippocampus (Kahana et al., 1999). We can also navigate by walking-in-place (Slater et al., 1995; Usoh et al., 1999). Another technique for covering distances greater than the physical space in which the participant can move is “redirected walking,” where, for example, the system takes advantage of participants’ head turns to rotate the environment by more than the head turn, giving people the impression that they have walked in a long straight line when in reality they have walked in a curve, or vice versa (Razzaque et al., 2001, 2002) – research that is ongoing, e.g., Suma et al. (2015). Or we could eventually navigate by thought alone if the VR is connected to a BCI (Pfurtscheller et al., 2006). This is an excellent possibility for patients who are completely immobilized, since they can feel the freedom of navigating by thought, an experience very positively evaluated by users (Friedman et al., 2007; Leeb et al., 2007) (see Section 6.5).
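
The rotation-gain idea behind redirected walking can be sketched as follows; the gain value and the sample head turns are illustrative assumptions, not parameters from the cited studies:

```python
# Illustrative rotation gain for redirected walking (assumed gain, not from the cited work).
def virtual_heading_update(virtual_heading_deg, real_head_turn_deg, gain=1.1):
    """Rotate the virtual scene slightly more than the tracked head turn."""
    return virtual_heading_deg + gain * real_head_turn_deg

heading = 0.0
for turn_deg in [2.0, -1.5, 3.0, 0.5]:     # sample of tracked head turns, in degrees
    heading = virtual_heading_update(heading, turn_deg)
print(round(heading, 2))                    # 4.4 degrees virtually vs. 4.0 physically
```

Because each small discrepancy can be kept below the threshold of noticeability, the participant’s physical path can be bent back into the available tracked space while the virtual path remains straight.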

Understanding the brain mechanisms that underlie the generation of internal maps of the external world, the storage (or memory) of these maps, and their use in the form of navigation strategies is an important field in neuroscience (notice that the Nobel Prize in Physiology or Medicine 2014 was shared, one-half awarded to John O’Keefe, the other half jointly to May-Britt Moser and Edvard I. Moser “for their discoveries of cells that constitute a positioning system in the brain,” known as “place cells” and “grid cells”). Many of the associated studies have been carried out in rodents navigating laboratory mazes. But, how can we study navigation in humans? VR navigation has been found to provide a consistent and sensitive method for the study of hippocampal function ( Gould et al., 2007 ). The hippocampus is the main brain structure supporting spatial representation, a structure that is larger than average in London taxi drivers, who are famous for learning the map of London in great detail ( Maguire et al., 2000 ). Virtual cities have been used to determine, for example, that we activate different parts of the brain when we do wayfinding versus route following ( Hartley et al., 2003 ), and to identify spatial cognition deficits in disorders such as depression ( Gould et al., 2007 ) or Alzheimer’s disease ( Cushman et al., 2008 ).

Even though the brain processes underlying spatial navigation in rodents used to be studied in real mazes, in recent years VR for rodents has also become a valuable tool in basic research in neuroscience. This technique allows navigation of virtual spaces while the animals walk in place on a rotating ball, such that their heads are stable and their brains can be visualized while they do spatial tasks ( Harvey et al., 2009 ). Even more recent VR systems for rodents allow 2D navigation including head rotations, resulting in the activation of all the same brain mechanisms that had been identified for freely moving animals, while the animals remain stationary, walking in place ( Aronov and Tank, 2014 ). This approach allows detailed observation of specific brain cells during navigation.

Since navigation in virtual space can activate the same brain mechanisms as navigation in the real world, spatial “presence” can be successfully generated ( Brotons-Mas et al., 2006 ; Wirth et al., 2007 ). The illusory sensation of spatial presence allows VR to recreate the sensations associated with a particular place, which is useful for treating PTSD associated with that place. This has been widely used with soldiers who served in Iraq and Afghanistan ( Rizzo et al., 2010 ). Virtual spaces such as virtual Iraq, and in particular virtual navigation, have also been used for assessment and rehabilitation following traumatic brain injury, an injury also frequent among soldiers ( Reger et al., 2009 ). Assessment tasks and training tasks for rehabilitation often go hand in hand, and thus retraining in topographical orientation, wayfinding, and spatial navigation in VR is often used in cognitive rehabilitation following traumatic brain injury and neurological disorders ( Bertella et al., 2001 ; Koenig et al., 2009 ; Kober et al., 2013 ). Furthermore, it has been proposed that sustained experiential demands on spatial ability carried out in VR protect hippocampal integrity against age-related decline ( Lovden et al., 2012 ).

Virtual reality can be used to study the strategies that humans use for spatial navigation, which reveals the underlying geometry of cognitive maps. These maps could have a Euclidean structure preserving metrics and angles or a topological graph structure. To study this, experiments in the VENLab 24 ( Rothman and Warren, 2006 ; Schnapp and Warren, 2007 ) included a large area that allowed tracked displacements while in VR. A virtual environment representing a virtual hedge maze allowed identification of the location of certain landmarks. By creating two “wormholes” that rotate and/or translate a walker between remote places in the virtual hedge maze, they made the space non-Euclidean, in order to explore the navigational strategies used by different subjects. This is a good example of how VR can be used in this domain to achieve things that are impossible in reality.
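As an illustration of how such a wormhole can be realized, the sketch below applies a simple rigid remapping of the walker's pose between two remote maze locations; the coordinates and rotation are assumptions for illustration, not the actual VENLab configuration.

```python
import math

# Minimal sketch of a maze "wormhole": when the walker enters a trigger
# region, their position and heading are remapped to a remote location,
# optionally with an added rotation. All coordinates are illustrative
# assumptions, not the layout of the cited experiments.
def through_wormhole(pos, heading_deg, entry, exit_pos, rotation_deg):
    """pos, entry, exit_pos are (x, y) tuples; returns the remapped pose."""
    dx, dy = pos[0] - entry[0], pos[1] - entry[1]
    theta = math.radians(rotation_deg)
    # Rotate the local offset and heading, then re-anchor at the exit.
    rx = dx * math.cos(theta) - dy * math.sin(theta)
    ry = dx * math.sin(theta) + dy * math.cos(theta)
    return (exit_pos[0] + rx, exit_pos[1] + ry), (heading_deg + rotation_deg) % 360

# A walker stepping into the wormhole near (2, 3) re-emerges near (20, -5),
# turned by 90 degrees - violating the Euclidean consistency of the maze.
new_pos, new_heading = through_wormhole((2.4, 3.1), 45.0, (2, 3), (20, -5), 90.0)
print(new_pos, new_heading)
```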

The study of navigation and wayfinding in VR has a long history. A good starting point for those interested in following this up is the special issue of the journal Presence – Teleoperators and Virtual Environments, edited by Darken et al. (1998) . There is a difference between techniques for navigating effectively within a virtual environment, and the extent to which learning wayfinding through a space in a virtual environment transfers to real-world knowledge. Darken and Goerger (1999) pointed out that while the use of VR seems to produce the best results in terms of acquiring spatial knowledge of a terrain, when it comes to actual performance VR training often does not transfer, and can even make the situation worse. The authors, based on a number of studies, concluded that using specific VR techniques (e.g., a virtual compass) and relying on specific virtual imagery during the learning process does not transfer well to real-world wayfinding. However, those who use the VR to rehearse what they will later do in reality, to make a plan, without relying on detailed cues but rather transferring their experience into more abstract spatial knowledge, do a lot better. Ruddle et al. (1999) carried out a direct comparison between navigation on a desktop system and navigation with a head-tracked HMD. They found that although there were no differences in task performance between the two systems in the sense of measuring the distance traveled, the HMD users stopped more frequently to look around the scene and were able to better estimate straight line paths between waypoints. On the other hand, those using the desktop system seemed to develop a kind of tunnel vision. This difference between the two illustrates that immersive VR generates the types of kinesthetic and proprioceptive cues, i.e., body-centered perception – contributing to what we referred to earlier as natural sensorimotor contingencies for perception – that improve the chance of transfer of knowledge to real-world task behavior. Ruddle and Lessels (2009) carried out a further study where they compared navigation task performance in a virtual environment under three different conditions: (1) a desktop interface, (2) an HMD that was tethered, so that although participants could look around, they could not walk, and (3) a wide area tracking system that allowed participants to really walk. They found in both their reported experiments (which differed in the rendering style of the environment) that those who were able to really walk outperformed the other two groups. See also Ruddle et al. (2011b) . In fact, it was later found that walking (in this case enabled through an omnidirectional treadmill) clearly resulted in improved cognitive maps of the space compared to other methods ( Ruddle et al., 2011a , 2013 ) as predicted by Brotons-Mas et al. (2006) . In this context, it is worth noting that when comparing presence in a virtual environment through a head-tracked HMD, using (1) point-and-click techniques, (2) walking-in-place where the body moves somewhat like walking but without actually walking, and (3) real walking using wide area tracking, Usoh et al. (1999) found that subjectively reported PI (the component of presence referring to the sense of “being there”) was greater for both types of walking compared to the point-and-click technique. On some presence measures, real walking was preferred to walking-in-place, and as would be expected, real walking was the most efficient form of navigation.

A recent study by Sauzéon et al. (2015) used a powerwall-based VR system to test the effect of navigation mode on episodic memory of a virtual apartment. Participants had two methods for navigating through the apartment, either passively watching or using a joystick to actively explore. It was found that episodic memory was superior in the active condition. A similar setup using a virtual model of the city of Tübingen was shown to be advantageous in helping stroke patients to recover some wayfinding ability ( Claessen et al., 2015 ).

In a very famous experiment in 1963, Held and Hein (1963) took 10 pairs of neonatal kittens and arranged that one of each pair navigated an environment by actively moving around it, while the second was carried along passively in a basket by the movements of the first. They found that the kittens that were passively moved around, although in principle subject to the same visual stimuli as the active ones, developed significant visual-motor deficits. The authors concluded that “self produced movement with its concurrent visual feedback is necessary for the development of visually-guided behavior.” A similar observation was made in rats walking versus being driven in a toy car ( Terrazas et al., 2005 ), where simultaneous brain recordings showed that the spatial information carried per neuronal spike in place cells was smaller during passive navigation. This type of finding fits very well with findings in human studies in virtual environments. The conclusion from these studies is that simply putting someone in a VR in order to learn a particular environment can be effective provided that the form of locomotion includes active control by the participant. Consistent with our view that the most important factor behind PI is the affordance by the system of perception through natural sensorimotor contingencies, the more that the whole body can be involved in the process of locomotion, the better the result in transfer to the real world, and the formation of cognitive maps.

This is a vitally important area of research, and above we have only scratched the surface. As VR becomes used on a mass scale, one of its most frequent uses will probably be for virtual travel. If people simply use VR to observe an environment, then the form of interface for navigation does not matter much – other than adhering to excellent user interface principles suitable for VR – since of greater interest are the sights and sounds encountered. However, if people want to use it for rehearsal, to learn how to get from A to B, then they had better use a form of body-centered interface, at least equivalent to walking-in-place, but preferably one of the new generation of treadmill interfaces that are currently in development.

2.2. Scientific and Data Visualization

Immersive VR visualization of, and interaction with, data is relevant for scientific evaluation and also in the fields of training and education. It also allows active interaction with the representations, e.g., in drug design (see below). We can walk through brains 25 , 26 or molecules, and we can fly through galaxies. The requirements and level of interaction will vary depending on whether this “walk” is for professional use, for students, or for the general public. Immersion in the data could take place alone or in a shared environment, where we explore and evaluate with others. The data could be static, or we could be immersed in dynamic processes. The data should be viewable in multiscale form.

Three-dimensional representation of real or modeled data is important for understanding data and for the decision-making that follows this understanding, a relevant topic for a number of fields, especially at this time of exponentially growing datasets. Even when most of the analysis tools are computer-run algorithms, human vision is highly sensitive to patterns, trends, and anomalies ( van Dam et al., 2002 ). There is a substantial difference between looking at 3D data representations on a screen and being immersed in the data, navigating through it, interacting with it with our own body, and exploring it from the outside and the inside. It is logical to expect that when commercial VR systems are pervasive, there will be a trend for 3D data representations currently viewed on a flat screen to be visualized in immersive media. This, along with body-tracking systems, will allow a more natural interaction with the data. The extent to which this interaction with data goes further than the “cool” effect and adds real value to the comprehension, evaluation, and subsequent decisions taken as a result is an important issue to explore. It is also important to identify ways to maximally exploit the potential of this data immersion capability.

Specific examples of VR for data visualization include molecular visualization and chemical design. In a recently described system called the “Molecular Rift,” immersive 3D visualization of molecules is combined with gesture-based interaction with those molecules ( Norrby et al., 2015 ). In this version, participants were immersed in protein–ligand complexes. The system was evaluated by groups with experience in medicinal chemistry and drug design, and the study focused on improving gesture-based user interaction with the molecules rather than on evaluating improved performance in drug design or specific tasks. All 14 users found the system potentially useful for drug design and enjoyed using it, and none experienced motion sickness.

A more specific task in interaction with molecules was tested by Leinen et al. (2015) . In this study, the task of manipulating nanometer-sized molecular compounds on surfaces was tested under conventional scanning probe microscopy versus immersive visualization through an Oculus Rift HMD. The hand-controlled manipulation for extracting a molecule from a surface was improved by the visual feedback provided by immersive VR visualization: preestablished 3D trajectories were followed with higher precision, and deviations from them were better controlled, in the immersive than in the non-immersive system ( Leinen et al., 2015 ).

Moving from the nanoscale to the microscale, a specific task consisting of evaluating the spatial distribution of glycogen granules in astrocytes (glial cells, a type of brain cell) was carried out in an immersive, Cave-like system ( Cali et al., 2015 ). A section of the hippocampus of 226 μm³ at a voxel resolution of 6 nm was 3D reconstructed based on electron microscopy image stacks. A set of procedures and software was developed to allow such immersive reconstruction. The glycogen granules initially appeared to be randomly distributed, but they were discovered to be grouped into clusters of various sizes with particular spatial relationships to specific tissue features. The authors found the immersive evaluation of the 3D structure to be pivotal in identifying this non-random distribution ( Cali et al., 2015 ). The use of an interactive VR room also allowed multiple users to share and discuss the evaluation of the cellular details. In this study there were, however, no comparisons of task performance across different display media.

A comparison across three different media – 3D reconstructions rendered on (1) a monoscopic desktop display, (2) a stereoscopic visual display on a computer screen (fishtank), and (3) a Cave-like system – was carried out by Prabhat et al. (2008) . In this study, confocal images of Drosophila data – the egg chamber, the brain, and the gut – were evaluated by subjects who had to describe or quantify specific features, mostly related to spatial distribution, colocalization, and geometrical relationships. Subjects qualitatively preferred the more immersive environment, and task performance in it was also superior.

Immersive VR is of great value for surgical training, an aspect that is developed in Section 2.4 where specific examples are described. Visualization of the human body from an immersive perspective can provide medical students with an unprecedented understanding of anatomy, allowing them to explore the organs from micro to macro scales. Furthermore, immersive dynamic models of body processes in physiological and pathological conditions would result in an experience of “immersive medicine.”

Large-scale coordinated efforts to understand the brain are under way in projects such as the European Human Brain Project 27 , 28 and the BRAIN 29 Initiative of the United States. These projects are generating detailed multiscale and multidimensional information about the brain. Immersive VR will have a role in the visualization of these brain reconstructions or of the simulations built based on the experimental data. The Blue Brain Project (predecessor of the Human Brain Project) has already generated a full digital reconstruction of a slice of rat somatosensory cortex with 31,000 neurons, based on real neurons, and 37 million synapses ( Markram et al., 2015 ). This simulation generates patterns of neuronal activity that reproduce those generated in the brain and is amenable to immersive exploration of the structure and function of the brain.

Considering now a larger spatial scale, astronomical visualization in immersive VR has also been explored, both for professional and educational purposes ( Schaaff et al., 2015 ). These authors represented high-resolution simulations of re-ionization of an Isolated Milky Way-M31 Galaxy Pair, using various representations. For education, it is interesting that additional information can be added to the immersive displays.

The scientific and data visualization area presents an exciting prospect that will open new doors to our understanding. It will be important to evaluate the extent to which immersion and interaction with data results in a more thorough, intuitive, and profound understanding of structures and processes. But in any event, once this route is open, visualization of 3D models on a flat screen will feel like watching Star Wars on a small black and white TV (see Presentation S1 in Supplementary Material).

2.3. Education

Isaac Asimov’s novels Fantastic Voyage (1966), 30 based on the movie of the same name, 31 and Fantastic Voyage II: Destination Brain (1987) 32 portrayed humans shrunk to microscopic scale entering the body of a patient. VR and the detailed human body scans that now exist make this possible (of course, in virtual reality). McGhee et al. (2015) have used the “fantastic voyage” approach to support education of stroke patients about their condition by allowing them to move through a brain representation using the Oculus Rift HMD.

The area of application of VR in education is vast. For recent reviews, see Abulrub et al. (2011) , Mikropoulos and Natsis (2011) , Merchant et al. (2014) , and Freina and Ott (2015) . There are several reasons why VR is an excellent tool for education. First, it can change the abstract into the tangible. This could be especially powerful in the teaching of mathematics. For example, Hwang and Hu (2013) suggest that the use of a collaborative virtual environment has advantages for students learning geometrical concepts compared to traditional paper and pencil learning. However, it is not completely clear which type of VR system was used, although it appears to be of the desktop variety. Kaufmann et al. (2000) describe an HMD-based augmented reality system that provides a learning environment for spatial abilities including concepts from vector algebra. They provide anecdotal evidence for the effectiveness of the method. Roussou (2009) reviews the teaching of mathematics in VR using a “virtual playground” 33 , 34 and in particular describes an experiment on learning how to compare fractions by 50 children aged between 8 and 12 years in a Cave-like system ( Roussou et al., 2006 ). In a between-groups experiment, there were three conditions – children who learned using active exploration of the scenario ( n = 17), those who used the virtual playground but who learned by passively observing a friendly virtual robot ( n = 14), and another group who did not use VR but rather a Lego-based method ( n = 19). Quantitative analysis of the results found no advantage for any condition. A detailed qualitative analysis, however, suggested that the passive VR condition tended to foster a reflective process among the children, and great enjoyment in interacting with the robot, associated with better understanding.

The second advantage of VR in education is, notwithstanding the results of the virtual playground experiment, that it supports “doing” rather than just observing. One example of this is surgical training (see Section 2.4); for example, one review emphasizes how VR is increasingly used in neurosurgery training ( Alaraj et al., 2011 ), ideally in conjunction with a haptic interface ( Müns et al., 2014 ). Indeed, a European consensus program for endoscopic surgery VR training has been designed and agreed upon ( van Dongen et al., 2011 ). For an example in engineering education, see Ewert et al. (2014) .

The third advantage is that it can substitute for experiences that are desirable but practically infeasible, even if possible in principle. For example, if a class needs to learn about Niagara Falls one week, the Grand Canyon the next, and Stonehenge 35 the week after, it is infeasible for the class to visit all of those places. Yet, virtual visits are entirely possible, and such environments have been under construction ( Lin et al., 2013 ) including the idea of virtual field trips ( Çaliskan, 2011 ). It has certainly been suggested that immersive VR will change the nature of field trips, 36 and although there have been plenty of inventive demonstrations 37 , 38 , 39 it seems that as yet there have been no studies of their effectiveness, although perhaps the approach is so obviously advantageous that formal studies may be unnecessary.

The fourth advantage of VR in education involves breaking the bounds of reality as part of exploration. For example, exploring how activities such as juggling would change if there were a small change in gravity, or what it would be like to ride on a light beam in a universe where the speed of light was different. These ideas were envisaged and implemented for VR by Dede et al. (1997) ; however, there has been no more recent follow-up, which could now occur given the greater availability of VR equipment.

In this article, we have emphasized that the real power of VR is that it enables approaches that go beyond reality in a very fundamental way – more than just exploring strange physics. An example of this in the field of education was provided by Bailenson et al. (2008) , concerned with the delivery of teaching rather than the content. In a collaborative virtual environment, it is possible to arrange the virtual classroom so that every student is at the center of attention of the teacher, and where the teacher has feedback about which students are not receiving enough eye gaze contact. Additionally, virtual colearners who could be either model students or distracting students can influence learning, and the results overall showed that these techniques do improve educational outcomes. Bailenson and Beall (2006) referred to this type of technique as “transformed social interaction.”

Overall, for the reasons we have given, and no doubt others, VR is an extremely promising tool for the enhancement of learning, education, and training. We have not mentioned other possibilities such as music, dance, or various dexterous skills, but VR clearly has great potential in these areas as well.

2.4. Surgical Training

Within the area of VR for training, surgical training has been a thoroughly investigated field ( Alaraj et al., 2011 ). The use of simulations in surgical planning, training, and teaching is highly necessary. To give an illustrative example of why VR is necessary for surgery: interventional cardiology currently has no satisfactory training strategy other than learning on patients ( Gallagher et al., 2005 ). It seems that acquiring such training on a virtual human body would be a better option.

In the training of medical students and in particular of surgeons, there is a relevant potential role for VR as a tool to learn anatomy through virtual 3D models. Even though there are studies evaluating how useful VR can be for improving the learning of anatomy ( Nicholson et al., 2006 ; Seixas-Mikelus et al., 2010 ; Codd and Choudhury, 2011 ) – including studies proposing that VR could replace the use of cadavers in medical school – fully immersive and interactive systems have hardly been used up to now. Most of the 3D models used so far are for screen displays. Still, even the visualization of non-immersive 3D body models to study anatomy yields good results for learning, and therefore this is an area that should expand in the future, integrating fully immersive systems and different forms of manipulation of, and interaction with, the body models by trainees.

One of the first publications on VR in the field of surgery was on VR hepatic surgery training, and the words “Surgical simulation and virtual reality: the coming revolution” formed the title of both the article ( Marescaux et al., 1998 ) and the editorial ( Krummel, 1998 ) in the Annals of Surgery nearly 20 years ago. However, the revolution has not happened yet, although the field is now ready for this possibility.

Surgical training in VR requires a combination of haptic devices and visual displays. Haptic devices transmit forces consisting of both the forces exerted by the surgeon and a simulation of the forces and resistances of the various body tissues. A critical question is whether the skills acquired in virtual training are successfully transferred to the real world of surgery. Seymour et al. (2002) , in a highly cited article, provided one of the first demonstrations that this is the case. The performance of laparoscopic cholecystectomy gallbladder dissection was found to be 29% faster for VR-trained versus classically trained surgeons, while errors were six times less likely to occur in the VR-trained group. The system used, though (the Minimally Invasive Surgical Trainer-Virtual Reality – MIST VR – system, Mentice AB, Gothenburg, Sweden), was a 2D on-screen representation coupled to a haptic device used for the simulated surgery. These results are likely to improve with a more immersive system. To illustrate the value given to surgical training in VR, an FDA panel voted in August 2004 to make VR simulation of carotid stent placement an important component of training. In the same month, the Society for Cardiovascular Angiography and Interventions, the Society for Vascular Medicine and Biology, and the Society for Vascular Surgery all publicly endorsed the use of VR simulation in carotid stent training ( Gallagher and Cates, 2004 ).
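Returning to the haptic rendering mentioned at the start of this paragraph, the following is a minimal sketch of penalty-based force rendering for simulated tissue, a common scheme in such simulators (the stiffness and damping values are illustrative assumptions, not parameters of any system cited here).

```python
# Minimal sketch of penalty-based haptic force rendering for simulated tissue.
# Stiffness and damping values are assumed for illustration only.
def tissue_force(penetration_depth, penetration_velocity,
                 stiffness=300.0, damping=5.0):
    """Return the resistive force (N) the haptic device should display
    when the virtual tool has penetrated the tissue surface by
    penetration_depth (m) while moving at penetration_velocity (m/s)."""
    if penetration_depth <= 0.0:
        return 0.0                      # tool not in contact: no force
    # Spring term resists penetration; damper term resists penetration speed.
    force = stiffness * penetration_depth + damping * penetration_velocity
    return max(force, 0.0)

# Example: tool 2 mm into the tissue, advancing at 1 cm/s.
print(tissue_force(0.002, 0.01))  # 300*0.002 + 5*0.01 = 0.65 N
```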

The most common uses of VR for surgical training so far have been for laparoscopic procedures ( Seymour et al., 2002 ), carotid artery stenting ( Gallagher and Cates, 2004 ; Dawson, 2006 ), and ophthalmology [Eyes Surgical, based on Jonas et al. (2003) ]. In general terms, a large number of studies – of which only a few seminal ones are cited here – converge in finding positive results for VR training.

Most of the systems mentioned above concentrate on the local surgical procedure, e.g., how to place a stent or dissect the gallbladder. However, the reality in an operating room is more complex, and the surgery may need to be performed in situations where the patient’s physiological variables are not stable, where there is a hemorrhage, or even a fire in the surgical theater. The response of the surgical team to these situations will be critical for the well-being of the patient, and immersive VR should be an optimal framework for such training. VR can embed the specific surgical procedure, for example, the placement of the carotid stent, into various contexts and under a number of emergency situations. In this way, during training, not only the contents but also the skills and the experience of being in an operating room for many years can be transmitted to the trainees, who can include not only surgeons but all the medical personnel, each in their specialized roles.

There has been an explosion of research into the effectiveness of VR-based training for surgery, including meta-analyses and reviews ( Al-Kadi et al., 2012 ; Zendejas et al., 2013 ; Lorello et al., 2014 ), transfer of training ( Buckley et al., 2014 ; Connolly et al., 2014 ), and many specialized applications ( Arora et al., 2014 ; Jensen et al., 2014 ; Singh et al., 2014 ). This is likely to be a field that expands considerably.

3. Physical Training and Improvement

Here, we broadly address issues relating to physical training and improvement through sports and exercise, an area of growing interest to professional sports.

3.1. Sports

In the 1990 SIGGRAPH Panel ( Barlow et al., 1990 ), Jaron Lanier mentioned the idea of being able to play table tennis (ping-pong) with a remote player using networked VR. Of course this is now possible 40 and is certain to be readily available in the near future. For example, a version has been implemented using two powerwall displays plus tracking for each player ( Li et al., 2010 ). However, the opponent need not be a remote player in a shared VR but may be a virtual character. Immersive VR, at least with hand tracking if not full body tracking, has ideal characteristics for playing table tennis or other competitive sports, with the possible advantage of not having to spend time traveling to the gym.

There are several areas where VR can provide a useful advantage for sports activities. First, for leisure and entertainment – such as the table tennis example above. Second, for learning, training, and rehearsal. To the extent that VR supports natural sensorimotor contingencies at high enough precision, it could be used for these purposes. However, here it would be important to carry out rigorous studies to check whether small differences between the VR version and the real version might lead to poor skills transfer, or incorrect learning. For example, learning to spin or slam in table tennis requires very fine motor control depending on vision, proprioception, vestibular feedback, tactile feedback, force feedback, even the movement of air, and the sound of the ball hitting the table and the bat. Hence, a virtual table tennis system that is useful for skill acquisition or improvement must take into account all of these factors, or at least the critical ones if these are known. On the other hand, virtual table tennis could be thought of as a game in its own right, with little to do with the real thing. In this case, virtual table tennis would fall under the first category – entertainment and leisure. Additionally, as we will see in Section 6.3 in the context of acting rehearsal, although VR misses the fine detailed facial expression that is critical for successful acting, it is nevertheless useful for that aspect of rehearsal known as “blocking,” which is concerned more with the overall spatial configuration of the actors in the scenario. Similarly, even without being able to reproduce all the fine detail necessary for the transfer of training skills to reality, VR may be useful in team sports to plan overall strategy and tactics. A third utility of VR in sports is for rehabilitation following injury. We will briefly consider some of these areas.

In a comprehensive review of VR for training in ball sports, Miles et al. (2012) analyze eight challenges: (1) effective transfer of training; (2) the types of skills best learned in VR; (3) the technologies that result in the best quantifiable performance measures; (4) stereoscopic displays, which have both advantages and disadvantages (e.g., vision is not the same as in real life) – under which conditions should they be used?; (5) the role of fidelity – to what extent and under what conditions is it important?; (6) what kind of feedback should be delivered to the learner, and how and when is feedback appropriate?; (7) the effectiveness of teaching motor skills in the inevitable presence of latency and inaccuracies of representation; and (8) cost. The review points out several inevitable hurdles that must be overcome. For example, in training for field games such as American Football or soccer, the area of play is huge compared to the effective space in which someone in a VR system can typically move. A play on a field may involve running 25 m, whereas the effective area of tracking is, say, 2 m around a spot where the participant in VR must stand. Clearly, using a Wand to navigate or even a treadmill may miss critical aspects of the play (see also Section 2.1.3 for a brief discussion of different methods of moving through a large virtual environment). The paper reports many such pitfalls that need to be overcome and points out that studies have been inconclusive, and therefore there is a need for more research.

Craig (2013) reviews how VR might be used to understand perception and action in sport. She argues that VR offers some clear advantages for this and gives a number of examples where it has been successful, as well as pointing out problems. However, she wonders why, if it is successful, it has not been widely used in training up to now, with reliance instead on alternatives such as video. She points out that one problem has been cost, though this is likely to be ameliorated in the near term. A second problem is to effectively and differentially meet the needs of players and coaches, pointing out how VR action replays could be seen from many different viewpoints, including those of the player and of the coach, so that different relevant learning would be possible. Another advantage of VR would be to train players to notice deceptive movements in opponents, by directing attention to specific moves or body parts that signal such intentions. However, as mentioned above, she points out that it is critical to provide appropriate cues to avoid mislearning.

Ruffaldi et al. (2011) examined the theoretical requirements for successful training transfer in the context of rowing and described a haptic-enabled VR system with a single large screen for visual feedback. Rauter et al. (2013) described a different VR simulator for rowing. This was a Cave-like system enhanced with auditory and haptic capabilities, an earlier version of which was described in von Zitzewitz et al. (2008) . Their study, carried out with eight participants, compared skill acquisition between conventional training on water and training in the simulator. Examining the differences between the two, they concluded, with respect to both questionnaire and biomechanical responses, that the methods were similar enough for the simulator to be used as a complementary training tool, since there was sufficient and appropriate transfer of training using this method. Wellner et al. (2010b) described an experiment where 10 participants took part in simulated rowing. The novelty was that they added a virtual audience to test the idea that the presence of an audience would encourage the rowers in a competitive situation. They did not find a notable outcome in this regard, only a relatively high degree of presence felt by the participants. On similar lines, Wellner et al. (2010a) examined whether the presence of virtual competitors in a rowing competition would boost performance. No definite results were found, but according to the authors, the study had some flaws, and in any case the sample size was small ( n = 10). In spite of the null results, it is important to note how VR affords the possibility of experimenting with factors that would be possible, but logistically very difficult, to study in reality.

Another example of a use of VR that is logistically very difficult to achieve otherwise is allowing spectators to attend sports matches that they cannot physically attend (e.g., someone in the US who is a fan of English soccer). Instead, they can view them as if they were there – and have the excitement of seeing the game life-sized, first hand, and among a crowd of enthusiasts. Kalivarapu et al. (2015) implemented a system to display American Football in a high-resolution, six-sided, Cave-like system and also in an Oculus DK2 HMD. They carried out a study with 60 participants who were divided into three conditions: Cave ( n = 20), HMD ( n = 20), and video ( n = 20), where the game and associated events were shown on video. They concluded that the Cave and HMD experiences gave the participants greater opportunity to interact (i.e., view from different vantage points) compared to the video. Participants nevertheless experienced a greater degree of realism in the Cave, perhaps not surprising because of its greater resolution (and several orders of magnitude greater cost). On the whole, the HMD and Cave produced similar results across a number of aspects of presence. There is growing interest in the use of VR for viewing sports and other events, mainly using 360° video. See also the “Wear the Rose” system that gives fans the chance to experience rugby games first hand, 41 , 42 , 43 , 44 and an example of its use in American Football. 45

There have been many other applications of VR in sports – impossible to cover all of them here – for example, a baseball simulator, 46 for handball goalkeeping ( Bideau et al., 2003 ; Vignais et al., 2009 ), skiing ( Solina et al., 2008 ), detecting deceptive movements in rugby ( Brault et al., 2009 ; Bideau et al., 2010 ), and pistol shooting 47 ( Argelaguet Sanz et al., 2015 ), among others. A special issue of Presence – Teleoperators and Virtual Environments was devoted to VR and sports ( Vignais et al., 2009 ; Multon et al., 2011 ), which would be a good starting point for readers wishing to follow up this topic in more detail (see Presentation S2 in Supplementary Material).

3.2. Exercise

It is well known that aerobic exercise is extremely good for us, especially as we age. A meta-analysis of research relating to older adults carried out by Colcombe and Kramer (2003) showed that there is a clear benefit for certain cognitive functions. A more recent survey by Sommer and Kahn (2015) again showed the benefits of exercise for cognition for a variety of conditions. Yu et al. (2015) showed its utility for Alzheimer’s patients and Tiozzo et al. (2015) for stroke patients. However, repetitive exercise with aerobic benefits can be boring; indeed, Hagberg et al. (2009) found in a study that enjoyment is important in increasing physical exercise.

Virtual reality opens up the possibility of radically altering how we engage in exercise. Instead of just being on a stepping machine watching a simple 2D representation of a terrain, we can be walking up an incline on the Great Wall of China, or walking up the steps in a huge auditorium where we are excitedly going to watch a sports game, or even walking up steps to a fantasy castle in a science fiction scenario. Instead of just riding an exercise bike, we can be cycling through the landscape of Mars. 48 , 49 , 50

One use of VR for exercising would be an extension of approaches that have already been tested, normally referred to as “exergaming.” This involves, for example, connecting an exercise bike to a display, so that the actions of the rider affect what is displayed, e.g., faster pedaling leads to a corresponding depiction of increased optic flow on the display. Moreover, other motivational factors can be introduced such as virtual competitors (as we saw in the rowing example above). Anderson-Hanley et al. (2011) carried out a study with n = 14 older adults using a cybercycle (an exercise bike with a screen in front) and competitive avatars as in a race. 51 Their evidence suggested that this social factor tended to increase participants’ effort. Finkelstein and Suma (2011) used a three-walled stereoscopic display and upper body tracking of participants who had to dodge virtual planets flying toward them. Their experiment included n = 30 participants who played for 15 min. They found that the method produces increased heart rate (i.e., is aerobic) and motivates children and adults to exercise. Mestre et al. (2011) had n = 12 participants in an experiment that used an exerbike (with a large screen) where they compared video feedback with video and music feedback. They found that the addition of music was beneficial both psychologically (for motivation and pleasure) and behaviorally. Anderson-Hanley et al. (2012) carried out a formal clinical trial where they used “cybercycling,” as above, stationary cycling tied to a screen display, with older people ( n = 102). They were interested in testing, among other things, whether such cycling would improve executive function. They found that cognitive function was improved among the cybercyclers, and that it was likely that it would help to prevent cognitive decline compared to traditional exercise. Overall, while there has been significant work in this area, a systematic review carried out by Bleakley et al. (2013) found that although these types of approach are safe and effective, there is limited high-quality evidence currently available.
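As a concrete illustration of the exergaming coupling described above, the sketch below maps pedal cadence to forward speed in the virtual scene, so that faster pedaling produces greater optic flow; the gear ratio, wheel circumference, and update rate are illustrative assumptions rather than values from any cited system.

```python
# Minimal sketch of an exergaming coupling: pedal cadence measured from the
# bike is mapped to forward speed in the virtual scene, so faster pedaling
# produces greater optic flow. All constants are illustrative assumptions.
WHEEL_CIRCUMFERENCE_M = 2.1   # assumed virtual wheel circumference
GEAR_RATIO = 3.0              # assumed crank-to-wheel ratio
FRAME_DT = 1.0 / 60.0         # 60 Hz rendering update

def virtual_speed(cadence_rpm):
    """Convert pedal cadence (revolutions per minute) to virtual
    forward speed in meters per second."""
    wheel_revs_per_s = (cadence_rpm / 60.0) * GEAR_RATIO
    return wheel_revs_per_s * WHEEL_CIRCUMFERENCE_M

def advance(position_m, cadence_rpm):
    """Advance the rider's position along the virtual track by one frame."""
    return position_m + virtual_speed(cadence_rpm) * FRAME_DT

# Example: at 80 rpm the rider moves at 8.4 m/s through the virtual scene.
print(virtual_speed(80))   # (80/60) * 3.0 * 2.1 = 8.4
```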

It is one thing to be cycling or walking on a treadmill or exercise steps while looking at a screen, since this is anyway the case with most exercise machines even though the display may be very simplistic. Since the exerciser is not actually moving through space, looking at a screen should be harmless. However, it is not obvious that the same activities could be safely or successfully carried out while people are wearing an HMD, which not only obscures their vision of the real world but may also lead to a degree of nausea – which is all the more likely to occur while moving through virtual space. Shaw et al. (2015b) discussed five major design challenges in this field: first, to overcome the problem of possible sickness; second, to have reliable tracking of the body; third, to deal with health and safety aspects; fourth, the choice of player visual perspective; and fifth, the problem of latency. They described a system that was designed to overcome these problems, which used an Oculus DK2 HMD and was evaluated in an experimental study ( Shaw et al., 2015a ). This had n = 24 participants (2 females, ages between 20 and 24). They compared three setups: a standard exercise bike with no feedback, the exercise bike with an external display, and the bike with the HMD. The fundamental findings were that on several measures (calories burned, distance traveled) the two feedback systems outperformed the bike-only condition but did not differ from each other. The two systems with feedback were also evaluated as more enjoyable than the bike only, and the HMD was more enjoyable and was associated with greater motivation than the external display system. Only 4 out of 26 reported some minor symptoms of simulator sickness. As the authors pointed out, the study was limited, since the participants were almost all males within a limited age range, and it is not known how well these results would generalize. Bolton et al. (2014) also described a system that combined an Oculus Rift HMD 52 with an exercise bike and that was designed to reduce the possibility of motion sickness; however, no experimental results were given. There are several other applications without associated papers such as RiftRun 53 where participants run on the spot to virtually run through an environment.

Overall, as in other fields, the results are promising but far from conclusive. Irrespective of scientific studies, however, it is highly likely that immersive VR will be combined with personal exercise systems, since the relatively low cost now makes this possible, and some sports providers may decide that the “cool” factor makes such an enterprise worth the economic risk. Whether these are successful or not will obviously depend on consumer uptake.

Finally, as in other applications, we emphasize that VR allows us to go beyond what is possible in reality. Even cycling through Mars is just cycling. It is physically possible, if highly unlikely to be realized. Perhaps though there are fundamentally new paradigms that can really exploit the power of VR – the virtual unreality that we mentioned in the opening of this article. One approach is to use VR to implicitly motivate people toward greater exercise rather than as a means to carry out the exercise itself. Fox and Bailenson (2009) carried out a study where participants using a head-tracked HMD-based VR saw a virtual character from 3PP (i.e., across the room and looking toward them) with a face that was based on a photograph of their own face and that therefore had some likeness to themselves. Participants at various points were required to carry out physical exercises or not. While they did not carry out these exercises the body of their virtual doppelganger became fatter, and while they did the exercises the virtual body became thinner. There were n = 22 participants in this reinforcement condition, n = 22 in another condition where the virtual body did not change, and n = 19 in another condition where there was just an empty virtual room with no character. The dependent variable was the amount of voluntary exercise that participants carried out in a final phase of the experiment (during which there was also positive and negative reinforcement). It was found that the greatest exercise was carried out by the group that had the positive and negative reinforcement. In order to check that it was the facial likeness that accounted for this result, a second experiment introduced another condition, which was that the face of the virtual body was that of someone else. Here, the result only occurred for the condition of the virtual doppelganger. Finally, it could be argued that the participants in the voluntary exercise phase only exercised to avoid the unpleasant sensation of seeing their virtual doppelganger “gaining weight.” A third study examined participants’ level of exercise during a 24-h period after the conclusion of the study, through a questionnaire returned online. The setup was that they saw their doppelganger exercising on a treadmill, or a virtual character that did not look like themselves exercising, or a condition where their doppelganger was not doing any exercise but just standing around. The results suggested that those who saw their virtual look-alike exercising did carry out significantly more exercise in the real world in a period after the experiment than the other two conditions.

A second approach might be to use VR to provide a surrogate for exercising, rather than providing a motivation to exercise physically in reality. Kokkinara et al. (2016) illustrated what might be possible. Participants who were seated wearing an HMD and unmoving (except for their head) saw from 1PP their virtual body standing and carrying out walking movements across a field. They saw this when they looked down directly toward their legs that would be walking, and also in a shadow. In another condition they saw the body from a 3PP. After experiencing this virtual walking for a while they approached a hill, and the body walked up the hill. In the embodied 1PP condition participants had a high level of body ownership and agency over the walking, compared with the 3PP condition. More importantly, for this discussion, while walking up the hill participants had stronger skin conductance responses (more sweat) and greater mean heart rate in the embodied condition, compared to a period before the hill climbing, which did not occur for those in the 3PP. There were 28 participants each of whom experienced both conditions (there was another factor, but it is not relevant to this discussion).

Although there are caveats for both of these studies, the important aspect for our present purpose is that they illustrate how VR might be used to break out of the boundaries of physical reality and achieve useful results through quite novel paradigms. Of course it must always be better to carry out actual physical exercise rather than relying on your virtual body to do it for you. Yet sometimes, for example, on a long flight, virtual exercise might be the only possibility. Indeed, in this context, it has been found that participants who perceive their virtual body from 1PP in a comfortable posture are more likely to feel actual comfort than those who see their body in an uncomfortable posture ( Bergström et al., 2016 ). 54 The point is that VR has the power to go beyond what we can do in physical reality, even in principle, and become a radically new medium with different ways of thinking and novel ways of accomplishing life-changing goals.

4. Social and Cultural Experiences

There are many areas of social interaction between people where it is important to have good scientific understanding. What factors are involved in aggression of one group against another, or in various forms of discrimination? Which factors might be varied in order to decrease conflict, improve social harmony? It is problematic to carry out experimental studies in this area for reasons discussed below. However, immersive VR provides a powerful tool for the simulation of social scenarios, and due to its presence-inducing properties can be effectively used for laboratory-based controlled studies. Similarly, away from the domain of experiments, there are many aspects of our cultural heritage that people cannot experience – how an ancient site might have looked in its day, the experience of being in a Roman amphitheater as it might have been at the time, and so on. Again, VR offers the possibility of direct experience of such historical and cultural sites and events. In this section, we consider some examples of the application of VR in these fields, starting first with social psychology.

Loomis et al. (1999) pointed out how VR would be a useful tool for research in psychology and Blascovich et al. (2002) in social psychology. Here, the potential benefits are enormous. First, studies that are impossible in reality for practical or ethical reasons are possible in VR. Second, VR allows exact repetition of experimental conditions across all trials of an experiment. Moreover, virtual human characters programed to perform actions in a social scenario can do so multiple times. This is not possible with confederates or actors, who can become tired and also have to be paid. Although it is costly to produce a VR scenario, once it is done, it can be used over and over again. Also, the scenarios can be arbitrary rather than restricted to laboratory settings. Rovira et al. (2009) pointed out how the use of VR in social science allows for both internal and ecological validity. The first refers to the possibility of valid experimental designs including issues such as repeatability across different trials and conditions, the precision at which outcomes can be measured, and so on. The second refers to generalizability. For example, in a study of the causes of violence, VR can place people in a situation of violence, which cannot be done in a real-life setting. This means that there is the possibility of generalization of results out of the laboratory to what may occur in reality. In particular, VR can be used to study extreme situations that are ethically and practically impossible in reality. This relies on presence – PI and Psi – leading to behavior in VR that is sufficiently similar to what would be expected in real life under approximately the same conditions. In the sections below, we briefly review examples of research in this area.

4.1. Proxemics

How do you feel when a stranger approaches you and stands very close? The answer may vary from culture to culture, but at least in the “Anglo-Saxon” world you are likely to back away. Proxemics is the study of interpersonal distances between people, discussed in depth by Hall (1969) . He defined intimate, personal, social, and public distances that people maintain toward each other (and these distances may be culturally dependent). An interesting question is the extent to which these findings also occur in VR. If a virtual human character approaches and stands close to you, in principle this is irrelevant since nothing real is happening – there is no one there. Even if the character represents a physically remote actual person who is in the same shared virtual environment as you, they are not really in the same space as you, and therefore not close. We briefly consider proxemics behavior in VR because it is a straightforward but fundamental social behavior, and finding that the predictions of proxemics theory hold true for VR is a foundation for showing that VR could be useful for the study of social interaction.

There has not been a great deal of work on this topic that has exploited VR. Bailenson et al. (2001) showed that people tend to keep greater distances from virtual representations of people than from cylinders in an immersive VR. This work was continued in Bailenson et al. (2003) where it was shown that participants maintain greater distances from virtual people when approaching them from the front than from the back, and also greater distances when there is mutual eye gaze. Participants also moved away when virtual characters approached them. Readers might be wondering – so what? This is obvious. It has to be remembered, though, that these are virtual characters; no real social interaction is taking place at all. Further studies have shown that proxemics behavior tends to operate in virtual environments ( Guye-Vuilleme et al., 1999 ; Wilcox et al., 2006 ; Friedman et al., 2007 ).

McCall et al. (2009) showed that proxemics behavior can be used as a predictor of aggression. Proxemics distances of n = 47 (mainly self-identified as White) participants were measured from two White or two Black virtual characters. Subsequently, participants engaged in a shooting game with those virtual characters. It was found that there was a positive correlation between the distance maintained from the characters in the first phase and the degree of aggression exhibited toward them in the second phase but only for the condition where both virtual characters were Black.

Llobera et al. (2010) examined proxemics in immersive VR by measuring how skin conductance response varied with the approach of one or multiple virtual characters toward the participant, to different interpersonal distances. This was to test the finding of McBride et al. (1965) of a relationship between proximity and heightened skin conductance. It was found that there was a greater skin conductance response as a function of the closeness to which the characters approached participants and the number of characters simultaneously approaching. However, it was found that there was no difference in these responses when cylinders were used instead of characters. It was suggested that skin conductance cannot differentiate between the arousal caused by characters breaking social distance norms and the arousal caused by fear of collision with a large object (the cylinder) moving close to the participants.

Kastanis and Slater (2012) showed how a reinforcement learning (RL) agent controlling the movements of a virtual character could essentially learn proxemics behavior in order to realize the goal of moving the participant to a specific location in the virtual environment. Participants in an immersive VR saw a male humanoid virtual character standing at a distance and facing them. Every so often the character would walk varying distances toward the participant, walk away from the participant, or wave for the participant to move closer to him. 55 The RL behind the character gained a positive reward every time the participant stepped backwards toward a target position. The long-run aim was to get the participant to move far back to this target, unknown to the participant herself. The RL eventually learned that if its character went very close to the participant, then the participant would step backwards. Moreover, if the character was far away, then it sacrificed short-term reward by simply waving toward the participant to come closer to itself, because then its moving-forwards action would be effective in moving the participant backwards. Hence, the RL relied on presence (the participant moving back when approached too closely – from the prediction of proxemics theory) and learned how to exploit this proxemics behavior to achieve its task. For all participants, the RL learned to get the participant back to the target within a short time. This method could not have worked unless proxemics occurred in the VR. Having shown that this is the case, we move on to more complex social interaction.
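To make the mechanism concrete, the following is a minimal tabular Q-learning sketch of an agent of this general kind (an assumed toy model, not the implementation or parameters of Kastanis and Slater): states are discretized character–participant distances, actions are approach, retreat, and wave, and the agent is rewarded whenever the simulated participant steps backwards.

```python
import random
from collections import defaultdict

# Minimal tabular Q-learning sketch of an agent that exploits proxemics.
# The toy participant model and all parameters are illustrative assumptions,
# not those of the cited study.
ACTIONS = ["approach", "retreat", "wave"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

def participant_step(distance, action):
    """Toy participant model: backs away (rewarded) when the character gets
    too close, and walks two steps closer when waved at from a distance."""
    if action == "approach" and distance <= 1:
        return distance + 1, 1.0          # participant steps back: reward
    if action == "approach":
        return distance - 1, 0.0          # character closes the gap by one
    if action == "wave" and distance > 2:
        return max(distance - 2, 0), 0.0  # participant walks closer
    return distance, 0.0                  # retreat / wave at close range: no change

Q = defaultdict(float)

for episode in range(2000):
    distance = random.randint(0, 5)       # discretized character-participant distance
    for t in range(20):
        s = distance
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        distance, reward = participant_step(s, a)
        best_next = max(Q[(distance, act)] for act in ACTIONS)
        Q[(s, a)] += ALPHA * (reward + GAMMA * best_next - Q[(s, a)])

# After training, the greedy policy typically approaches when close and waves
# when far, mirroring the learned behavior described above.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(6)})
```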

4.2. Discrimination

Research suggests that VR can provide insights into discrimination by affording the opportunity for people to have simulated experiences of the world through another group’s perspective even if only briefly. For example, we saw earlier how simply placing White people in a Black body in a situation known to be associated with race discrimination led to an increase in implicit racial bias ( Groom et al., 2009 ). On the other hand, virtual body representation has been shown to be effective with respect to racial bias, where White people embodied in a Black-skinned body show a reduction in implicit racial bias ( Peck et al., 2013 ) 56 in a neutral social situation as we saw in Section 2.1.2.

More generally, the method of virtual embodiment has also been used to give adults the experience of being a child ( Banakou et al., 2013 ), has been shown to affect motor behavior while playing the drums ( Kilteni et al., 2013 ), and has been used to give people the illusory sensation of having carried out an action that they had in fact not carried out ( Banakou and Slater, 2014 ). Some of the work in the area of body representation applied to implicit bias is reviewed in Slater and Sanchez-Vives (2014) and Maister et al. (2015) .

A further question is whether embodied experiences as an “outgroup” member will actually translate into different behavior toward members of the group. Although not in the context of discrimination, there is some evidence from the work of Ahn et al. (2013) that this might be the case. They immersed people with normal vision into an HMD-delivered VR where they experienced certain types of color blindness. In three experiments ( N = 44, N = 97, and N = 57), they compared the effects of perspective taking, where participants simply imagined being color blind, to a condition where the display actually made them color blind in the virtual environment. They found that the VR experience did indeed result in greater helping behavior toward color-blind people, both within the experiment and in participants’ behavior afterwards (with a moderate effect size, a squared multiple correlation of around 10%). This illustrates how VR might be used to put people experientially into situations, and how this may influence their behavior compared with purely imaginal techniques.

4.3. Authoritarianism

Stanley Milgram carried out a number of experiments in the 1960s designed to address the question of how events such as the Holocaust could have occurred ( Milgram, 1974 ). He was interested in finding explanations of how ordinary people can be persuaded to carry out horrific acts. The type of experiment that he conducted involved experimental subjects giving apparently lethal electric shocks to strangers. These are very famous experiments that remain as topical today as in the 1960s: barely a week goes by without some mention of them in the news media, 57 or further research relating to them being reported. 58 There were several different variants of the experiment that Milgram designed. Typically, the experimental subject, normally recruited from the local town (near Yale University) rather than from among psychology students, was invited to the laboratory where he or she met another person, also supposedly recruited in the same way. The other person was in fact a confederate of the experimenter, an actor hired for the purpose, this being unknown to the subject. The experimenter invited the subject and the actor to draw lots to determine their respective roles in the experiment. It turned out that the subject was to play the role of Teacher, and the actor the role of Learner, but the outcome of this draw was fixed in advance. Then both the Teacher (subject) and Learner (actor) were taken to another room, where the Learner had electrodes placed on his body connected to an electric shock machine. It was explained that the idea was to examine how punishment might aid learning. The Learner was to learn some word-pair associations, and whenever he gave a wrong answer he was to be shocked. The Learner, acting in a jovial manner, explained that he had a mild heart condition, and the experimenter assured both Learner and Teacher that “Although the shocks may be painful they are not dangerous.” There are online videos showing the original experiment. 59

The Learner was left in the room, and the experimenter took the Teacher back into the main laboratory, closing the door to that room. He explained to the Teacher that he had to read out cues for the word-pair tests and whenever the Learner gave the wrong answer the Teacher should increase the voltage on a dial and administer an electric shock at that voltage. The voltages were labeled from 15 V (slight shock) to 375 V (danger: severe shock) to 450 V (marked “XXX”). During the course of the experiment, a tape was played giving the responses of the Learner. With the low voltage shocks there was no response. After a while though the Learner could be heard saying “ouch!” and as the voltage increased further he complained more and more vociferously, eventually saying that he had the heart condition and that his heart was starting to bother him. He shouted that he wanted to be let out of the experiment, and finally with the strongest shocks he became completely silent. If at any point the Teachers said that they felt uncomfortable or that they wanted to stop, the experimenter would say one of “The experiment requires that you continue,” “It is absolutely essential that you continue,” or “You have no other choice, you must go on” in a prescribed sequence. Participants generally found that the experience was extremely stressful, and even if they continued through to lethal voltages they were clearly very upset.

Prior to the experiment, Milgram had asked a number of psychologists how many people would go all the way and administer even lethal voltages to the Learner. The view was that only a tiny minority of people, those with psychopathic tendencies, would do so. In the version of the experiment described above, about 60% of subjects went all the way and administered the most lethal shocks. The results stunned the world since they apparently showed that ordinary people could be led to administer severe pain to another at the behest of an authority figure. There is a wealth of data and analysis and a description of many different versions of this experiment in Milgram (1974) , but the basic conclusion was that people will tend to obey authority figures. Here, ordinary people were being asked to carry out actions in a lab in a prestigious institution (Yale University) and in the cause of science. They tended to obey even if they found that doing so was extremely uncomfortable. Although this is not the place for a discussion of this interpretation, interested readers can find alternative explanations for the results in, for example, Burger (2009) ; Miller (2009) ; Haslam and Reicher (2012) ; and Reicher et al. (2012) .

Participants in these experiments were deceived – they were led to believe that the Learner was really just another subject, a stranger, and that he was really receiving the electric shocks. The problem was not so much the stress, but the fact that participants were not informed about what might happen, were not aware that they might be faced with an extremely stressful situation, and were ordered to continue participating even after they had clearly expressed the desire to stop. These and other issues led to strong criticism from within the academic community and eventually to a change in ethical standards – informed consent, the right to withdraw from an experiment at any moment without giving reasons, and care for participants including debriefing. See also a discussion of these issues as they relate to VR in Madary and Metzinger (2016) . Hence, these experiments on obedience cannot be carried out today for research purposes, no matter how valuable they might seem to be scientifically. Yet, the questions addressed are fundamental, since it appears that humans may be too ready to obey the authority of others, even to the extent of committing horrific acts.

In 2006, a virtual reprise of one version of the Milgram experiments was carried out ( Slater et al., 2006 ), with full ethical approval. The approval was given because participants were warned in advance about possible stress, could leave the experiment whenever they wanted, and of course knew for sure that no one in reality was being harmed, because in this experiment the Learner was a (poorly rendered) virtual female character displayed in a Cave-like VR setting. 60 The participants (Teachers) sat in the Cave system at a desk on which there was an electric shock machine. They saw the virtual Learner on the other side of a (virtual) partition, projected in stereo on the front wall of the Cave. They went through the same routine with the virtual Learner as in Milgram’s experiment, reading out cue words and administering “electric shocks” to the virtual Learner whenever she gave an incorrect word-pair association. Just as in the original experiment, after a while she began to complain and demanded to be let out of the experiment, and eventually seemed to faint. However, if participants expressed a wish to stop, no argument against this was given, and the experiment was stopped immediately.

Even though it was carried out in VR, many of the same results as the original were obtained, though with a lower intensity of stress. There were n = 34 participants, 23 of whom saw and heard the virtual Learner throughout the experiment, and 11 who saw and spoke to her initially but then a curtain descended, and they only communicated with her through text once the question and answer session began. All those who communicated by text gave all of the shocks. However, 6 of the 23 who saw and heard the Learner withdrew from the experiment before giving all of the shocks. In other words, 74% continued to the end, in spite of feeling uncomfortable, as shown by their physiological responses (skin conductance and electrocardiogram).

In the paper, it was argued that the gap between reality and VR makes these types of experiments possible. Presence (PI and Psi) leads to participants tending to respond to virtual stimuli as if they were real. But, on the other hand, they know that it is not real, which can also dampen down their responses. In debriefing, when participants were asked why they did not stop even though they felt uncomfortable, a typical answer was “Since I kept reminding myself that it wasn’t real.” From the original experiments of Stanley Milgram we know (at least for the 1960s around Yale in the US) how people actually responded. In VR, we see that they responded similarly, though not with the very strong and visible stress that many of the original participants displayed. Using VR, we can study these types of events, and how people respond to them, and construct predictive theory that may help us understand how people might respond in reality. The predictions can then be tested against what happens in naturally occurring events and the theory examined for its viability. This type of approach can also be used to gather real-time data about brain activity of people when faced with such a situation ( Cheetham et al., 2009 ).

4.4. Confronting Violence

You are in a bar or other public place and suddenly a violent argument breaks out between two other people there. It seems to be about something trivial. One man is clearly the perpetrator, and the victim is trying to calm down the situation, but his every attempt at conciliation is used by the perpetrator as a cue for greater belligerence. Eventually the perpetrator starts to physically assault the victim. What do you do? Suppose you are alone there? Suppose there are other people? Perhaps the victim shares some social identity with you, such as being a member of the same club or same ethnic group different to that of the aggressor. How do you respond? Do you try to intervene to stop the argument? Or walk away? How is your response influenced by these factors such as number of other bystanders or shared social identity with the victim or aggressor?

This area of research was initiated in the late 1960s provoked by a specific incident when apparently 38 bystanders observed a woman being murdered and did nothing to help. 61 Latane and Darley (1968) introduced the notion of the “bystander effect,” which postulates that the more bystanders there are at an emergency event such as this, the less likely it is that anyone would intervene, due to diffusion of responsibility, see also Darley and Latané (1968) . However, other researchers have also suggested the importance of social identity as a factor, the perceived relationships between the people involved, for example, see Reicher et al. (2006) ; Hopkins et al. (2007) ; Manning et al. (2007) ; and Levine and Crowther (2008) . There is a meta-analysis and review of the field by Fischer et al. (2011) .

As pointed out by Rovira et al. (2009) , one of the problems in this area of research is that for ethical and practical reasons it is not possible to carry out controlled experimental studies that depict a violent incident such as that described in the opening paragraph of this section. This is very similar to the situation of the Obedience studies discussed above. Instead, researchers have to study surrogates such as the responses of people to someone falling ( Latane and Rodin, 1969 ) or to an injured person lying on the ground ( Levine et al., 2005 ). However, these are not violent emergencies, so it may not be valid to extrapolate results from such scenarios to what might happen in actual violent emergencies. In VR it is possible to set up simulated situations, where we know from presence research that people are likely to react realistically to the events portrayed. King et al. (2008) suggested the use of Second Life to provide a non-immersive simulation of the bystander situation and described a case study where a particular person was victimized, to examine how the presence of bystanders mediated the level of helping offered. It was concluded that one reason that people did not intervene was that they thought this should be the responsibility of the Second Life monitors rather than the ordinary “citizens.” In another video-game setting, Kozlov and Johansen (2010) found that participants were less prone to helping behavior in the presence of larger groups of virtual characters. A possible problem with using video games, though, is that they do not mobilize the body – there are no natural sensorimotor contingencies, so that PI becomes at best something imaginal. In some applications this may not be important. However, when studying people’s responses to emergency situations it may be prudent to have whole-body engagement, some illusion that the body itself is present and at risk. Garcia et al. (2002) showed that merely imagining the presence of other bystanders produces a bystander effect, in that participants were less likely to help others after the end of the study if they had been primed to think about being in a group rather than being alone. Hence, it might be the case that video games are mainly aids to imagination and that results obtained from video games might be the same as those from imagination. Indeed, a result from Stenico and Greitemeyer (2014) suggests that this might be the case. This is not to say that such results are invalid, but that by themselves they are not convincing enough, and some experimental evidence is needed that does place participants in the midst of a violent emergency so that the various factors influencing their responses can be investigated. But, as we have said, this cannot be done, for practical and above all ethical reasons.

Slater et al. (2013) used immersive VR (a Cave-like system) to study the social identity hypothesis: that participants who share social identity with the victim are more likely to intervene to help than if they do not share social identity. The method to foster social identity with a virtual human character was through the use of soccer club affiliation. All of the n = 40 participants were fans of the English soccer team Arsenal. They were in a virtual bar where they had an initial conversation with a life-sized male virtual character (V). This character was either an Arsenal supporter depicted through his shirt and his enthusiastic conversation about Arsenal ( n = 20, “ingroup” condition), or a generic football fan, not a supporter of Arsenal ( n = 20, “outgroup” condition). After a while of this conversation another character (P) – also wearing a generic soccer shirt but not Arsenal – butted in and started to attack V especially because of his support of Arsenal. This attack increased in ferocity until after about 2 min it became a physically violent attack. 62 The main response variable was the number of times that the participant intervened on the side of V. It was found in accordance with social identity theory that those in the group where V was an enthusiastic Arsenal supporter intervened much more than those in the other group. There was a second factor, which was whether or not V occasionally looked toward the participant during the confrontation, but this had no effect. However, there was a positive correlation between the number of interventions and the extent to which participants believed that V was looking toward them for help – but only in the ingroup condition.

Since it is impossible to compare these results with any study in real life, of course their validity in the sense of how much they would generalize to real-life behavior cannot be known. However, experiments such as these generate data and concomitant theory, which can be compared in a predictive manner with what happens in real-life events. In fact, there is no other way to do this other than the use of actors – which as mentioned earlier can run into ethical and practical problems. Moreover, the knowledge gained from such experiments can be used also in the policy field, for example, providing advice to victims on how to maximize the chance that other people might intervene to help them, or of use to the emergency or security services on how to defuse such a situation. 63 It is a way to provide evidence-based policy, and if the evidence is not generalizable to real situations then with proper monitoring, the policy will ultimately be changed.

4.5. Cultural Heritage

“In today’s interconnected world, culture’s power to transform societies is clear. Its diverse manifestations – from our cherished historic monuments and museums to traditional practices and contemporary art forms – enrich our everyday lives in countless ways. Heritage constitutes a source of identity and cohesion for communities disrupted by bewildering change and economic instability.” (Protecting Our Heritage and Fostering Creativity, UNESCO). 64

The preservation of the cultural heritage of a society is considered a fundamental human right, and there is a Hague Convention on the protection of cultural property in the event of armed conflict. 65 As we have seen tragically in recent years, there has been massive and deliberate destruction of cultural heritage, two well-known examples being the Buddhas of Bamiyan 66 and the partial destruction of Palmyra. 67 UNESCO maintains a country-by-country world heritage list. 68

The ideal way to preserve cultural heritage is physical protection, preservation, and restoration of the sites. There has also been significant work over many years concerned with digital capture and visualization of such sites, which of course can be displayed in VR ( Ch’ng, 2009 ; Rua and Alvito, 2011 ). The first and obvious application of VR in this field is to allow people all over the world to virtually visit such sites and interactively explore them. This is no different from virtual travel or tourism, except for the nature of the site visited. This is also possible through museums that have VR installations. The second is the digitization of sites for future generations, especially those that are in danger of destruction through factors such as environmental change or conflict. The third type of application is to show how these sites might have looked fully restored in the past, and under different conditions such as the original lighting. For example, it is quite different to see the interior of a building or a cave with electric lighting than under the conditions that the inhabitants of that time would have seen it – by candlelight or fire. The fourth is to show how sites, both cultural heritage and otherwise, might look in the future, for example under different global warming scenarios.

This is a massive field and mainly concerned with digitization, computer vision, reconstruction, and computer graphics techniques. Here, we give a few examples of some of the virtual constructions that have been done and that potentially could be experienced immersively in VR.

An example of one type of application is described by Gaitatzes et al. (2001) who show how museum visitors can walk through various ancient sites visualized in a Cave-like system, in particular through the ancient Greek city of Miletus. 69 Carrozzino and Bergamasco (2010) give various examples of museum installations. 70 , 71 Interestingly, they speculate on a number of reasons why the use of VR in museum settings may not have been taken up so much in recent years: (1) cost; (2) it requires a team to be able to do this; (3) lots of space is needed for the installation; (4) visitors do not want to wear VR equipment; (5) it is a single-person experience; and (6) VR might be thought not serious enough to include in such august settings as museums. Apart possibly from the last issue, each of these problems is largely overcome with the advent of low-cost, high-quality HMDs with built-in head tracking. Of course it is still true that an interdisciplinary team is required to create the environments, although see Wojciechowski et al. (2004) and Dunn et al. (2012) for examples of how to do this. In particular, digital acquisition and rendering of cultural heritage sites requires a huge amount of data to be processed. An example of how this was handled for the site of the Monastery of Santa Maria de Ripoll in Catalonia, Spain, is presented in Besora et al. (2008) and Callieri et al. (2011) , and an example of a user interface for virtually navigating this site in Andújar et al. (2012) . A famous example of the virtual recreation of world heritage is the digitization and rendering of Michelangelo’s statue of David plus several statues and other artifacts of ancient Rome ( Levoy et al., 2000 ). The David statue 72 required 2 billion polygons for its representation, and the software is available as freeware from Stanford. 73

Sometimes a digital reconstruction is the only way to view a site. The ancient Egyptian temple of Kalabsha was physically relocated to preserve it from rising flood waters. Sundstedt et al. (2004) digitally reconstructed it to show it in its original site, and also how it may have looked two millennia earlier, including illuminating it with simulations of the type of lighting that may have been used at that time. Gutierrez et al. (2008) describe highly accurate illumination methods for heritage sites. Happa et al. (2010) review various examples of illuminating the past, together with descriptions of the methodology used.

Many examples of virtual cultural heritage have in the past been implemented for desktop or projection systems – though of course they could always be displayed immersively in HMDs. However, this raises other issues such as appropriate tracking, interfaces, and so on. A joystick for navigation, for example, is not always appropriate for an HMD (especially bearing in mind that movement without body action can sometimes be a cause of simulator sickness). Also, a screen display has the advantage that it can typically be of much higher resolution than what is possible in an HMD, where all the fine lighting and rendering detail might not even be perceivable. Webel et al. (2013) describe their experience with a number of the newer technologies for display and tracking in the virtual reconstruction of four different sites for display in a museum. They point out how traditional systems, such as tracking that requires the wearing of devices, and expensive Caves, are not always suitable for busy environments such as museums. However, low-cost, camera-based tracking systems do not require physical contact with visitors, and the use of the Oculus Rift HMD (in their application) allowed visitors to look around the virtual environment simply by turning their head rather than learning a joystick type of navigation method. In other words, these systems provide a natural means of interaction. As the authors wrote: “With the Oculus Rift as a display and head-tracking device, the user’s immersion can be extremely increased. The natural camera control just by turning the head, like one would do in the real world, lets users control this aspect without even thinking about it. The combination with natural interaction inputs with the Kinect or the Leap Motion enables the user to directly interact with the virtual world.”

Kateros et al. (2015) review the use of Oculus HMDs for cultural heritage and show how they were used in a number of applications and give insight into their ideas for preparing a user study. Casu et al. (2015) carried out such a study comparing the viewing of art masterpieces in the classroom through a non-immersive multimedia white board display and the Oculus Rift. Their experiment had n = 23 students in a between-groups design (12 saw the non-immersive display) and found that the HMD method was superior across a range of subjective questionnaire-based factors including motivation. Such studies, while useful, do not address the problem of the “wow factor,” i.e., using the HMD is novel, and it certainly provides a quite different experience than the multimedia white board. However, maybe once such systems become commonplace, the same results might not be obtained. There are no clear-cut answers, and it is not easy to establish criteria for the success or otherwise in comparing such systems (since there are many factors that vary between them). For example, Loizides et al. (2014) compared a powerwall with an Oculus Rift HMD for virtual visits to cultural heritage scenarios in Cyprus. They found that participants appreciated both types of display and especially the presence-inducing capabilities of the HMD. However, the HMD also led to greater nausea. As mentioned though, it is very difficult to make such comparisons because on the one hand the HMD had the natural interface for viewing (head tracking) but on the other hand much lower resolution. Moreover, the price ratio between powerwall and HMD was (at that time) 40 to 1, a factor not reflected in the difference in participant evaluation.

Finally, it should be noted that cultural heritage is not only buildings and statues. There are rich traditions in societies, passed down the generations, that are certainly no less important to preserve for the future – intangible heritage. An obvious example is folklore, though stories can ultimately be represented and preserved across generations in written form. There are other examples, however, such as folk dances, which can be preserved through younger generations learning them from their elders but which leave no form for others to experience. Aristidou et al. (2014) show how folk dancing can be digitally captured and represented. 74 They concentrate on the technical aspects, but clearly such efforts can be portrayed immersively (see Presentation S3 in Supplementary Material).

5. Moral Behavior

Sometimes in our professional and personal lives we are faced with problems that cannot be answered by any kind of evidence-based scientific reasoning. The science can provide information, but it cannot determine what should be done. Imagine that there is a nuclear reactor providing power for millions of people, and that the science determines that in the next 10 years there is a 5% chance that it will explode causing massive contamination. There are no resources to repair it and no alternatives. It can be decommissioned, and in the short to medium term this will cost many lives and great suffering. It can be left to run, with the corresponding risk. The science can determine the level of risk, but it cannot determine the action. In military or police action, there is the issue of “collateral damage.” Action to resolve one kind of threat that might save many lives may indeed cost many lives in its execution. The science can inform about relative risks and costs, but it cannot determine what is the right thing to do.

How people “should” and do make decisions under such conditions of moral uncertainty is a subject of study in moral philosophy and neuroscience. Normally, abstract situations are used for reasoning or for gathering evidence about people’s responses. A famous example is the “trolley problem,” 75 where you have to make a choice between allowing a runaway trolley (or tram or train…) to run over and kill five unaware people in its path or diverting it to kill another person ( Foot, 1967 ; Thomson, 1976 ). What do you do? Suppose the trolley were running toward the one person, but there were five others on another track. Would you divert the train to save the one but kill the five? According to survey evidence ( Hauser et al., 2007 ), most people will choose the action that saves the greatest number – five rather than one. 76 Suppose, however, that to save the five you have to push someone else onto the track to divert the train. In this case, few people will choose to take that action.

Philosophers distinguish between utilitarian and deontological principles. The first states that it is best to take the action that maximizes the greatest good, i.e., is concerned with consequences (the end justifies the means). The second emphasizes rather that an action in itself must be ethical, based on universal maxims. For example, if it is wrong to steal then it is wrong to steal in any circumstances, irrespective of possible beneficial outcomes. See Hauser (2006) for an exposition of these various principles in the context of psychology and neuroscience. Although sacrificing one person to save five is the utilitarian solution, people also do act out of deontological principles – which is why few support actively pushing someone onto the track even though the outcome is exactly the same in utilitarian terms. Moreover, choosing to take the action of diverting the train to save five rather than one has the same outcome as not choosing to divert the train when it is running toward one with five on another track ( omission ). However, omission could be argued to be both utilitarian (five are saved rather than one) and deontological (not personally taking an action that would kill).

These discussions have been going on for centuries. But how can we know what people would actually do? As we saw in the example of the Stanley Milgram Obedience experiments (Section 4.3), what people say they would do and what they actually do when faced with a situation are not necessarily the same. Below we give some examples where VR has been used, relying on its presence-inducing capabilities, to face people with such dilemmas so that their behavior can be observed. Of course, this does not solve the moral problem of what the “right” behavior should be, but rather can inform about what people actually do, and ultimately the factors and brain activity behind this.

5.1. Virtual Representations of Moral Dilemmas

Transforming a short verbal description of a scenario such as the trolley problem into VR is non-trivial. There are “five people” – which people? Gender? Age? Ethnicity? Social class? How do they look? What are they doing? Why are they there? There is a trolley or train – exactly how does it look? How fast is it going? What is the surrounding scenery? The experimental subject can divert the train – exactly how? Which action needs to be taken? How can the designer be sure that the subject will even be looking in the necessary direction? How can it be set up so that the subject sees the five and also sees the one? Doing something in VR means making it concrete and specific, obviously changing the scenario – which in one case is dependent on the imagination of the subject in response to a statement in a questionnaire, but in the other is there to be seen and heard.

Navarrete et al. (2012) implemented a version of the trolley problem, making all of the above choices but staying true to the story line, and they carried out an experiment where participants were faced with the choice between saving five or one. 77 There were n = 293 participants who experienced the scenario in an HMD-based system (NVIS). This was a between-groups experiment where one group experienced the action condition (they could act to save five) and the other group the omission condition (if they did not act five would be saved). Just over 90% of subjects chose the utilitarian solution in line with questionnaire-based results. However, those who had to actively save the five showed greater arousal (skin conductance levels) than those who could save the five by doing nothing. Moreover, the greater level of arousal was associated with a lower propensity to take the utilitarian outcome. This could indicate that following the utilitarian path leads to greater internal conflict within participants, but following it without simultaneously violating deontological principles is a less stressful choice. Ideally, in order to rule out the effect on arousal simply of carrying out the action there should be a condition that equalizes the level of physical action across the conditions. However, the important point is that such studies can be carried out at all.

Pan and Slater (2011) portrayed a dilemma equivalent to the trolley problem. Participants were taught how to control a platform that operated as an elevator in an art gallery. The gallery consisted of two floors, ground and upper level. Virtual characters entered and could either ask to be taken to the upper level to view the paintings there or remain on the ground floor. At one point – in the Action condition – there were five characters on the upper level and one on the ground level. A seventh person entered and asked to be taken to the upper level. While still on the elevator, that character raised a gun and started to shoot toward all those on the upper level. The participant could leave the shooter there (risking the five) or bring the elevator down (risking the one). The Omission condition was similar except that at the critical moment there was one character upstairs and five downstairs. To avoid the possibility that the types of people represented by the virtual visitors might influence the results, they were portrayed as stick figures, so that characteristics such as those mentioned above – age, gender, etc. – could not be inferred. This was a between-groups experiment with 36 participants and two factors: whether the situation was portrayed in a 4-screen Cave-like system or on a single PC screen, and whether the condition was Action or Omission. Running such an experiment in VR really illustrates how different it is from telling people a story and asking for their response. For those in the Cave, the fundamental reaction was confusion or panic, illustrated by the fact that 61% of them carried out multiple actions in response to the shooting compared to 33% of those in the desktop condition. However, taking into account the final resting point of the platform, 89% of those in the Action condition in the Cave brought the lift down, whereas 22% did so in the Omission condition. For those in the desktop condition the equivalent proportions were 67 and 22%. The differences between Cave and desktop were not significant, although as this was a pilot experiment the sample sizes were small. This experiment was featured in a BBC Horizon documentary “Are You Born Good or Evil?” where people naïve to the experiment were filmed. More than the statistics, their reactions pointed to the fact that they did actually experience a genuine dilemma. 78 , 79 A more sophisticated version of this setup was repeated in an HMD-based study ( Friedman et al., 2014 ) concerned with embodiment and time travel, where realistic virtual characters were portrayed. Responses to the dilemma were similar to those in the other studies. In these studies it has been found that people become more utilitarian in VR than in their answers to a questionnaire – i.e., they are more likely to adopt a decision depending on the outcome (saving five rather than one). The same was found in another study that used desktop VR: subjects were more likely to make utilitarian decisions in VR than when the same scenario was described textually. In other words, although participants judged it less acceptable to sacrifice one person to save five when this dilemma was presented verbally, when it came to their actual action in VR they were more likely to do so. There is therefore a division between what people say they would do and what they actually do when faced with the situation. This illustrates what VR is useful for in these types of context.

Finally, Skulmowski et al. (2014) used a screen-based system to situate participants in a trolley that they could control to avoid colliding with people standing on branching tracks. They investigated a number of hypotheses – 11 in all – relating to specific types of potential victims (male, female), the numbers balanced against each other (e.g., 10 people rather than 5 against 1, or 1 against 1), and ethnicity. They found that response times differed depending on the gender of the potential virtual victims, with a greater tendency to sacrifice males. In this study, arousal was estimated by measuring pupil dilation (see Presentation S4 in Supplementary Material).

5.2. Doctor/Patient Interaction

One area in which VR is likely to flourish in the coming years, as its cost comes down and it becomes more ubiquitous, is the training of professionals. In many professions, people make fundamental ethical decisions – not as dramatic as the trolley problem, but nevertheless often very important. How does a lawyer act knowing for certain that a client has committed a horrific crime? Does a health inspector close down a factory, putting at risk hundreds of jobs, or allow the factory to continue with unsanitary practices – when it is clear after several warnings that there will be no significant improvement? With limited resources, should an agency responsible for deciding which medicinal drugs are available on prescription go for the cheaper one that has been shown to have limited success, or the vastly superior one that is also vastly more expensive? Choosing the latter might disadvantage the greater number of people due to restrictions on other drugs, yet also save the lives of a few.

Sometimes these issues are covered by law and sometimes not. We consider one example. How do medical professionals learn to interact with their patients in such circumstances? Of course they observe their supervisors and teachers, and they read and learn about this in medical school. However, there is no substitute for experience, and gaining that experience means that doctors must confront such situations with real patients before they have ever had the chance to practice. Hence, VR can provide training and many different scenarios that will help toward gaining experience ( Cook et al., 2010 ).

The idea of using virtual patients has been very thoroughly studied for many years 80 ( Cendan and Lok, 2012 ). For example, Kleinsmith et al. (2015) have investigated empathy training with virtual patients. Here, though, we consider only ethical problems in dealing with patients – for example, where a patient demands a certain medicine contrary to medical advice; the first time a doctor confronts this problem would typically be with a real patient. A case in point is the overprescription of antibiotics. This is a balance between the needs of society as a whole (to avoid enhanced bacterial resistance to antibiotics) and the needs of the individual. If a patient demands antibiotics but the medical evidence suggests that these would not be appropriate, does the doctor prescribe in order to have a quieter life, or perhaps to avoid being sued should the decision ultimately prove wrong, or follow the higher principle that not prescribing unless clearly necessary may be the best thing to do for the greater good? Pan et al. (2016) carried out an experiment with n = 21 medical doctors (general practitioners; 9 being trainees with limited experience and the remainder with an average of about 6 years’ experience). The experiment was carried out using an Oculus DK2 through which each doctor had a consultation with a virtual mother and her daughter. The mother had a small cough, and the daughter demanded that the mother be given antibiotics because when faced with the same problem a year before, antibiotics had cured the problem immediately. Since the medical indications were that this was probably a viral infection, the participants (GPs) resisted the demand for antibiotics, which unleashed a torrent of complaints and anger from the virtual daughter. 81 Finally, 8 out of the 9 trainees prescribed the antibiotics, whereas 7 out of the 12 experienced doctors did so. The results also suggested that for those in the experienced group, the greater their reported level of presence, the lower the probability that they would prescribe the antibiotics. The use of this and many other scenarios in the medical and other professions could be of great utility in training, preparing people for situations that they are almost bound to face eventually. Just as airline pilots first learn on simulators, the same is likely to become true across a range of professions.

6. Travel, Meetings, and Industry

6.1. Virtual Travel

Using VR, you may not need to have physically gone to a place to say that you have visited it. Sitting at home you can navigate the streets and shop in Hong Kong, ascend Mount Everest, visit the Taj Mahal, or explore the Forbidden City in Beijing or even the landscape of Mars. You can watch at first hand ceremonies and customs from Polynesia to Greenland. 82 This is an obvious and long-discussed application. There are various possibilities: to visit a place virtually before going there, to visit the place instead of going there, to hold a business meeting with remote partners in a shared virtual environment, or to take a break on a beach in the middle of a winter’s day during your coffee break at the office; the possibilities are limited only by imagination and what technology can deliver at the time (which of course is always changing).

This is far from a new idea. Already two decades ago people in the travel industry were considering the “virtual threat to travel and tourism” ( Cheong, 1995 ), arguing that “the perceived threat of virtual reality becoming a substitute for travel is not unfounded and should not be ignored. Virtual reality offers numerous distinct advantages over the actual visitation of a tourist site … that could result in the eventual replacement of travel and tourism by virtual reality.” The advantages of VR suggested were (1) technology could eventually support “the perfect virtual experience” where the sun never stops shining (for one kind of holiday), or the snow is perfect (for another kind), there are no unruly (real) people around, and so on. (2) It is convenient – there is not the stress of traveling, it is significantly cheaper, there are no inconveniences. (3) Places could be visited that are not easily accessible (Mars is an extreme example). One could even travel in the past or to fantasy worlds. (4) People who are unable to travel because of illness or disability would easily be able to do so. (5) There are no risks – tropical diseases, accidents, and food poisoning. (6) There is no damage to the places visited. (7) Business travel could be simplified. However, Cheong (1995) goes on to discuss the reasons why this might not really be a threat – virtual immersion is not the same as really being there; it would be difficult in VR to engage in exchanges with the locals (like discussions in a market, learning to dance the Hula); there is a level of complexity and randomness in the real world that cannot be reproduced in VR; people might confuse reality and VR; and there would be problems with countries whose revenues depend greatly on tourism.

On the one hand, of course, since 1995 tourism has not been replaced by VR (on the contrary – see the next section), but on the other hand, none of the objections above seem insurmountable (even revenue from tourism could be protected by some kind of royalty system). Moreover, as global warming becomes an increasingly serious prospect and threat, VR could provide a way of lessening some of the negative impact of travel. An article by Guttentag (2010) suggested that VR could be useful in tourism for planning, management, marketing, entertainment, and education, and for providing access to inaccessible places such as archeological sites (see Section 4.5), with consequent heritage preservation. However, Guttentag wondered whether VR could ever provide an alternative to real travel, emphasizing a point made in Cheong (1995) that VR may never be able to substitute for basic sensory experiences – “the smell of ocean spray” – or make virtual surfing feel like the real thing. In other words, at the end of the day, will VR ever be technically up to the mark in providing a genuine substitute for the real experience?

In this section, we do not attempt to answer this question, since the answer cannot be known. Rather, we describe what has already been accomplished in this realm across a variety of applications that require some kind of travel. Perhaps, VR is not meant to be a substitute for real travel but just another form of travel, no less valid in its own terms than all that physically boarding the real aeroplane entails.

6.2. Remote Collaboration

The contribution of travel to the world economy is colossal. According to the World Travel and Tourism Council ( WTTC, 2015 ), travel and tourism generated $7.6 trillion in 2014, amounting to 10% of global GDP. It also accounted for 10% of all jobs (277 million), with the travel economy growing faster than other sectors such as health, financial services, and automotive. See also the extensive statistics produced by the World Tourism Organization UNWTO. 83 On the other side, travel comes with significant costs ( Reford and Leston, 2011 ). The first obvious one is the potentially disastrous impact on the planet’s environment ( Zhou and Levy, 2007 ) including the negative impact on health of air pollutants – e.g., Curtis et al. (2006) and Kampa and Castanas (2008) – see, for example, a meta-analysis by Mustafić et al. (2012) that reports a clear relationship between many of the associated pollutants and the near-term risk of heart disease. A second problem is especially in regard to business travel. In the US alone, $283B was spent on business travel in 2014. 84 However, such travel can be disruptive both to the business and the personal life of the traveler ( Gustafson, 2012 ) including contributing to family conflict and burnout ( Jensen, 2014 ). Nevertheless, for business (let alone personal and family relationships) face-to-face contact is thought to be essential. Even if face-to-face meetings can be substituted by one of the various forms of teleconferencing systems available, it has been suggested that these types of virtual meetings may even generate greater physical travel ( Gustafson, 2012 ).

In an analysis of the relationship between air travel and the possibilities offered by videoconferencing over the past four decades, Denstadli et al. (2013) did not find any clear picture, and certainly not that videoconferencing substitutes for air travel. Based on the analysis by Jones (2007) , it is argued that face-to-face meetings are important for completing projects across international sites, maintaining commitment to strategic plans and shared organizational culture, knowledge sharing, creativity, and new services. There are of course related issues such as trust, using business meetings to get away from the office from time to time, taking the opportunity to meet friends or relatives in remote locations, and so on. Hence, face-to-face meetings seem to be essential, and interestingly it is precisely those who travel the most who engage in the most videoconferencing meetings; there is thus a complex relationship between the two. Nevertheless, in the study of Denstadli et al. (2013) ( n = 1413), one-third of those who had access to videoconferencing tools said that they believed some air travel could be replaced by videoconferencing. For example, some readers of this article will probably have experienced the situation of traveling for several hours to attend or speak at a 1-h meeting and then traveling home shortly afterward – sometimes wondering what the point of it all might have been. Can VR be of benefit in this domain?

In this section, we briefly review the possibilities offered by immersive VR as a means for enabling remote communication and collaboration. We consider a virtual environment that is shared between multiple participants. Each participant is represented by a virtual body (an “avatar”) and can see the representations of the others. Ideally, participants’ movements are tracked, they can move through the virtual environment, and they can talk to one another. Hence, they are in a 3D stereo surrounding space along with others. Of course, there are several technical issues involved in how to realize such a system ( Steed and Oliveira, 2009 ), such as how and where to distribute the computation (one master machine broadcasting to all the others, or a distributed network?) and how to keep the various participants’ environments synchronized with one another so that they all perceive the same consistent environment, etc., but these issues are not considered here. In its ideal form, such a system must be superior to videoconferencing – since, for example, the latter cannot display spatial relationships, eye contact, and so on. However, an ideal form of a shared VR would require real-time full facial capture, eye tracking, real-time rendering of subtle emotional changes such as blushing and sweating, subtle facial muscle movements such as almost imperceptible eyebrow raising, the possibility of physical contact such as the ability to shake hands, or embrace, or even push, and so on. Such a system does not exist today, though it is one to strive for. Some of these capabilities might be realized with the type of VR referred to as 360° surround, but we defer the discussion of this to Section 7.2. In the following section, we review some of what has been achieved and what the likely prospects are.

6.3. Shared Virtual Environments

Probably the first published work where more than one person could simultaneously inhabit the same virtual environment was presented by Blanchard et al. (1990) . This was the VPL system that allowed two people, each with their own HMD (Eye Phone) and data glove, to be simultaneously copresent in a virtual environment. Over the next few years, many systems provided this capability, typically extending it to multiple participants rather than two ( Greenhalgh and Benford, 1995 ; Frécon and Stenius, 1998 ; Frecon et al., 2001 ). Today it is a matter of course that VR systems support this capability ( Bierbaum et al., 2001 ; Tecchia et al., 2010 ), and VR development platforms of recent choice such as Unreal Engine or Unity3D are also multi-participant systems.
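To give a concrete, though highly simplified, sense of what such multi-participant support involves, the following sketch illustrates the kind of pose-update message and “latest state wins” merge that keeps remote participants’ views of a shared environment consistent. The message fields, encoding, and merge rule are assumptions made for illustration only; they do not correspond to any particular platform’s networking protocol.

import json
import time
from dataclasses import dataclass, asdict

# Illustrative pose-update message for a shared virtual environment.
# Fields and encoding are assumptions for this sketch only.
@dataclass
class PoseUpdate:
    participant_id: str
    timestamp: float
    head_position: tuple   # (x, y, z) in meters, world coordinates
    head_rotation: tuple   # quaternion (x, y, z, w)

def encode(update):
    return json.dumps(asdict(update)).encode("utf-8")

def decode(payload):
    return PoseUpdate(**json.loads(payload.decode("utf-8")))

# A trivially simple shared scene state: keep only the most recent pose per
# participant ("latest state wins") and expose a merged snapshot that every
# client can render.
class SceneState:
    def __init__(self):
        self.latest = {}

    def apply(self, update):
        prev = self.latest.get(update.participant_id)
        if prev is None or update.timestamp > prev.timestamp:
            self.latest[update.participant_id] = update

    def snapshot(self):
        return list(self.latest.values())

# Example: two remote participants report their head poses; each client would
# apply the decoded updates it receives and render the merged snapshot.
state = SceneState()
state.apply(decode(encode(PoseUpdate("london", time.time(), (0.0, 1.7, 0.0), (0.0, 0.0, 0.0, 1.0)))))
state.apply(decode(encode(PoseUpdate("barcelona", time.time(), (2.0, 1.6, 1.0), (0.0, 0.7, 0.0, 0.7)))))
print(sorted(u.participant_id for u in state.snapshot()))

In a real system the encoded updates would be sent over the network at the tracking rate, and body, hand, and eventually eye data would be carried alongside the head pose.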

So, the capability for virtual environments shared by multiple participants has been around for a long time, supported by many platforms, and realized in massive online systems such as Second Life, although typically non-immersively. The work by Apostolellis and Bowman (2014) is a good recent illustration of collaboration in a learning context that was realized with screen-based displays. The early days of research in this area, apart from the technical issues of how to build systems, concentrated on exploiting the capabilities of VR to improve remote collaboration beyond what might be possible even in face-to-face communications – for example, the type of work reported in Benford and Fahlén (1993) and Koleva et al. (2001) . However, the primitive representations of people (very crude block-like characters) due to the relatively limited graphics and processing power at the time made this of interest only in a research context.

Later work concentrated on exploring social dynamics within shared virtual environments. For example, the research described in Tromp et al. (1998) ; Steed et al. (1999) ; Sadagic and Slater (2000) and Slater et al. (2000) had three-person groups carry out a task together although they were physically in different places (including even different countries). This work also compared the group dynamics in VR to real encounters and found that the dynamics were greatly influenced by the computational power and type of immersion. For example, the person who emerged as group leader in VR was the one with an HMD rather than those interacting with the others on screen, but this same person was less likely to be the leader when the group met for real. Also, people were quite respectful of each other’s avatars, notwithstanding their extreme simplicity – for example, avoiding collisions and apologizing when collisions invariably happened. Steed et al. (2003) carried this further by having pairs of people, one in London, UK, and the other in Gothenburg, Sweden, each in a Cave-like system, spend around 3.5 h working together. Some of the pairs were friends, and some were strangers. They found that the partners could collaborate well on spatial tasks, where the avatars representing their whole bodies played an important role. However, on other negotiation tasks, where facial expression would be quite important to gauge the intentions of the other, the friends did better together than the strangers. A review of this type of avatar-mediated communication can be found in Schroeder (2011) .

Although during the 2000s the graphics power to display more realistic human avatars in real time and in large numbers became available, the type of “ideal” system mentioned earlier was still far from possible. Nevertheless, researchers began to address critical aspects of non-verbal communication that can make remote face-to-face interactions in virtual environments effective, such as shaking hands ( Giannopoulos et al., 2011 ; Wang et al., 2011 ). Steptoe et al. (2008) introduced eye tracking as a way to determine the gaze of each individual avatar in virtual meetings between three remote participants (one in London, one in Salford, and the other in Reading, UK), each in a Cave-like system. Analysis suggested that participants automatically used gaze direction much as they would in a similar conversation in reality. This was followed up by Steptoe et al. (2010) who showed that eye tracking data, used to render avatars with gaze direction, blinking, and pupil size, resulted in participants being better able to detect one another telling lies than with a video conferencing system. This was between two participants in different physical places, one using a Cave-like system and the other a powerwall. Another recent idea for remote collaborative working is for each party to use a whiteboard, on which they see a silhouette of the remote person, like a shadow. It was found that participants tended to act as if they were in the presence of the remote person ( Pizarro et al., 2015 ). Although a lot of work on such avatar-mediated communication during this period took place using projection systems such as Caves, Dodds et al. (2011) used HMDs to embody two remote people in the same environment. They found that body tracking, in particular showing arm gestures, played an important role in bidirectional communication between the partners. When, for example, the gestures of the avatar of one of the partners were replaced by prerecorded animations, the communication was not as successful in terms of task achievement.

A combination of HMD and Cave system was used for a case study of remote acting, where two actors rehearsed a short scene using a script from The Maltese Falcon movie 85 ( Normand et al., 2012a ). One actor was in Barcelona wearing a full motion capture suit and a wide field-of-view, high-resolution HMD. The other actor was in a Cave in London and had some level of body tracking (arm gestures). The two were in the same virtual environment and could see and hear the avatars representing each other. A director was in a separate room in London. He could see and hear the scenario on screen, and video of his face was streamed in real time to both actors. Therefore, the director could communicate with the actors and tell them where to stand, what to say, and how to improve their performance – generally act like a director. 86 The professional actor involved in London concluded that such a system could be used for remote acting rehearsal, especially for aspects such as blocking, which concerns the spatial locations and movements of actors, lines of sight, and so on. This work was followed up by Steptoe et al. (2012) , again with an actor in Barcelona in VR who saw a virtual representation of the remote London scenario; she was represented by a wall-screen avatar, with a spherical display representing her head to the actor, and the director was in the Cave. See also Steed et al. (2012) for a description of the technology. Observers from the Royal Academy of Dramatic Art commented on the positive potential uses of such a system for rehearsal and blocking, that is, the arrangements and lines of sight of actors at the different stages of a play. Of course, again the lack of facial expression shown on the avatars is a drawback in these types of system.

Another drawback is the lack of touch – if one participant touches the avatar of another, typically nothing is felt. Bourdin et al. (2013) set up an application in which two remote people, each wearing an HMD and a body-tracking suit, interacted with a third person (an experimenter) who was in a Cave, so that all three saw representations of one another in a shared virtual environment. The experimenter had the task of persuading the other two to sing together. As part of the persuasion, she could touch the avatars of the two participants on the shoulder, upon which they would feel a vibration from a small actuator located on their own shoulder. Touch was thus used as part of the persuasion. 87 Earlier, Bailenson et al. (2007) carried out experiments using haptic-only virtual environments and showed that touch helped in the communication of emotions between people, both with respect to recognizing emotions recorded earlier as haptics by others and with respect to simultaneous communication between remote partners. Their paper also contains a review of the field and a theoretical model. Basdogan et al. (2000) carried out a series of experiments using a haptic-only environment, which also found that haptic feedback could impart critical information in remote communication. This work culminated in a “hands across the Atlantic” experiment in which remote participants, one in London, UK, and the other in Cambridge, MA, USA, carried out joint tasks such as lifting an object that they saw on screen, using haptics to aid the communication between them (Kim et al., 2004). Apart from describing the technological issues involved in setting up such a system, the results showed that the haptic feedback improved the sense of copresence, that is, the feeling of the remote participants that they were together.

6.4. Virtual Beaming

One obvious way to introduce haptics into remote VR-enabled communication is to use physical representations of people in the form of remotely controlled robots. This was envisaged and implemented in the very early days of VR. Fisher et al. (1987) described a telerobotic control system developed at NASA Ames (CA, USA), in which a participant wearing a head-tracked HMD and other tracking, audio, and tactile feedback equipment received visual input from cameras mounted on a remote robot. The robotic body essentially substituted visually for the person’s own body, appearing to be colocated with it, somewhat as in the discussion of embodiment in Section 2.1.1. Recently, this idea of a person in VR being represented remotely as a humanoid robot has seen new applications as a particularly exciting form of remote collaboration in which participants are given physical form in the remote place. The participant uses VR to perceive the remote location in full stereo with head and body tracking but is represented there as a humanoid robot. The humanoid robot moves as a function of the real-time body tracking of the participant, who can speak (through the robot) to local people in the remote location. It is a further, up-to-date realization of what was presented in Fisher et al. (1987), except now for the purposes of remote collaboration.

An example was shown in a BBC interview. 88 The BBC interviewer in London (Technology Correspondent Rory Cellan-Jones) interviewed a scientist in Barcelona who was fitted with a wide field-of-view, head-tracked HMD and a body-tracking suit. She was represented by a humanoid robot in the same room as the journalist in London. Her movements, captured by the motion capture suit, were transmitted across the Internet and applied to the robot so that it moved almost synchronously in correspondence with her. A Skype connection allowed her to speak through the robot, whose mouth opened and closed in sync with her speech. Cameras fitted as the eyes of the robot transmitted video back to the HMD, so that she saw the surrounding London environment in stereo. Since the HMD head tracking data were also transmitted and applied to the robot head, she could look around the room in London and converse with the BBC interviewer. The technology used was described in Spanlang et al. (2013). The same technology was used to beam journalist Nonny de la Peña from Los Angeles (CA, USA) to Barcelona. In Los Angeles, she wore the body-tracking suit and HMD, and she was represented by the humanoid robot in Barcelona. Embodied as the robot, she conducted a debate between three students on the issue of Catalan independence from Spain and also interviewed a scientist about his research on HIV. 89
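The data flow in such a beaming system can be summarized as a bidirectional loop: tracked pose out to the robot, sensory data back to the participant. The sketch below is only illustrative; the object names (tracker, hmd, robot) and their methods are hypothetical placeholders, not the API described in Spanlang et al. (2013).

```python
import time

def beaming_loop(tracker, hmd, robot, rate_hz=60):
    """Illustrative per-frame loop for 'beaming' a participant into a remote robot."""
    frame_time = 1.0 / rate_hz
    while True:
        start = time.time()

        # Outbound: capture the participant's body and head pose locally
        body_pose = tracker.read_body_pose()   # joint angles from the mocap suit
        head_pose = hmd.read_head_pose()       # orientation from HMD tracking

        # Apply the pose to the remote humanoid robot (network round trip)
        robot.set_joint_targets(body_pose)
        robot.set_head_orientation(head_pose)

        # Inbound: stereo video from the robot's eye cameras, plus audio
        left_img, right_img = robot.read_stereo_cameras()
        audio = robot.read_microphones()

        # Present the remote scene to the participant
        hmd.display_stereo(left_img, right_img)
        hmd.play_audio(audio)

        # Maintain the target frame rate
        time.sleep(max(0.0, frame_time - (time.time() - start)))
```

In practice the outbound pose stream and the inbound video and audio streams would typically run concurrently rather than in a single loop, since end-to-end latency is a key constraint on the sense of embodiment.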

The idea is reminiscent of “beaming” in Star Trek. Instead of a person being physically decomposed, transmitted to a remote place, and then recomposed there, a person in VR has their movements and speech transmitted to the remote place and applied to a humanoid robot, while sensory data – vision, sound, and touch – are transmitted back from the robot’s sensory apparatus and perceived by the person in VR. The locals in the remote place interact with the robot embodied by the beamer, and the beamer, through the VR, becomes present in the remote place. This setup has also been used by journalist Nonny de la Peña to beam from London, UK, to Barcelona to interview neuroscientist Dr. Perla Kaliman about food for the brain. 90 This journalism resulted in a news article about the results of the interview itself, rather than about the system used to realize it 91 (Kishore et al., 2016).

The same kind of beaming setup has been used to create a shared environment between a small animal and a human. Normand et al. (2012b) showed a human participant in VR interacting with a virtual human that was in fact driven by a tracked rat in a cage 12 km away. Simultaneously, the rat interacted with a rat-sized robot whose movements were determined by the tracked movements of the remote human. Hence, each interacted with an entity at its own scale (the rat with a small robot, the human with a human-sized avatar), leading to a form of interspecies communication. This type of setup is of value in ethology. In an article on animal geography and related issues, Hodgetts and Lorimer (2015) wrote in reference to this work that “… it is claimed that the human and the rat were able to participate in a purportedly playful meeting of species that seems straight from the pages of science fiction. Such experiments in adjusting scale do little to shift power dynamics in interspecies communication. Nor does the lab maze create anything more than a novel environment for encounter. Yet the prospect of engaging with animal worlds in more embodied, interactive and exploratory ways opens new avenues for developing richer accounts of animal lifeworlds.”

Non-verbal communication is critical for face-to-face interaction, and as mentioned above there are attempts to address this in VR, for example, using eye tracking to animate the eyes of avatars. Telerobotics enables physical presence and, to some extent, the conveyance of body language, depending on the extent of body tracking and the capabilities of the robot; however, facial expression remains a problem, even though some robots can display it to a degree. The subtle cues of which we are not even consciously aware in communication are not rendered. One way around this problem has been explored through the combination of animatronics and “shader lamp” technology. Shader lamps project computer-generated images onto neutral objects so that observers see the simple object as animated. In particular, an animated human face can be projected onto, for example, a spherical or egg-shaped object, making it appear as if the physical object were an animated face. Moreover, the face could be one captured by face tracking or video from a remote person. Lincoln et al. (2009) proposed and implemented shader lamps for the faces of remote people projected onto animatronic puppets. The remote participant could be far away, seeing the real surroundings of the puppet through VR, with his or her face back-projected onto a shell, so that an observer of the puppet would see video of the real face of the distant person and be able to interact with that person. 92 Some research has suggested that this type of technology, where faces are displayed on physical objects – in this case a spherical display – can improve trust in remote communication (Pan et al., 2014) (see Presentation S5 in Supplementary Material).

6.5. Interacting by Thought

The descriptions above of embodiment in remote robots, through which social interaction can take place with distant people, are reminiscent of movies such as Avatar (see text footnote 11) and Surrogates . 93 The fundamental difference is that whereas in the systems above people move their remote robotic bodies through their own deliberate movement (realized through real-time motion capture), in the vision presented in these movies the remote representation is moved through a brain interface. The participant only has to think of or imagine moving the remote body, and the corresponding cyborg or robot body moves (in the movies, perfectly) just as if they were moving their own real body. To a limited extent, this has been achieved today. For example, Millan et al. (2004) were able to control a mobile robot through a non-invasive brain-computer interface (BCI). Leeb et al. (2006) described their research with a tetraplegic patient who was able to use a BCI to navigate through a virtual environment presented in a Cave. He triggered his movement entirely by the voluntary production or halting of a specified electrical brain signal (EEG pattern). 94 The same motor-imagery paradigm was used for the voluntary control of an arm belonging to the participant’s virtual body (Perez-Marcos et al., 2009), resulting in an illusion of ownership over the virtual arm. A BCI was used in a telepresence application for disabled patients by Tonin et al. (2011), although the patients did not see the remote environment via VR but rather as video on a PC display; nevertheless, this demonstrated the possibility. A survey of the use of BCI in VR and games was presented by Lécuyer et al. (2008).
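To make the motor-imagery paradigm concrete, the following is a deliberately simplified sketch of how such a trigger might be computed; it is not the classifier used in the studies cited above. It assumes a single EEG channel over sensorimotor cortex and relies on the well-documented observation that imagined movement suppresses mu-band (roughly 8–12 Hz) power (event-related desynchronization).

```python
import numpy as np
from scipy.signal import welch

def motor_imagery_command(eeg_window, baseline_mu_power, fs=256, ratio=0.7):
    """Toy single-channel motor-imagery detector.

    eeg_window: 1-D array of recent EEG samples from an electrode over
    sensorimotor cortex. baseline_mu_power: mu-band power measured at rest
    during calibration. Imagined movement suppresses mu-band power, so a
    drop below a fraction of baseline is taken as a 'walk forward' command
    for the virtual environment."""
    freqs, psd = welch(eeg_window, fs=fs, nperseg=min(len(eeg_window), fs))
    mu_band = (freqs >= 8) & (freqs <= 12)
    mu_power = psd[mu_band].mean()
    return "walk_forward" if mu_power < ratio * baseline_mu_power else "stop"
```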

Martens et al. (2012) demonstrated that a number of whole-body tasks could be carried out by a participant wearing an HMD and embodied in a remote robot controlled through various BCI paradigms. Participants could pick and place objects, and engage in a game. This study also illustrated how the BCI could be used to recognize the intentions of the participant (for example, to pick up a glass), with the robot executing and completing the intention, since non-invasive BCI today simply does not permit the fine control necessary.

The lack of fine motor control results from the fact that most BCI systems use non-invasive scalp electrodes, which record brain signals of low spatial resolution. For patients who cannot otherwise move, acting in the world through the motor control of a robot is a possibility that may justify (invasive) brain implants. Small electrodes placed in the cortical tissue record the activity of groups of neurons at higher spatial resolution, allowing the control of finer movements. Wessberg et al. (2000) first showed that direct recording from neurons in monkeys enabled them to control quite sophisticated movements of a remote robot arm without using their own real arm. A similar approach has been used in people with tetraplegia, who could successfully control robotic arms through brain implants (Hochberg et al., 2006, 2012). Moreover, depending on what the actuators encounter, feedback can be used to stimulate appropriate groups of neurons to produce different tactile sensations. This was realized in monkeys by O’Doherty et al. (2011), who were able to move a virtual arm that touched virtual objects distinguished only by their texture. Such technology could be used to drive prostheses that replace missing limbs, exoskeletons that move actual but paralyzed limbs, virtual bodies experienced in immersive VR, or remote physical robots or cyborgs.

The latter possibility is the vision of Avatar and Surrogates . In each case, people perceive through the senses of their remotely embodied cyborg or robot and act in the world through those bodies. In John Scalzi’s novel Lock In , 95 people suffering from “locked-in syndrome” are present in the world through such robot embodiment. Although these are works of science fiction, they are beginning to be technically feasible and are almost surely going to be realized with the advance of neuroscience, VR, and robotic technology. For example, Kishore et al. (2014) showed how BCI could be used to embody people in a remote robot through which they could gesture and maintain a conversation with the people there. 96 , 97

The “Embodiment Station” reported by Leonardis et al. (2014) was inspired by the setup in Surrogates . It is a large chair on a mobile platform that can provide force feedback (see text footnote 97 from minute 2:50). The participant is fitted with an HMD, has a multitude of physiological responses recorded, and has various types of stimulation applied to his or her body. The participant may be embodied in a virtual body or a remote physical body.

People in Avatar are shut into a tubular structure that monitors their brain and provides feedback so that they become embodied in a remote, genetically engineered cyborg body. Cohen et al. (2014b) [see also Cohen et al. (2012)] showed how real-time fMRI can be used to decode particular thoughts of participants so that they are able to embody a virtual character 98 and control a remote robot thousands of kilometers away (Cohen et al., 2014a). 99 Although the degree of control and the level of embodiment are of course generations away from what is depicted in Avatar, it is nevertheless a clear step along the road toward this vision (see Presentations S6 and S7 in Supplementary Material).

6.6. Industrial Applications and Design

During the 25 years when VR was supposedly dead, or at best confined to university laboratories, industry was busy using it to develop products and to invent new methods for manufacturing, assembly, training, maintenance, and shopping. We briefly review some work in this area.

In a major review of the use of VR in car manufacture, Lawson et al. (2016) pointed out that VR can be used for design, avoiding the complex and expensive procedure of building physical mockups; with a mockup, any small change can result in major new work, whereas VR is far more flexible in this regard. VR is also used for virtual manufacturing, that is, as part of the preparation, planning, and risk assessment of the manufacturing process, and it is clearly also invaluable for training, for example, in learning the assembly and disassembly of parts. Data from an in-depth survey revealed that VR was being used for a number of aspects of design, manufacture, and evaluation: examining the look of the vehicle (including product reviews with clients), motion capture of manufacturing procedures, and reviews relating to the ergonomic use of the vehicle.

There has been significant work on industrial assembly and on training for maintenance and remote maintenance – for example, Gavish et al. (2011, 2015) and Seth et al. (2011). This is further enhanced by the possibility of mixed reality, where a participant in VR can see their own hands incorporated into the virtual environment (Tecchia et al., 2014; Sportillo et al., 2015). 100 Immersive VR is also being used for automobile testing. 101

In another context, Tiainen et al. (2014) found that customers were equally at home evaluating furniture presented virtually as physically; indeed, they made more suggestions for design improvements when evaluating the virtual products. The possibility for customers to design aspects of automobile interiors is also being prototyped using HMD-based VR. 102

Virtual reality has also been used in the clothing industry, where powerful computer graphics-based cloth simulators allow customers to virtually try on clothes on virtual representations of their own bodies (Hauswiesner et al., 2011; Magnenat-Thalmann et al., 2011; Sun et al., 2015). Although not yet used in an immersive way, such systems are bound eventually to become a normal part of shopping: once we have our own body representations, trying on clothing in the comfort of our homes, without the inconvenience of traveling, queues, and fitting rooms, could be a major application.

A final example is a highly innovative potential application in the food industry. Ruppert (2011) describes how VR is used to study the behavior of shoppers in response to different kinds of packaging and layout in supermarkets. The suggestion is that, where consumers want to buy healthier products, experimentation with different types of presentation could yield knowledge about how best to present such products so that they stand out for these consumers.

As argued by Lawson et al. (2016), VR can improve prototyping, production, and evaluation processes in manufacturing; it can also be part of the design process and, ultimately, of marketing. It also offers the possibility of consumers being involved in design, even designing aspects of the products that they will buy. In fact, VR combined with 3D printing could totally revolutionize how products are designed, manufactured, and delivered, giving enormous new power and possibilities to consumers 103 (see Presentations S8 and S9 in Supplementary Material).

7. News and Entertainment

We have already mentioned the potential benefits of VR for travel, for visiting remote relatives, and so on. Moreover, the use of VR in games is obviously going to be a huge area of application and one of the driving forces of the industry. 104 , 105 There is also a clear role for immersive movies, where the participant plays a role within the story, somewhere between a game and a movie. These are such obvious applications of VR that we are not going to discuss them further here; the chances are that any person first learning of VR in 2016 will do so because of a game or movie. In this section, we therefore concentrate on a quite novel field that VR opens up: the immersive presentation of news, usually called “immersive journalism.” It is important to note that it is not the journalism that is immersive but the presentation of its results through immersive media, leading to the creation of a genuinely new type of medium for news reporting. We will consider the issues involved, including ethical issues, and finally discuss the differences between computer graphics-based VR and 360° video.

7.1. News and Immersive Journalism

The idea of immersive journalism is “the production of news in a form in which people can gain first-person experiences of the events or situation described in news stories” ( de la Peña et al., 2010 ). Let’s consider the main headlines (online) of the Los Angeles Times on January 23, 2016 and see what this might mean.

7.1.1. Los Angeles Times January 23, 2016

[Table not reproduced: each row pairs a Los Angeles Times headline from January 23, 2016 (left column, the traditional news report) with a possible immersive VR treatment of the same story (right column).]

If we compare the report with the VR version, we can see that they reflect quite different purposes. In each row, the left side is the reporting of “news” (“Newly received or noteworthy information, especially about recent events,” Oxford English Dictionary). There is a mass of academic research and theory on what makes it into “The News” (as reported by newspapers, radio, TV, and of course now myriad online outlets). Interested readers could consult, for example, the classic analysis by Galtung and Ruge (1965), who identified a number of factors that influence which events typically get into the news, and a follow-up study by Harcup and O’Neill (2001), who examined the earlier theory in the light of a content analysis of stories in three British newspapers. The theory includes factors such as that events involving elite nations or persons are more likely to be newsworthy than those involving non-elites. For example, Western media are more likely to report on events in the USA, Europe, China, and Russia than in the Seychelles, except, for example, when events in other places directly affect those countries (e.g., events in the Middle East). The divorce of a movie star is far more likely to make it into the news than the divorce of your next-door neighbor (unless you happen to live next to a movie star). However, who decides what is important? This reflects another aspect of news: events are not simply “out there” floating around, happening and then being selected by journalists according to some criteria and reported factually; rather, it is an active process in which what counts as news is defined by journalists and by the multifarious interests and ideologies that make up particular media cultures (O’Neill and Harcup, 2008). For example, a President attends an important international event. If the President is a man, the reporting may focus on the event and its background; if the President is a woman, a great deal of attention may instead be paid to her clothing. 106 , 107 News values can differ enormously between different organizations. What makes it into the equivalent of the left side of each row in the table above, and how it is reported, are not simply matters of fact.

Now considering the possible immersive VR versions, there is quite a difference: the goal is not so much the presentation of “what happened” but to give people experiential, non-analytic insight into the events, to give them the illusion of being present in them. That presence may lead to another understanding of the events, perhaps an understanding that cannot be well expressed verbally or even in pictures. It reflects the fundamental capability of VR: to be there and to experience a situation from different perspectives. This is no more or less “objective” than news in traditional forms; what is selected, and how it is presented, will inevitably reflect the interests, culture, and political views of the journalists involved and, perhaps even more importantly, of their news organizations. There is no way around that, since what might be “news” is infinite, and something has to be selected.

Moreover, how news in VR is understood will also be actively shaped by the participant. Recall that in VR there are neither “users” nor “observers” but participants or consumer-participants . Even if you are just an observer without the actual ability to intervene, presence in VR is such that you will likely have the perception that ongoing events could affect you. Hence, the consumer of a news story in one medium becomes a participant in the virtual story in the other, the “immersive journalism” that creates a scenario to represent aspects of the news story in VR. However, there is a difference. Let’s go back to the woman President attending an event. A VR rendition of this puts you in the scene in the first-person perspective (1PP) of someone who attended and who was greeted by the President. She moves over to you, smiles, and says some words of greeting: to you. Assuming that the journalist had made every effort in the visual reconstruction to be faithful to the original event, whether the clothes that the President is wearing stand out or not depends wholly on you, the perceiver. You may pay attention to them or not; you may see them as remarkable or not. If the journalist wanted to really point out to participants the clothing worn by the President, this is of course entirely possible in VR – whether openly or surreptitiously. However, if the goal is to try to be objective, then how certain aspects of the events are interpreted will depend more on the perceiver than on the designer. We will come back to some of these points later.

The first immersive journalism piece was developed in 2010 in Barcelona, Spain, and directed by journalist Nonny de la Peña with the help of digital artist Peggy Weil. It followed on from their 2009 interactive Second Life piece that portrayed a virtual Guantánamo Bay prison. 108 The immersive news story was displayed in a Fakespace Wide5 HMD (see text footnote 18) and incorporated body tracking. It established a pattern that was to be used by Nonny de la Peña in later productions: a mix of data from actual events combined with a computer graphics-based reconstruction. It relied on transcripts of the interrogation of Detainee 063, Mohammed Al Qahtani, at Guantánamo Bay Prison in 2002–2003. The scenario was a single cell-like room, and the participant was embodied in a virtual character wearing an orange jumpsuit. From a 1PP, the participant’s virtual body was shown in a stress position – one reportedly used for “harsh interrogations.” The participant could see the virtual body both by looking directly toward his own body and in a virtual mirror. However, in fact the participant was seated comfortably in a chair. The participant would hear an interrogation as if coming from a cell next door. 109 A case study (de la Peña et al., 2010) was carried out with three participants, who were interviewed after their experience. All reported that even though they were seated comfortably, they felt uncomfortable, even in pain, from the posture of their virtual body. This result, that the posture of the virtual body can actually influence feelings of comfort or discomfort of participants, has recently found new support (Bergström et al., 2016) (see text footnote 54). The three participants felt a foreboding that the interrogation in the next cell would soon shift to them. Although the participants had not been given any forewarning of the meaning of the event that they were to experience, one of them said: “During the experience I was kind of reminded of the news that I heard about the Guantánamo prisoners and how they feel and I really felt like if I were a prisoner in Iraq or some… war place and I was being interrogated.” This illustrates the difference between the left column (traditional reporting of news) and the right column (news in VR) in the Table above. The left column might be a written piece about harsh interrogation methods, or a TV news piece illustrating aspects of this; on the right-hand side, there is experience. Of course, this is not the real experience, but it may give participants insight into how some aspects of the situations depicted might have been.

“Hunger in Los Angeles” 110 was a subsequent piece by Nonny de la Peña. It puts participants in a food line in Los Angeles where one of the people in the queue faints due to diabetes, and the various characters around react. It was based on an actual event and blended real sound recordings with computer graphics; the virtual characters in the food line were animated through the motion capture of actors. It was experienced by hundreds of people at the Sundance Film Festival in 2012. The 2014 World Economic Forum featured “Project Syria” by de la Peña, which depicted a bomb explosion in a Syrian town and its aftermath (see text footnote 110). This followed the same pattern of being based on an actual event and starting from video and audio of the real scenario. Further pieces along the same lines are “One Dark Night,” 111 about the shooting of teenager Trayvon Martin, and “Kiya,” about an incident of domestic violence and murder 112 (recall the fifth item in the table above).

An alternative to using computer graphics to reconstruct events is the use of 360° video. A scenario is captured using a special camera, and software then stitches the video together to form a completely surrounding scene that can be displayed in an HMD. Thanks to head tracking, the viewer can look all around the scene, and depending on how it has been captured, it can also be displayed in stereo. We will return to the technology in Section 7.2. This is therefore an alternative way of displaying events immersively.

“Waves of Grace” 113 by Gabo Arora (Senior Advisor and Filmmaker, United Nations) and Chris Milk (Vrse.works) uses this technique to recreate the true story of a survivor of Ebola in Liberia. They also created “Clouds over Sidra,” a documentary about a child refugee in the Syrian war. 114 Louis Jebb (founder) and Edward Miller (head of visuals) of Immersiv.ly use 360° video to create immersive news events; examples include coverage of unrest in Hong Kong 115 and a 360° VR experience of the paintings of the artist Gretchen Andrew, on a self-guided interactive tour of a computer-generated recreation of the De Re Gallery in Los Angeles. 116 The Des Moines Register, working with Dan Pacheco, produced “Harvest of Change,” an in-depth study of the situation of farmers in Iowa that combined computer graphics-generated VR and 360° video and can be viewed in an Oculus HMD. 117 The New York Times has started VR news based on 360° video, using Google Cardboard as the means of display, and has created a number of stories with this technology. 118 The BBC is also experimenting with 360° HMD-based news, 119 for example, providing an experience of the refugee crisis. 120

Alongside the great enthusiasm for VR in this domain, 121 there are also warnings about its ethics. For example, in an excellent and comprehensive article on potential problems, Tom Kent (Standards Editor, Associated Press and Columbia University) urges “an ethical reality check for virtual reality journalism.” 122 The first point concerns the depiction of reality. For example, “Hunger in Los Angeles” was a reconstruction using computer graphics for the display; it was not the real thing. It is important for consumer-participants always to be made aware of this, and it should form part of the ethics code being devised by digital journalists. 123 However, it is important to note that all journalistic reporting necessarily involves transformation and cannot possibly depict every aspect of reality. At the moment that the news camera focuses on the face of a politician, it of course misses everything else that is happening at the same time, some of which may change the meaning of the facial expression. Depicting any event, with its infinite aspects and nuances, in any medium whatsoever necessarily involves a transformation. As we argued above, everything from what is selected to how it is portrayed involves myriad choices. VR is no different in this regard. It can be argued that in VR a journalist could, for example, change the facial expression of a protagonist from a friendly smile (as it was in reality) to an arrogant grin, whether deliberately or by accident. However, how different is this from taking a small sentence in a speech of a politician out of context, thus distorting its meaning away from that intended? The use of VR requires ethical standards no more and no less than conventional news reporting.

Another point relates to 360° video-based pieces, where there is an issue of image integrity. Since the Associated Press does not allow manipulation of images, should particularly disturbing parts of a scene on a battlefield or bomb site be left in or not? Again, this is nothing special to VR. Of course, a 360° view is less selective than a single camera shot or a normal video shot. There are, though, conventions under which images are “distorted” – such as blurring the faces of vulnerable people in order to protect them – and it is not clear why such conventions could not be applied in the same way here. This really has nothing to do with VR. As we argued in Section 1.1, VR is a medium in which conventional approaches will eventually be overtaken by a new paradigm. Today, shooting a 3D movie inevitably draws on the conventions of traditional movie making, so that problems of inclusion are paramount, since 360° in principle shows “everything.” New paradigms will eventually overcome this problem.

The third point is that there may be competing views of what happened in any event, so a VR piece portraying one version may not reflect the diversity of views. This, too, is not specific to VR. In fact, VR may have the advantage that it is possible to relive a scenario from multiple points of view – from the viewpoints of different protagonists – which may sometimes even explain why they describe an event quite differently. The 1950 Japanese movie “Rashomon” 124 received international acclaim for doing exactly this, depicting a story from the multiple points of view of the characters involved; another version, “The Outrage,” was released in 1964. 125 VR could excel in such multi-viewpoint recreations.

Tom Kent argues that since VR is excellent for producing empathy and identification with characters, who may be experienced as being physically close to consumer-participants, journalists have a special responsibility to make sure that their piece is balanced. For example, if they have the goal of producing sympathy toward particular people or situations, they could emphasize aspects that provoke empathy or leave out balancing information that is inconvenient to their story. This is of course true, but again it applies no less to conventional media. It could be argued, though, that VR is particularly adept at raising emotions and that unwitting consumer-participants might therefore be more easily manipulated. This may be true. For example, we have seen in Section 2.1.2 how embodying White people in a Black body appears to reduce their implicit racial bias against Black people (Peck et al., 2013). However, we also saw in Section 4.4 that in a fight between two virtual characters about soccer teams, only participants who supported the same team as the victim tended to try to intervene to stop the fight (Slater et al., 2013). People did not change their behavior simply as a result of being near a virtual character that was attacked by another. In other words, people are not sponges that just soak up whatever emotion is poured into them. In the racial bias example, participants were generally not explicitly biased, so in reducing their implicit (i.e., largely non-conscious) bias perhaps they were being helped toward realizing their own non-biased preferences. Imagine a VR scenario that placed a United States Democrat supporter in a Republican rally, or a vociferously anti-European English voter in the heart of the Brussels decision-making community. Is either of these likely to change their views as a result? Of course, research is needed on this issue, but people should not be considered as empty vessels ready to be filled by whatever propaganda comes along. At the end of the day, if journalists want to present a particular viewpoint they will do so with whatever means they have, so the critical requirement is openness, information about potential distortions, and appropriate ethical standards.

The final main point made in the article by Tom Kent is that the virtual environment is a circumscribed world, whereas the scenario it depicts is embedded in a wider world in which other related events may be happening. On the one hand, the VR gives participants the impression that they can freely go wherever they want, but of course the specific virtual environment has boundaries outside of which nothing can be perceived. This is a problem of selection, which applies no less to other news media. When you read a story in a newspaper, is it the whole story? Of course not, and it never can be.

Arguments about the ethics of VR miss the point that it is not the only way or even the “best” way to deliver news (or indeed any story at all, whether supposedly real or fictional). Just as VR is not going to replace novels in the form of books, it is not going to replace traditional media. It is another medium, another method for the production and display of narrative, providing a different kind of “information,” providing a different kind of emotional engagement. These are not “better” or “worse” but just different. You can read about the refugee camp at Calais in France full of people wanting to enter the UK, or you can visit there virtually, 126 or really go there. Each of these will provide quite different information and responses. One may give facts and figures and talk about policy and implications for the future of the European Union, another may show the physical and emotional plight of particular people in that camp. Visiting the camp virtually might lead someone already so inclined to do something to try to help the individuals concerned, but not necessarily result in a change in their political convictions about immigration. What is important is that all types of journalism follow ethical standards, and this applies no matter what the medium (see Presentation S10 in Supplementary Material).

7.2. 360° and VR

There is some discussion about whether 360° video, as used in some of the pieces described above, is “really” VR. For example, Will Smith, in an article in Wired , 127 argued that systems such as 360° video viewed through Google Cardboard should not be called “VR,” the main argument being that the relationship between head movements and image changes is more likely to lead to simulator sickness in 360°. However, this battle has already been lost: mainstream media are already referring to 360° video as VR, and that is not going to change.

In order to consider this question, we return to the concept of “immersion” discussed in Section 1.3. Immersion refers exclusively to the technical affordances of a system. Different types of immersion may give rise to different types of subjective experience, but that is a different issue. One system is “more immersive” than another if the first can be used to simulate the second. This classifies all systems into what mathematicians call a “partial order.” It is partial because not all pairs can be compared in this way – there may be two systems where neither can be used to simulate the other. 128 Now, if we consider 360° VR as video captured in a real setting and displayed in a head-tracked HMD, then that can, in principle, be entirely simulated by a computer graphics rendering of the same scene, but not vice versa . By a graphics rendering of the scene we mean one based on a computer model (the model ultimately describes all the geometry, material properties, lighting, and dynamics of the objects in the scene). Since there is a model, participants can change their point of view to anywhere within the scene; for example, they can move close to any object and then circle around it while observing it. If the viewpoint is restricted to only a few specific points, from which the viewer can turn around and look 360°, then this is equivalent to “360-degree” VR. However, 360° VR cannot allow participants the full range of movement through the scene, with the ability to observe any object from any angle.
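A minimal formalization of this ordering may help (the notation is introduced here only for clarity and is not taken from the cited literature): write

\[ A \succeq B \iff A \text{ can be used to simulate the sensorimotor contingencies afforded by } B. \]

The relation is reflexive and transitive, but not total: for some pairs of systems neither \(A \succeq B\) nor \(B \succeq A\) holds, which is precisely why the ordering is only partial. In this notation, the argument of this section is that model-based VR \(\succeq\) 360° video, but not vice versa.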

In normal vision, based on natural sensorimotor contingencies, when we see one object obscuring another we can move our head and in principle see completely behind the obscuring object. This can be done with correct perspective and head-movement parallax in graphics-based VR; it cannot be done, or only to a very limited extent, in 360° video. Graphics-based VR can be restricted so as to simulate the 360° presentation, but not vice versa . There is therefore a fundamental technical difference that will, by definition, always persist between 360° and model-based VR: model-based VR can simulate 360°, but not vice versa, and hence it has greater immersion in this classification of systems.
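The asymmetry is visible in what drives the rendering of each frame. The following is a minimal illustrative sketch (the helper names and the scene.render call are hypothetical placeholders, not an actual rendering API): a 360° frame stored as an equirectangular panorama can only be sampled by head orientation, whereas a model-based renderer consumes the full head pose, position as well as orientation, which is what yields motion parallax.

```python
import numpy as np

def sample_equirect(panorama, yaw, pitch):
    """Look up one view ray in an equirectangular 360° frame.
    Only the head orientation (yaw in [-pi, pi], pitch in [-pi/2, pi/2])
    enters the computation; head position cannot be used, so moving the
    head sideways reveals nothing new (no motion parallax)."""
    h, w, _ = panorama.shape
    col = int(round((yaw / (2 * np.pi) + 0.5) * (w - 1))) % w   # longitude -> column
    row = int(round((0.5 - pitch / np.pi) * (h - 1))) % h       # latitude  -> row
    return panorama[row, col]

def render_model_based(scene, head_position, head_orientation):
    """In model-based VR the full camera pose drives rendering, so a head
    translation changes occlusion relationships and reveals hidden objects.
    'scene' and its 'render' method stand in for a real graphics engine."""
    return scene.render(camera_position=head_position,
                        camera_orientation=head_orientation)
```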

Ultimately, this means that they are useful for different purposes. If the VR is meant to depict something up close and personal, such as interaction with a virtual character where the participant and the character might arbitrarily change their positions in the space, then this cannot be accomplished with 360°, since this type of parallax effect (e.g., just moving the head to see behind the character) is simply not possible, unless every possible move that the participant might make were determined in advance and camera data made available for each possibility. On the other hand, for a large-scale scene, such as witnessing the street protests in Immersiv.ly’s Hong Kong piece mentioned above, 360° is sufficient, provided that the designers did not intend participants to be able to move up close to any arbitrary protestor for one-on-one unplanned interaction.

Therefore, we would conclude that model- or graphics-based VR and 360° VR are different possibilities within the domain referred to as “virtual reality,” and designers and application builders will use the type of system that best fits their goals. For close-up interaction, 360° will quickly break the natural sensorimotor contingencies that are necessary for the generation of presence. On the other hand, for large-scale scenes with objects far enough away, 360° is not only the simpler form of construction and rendering but is also good enough in terms of sensorimotor contingencies. It is not a question of either one or the other; both have their role. A major worry of Will Smith is that one would be confused with the other, and that people with poor experiences in 360° will therefore label “virtual reality” as poor. Sensible and careful use of both types of technology, each where it is most appropriate, would avoid this possibility.

It should be noted that it is not the model-based solution in itself that is important here, but what it offers in terms of natural sensorimotor contingencies for perception. There will eventually be other solutions that are not model-based but offer the same. One likely solution will be based on light fields ( Levoy and Hanrahan, 1996 ; Ng et al., 2005 ), which attempt to fully simulate the propagation of light through an environment, and therefore allow a viewer to dynamically move anywhere within a scene. The problem is that dynamic changes to objects, and especially changing lights, cannot easily be supported. Some recent developments for HMDs based on light field displays are discussed in Lanman and Luebke (2013) .

8. Conclusion

8.1. Recent Novel Ideas and Applications

In this article, we have mainly reviewed developments in VR that have taken place since its origins in the 1980s, focusing on applications, and especially those with outcomes that have some level of research support. The field is changing extremely rapidly, and the inventiveness of people is amazing, with new ideas and projects emerging daily. Here, we briefly list some recent ideas that have caught our attention (as of May 2016). Mostly, these are ideas in progress, with no results, or maybe not even any level of implementation. They are presented in random order.

Mark Zuckerberg: Virtual Reality Might Be Coming to Your Baby Photos

https://www.youtube.com/watch?v=rACZOac1w8w

The idea that VR may be used to share photos immersively.

Dreams of Dali

http://thedali.org/dreams-of-dali/

A VR experience based on Dali’s 1935 painting Archeological Reminiscence of Millet’s “Angelus.”

Visualizing Big Data

http://www.mastersofpie.com/project/winners-of-the-big-data-vr-challenge-set-by-epic-games-wellcome-trust/

How “big data” in particular a longitudinal social survey can be explored in HMD-based VR.

Topshop – London Fashion Week

https://www.inition.co.uk/case_study/virtual-reality-catwalk-show-topshop/

Attend the show using VR.

A History of Cuban Dance

http://with.in/watch/a-history-of-cuban-dance/

A 360° VR documentary.

Second Life in VR

http://www.bizjournals.com/sanfrancisco/blog/techflash/2016/01/second-life-second-act-virtual-reality-sansar.html

San Francisco Business Times reports “In virtual reality, Second Life prepares for its second act.”

Megadeth in VR

https://www.youtube.com/watch?v=PnQAz8jWAh0

A YouTube documentary about Megadeth bringing heavy metal to VR.

In the Eyes of the Animal

http://www.sundance.org/projects/in-the-eyes-of-the-animal

A Sundance Festival winner showing views of how the world might look to various animals.

Virtual Reality in Court

http://www.popsci.com/jurors-may-one-day-visit-crime-scenes-using-forensic-holodecks

A Popular Science report “Scientists Want To Take Virtual Reality To Court – Jurors May One Day Visit Crime Scenes Using Forensic Holodecks.”

Project Nourished – A Gastronomical Virtual Reality Experience

http://www.projectnourished.com

“You can eat anything you want without regret.”

Curing Cataract Blindness

http://www.ndtv.com/world-news/virtual-reality-could-be-the-next-big-thing-in-curing-cataract-blindness-1269591

NDTV report “Virtual Reality Could Be The Next Big Thing In Curing Cataract Blindness.”

Oculus Quill

https://www.youtube.com/watch?v=kPHWHJNTlkg

Drawing in VR.

Producer of Acclaimed “First” Sets Sights on Anne Frank VR Experience

http://www.roadtovr.com/producer-of-acclaimed-first-sets-sights-on-anne-frank-vr-experience/

Plans for a historical VR reconstruction of aspects of the life of Anne Frank.

Step inside the Large Hadron Collider (360 video)—BBC News

https://www.youtube.com/watch?v=d_OeQxoKocU&index=1&list=PLS3XGZxi7cBXqnRTtKMU7Anm-R-kyhkyC

“A 360 tour of CERN that takes you deep inside the Large Hadron Collider—the world’s greatest physics experiment—with BBC Click’s Spencer Kelly.”

And so on…

8.2. General Considerations

We have reviewed numerous applications of VR, many of which were already envisioned or developed in its earlier forms in the 1980s–1990s and have been more extensively developed and tested in the last 25 years. In most cases, their societal reach has been restricted, given that the VR systems used (whether or not in combination with robotics, tracking, etc.) were too costly to move out of the research laboratories and reach consumers. There has nevertheless been significant testing and validation of potential applications in many different areas.

This article has shown that the applications of VR are very extensive and range across numerous domains of knowledge. This means that even though the most frequent use that the mass of people will experience as a consumer product will probably be games and entertainment, all advances and developments in VR will also have an impact on more specialized research and professional fields. More affordable systems will extend the reach not only to final consumers but also to more developers and research groups, resulting in a much wider range of applications and VR content emerging in the near future.

Even though applications in psychology, medicine, education, or research will reach many, some sectors of the population may benefit from VR particularly directly: those with reduced mobility for any reason, whether due to lesions, neurological disorders, or aging. To such people, VR may provide a new space in which to move freely, interact, or work. This could be achieved by acting in VR through various means, including motor action, BCIs, eye tracking, or physiological responses.

Finally, we also point out that since the use of VR in these many application realms should be evidence-based, scientific papers should adhere to the highest standards of rigor and reporting. Among the hundreds of papers we reviewed in preparing this article, many do not even state what type of equipment was used. The term “virtual reality” has been overused: scientific papers are often simply talking about a PC display with a mouse, and the reader has to look very hard through the paper in order to discover this – if it is stated at all.

8.3. Speculations – “I’ve seen things …”

“I’ve seen things you people wouldn’t believe; attack ships on fire off the shoulder of Orion. I watched C-beams glitter in the dark near the Tannhäuser Gate. All those moments will be lost, in time, like tears in rain. Time to die.” (Replicant Roy Batty, near the closing scene of the movie Blade Runner). 129

In the introduction to this article, we defined our notion of “immersion” as the “physics” of a system – how well it can afford people real-world sensorimotor contingencies for perception and action. We pointed out that this also offers a way of ordering systems – where one system is “more immersive” than a second if the first can be used to simulate experiences on the second, but not vice versa . We used this classification, for example, to show that model-based VR is “more immersive” than 360° VR, so that these have different functionality and uses.

Yet, this raises a paradox. Immersive VR simulates experiences of physical reality. Does that mean that VR is more “immersive” than reality? Like any paradox, this helps us to understand the underlying concepts. There must always be some aspect of the VR that does not conform with reality . This is certain. Why? Because were it not the case then what the participant experiences would be his or her reality! This is not word play but rather illustrates a fundamental aspect of VR. The reader may respond – “Yes, but it is only a matter of time before the graphics, sound, tracking, haptics, etc. become so advanced that people will not be able to distinguish a VR experience from a real one, just like nowadays it is becoming difficult to distinguish pictures or videos that are photographs of real world scenes from those that are wholly generated by graphics.” However, in order for the VR to be indistinguishable from reality, the participant would have to not remember that they had “ gone into ” a VR system . Even if the devices become almost completely transparent and just a part of normal clothing, still the participant has to not know, in other words, has to forget that this is VR, has to forget pressing the button, or having the right thought in a BCI that commands: “Now put me into VR.” If it goes so far that they do not remember getting into VR and they consider that they are directly perceiving physical reality, then they are perceiving their own physical reality .

When we think of VR, we typically think of experiences in the visual and auditory domains rather than haptics (touch and force feedback). The field of haptics has excellent solutions for specific types of interaction, such as pushing a needle through soft tissue (as in medical applications) or using an exoskeleton to apply force feedback to an arm. However, unlike the visual and auditory fields, there is no generalized solution. By a generalized solution we mean a single device whereby participants in VR can feel anything (just as a display can be programmed to display anything), for example, feel something when their virtual body accidentally brushes against a virtual wall, or fall backwards when hit by a tidal wave of virtual water. As argued by Slater (2014), solutions may well have to go down the route of direct brain interfaces, since such fundamental problems can never be solved in a general way with external devices, which in the haptics domain always provide very specific stimuli. In this view, VR would become an applied branch of neuroscience. Since, as we and others have argued before, our notion of reality is a constructed one, by activating the appropriate brain areas our perception in this type of VR based on direct neural intervention would be indistinguishable from perception of “reality.” As the philosopher Thomas Metzinger has pointed out, 130 we are about to embark on an enormous process of new learning through the mass availability of VR: “The real news, however, may be that the general public will gradually acquire a new and intuitive understanding of what their very own conscious experience really is and what it always has been” – that our conscious experience is one possible model – an interpretation – of the world.

Now, let us imagine the perfect VR system with perfect immersion , so perfect that for most people it is completely indistinguishable from reality – it is their reality (recall that they must not remember that they “went into VR” and likewise they must not know when they “come out of VR”). Again seemingly paradoxically in such a situation the notion of presence vanishes. There is no sense of presence in physical reality. Presence is the feeling of being transported to another place. This is why our notion of “place illusion” as “being there” includes the rider “… in spite of the fact that you know for sure that you are not actually there .” It contains an element of surprise: “I know I am at home wearing a HMD, but I feel as if I am in the Himalayas.” In physical reality, there is no perceptual surprise, no feeling “Wow! Look at that, it is amazing that I am here!” (except, for example, as a way of expressing good fortune at being in a fabulous place). We are just “here.” We do not comment on it or think about it from the perceptual point of view – only sometimes at the content of our perception – the scenery or surprising events. There is no special or remarkable feeling associated with being in a place. It is how things always are. The only time we might feel something unusual is when some aspect of our perception breaks – for example, through mental illness, hallucinogens, the aftermath of an injury – where we find ourselves outside of the reference frame of our normal perception. In the movie The Matrix, 131 almost everyone was living in perfect immersion, perfect VR. They only became aware of “presence” (i.e., that their world was illusory) at moments when the system failed.

Hence, the illusion of presence actually reflects the non-perfection of immersion. On the one hand, as we improve immersion more and more through technical advances, the “wow” factor – the sensed difference between where we know ourselves to be and where we feel ourselves to be, i.e., the level of illusion – will become stronger and stronger. The shock of putting on the HMD and seeing an alternate reality in high resolution, all around, with fantastic vision, sound, haptics, smell, taste, and full body tracking will become overwhelming. But, on the other hand, when immersion becomes perfect – to the point that we do not in any way distinguish between perception of reality and VR, even to the extent of not knowing when we are perceiving one rather than the other – then presence will disappear.

However, it is also possible that the surprise element of “presence” will disappear for another reason. Imagine the generation that grows up with VR as much a part of their lives as cell phones are today. Although they will distinguish reality from VR, their illusion of presence may diminish because the surprise element will disappear through acclimatization. Older generations today still marvel at being able to have real-time video connections, at virtually zero added cost, with people halfway around the world, but a younger generation that has grown up with this finds it completely unremarkable. So, the new generation that grows up with VR will of course have the illusion of “being there” in VR, but it will be nothing special, and therefore there will be all the more reason for them to behave the same in VR as they do in reality in similar circumstances. It will be like: now I am at home; now I am at school; now I am in place X in VR. These will become equivalent perceptually, cognitively, and behaviorally. But, just as kids learn “Don’t run in the school corridor” and “Don’t shout in the classroom,” so they will learn different forms of behavior that apply to different places in different modes of reality. VR will have its own customs, norms of behavior, and politeness. Today all we can say is that however we imagine this might be – it won’t be like that, since it will be the result of an unpredictable and complex interplay of technological advance and social evolution.

We have used the term “presence” slightly loosely here. Recall that there are two components: PI (which rests on sensorimotor contingencies as a necessary condition) and Psi (the illusion that events are real). The latter is just as critical and maybe more difficult to get right in many applications. For example, in a real street we might avoid parking our car because we see a police officer standing nearby; on closer inspection we realize that the police officer is actually a mannequin, so we park. This is a failure of the Psi of the dummy. In VR, we are enjoying talking to a very nice virtual person; eventually, we realize that the virtual person is going through repetitive actions and is not actually aware of what we are doing, and we move away. This is a failure of Psi, even though our illusion of being in the place is intact. Both PI and Psi are critical components of successful VR applications.

Virtual reality, however, can deliver forms of Psi that have never existed in reality and yet still lead to the illusion of these events happening. In Slater et al. (1996), we put people in a VR where they could play 3D chess (as in Star Trek). Not one person was shocked or made any comment about the fact that when they touched the chess pieces, these would float through the virtual space to their next location. When asked about this, one participant said: “Oh, that’s just how things behave in this reality.” So Psi is a difficult concept. In some circumstances, expectations cannot be broken. In others, VR can create new expectations that seem completely natural even though they could never happen in physical reality. This is something really worth understanding, and it is connected to our final point.

Virtual Reality encompasses virtual unreality. Almost all the applications we have reviewed, and a lot of what we see, translate something from reality into VR. A fear of heights application puts people … on a height. A fear of public speaking application puts people … in front of an audience. These are fine. However, maybe there are completely new ways to think about these types of applications that make use of VR’s amazing power to put people outside the bounds of reality and still have a positive effect. Even though VR has been around for half a century, not enough is yet known about it. The goal is to shape it to create moments that enhance the lives of people and maybe help secure the future of the planet.

And those moments need not be lost. 132

Author Contributions

All the authors listed have made a substantial, direct, and intellectual contribution to the work and approved it for publication.

Conflict of Interest Statement

The authors were approached by the company Facebook to write an article on potential applications of VR. After completion, the article was subject to a review by the Facebook legal team. There was neither implicit nor explicit encouragement to promote or favor any Facebook products or services. The authors were free to write about virtual reality as they wished. The work is a review of virtual reality in general and not related to any particular products, software, or services.

Acknowledgments

Thanks to James Hairston of Oculus for his support of this work. In addition, the authors thank the following people who have provided images or video that appear in the Supplementary Presentations: Abderrahmane Kheddar, Aitor Rovira, Albert ‘Skip’ Rizzo, Anatole Lécuyer, Angus Antley, Anthony Steed, Antonio Frisoli, Barbara Rothbaum, Christoph Guger, Daniel Freeman, Doron Friedman, Emmanuele Tidoni, Ferran Argelaguet, Franck Multon, Franco Tecchia, Greg Welch, Henry Fuchs, Henry Markram, Hunter Hoffman, Jeremy Bailenson, Jordi Moyes Ardiaca, Larry Hodges, Louis Jebb, Lucia Valmaggia, Mark Huckvale, Nonny de la Peña, Pablo Bermell, Pere Brunet, Rafi Malach, Robert Riener, Salvatore Aglioti, Stephen Ellis, Sylvie Delacroix, Will Steptoe, Xueni (Sylvia) Pan, Yiorgos Chrysanthou, and Zillah Watson.

This work was funded by Oculus VR, LLC, a Facebook Company.

Supplementary Material

The Supplementary Material for this article can be found online at http://journal.frontiersin.org/article/10.3389/frobt.2016.00074/full#supplementary-material .

  • ^ http://www.technologyreview.com/view/421293/whatever-happened-to-virtual-reality/ though see also http://science.nasa.gov/science-news/science-at-nasa/2004/21jun_vr/ from NASA Ames, 2004.
  • ^ https://www.youtube.com/watch?v=NtwZXGprxag&feature=youtu.be
  • ^ http://humansystems.arc.nasa.gov/groups/acd/projects/hmd_dev.php
  • ^ http://www.mortonheilig.com/InventorVR.html
  • ^ https://www.youtube.com/watch?v=3L0N7CKvOBA
  • ^ https://www.youtube.com/watch?v=fs3AhNr5o6o
  • ^ https://www.youtube.com/watch?v=ACeoMNux_AU
  • ^ Though see Project Nourished: http://www.projectnourished.com
  • ^ https://www.youtube.com/watch?v=QEKxyhSPiVg
  • ^ https://www.youtube.com/watch?v=lmHEQRVJzBI
  • ^ http://www.avatarmovie.com/index.html
  • ^ http://www.gutenberg.org/files/5200/5200-h/5200-h.htm
  • ^ https://youtu.be/x5-TPXIzKuI
  • ^ https://www.youtube.com/watch?v=TCQbygjG0RU
  • ^ https://www.youtube.com/watch?v=4PQAc_Z2OfQ
  • ^ https://www.youtube.com/watch?v=ee4-grU_6vs
  • ^ https://www.youtube.com/watch?v=EyujFtuFWvo
  • ^ http://www.fakespacelabs.com/Wide5.html
  • ^ https://www.youtube.com/watch?v=3wg14z5O9Ug
  • ^ https://www.youtube.com/watch?v=ydzSgLim5Y4
  • ^ https://www.youtube.com/watch?v=HliN3iOX090
  • ^ https://www.youtube.com/watch?v=8Oy83OVgbSM
  • ^ https://www.youtube.com/watch?v=sn-UNGcbi2Q
  • ^ http://www.cog.brown.edu/research/ven_lab/research.html
  • ^ http://www0.cs.ucl.ac.uk/research/equator/projects/escience/
  • ^ https://www.youtube.com/watch?v=tFtpmOBt7jY
  • ^ https://www.youtube.com/watch?v=_UFOSHZ22q4
  • ^ https://www.youtube.com/watch?v=ldXEuUVkDuw
  • ^ https://www.youtube.com/watch?v=_N-BAv3Hz8k
  • ^ http://www.goodreads.com/book/show/83539.Fantastic_Voyage
  • ^ http://www.imdb.com/title/tt0060397/
  • ^ http://www.goodreads.com/book/show/83545.Fantastic_Voyage_II
  • ^ https://www.youtube.com/watch?v=PLqlTaT3Bgk
  • ^ https://www.youtube.com/watch?v=UxUZIHAJ2H4
  • ^ https://www.youtube.com/watch?v=iiGzNGlnYJ4
  • ^ https://www.youtube.com/watch?v=sSRzeGkhUic
  • ^ https://www.youtube.com/watch?v=iK3GsAcwKaI
  • ^ https://www.youtube.com/watch?v=JEsV5rqbVNQ
  • ^ https://www.youtube.com/watch?v=mlYJdZeA9w4
  • ^ https://www.youtube.com/watch?v=m4Oeu4SLCgY
  • ^ http://www.o2.co.uk/sponsorship/rugby/wear-the-rose
  • ^ http://news.sky.com/story/1222817/oculus-rift-headset-may-help-sports-training
  • ^ http://www.telegraph.co.uk/technology/news/10621480/Virtual-reality-headset-recreates-England-rugby-squad-training-experience.html
  • ^ http://www.telegraph.co.uk/technology/technology-topics/10681570/Virtual-reality-training-session-with-England-rugby-squad.html
  • ^ http://bleacherreport.com/articles/2563010-stanfords-new-virtual-reality-system-is-changing-sports-forever
  • ^ https://www.youtube.com/watch?v=hXOQsXFcWnk
  • ^ https://www.youtube.com/watch?v=RM9IT_N6jFE
  • ^ https://archive.org/details/SciterianTechnologiesMars3D_CahokiaPanorama-VirtualReality
  • ^ https://www.youtube.com/watch?v=xDqYz5pKA_o
  • ^ An online search of “Oculus” and “Mars” will find many “prototype” examples of people experimenting with rendering and walking through a Mars terrain in VR.
  • ^ https://www.youtube.com/watch?v=sKz0FVIeEFI
  • ^ https://www.youtube.com/watch?v=Wy4Ku2iZjQM
  • ^ https://www.youtube.com/watch?v=cN7W0VBi0jo
  • ^ https://www.youtube.com/watch?v=P9OXRDc3flU
  • ^ https://youtu.be/D4KgWpta7YI
  • ^ https://www.youtube.com/watch?v=NrRRKZRGZbE (“Can virtual reality be used to tackle racism?” Report by Melissa Hogenboom, BBC Click).
  • ^ E.g., http://nymag.com/scienceofus/2015/10/theres-a-new-film-about-the-milgram-experiment.html
  • ^ In the period of January 1 to May 2, 2016 there were more than 100 articles published that reference the Milgram work.
  • ^ E.g., https://www.youtube.com/watch?v=fCVlI-_4GZQ
  • ^ https://youtu.be/RjUNg3pkEag
  • ^ https://en.wikipedia.org/wiki/Murder_of_Kitty_Genovese . See also a recent New York Times article following the death in prison of the murderer http://www.nytimes.com/2016/04/05/nyregion/winston-moseley-81-killer-of-kitty-genovese-dies-in-prison.html?_r=0
  • ^ https://www.youtube.com/watch?v=yspbUFhzGC0 (experiment scenario – bleeped out swearing).
  • ^ https://www.youtube.com/watch?v=11NH0K23nEM (BBC TV report about bystander experiment).
  • ^ http://en.unesco.org/themes/protecting-our-heritage-and-fostering-creativity
  • ^ http://portal.unesco.org/en/ev.php-URL_ID=13637&URL_DO=DO_TOPIC&URL_SECTION=201.html
  • ^ http://whc.unesco.org/en/list/208
  • ^ http://whc.unesco.org/en/list/23
  • ^ http://whc.unesco.org/en/list/
  • ^ http://www.tholos254.gr/projects/miletus/index-en.html . (This also links to a 360° virtual tour).
  • ^ https://www.youtube.com/watch?v=U00bmFyipNw
  • ^ https://www.youtube.com/watch?v=DZx8NqjIgF4
  • ^ https://www.youtube.com/watch?v=e-l2BMStRcg
  • ^ https://graphics.stanford.edu/software/scanview/
  • ^ https://www.youtube.com/watch?v=iiuZznpHyPs&feature=youtu.be
  • ^ https://www.youtube.com/watch?v=bOpf6KcWYyw (a cartoon exposition of the trolley problem).
  • ^ http://www.moralsensetest.com/experiment/originaldilemmas.html (a survey at Harvard University).
  • ^ https://www.youtube.com/watch?v=yk_hftGBHy4
  • ^ http://www.bbc.co.uk/programmes/p00k9drg
  • ^ https://www.youtube.com/watch?v=M2aorOAY8o8
  • ^ https://www.youtube.com/watch?v=05jSp63-W7c&list=PLjjzAm1HXwJOFD6aG9vCYHL4cFoYef6ya
  • ^ https://www.youtube.com/watch?v=KhcnvdKbHrM&feature=youtu.be
  • ^ See an example from Marriott https://travel-brilliantly.marriott.com/our-innovations/oculus-get-teleported
  • ^ http://www2.unwto.org
  • ^ https://www.ustravel.org/research/travel-industry-answer-sheet
  • ^ http://www.imdb.com/title/tt0033870/
  • ^ https://www.youtube.com/watch?v=c9bLWQhbJz0
  • ^ https://www.youtube.com/watch?v=gc8ySZHZLC0
  • ^ http://www.bbc.com/news/technology-18017745
  • ^ https://www.youtube.com/watch?v=FFaInCXi9Go (in Catalan and English).
  • ^ https://www.youtube.com/watch?v=I58wF9f3_a0
  • ^ The news article was published in Latino LA and focused solely on the substantive issue of food for the brain, rather than the system that was used for the interview: http://latinola.com/story.php?story=12654
  • ^ https://www.youtube.com/watch?v=eQLr83Co-GI
  • ^ https://www.youtube.com/watch?v=UGwQ74cH5O0
  • ^ https://www.youtube.com/watch?v=cu7ouYww1RA
  • ^ http://us.macmillan.com/lockin/johnscalzi
  • ^ https://www.youtube.com/watch?v=iGurLgspQxA
  • ^ https://www.youtube.com/watch?v=XUg990uZjEo
  • ^ https://www.youtube.com/watch?v=PeujbA6p3mU
  • ^ https://www.youtube.com/watch?v=pFzfHnzjdo4
  • ^ https://www.youtube.com/watch?v=3Q3ZC124Qbc
  • ^ https://www.youtube.com/watch?v=EP0olmaL4Xs
  • ^ https://www.youtube.com/watch?v=TOx4q711dY8
  • ^ https://www.youtube.com/watch?v=6nHw4RsNJ3Q
  • ^ http://www.cnet.com/news/virtual-reality-is-taking-over-the-video-game-industry/
  • ^ https://storystudio.oculus.com/en-us/henry/
  • ^ http://www.telegraph.co.uk/news/worldnews/europe/germany/9427863/Double-take-Angela-Merkel-steps-out-in-same-dress-she-wore-to-same-event-four-years-ago.html
  • ^ http://www.ft.com/intl/cms/s/2/10369810-aeaf-11e3-aaa6-00144feab7de.html#slide0
  • ^ http://www.immersivejournalism.com/gone-gitmo/
  • ^ https://www.youtube.com/watch?v=_z8pSTMfGSo
  • ^ https://www.youtube.com/watch?v=SSLG8auUZKc
  • ^ http://www.emblematicgroup.com/#/one-dark-night/
  • ^ http://www.emblematicgroup.com/#/kiya/
  • ^ http://vrse.works/creators/chris-milk/work/waves-of-grace/
  • ^ https://www.youtube.com/watch?v=FFnhMX6oR1Q
  • ^ http://www.hongkongunrest.com/vr-player.html
  • ^ http://virtualrealityderegallery.com
  • ^ http://www.desmoinesregister.com/pages/interactives/harvest-of-change/
  • ^ http://www.nytimes.com/newsgraphics/2015/nytvr/
  • ^ http://bbcnewslabs.co.uk/projects/360-video-and-vr/
  • ^ http://www.bbc.co.uk/taster/projects/we-wait
  • ^ http://www.nytimes.com/2016/01/21/opinion/sundance-new-frontiers-virtual-reality.html?hp&action=click&pgtype=Homepage&clickSource=story-heading&module=mini-moth&region=top-stories-below&WT.nav=top-stories-below&_r=0 (NYT Feature “Where Virtual Reality Takes Us”).
  • ^ https://medium.com/@tjrkent/an-ethical-reality-check-for-virtual-reality-journalism-8e5230673507#.ftgz6i1v3
  • ^ https://ethics.journalism.wisc.edu/resources/digital-media-ethics/
  • ^ http://www.imdb.com/title/tt0042876/
  • ^ http://www.imdb.com/title/tt0058437/
  • ^ http://www.fastcompany.com/3053219/fast-feed/virtual-reality-journalism-is-coming-to-the-associated-press
  • ^ http://www.wired.com/2015/11/360-video-isnt-virtual-reality
  • ^ For example, we can say that coordinate (x, y) is “less than” (z, w) if x < z and y < w. This defines a partial order over the set of all such coordinates: (1, 2) is less than (3, 4), but there is no order between (1, 2) and (0, 3). (A brief illustrative code sketch of this ordering follows these notes.)
  • ^ http://www.warnerbros.com/blade-runner
  • ^ http://edge.org/response-detail/26699 Edge “Virtual Reality Goes Mainstream: A Complex Convolution.”
  • ^ http://www.warnerbros.com/matrix
  • ^ https://www.youtube.com/watch?v=NoAzpa1x7jU&feature=youtu.be (“I’ve seen things …” Blade Runner).
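
To make the partial-order note above concrete, here is a minimal Python sketch. It is our illustration only, not part of the original article, and the function names dominates and compare are assumptions introduced for the example; it simply compares two-dimensional coordinates in the way the note describes.

# Illustrative sketch of the partial order on coordinate pairs described in the note above.
def dominates(a, b):
    # True if a = (x, y) is at least as large as b = (z, w) on both coordinates.
    return a[0] >= b[0] and a[1] >= b[1]

def compare(a, b):
    # Returns "equal", "less", "greater", or "incomparable".
    if a == b:
        return "equal"
    if dominates(b, a):
        return "less"
    if dominates(a, b):
        return "greater"
    return "incomparable"

print(compare((1, 2), (3, 4)))  # "less": 1 < 3 and 2 < 4
print(compare((1, 2), (0, 3)))  # "incomparable": 1 > 0 but 2 < 3

Under such an ordering, one system counts as “more” than another only if it matches or exceeds it on every dimension, which is why such coordinates form a partial rather than a total order.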

Abulrub, A.-H. G., Attridge, A. N., and Williams, M. (2011). “Virtual reality in engineering education: the future of creative learning,” in Global Engineering Education Conference (EDUCON), 2011 IEEE (Amman: IEEE), 751–757.

Ahn, S. J., Le, A. M. T., and Bailenson, J. (2013). The effect of embodied experiences on self-other merging, attitude, and helping behavior. Media Psychol. 16, 7–38. doi:10.1080/15213269.2012.755877

Alaraj, A., Lemole, M. G., Finkle, J. H., Yudkowsky, R., Wallace, A., Luciano, C., et al. (2011). Virtual reality training in neurosurgery: review of current status and future applications. Surg. Neurol. Int. 2:52. doi:10.4103/2152-7806.80117

Al-Kadi, A. S., Donnon, T., Paolucci, E. O., Mitchell, P., Debru, E., and Church, N. (2012). The effect of simulation in improving students’ performance in laparoscopic surgery: a meta-analysis. Surg. Endosc. 26, 3215–3224. doi:10.1007/s00464-012-2327-z

Anderson-Hanley, C., Arciero, P. J., Brickman, A. M., Nimon, J. P., Okuma, N., Westen, S. C., et al. (2012). Exergaming and older adult cognition: a cluster randomized clinical trial. Am. J. Prev. Med. 42, 109–119. doi:10.1016/j.amepre.2011.10.016

Anderson-Hanley, C., Snyder, A. L., Nimon, J. P., and Arciero, P. J. (2011). Social facilitation in virtual reality-enhanced exercise: competitiveness moderates exercise effort of older adults. Clin. Interv. Aging 6, 275. doi:10.2147/CIA.S25337

Andújar, C., Chica, A., and Brunet, P. (2012). User-interface design for the Ripoll Monastery exhibition at the National Art Museum of Catalonia. Comput. Graph. 36, 28–37. doi:10.1016/j.cag.2011.10.005

Apostolellis, P., and Bowman, D. A. (2014). “Evaluating the effects of orchestrated, game-based learning in virtual environments for informal education,” in Proceedings of the 11th Conference on Advances in Computer Entertainment Technology (Funchal: ACM), 4.

Argelaguet Sanz, F., Multon, F., and Lécuyer, A. (2015). A methodology for introducing competitive anxiety and pressure in VR sports training. Front. Robot. AI 2:10. doi:10.3389/frobt.2015.00010

Aristidou, A., Stavrakis, E., and Chrysanthou, Y. (2014). “Motion analysis for folk dance evaluation,” in GCH ‘14 Proceedings of the Eurographics Workshop on Graphics and Cultural Heritage (Darmstadt: ACM), 55–64.

Armel, K. C., and Ramachandran, V. S. (2003). Projecting sensations to external objects: evidence from skin conductance response. Proc. R. Soc. Lond. B 270, 1499–1506. doi:10.1098/rspb.2003.2364

Aronov, D., and Tank, D. W. (2014). Engagement of neural circuits underlying 2D spatial navigation in a rodent virtual reality system. Neuron 84, 442–456. doi:10.1016/j.neuron.2014.08.042

Arora, A., Lau, L. Y., Awad, Z., Darzi, A., Singh, A., and Tolley, N. (2014). Virtual reality simulation training in otolaryngology. Int. J. Surg. 12, 87–94. doi:10.1016/j.ijsu.2013.11.007

Bailenson, J. N., and Beall, A. C. (2006). “Transformed social interaction: exploring the digital plasticity of avatars,” in Avatars at Work and Play: Collaboration and Interaction in Shared Virtual Environments , Vol. 34 (Netherlands: Springer), 1–16.

Bailenson, J. N., Blascovich, J., and Beall, A. C. (2001). Equilibrium theory revisited: mutual gaze and personal space in virtual environments. Presence 10, 583–598. doi:10.1162/105474601753272844

Bailenson, J. N., Blascovich, J., Beall, A. C., and Loomis, J. (2003). Interpersonal distance in immersive virtual environments. Pers. Soc. Psychol. Bull. 29, 1–15. doi:10.1177/0146167203029007002

Bailenson, J. N., Yee, N., Blascovich, J., Beall, A. C., Lundblad, N., and Jin, M. (2008). The use of immersive virtual reality in the learning sciences: digital transformations of teachers, students, and social context. J. Learn. Sci. 17, 102–141. doi:10.1080/10508400701793141

Bailenson, J. N., Yee, N., Brave, S., Merget, D., and Koslow, D. (2007). Virtual interpersonal touch: expressing and recognizing emotions through haptic devices. Hum. Comput. Interact. 22, 325–353. doi:10.1080/07370020701493509

Banakou, D., Groten, R., and Slater, M. (2013). Illusory ownership of a virtual child body causes overestimation of object sizes and implicit attitude changes. Proc. Natl. Acad. Sci. U.S.A. 110, 12846–12851. doi:10.1073/pnas.1306779110

Banakou, D., Hanumanthu, P. D., and Slater, M. (2016). Virtual embodiment of white people in a black virtual body leads to a sustained reduction in their implicit racial bias. Front. Hum. Neurosci. 10:601. doi:10.3389/fnhum.2016.00601

Banakou, D., and Slater, M. (2014). Body ownership causes illusory self-attribution of speaking and influences subsequent real speaking. Proc. Natl. Acad. Sci. U.S.A. 111, 17678–17683. doi:10.1073/pnas.1414936111

Barfield, W., and Hendrix, C. (1995). The effect of update rate on the sense of presence within virtual environments. Virtual Real. 1, 3–16. doi:10.1007/BF02009709

Barlow, J., Dyson, E., Leary, T., Bricken, W., Robinett, W., Lanier, J., et al. (1990). “Hip, hype and hope – the three faces of virtual worlds (panel session),” in ACM SIGGRAPH 90 Panel Proceedings (Dallas, TX: ACM), 1001–1029.

Barnsley, N., McAuley, J. H., Mohan, R., Dey, A., Thomas, P., and Moseley, G. L. (2011). The rubber hand illusion increases histamine reactivity in the real arm. Curr. Biol. 23, R945–R946. doi:10.1016/j.cub.2011.10.039

Basdogan, C., Ho, C.-H., Srinivasan, M. A., and Slater, M. (2000). “An experimental study on the role of touch in shared virtual environments,” in ACM Transactions on Computer-Human Interaction (TOCHI) – Special issue on Human-Computer Interaction and Collaborative Virtual Environments TOCHI , Vol. 7 (New York, NY: ACM), 443–460.

Bem, D. J. (1972). Self-perception theory. Adv. Exp. Soc. Psychol. 6, 1–62. doi:10.1016/S0065-2601(08)60024-6

Benford, S., and Fahlén, L. (1993). “A spatial model of interaction in large virtual environments,” in Proceedings of the Third European Conference on Computer-Supported Cooperative Work 13–17 September 1993, ECSCW’93 (Milan, Italy: Springer), 109–124.

Bergström, I., Kilteni, K., and Slater, M. (2016). First-person perspective virtual body posture influences stress: a virtual reality body ownership study. PLoS ONE 11:e0148060. doi:10.1371/journal.pone.0148060

Bertella, L., Marchi, S., and Riva, G. (2001). Virtual environment for topographical orientation (VETO): clinical rationale and technical characteristics. Presence 10, 440–449. doi:10.1162/1054746011470280

Besora, I., Brunet, P., Chica, A., and Moyés, J. (2008). “Real-time exploration of the virtual reconstruction of the entrance of the Ripoll monastery,” in CEIG 2008 Conference Proceedings (Barcelona: Eurographics Association), 219–224.

Bideau, B., Kulpa, R., Ménardais, S., Fradet, L., Multon, F., Delamarche, P., et al. (2003). Real handball goalkeeper vs. virtual handball thrower. Presence 12, 411–421. doi:10.1162/105474603322391631

Bideau, B., Kulpa, R., Vignais, N., Brault, S., Multon, F., and Craig, C. (2010). Using virtual reality to analyze sports performance. Comput. Graph. Appl. 30, 14–21. doi:10.1109/MCG.2009.134

Bierbaum, A., Just, C., Hartling, P., Meinert, K., Baker, A., and Cruz-Neira, C. (2001). “VR Juggler: a virtual platform for virtual reality application development,” in IEEE Virtual Reality Proceedings (Yokohama: IEEE), 89–96.

Blanchard, C., Burgess, S., Harvill, Y., Lanier, J., Lasko, A., Oberman, M., et al. (1990). Reality built for two: a virtual reality tool. ACM SIGGRAPH Comput. Graph. 24, 35–36. doi:10.1145/91394.91409

Blanke, O. (2012). Multisensory brain mechanisms of bodily self-consciousness. Nat. Rev. Neurosci. 13, 556–571. doi:10.1038/nrn3292

Blanke, O., Slater, M., and Serino, A. (2015). Behavioral, neural, and computational principles of bodily self-consciousness. Neuron 88, 145–166. doi:10.1016/j.neuron.2015.09.029

Blascovich, J., Loomis, J., Beall, A. C., Swinth, K., Hoyt, C., and Bailenson, J. N. (2002). Immersive virtual environment technology as a methodological tool for social psychology. Psychol. Inquiry 13, 103–124. doi:10.1207/S15327965PLI1302_01

Bleakley, C. M., Charles, D., Porter-Armstrong, A., Mcneill, M. D., McDonough, S. M., and McCormack, B. (2013). Gaming for health: a systematic review of the physical and cognitive effects of interactive computer games in older adults. J. Appl. Gerontol. 34, N166–N189. doi:10.1177/0733464812470747

Blom, K. J., Arroyo-Palacios, J., and Slater, M. (2014). The effects of rotating the self out of the body in the full virtual body ownership illusion. Perception 43, 275–294. doi:10.1068/p7618

Bolton, J., Lambert, M., Lirette, D., and Unsworth, B. (2014). “PaperDude: a virtual reality cycling exergame,” in CHI’14 Extended Abstracts on Human Factors in Computing Systems (New York: ACM), 475–478.

Borland, D., Peck, T., and Slater, M. (2013). An evaluation of self-avatar eye movement for virtual embodiment. IEEE Trans. Vis. Comput. Graph. 19, 591–596. doi:10.1109/TVCG.2013.24

Botvinick, M., and Cohen, J. (1998). Rubber hands ‘feel’ touch that eyes see. Nature 391, 756–756. doi:10.1038/35784

Bourdin, P., Sanahuja, J. M. T., Moya, C. C., Haggard, P., and Slater, M. (2013). “Persuading people in a remote destination to sing by beaming there,” in Proceedings of the 19th ACM Symposium on Virtual Reality Software and Technology (Singapore: ACM), 123–132.

Brault, S., Bideau, B., Kulpa, R., and Craig, C. (2009). Detecting deceptive movement in 1 vs. 1 based on global body displacement of a rugby player. Int. J. Virtual Real. 8, 31–36.

Brooks, F. P. Jr. (1999). What’s real about virtual reality? Comput. Graph. Appl. 19, 16–27. doi:10.1109/38.799723

Brotons-Mas, J. R., O’Mara, S., and Sanchez-Vives, M. V. (2006). Neural processing of spatial information: what we know about place cells and what they can tell us about presence. Presence 15, 485–499. doi:10.1162/pres.15.5.485

Buckley, C. E., Kavanagh, D. O., Traynor, O., and Neary, P. C. (2014). Is the skillset obtained in surgical simulation transferable to the operating theatre? Am. J. Surg. 207, 146–157. doi:10.1016/j.amjsurg.2013.06.017

Burger, J. M. (2009). Replicating Milgram – would people still obey today? Am. Psychol. 64, 1–11. doi:10.1037/a0010932

Cali, C., Baghabra, J., Boges, D. J., Holst, G. R., Kreshuk, A., Hamprecht, F. A., et al. (2015). Three-dimensional immersive virtual reality for studying cellular compartments in 3D models from EM preparations of neural tissues. J. Comp. Neurol. 524, Sc1. doi:10.1002/cne.23935

Çaliskan, O. (2011). Virtual field trips in education of earth and environmental sciences. Proc. Soc. Behav. Sci. 15, 3239–3243. doi:10.1016/j.sbspro.2011.04.278

Callieri, M., Chica, A., Dellepiane, M., Besora, I., Corsini, M., Moyés, J., et al. (2011). Multiscale acquisition and presentation of very large artifacts: the case of portalada. J. Comput. Cult. Herit. 3, 1–20. doi:10.1145/1957825.1957827

Carrozzino, M., and Bergamasco, M. (2010). Beyond virtual museums: experiencing immersive virtual reality in real museums. J. Cult. Herit. 11, 452–458. doi:10.1016/j.culher.2010.04.001

Casu, A., Spano, L. D., Sorrentino, F., and Scateni, R. (2015). “RiftArt: bringing masterpieces in the classroom through immersive virtual reality,” in Smart Tools and Apps for Graphics-Eurographics Italian Chapter Conference . Verona: Eurographics Association.

Cendan, J., and Lok, B. (2012). The use of virtual patients in medical school curricula. Adv. Physiol. Educ. 36, 48–53. doi:10.1152/advan.00054.2011

Cheetham, M., Pedroni, A. F., Antley, A., Slater, M., and Jáncke, L. (2009). Virtual Milgram: empathic concern or personal distress? Evidence from functional MRI and dispositional measures. Front. Hum. Neurosci. 3:29. doi:10.3389/neuro.09.029.2009

Cheong, R. (1995). The virtual threat to travel and tourism. Tour. Manag. 16, 417–422. doi:10.1016/0261-5177(95)00049-T

Chernyak, D., and Stark, L. W. (2001). Top-down guided eye movements. IEEE Trans. Syst. Man Cybern. B Cybern. 31, 514–522. doi:10.1109/3477.938257

Ch’ng, E. (2009). Experiential archaeology: is virtual time travel possible? J. Cult. Herit. 10, 458–470. doi:10.1016/j.culher.2009.02.001

Claessen, M. H., Van Der Ham, I. J., Jagersma, E., and Visser-Meily, J. M. (2015). Navigation strategy training using virtual reality in six chronic stroke patients: a novel and explorative approach to the rehabilitation of navigation impairment. Neuropsychol. Rehabil. 26, 822–846. doi:10.1080/09602011.2015.1045910

Codd, A. M., and Choudhury, B. (2011). Virtual reality anatomy: is it comparable with traditional methods in the teaching of human forearm musculoskeletal anatomy? Anat. Sci. Educ. 4, 119–125. doi:10.1002/ase.214

Cohen, O., Druon, S., Lengagne, S., Mendelsohn, A., Malach, R., Kheddar, A., et al. (2012). “fMRI robotic embodiment: a pilot study,” in 5th International Conference on Biomedical Robotics and Biomechatronic (Rome: IEEE), 314–319.

Cohen, O., Druon, S., Lengagne, S., Mendelsohn, A., Malach, R., Kheddar, A., et al. (2014a). fMRI-based robotic embodiment: controlling a humanoid robot by thought using real-time fMRI. Presence 23, 229–241. doi:10.1162/PRES_a_00191

Cohen, O., Koppel, M., Malach, R., and Friedman, D. (2014b). Controlling an avatar by thought using real-time fMRI. J. Neural Eng. 11, 035006. doi:10.1088/1741-2560/11/3/035006

Colcombe, S., and Kramer, A. F. (2003). Fitness effects on the cognitive function of older adults: a meta-analytic study. Psychol. Sci. 14, 125–130. doi:10.1111/1467-9280.t01-1-01430

Conn, C., Lanier, J., Minsky, M., Fisher, S., and Druin, A. (1989). “Virtual environments and interactivity: windows to the future,” in ACM SIGGRAPH Computer Graphics (Boston, MA: ACM), 7–18.

Connolly, M., Seligman, J., Kastenmeier, A., Goldblatt, M., and Gould, J. C. (2014). Validation of a virtual reality-based robotic surgical skills curriculum. Surg. Endosc. 28, 1691–1694. doi:10.1007/s00464-013-3373-x

Cook, D. A., Erwin, P. J., and Triola, M. M. (2010). Computerized virtual patients in health professions education: a systematic review and meta-analysis. Acad. Med. 85, 1589–1602. doi:10.1097/ACM.0b013e3181edfe13

Craig, C. (2013). Understanding perception and action in sport: how can virtual reality technology help? Sports Technol. 6, 161–169. doi:10.1080/19346182.2013.855224

Cruz-Neira, C., Sandin, D. J., and Defanti, T. A. (1993). “Surround-screen projection-based virtual reality: the design and implementation of the CAVE,” in SIGGRAPH ‘93 Proceedings of the 20th Annual Conference on Computer Graphics and Interactive Techniques (ACM), 135–142.

Cruz-Neira, C., Sandin, D. J., Defanti, T. A., Kenyon, R. V., and Hart, J. C. (1992). The CAVE: audio visual experience automatic virtual environment. Commun. ACM 35, 64–72. doi:10.1145/129888.129892

Curtis, L., Rea, W., Smith-Willis, P., Fenyves, E., and Pan, Y. (2006). Adverse health effects of outdoor air pollutants. Environ. Int. 32, 815–830. doi:10.1016/j.envint.2006.03.012

Cushman, L. A., Stein, K., and Duffy, C. J. (2008). Detecting navigational deficits in cognitive aging and Alzheimer disease using virtual reality. Neurology 71, 888–895. doi:10.1212/01.wnl.0000326262.67613.fe

Darken, R., and Goerger, S. R. (1999). The transfer of strategies from virtual to real environments: an explanation for performance differences? Simul. Series 31, 159–164.

Darken, R. P., Allard, T., and Achille, L. B. (1998). Spatial orientation and wayfinding in large-scale virtual spaces: an introduction. Presence 7, 101–107. doi:10.1162/105474698565604

Darley, J. M., and Latané, B. (1968). Bystander intervention in emergencies – diffusion of responsibility. J. Pers. Soc. Psychol. 8, 377–383. doi:10.1037/h0025589

Dawson, D. L. (2006). Training in carotid artery stenting: do carotid simulation systems really help? Vascular 14, 256–263. doi:10.2310/6670.2006.00045

de la Peña, N., Weil, P., Llobera, J., Giannopoulos, E., Pomés, A., Spanlang, B., et al. (2010). Immersive journalism: immersive virtual reality for the first-person experience of news. Presence 19, 291–301. doi:10.1162/PRES_a_00005

Dede, C., Salzman, M., Loftin, R. B., and Ash, K. (1997). Using Virtual Reality Technology to Convey Abstract Scientific Concepts [Online] . Available at: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.136.4289&rep=rep1&type=pdf

Denstadli, J. M., Gripsrud, M., Hjorthol, R., and Julsrud, T. E. (2013). Videoconferencing and business air travel: do new technologies produce new interaction patterns? Transport. Res. C Emerg. Technol. 29, 1–13. doi:10.1016/j.trc.2012.12.009

Dodds, T. J., Mohler, B. J., and Bülthoff, H. H. (2011). Talk to the virtual hands: self-animated avatars improve communication in head-mounted display virtual environments. PLoS ONE 6:e25759. doi:10.1371/journal.pone.0025759

Dunn, S., Woolford, K., Norman, S.-J., White, M., and Barker, L. (2012). “Motion in place: a case study of archaeological reconstruction using motion capture,” in Revive the Past: Proceedings of the 39th Conference in Computer Applications and Quantitative Methods in Archaeology (Southampton: Amsterdam University Press), 98–106.

Ehrsson, H. H. (2007). The experimental induction of out-of-body experiences. Science 317, 1048. doi:10.1126/science.1142175

Ehrsson, H. H. (2009). How many arms make a pair? Perceptual illusion of having an additional limb. Perception 38, 310. doi:10.1068/p6304

Ehrsson, H. H. (2012). “The concept of body ownership and its relation to multisensory integration,” in The New Handbook of Multisensory Processes , ed. B. E. Stein (Cambridge, MA: MIT Press), 775–792.

Ehrsson, H. H., Kito, T., Sadato, N., Passingham, R. E., and Naito, E. (2005). Neural substrate of body size: illusory feeling of shrinking of the waist. PLoS Biol. 3:e412. doi:10.1371/journal.pbio.0030412

Ellis, S. R. (1996). Presence of mind: a reaction to Thomas Sheridan’s “further musings on the psychophysics of presence”. Presence 5, 247–259. doi:10.1162/pres.1996.5.2.247

Ewert, D., Schuster, K., Johansson, D., Schilberg, D., and Jeschke, S. (2014). “Intensifying learner’s experience by incorporating the virtual theatre into engineering education,” in Automation, Communication and Cybernetics in Science and Engineering 2013/2014 , eds S. Jeschke, I. Isenhardt, F. Hees, and K. Henning (Cham: Springer International Publishing), 255–267.

Finkelstein, S., and Suma, E. (2011). Astrojumper: motivating exercise with an immersive virtual reality exergame. Presence 20, 78–92. doi:10.1162/pres_a_00036

Fischer, P., Krueger, J. I., Greitemeyer, T., Vogrincic, C., Kastenmüller, A., Frey, D., et al. (2011). The bystander-effect: a meta-analytic review on bystander intervention in dangerous and non-dangerous emergencies. Psychol. Bull. 137, 517–537. doi:10.1037/a0023304

Fisher, S. S., McGreevy, M., Humphries, J., and Robinett, W. (1987). “Virtual environment display system,” in Proceedings of the 1986 Workshop on Interactive 3D Graphics (Chapel Hill, NC: ACM), 77–87.

Foot, P. (1967). The problem of abortion and the doctrine of the double effect. Oxford Rev. 5, 187.

Fox, J., and Bailenson, J. N. (2009). Virtual self-modeling: the effects of vicarious reinforcement and identification on exercise behaviors. Media Psychol. 12, 1–25. doi:10.1080/15213260802669474

Frecon, E., Smith, G., Steed, A., Stenius, M., and Stahl, O. (2001). An overview of the COVEN platform. Presence 10, 109–127. doi:10.1162/105474601750182351

Frécon, E., and Stenius, M. (1998). DIVE: a scaleable network architecture for distributed virtual environments. Distrib. Syst. Eng. 5, 91. doi:10.1088/0967-1846/5/3/002

Freina, L., and Ott, M. (2015). “A literature review on immersive virtual reality in education: state of the art and perspectives,” in Proceedings of eLearning and Software for Education (eLSE), 2015 April 23–24 . Bucharest.

Friedman, D., Leeb, R., Guger, C., Steed, A., Pfurtscheller, G., and Slater, M. (2007). Navigating virtual reality by thought: what is it like? Presence 16, 100–110. doi:10.1162/pres.16.1.100

Friedman, D., Pizarro, R., Or-Berkers, K., Neyret, S., Pan, X., and Slater, M. (2014). A method for generating an illusion of backwards time travel using immersive virtual reality – an exploratory study. Front. Psychol. 5:943. doi:10.3389/fpsyg.2014.00943

Friedman, D., Steed, A., and Slater, M. (2007). “Spatial social behavior in second life,” in Intelligent Virtual Agents 7th International Conference, IVA 2007 Paris, Fran , Vol. 4722, eds C. Pelachaud, J-C. Martin, E. André, G. Chollet, K. Karpouzis, and D. Pelé (Berlin Heidelberg: Springer), 252–263.

Gaitatzes, A., Christopoulos, D., and Roussou, M. (2001). “Reviving the past: cultural heritage meets virtual reality,” in Proceedings of the 2001 Conference on Virtual Reality, Archeology, and Cultural Heritage (Athens: ACM), 103–110.

Gallagher, A. G., and Cates, C. U. (2004). Approval of virtual reality training for carotid stenting: what this means for procedural-based medicine. JAMA 292, 3024–3026. doi:10.1001/jama.292.24.3024

Gallagher, A. G., Ritter, E. M., Champion, H., Higgins, G., Fried, M. P., Moses, G., et al. (2005). Virtual reality simulation for the operating room: proficiency-based training as a paradigm shift in surgical skills training. Ann. Surg. 241, 364. doi:10.1097/01.sla.0000151982.85062.80

Galtung, J., and Ruge, M. H. (1965). The structure of foreign news: the presentation of the Congo, Cuba and Cyprus crises in four Norwegian newspapers. J. Peace Res. 2, 64–90. doi:10.1177/002234336500200104

Garcia, S. M., Weaver, K., Moskowitz, G. B., and Darley, J. M. (2002). Crowded minds: the implicit bystander effect. J. Pers. Soc. Psychol. 83, 843. doi:10.1037/0022-3514.83.4.843

Gavish, N., Gutiérrez, T., Webel, S., Rodríguez, J., Peveri, M., Bockholt, U., et al. (2015). Evaluating virtual reality and augmented reality training for industrial maintenance and assembly tasks. Interact. Learn. Environ. 23, 778–798. doi:10.1080/10494820.2013.815221

Gavish, N., Gutierrez, T., Webel, S., Rodriguez, J., and Tecchia, F. (2011). “Design guidelines for the development of virtual reality and augmented reality training systems for maintenance and assembly tasks,” in BIO Web of Conferences (EDP Sciences), 00029. doi:10.1051/bioconf/20110100029

Giannopoulos, E., Wang, Z., Peer, A., Buss, M., and Slater, M. (2011). Comparison of people’s responses to real and virtual handshakes within a virtual environment. Brain Res. Bull. 85, 276–282. doi:10.1016/j.brainresbull.2010.11.012

González-Franco, M., Peck, T. C., Rodríguez-Fornells, A., and Slater, M. (2013). A threat to a virtual hand elicits motor cortex activation. Exp. Brain Res. 232, 875–887. doi:10.1007/s00221-013-3800-1

Gould, N. F., Holmes, M. K., Fantie, B. D., Luckenbaugh, D. A., Pine, D. S., Gould, T. D., et al. (2007). Performance on a virtual reality spatial memory navigation task in depressed patients. Am. J. Psychiatry 164, 516–519. doi:10.1176/ajp.2007.164.3.516

Greenhalgh, C., and Benford, S. (1995). MASSIVE: a collaborative virtual environment for teleconferencing. ACM Trans. Comput. Hum. Interact. 2, 239–261. doi:10.1145/210079.210088

Greenwald, A. G., and Krieger, L. H. (2006). Implicit bias: scientific foundations. Calif. Law Rev. 94, 945–967. doi:10.2307/20439056

Greenwald, A. G., McGhee, D. E., and Schwartz, J. L. K. (1998). Measuring individual differences in implicit cognition: the implicit association test. J. Pers. Soc. Psychol. 74, 1464. doi:10.1037/0022-3514.74.6.1464

Greenwald, A. G., Poehlman, T. A., Uhlmann, E. L., and Banaji, M. R. (2009). Understanding and using the Implicit Association Test: III. Meta-analysis of predictive validity. J. Pers. Soc. Psychol. 97, 17. doi:10.1037/a0015575

Groom, V., Bailenson, J. N., and Nass, C. (2009). The influence of racial embodiment on racial bias in immersive virtual environments. Soc. Influence 4, 231–248. doi:10.1080/15534510802643750

Gustafson, P. (2012). Managing business travel: developments and dilemmas in corporate travel management. Tour. Manag. 33, 276–284. doi:10.1016/j.tourman.2011.03.006

Guterstam, A., Petkova, V. I., and Ehrsson, H. H. (2011). The illusion of owning a third arm. PLoS ONE 6:e17208. doi:10.1371/journal.pone.0017208

Gutierrez, D., Sundstedt, V., Gomez, F., and Chalmers, A. (2008). Modeling light scattering for virtual heritage. J. Comput. Cult. Herit. 1, 8. doi:10.1145/1434763.1434765

Guttentag, D. A. (2010). Virtual reality: applications and implications for tourism. Tour. Manag. 31, 637–651. doi:10.1016/j.tourman.2009.07.003

Guye-Vuilleme, A., Capin, T. K., Pandzic, S., Thalmann, N. M., and Thalmann, D. (1999). Nonverbal communication interface for collaborative virtual environments. Virtual Real. 4, 49–59. doi:10.1007/BF01434994

Hagberg, L., Lindahl, B., Nyberg, L., and Hellénius, M. L. (2009). Importance of enjoyment when promoting physical exercise. Scand. J. Med. Sci. Sports 19, 740–747. doi:10.1111/j.1600-0838.2008.00844.x

Hall, E. T. (1969). The Hidden Dimension. New York: Anchor Books.

Happa, J., Mudge, M., Debattista, K., Artusi, A., Gonçalves, A., and Chalmers, A. (2010). Illuminating the past: state of the art. Virtual Real. 14, 155–182. doi:10.1007/s10055-010-0154-x

Harcup, T., and O’Neill, D. (2001). What is news? Galtung and Ruge revisited. Journal. Stud. 2, 261–280. doi:10.1080/14616700118449

Hartley, T., Maguire, E. A., Spiers, H. J., and Burgess, N. (2003). The well-worn route and the path less traveled: distinct neural bases of route following and wayfinding in humans. Neuron 37, 877–888. doi:10.1016/S0896-6273(03)00095-3

Harvey, C. D., Collman, F., Dombeck, D. A., and Tank, D. W. (2009). Intracellular dynamics of hippocampal place cells during virtual navigation. Nature 461, 941–946. doi:10.1038/nature08499

Haslam, S. A., and Reicher, S. D. (2012). Contesting the “nature” of conformity: what Milgram and Zimbardo’s studies really show. PLoS Biol. 10:e1001426. doi:10.1371/journal.pbio.1001426

Hauser, M., Cushman, F., Young, L., Kang-Xing Jin, R., and Mikhail, J. (2007). A dissociation between moral judgments and justifications. Mind Lang. 22, 1–21. doi:10.1111/j.1468-0017.2006.00297.x

Hauser, M. D. (2006). Moral Minds: How Nature Designed Our Universal Sense of Right and Wrong . New York, NY: Ecco/HarperCollins.

Hauswiesner, S., Straka, M., and Reitmayr, G. (2011). “Free viewpoint virtual try-on with commodity depth cameras,” in Proceedings of the 10th International Conference on Virtual Reality Continuum and Its Applications in Industry (New York, NY: ACM), 23–30.

Heeter, C. (1992). Being there: the subjective experience of presence. Presence 1, 262–271. doi:10.1162/pres.1992.1.2.262

Held, R., and Hein, A. (1963). Movement-produced stimulation in the development of visually guided behavior. J. Comp. Physiol. Psychol. 56, 872. doi:10.1037/h0040546

Held, R. M., and Durlach, N. I. (1992). Telepresence. Presence 1, 109–112. doi:10.1162/pres.1992.1.1.109

Hershfield, H. E., Goldstein, D. G., Sharpe, W. F., Fox, J., Yeykelis, L., Carstensen, L. L., et al. (2011). Increasing saving behavior through age-progressed renderings of the future self. J. Mark. Res. 48, S23–S37. doi:10.1509/jmkr.48.SPL.S23

Hochberg, L. R., Bacher, D., Jarosiewicz, B., Masse, N. Y., Simeral, J. D., Vogel, J., et al. (2012). Reach and grasp by people with tetraplegia using a neurally controlled robotic arm. Nature 485, 372–375. doi:10.1038/nature11076

Hochberg, L. R., Serruya, M. D., Friehs, G. M., Mukand, J. A., Saleh, M., Caplan, A. H., et al. (2006). Neuronal ensemble control of prosthetic devices by a human with tetraplegia. Nature 442, 164–171. doi:10.1038/nature04970

Hodgetts, T., and Lorimer, J. (2015). Methodologies for animals’ geographies: cultures, communication and genomics. Cult. Geogr. 22, 285–295. doi:10.1177/1474474014525114

Hopkins, N., Reicher, S., Harrison, K., Cassidy, C., Bull, R., and Levine, M. (2007). Helping to improve the group stereotype: on the strategic dimension of prosocial behavior. Pers. Soc. Psychol. Bull. 33, 776–788. doi:10.1177/0146167207301023

Hwang, W.-Y., and Hu, S.-S. (2013). Analysis of peer learning behaviors using multiple representations in virtual reality and their impacts on geometry problem solving. Comput. Educ. 62, 308–319. doi:10.1016/j.compedu.2012.10.005

Ijsselsteijn, W., De Kort, Y., and Haans, A. (2006). Is this my hand I see before me? The rubber hand illusion in reality, virtual reality and mixed reality. Presence 15, 455–464. doi:10.1162/pres.15.4.455

Jensen, K., Ringsted, C., Hansen, H. J., Petersen, R. H., and Konge, L. (2014). Simulation-based training for thoracoscopic lobectomy: a randomized controlled trial. Surg. Endosc. 28, 1821–1829. doi:10.1007/s00464-013-3392-7

Jensen, M. T. (2014). Exploring business travel with work – family conflict and the emotional exhaustion component of burnout as outcome variables: the job demands–resources perspective. Eur. J. Work Organ. Psychol. 23, 497–510. doi:10.1080/1359432X.2013.787183

Jonas, J. B., Rabethge, S., and Bender, H. J. (2003). Computer-assisted training system for pars plana vitrectomy. Acta Ophthalmol. Scand. 81, 600–604. doi:10.1046/j.1395-3907.2003.0078.x

Jones, A. (2007). More than ‘managing across borders?’ The complex role of face-to-face interaction in globalizing law firms. J. Econ. Geogr. 7, 223–246. doi:10.1093/jeg/lbm003

Jost, J. T., Rudman, L. A., Blair, I. V., Carney, D. R., Dasgupta, N., Glaser, J., et al. (2009). The existence of implicit bias is beyond reasonable doubt: a refutation of ideological and methodological objections and executive summary of ten studies that no manager should ignore. Res. Organ. Behav. 29, 39–69. doi:10.1016/j.riob.2009.10.001

Kahana, M. J., Sekuler, R., Caplan, J. B., Kirschen, M., and Madsen, J. R. (1999). Human theta oscillations exhibit task dependence during virtual maze navigation. Nature 399, 781–784. doi:10.1038/21645

Kalivarapu, V., Macallister, A., Hoover, M., Sridhar, S., Schlueter, J., Civitate, A., et al. (2015). “Game-day football visualization experience on dissimilar virtual reality platforms,” in Proc. SPIE 9392, The Engineering Reality of Virtual Reality 2015 (San Francisco, CA: International Society for Optics and Photonics), 939202–939214.

Kampa, M., and Castanas, E. (2008). Human health effects of air pollution. Environ. Pollut. 151, 362–367. doi:10.1016/j.envpol.2007.06.012

Kastanis, I., and Slater, M. (2012). Reinforcement learning utilizes proxemics: an avatar learns to manipulate the position of people in immersive virtual reality. Trans. Appl. Percept. 9:3. doi:10.1145/2134203.2134206

Kateros, S., Georgiou, S., Papaefthymiou, M., Papagiannakis, G., and Tsioumas, M. (2015). A comparison of gamified, immersive VR curation methods for enhanced presence and human-computer interaction in digital humanities. Int. J. Herit. Digit. Era 4, 221–233. doi:10.1260/2047-4970.4.2.221

Kaufmann, H., Schmalstieg, D., and Wagner, M. (2000). Construct3D: a virtual reality application for mathematics and geometry education. Educ. Inform. Technol. 5, 263–276. doi:10.1023/A:1012049406877

Kilteni, K., Bergstrom, I., and Slater, M. (2013). Drumming in immersive virtual reality: the body shapes the way we play. IEEE Trans. Vis. Comput. Graph. 19, 597–605. doi:10.1109/TVCG.2013.29

Kilteni, K., Normand, J.-M., Sanchez-Vives, M. V., and Slater, M. (2012). Extending body space in immersive virtual reality: a very long arm illusion. PLoS ONE 7:e40867. doi:10.1371/journal.pone.0040867

Kim, J., Kim, H., Tay, B. K., Muniyandi, M., Srinivasan, M. A., Jordon, J., et al. (2004). Transatlantic touch: a study of haptic collaboration over long distance. Presence 13, 328–337. doi:10.1162/1054746041422370

King, T. J., Warren, I., and Palmer, D. (2008). “Would Kitty Genovese have been murdered in second life? Researching the “bystander effect” using online technologies,” in TASA 2008: Re-Imagining Sociology: The Annual Conference of the Australian Sociological Association (Melbourne: University of Melbourne), 1–23.

Kishore, S., González-Franco, M., Hintemüller, C., Kapeller, C., Guger, C., Slater, M., et al. (2014). Comparison of SSVEP BCI and eye tracking for controlling a humanoid robot in a social environment. Presence 23, 242–252. doi:10.1162/PRES_a_00192

Kishore, S., Navarro, X., Dominguez, E., De La Peña, N., and Slater, M. (2016). Beaming into the news: a system for and case study of tele-immersive journalism. IEEE Comput. Graph. Appl. doi:10.1109/MCG.2016.44

Kleinsmith, A., Rivera-Gutierrez, D., Finney, G., Cendan, J., and Lok, B. (2015). Understanding empathy training with virtual patients. Comput. Human Behav. 52, 151–158. doi:10.1016/j.chb.2015.05.033

Kober, S. E., Wood, G., Hofer, D., Kreuzig, W., Kiefer, M., and Neuper, C. (2013). Virtual reality in neurologic rehabilitation of spatial disorientation. J. Neuroeng. Rehabil. 10, 17. doi:10.1186/1743-0003-10-17

Koenig, S. T., Crucian, G. P., Dalrymple-Alford, J. C., and Dunser, A. (2009). Virtual reality rehabilitation of spatial abilities after brain damage. Stud. Health Technol. Inform. 144, 105–107. doi:10.3233/978-1-60750-017-9-105

Kokkinara, E., Kilteni, K., Blom, K. J., and Slater, M. (2016). First person perspective of seated participants over a walking virtual body leads to illusory agency over the walking. Sci. Rep. 6, 28879. doi:10.1038/srep28879

Kokkinara, E., and Slater, M. (2014). Measuring the effects through time of the influence of visuomotor and visuotactile synchronous stimulation on a virtual body ownership illusion. Perception 43, 43–58. doi:10.1068/p7545

Koleva, B., Taylor, I., Benford, S., Fraser, M., Greenhalgh, C., Schnadelbach, H., et al. (2001). “Orchestrating a mixed reality performance,” in CHI ‘01 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Seattle, WA), 38–45.

Kozlov, M. D., and Johansen, M. K. (2010). Real behavior in virtual environments: psychology experiments in a simple virtual-reality paradigm using video games. Cyberpsychol. Behav. Soc. Netw. 13, 711–714. doi:10.1089/cyber.2009.0310

Krueger, M. W. (1991). Artificial Reality II . Reading, MA: Addison-Wesley Professional.

Krueger, M. W., Gionfriddo, T., and Hinrichsen, K. (1985). “VIDEOPLACE – an artificial reality,” in ACM SIGCHI Bulletin (New York, NY: ACM), 35–40.

Krummel, T. M. (1998). Surgical simulation and virtual reality: the coming revolution. Ann. Surg. 228, 635–637. doi:10.1097/00000658-199811000-00002

Lackner, J. R. (1988). Some proprioceptive influences on the perceptual representation of body shape and orientation. Brain 111, 281–297. doi:10.1093/brain/111.2.281

Lanier, J. (2006). Homuncular flexibility. Edge 26, 2012. Available at: https://www.edge.org/response-detail/11182

Lanier, J. (2010). You Are Not a Gadget: A Manifesto . New York: Random House.

Lanman, D., and Luebke, D. (2013). Near-eye light field displays. ACM Trans. Graph. 32, 220. doi:10.1145/2508363.2508366

Latane, B., and Darley, J. M. (1968). Group inhibition of bystander intervention in emergencies. J. Pers. Soc. Psychol. 10, 215–221. doi:10.1037/h0026570

Latane, B., and Rodin, J. (1969). A lady in distress: inhibiting effects of friends and strangers on bystander intervention. J. Exp. Soc. Psychol. 5, 189–202. doi:10.1016/0022-1031(69)90046-8

Lawson, G., Salanitri, D., and Waterfield, B. (2016). Future directions for the development of virtual reality within an automotive manufacturer. Appl. Ergon. 53, 323–330. doi:10.1016/j.apergo.2015.06.024

Lécuyer, A., Lotte, F., Reilly, R. B., Leeb, R., Hirose, M., and Slater, M. (2008). Brain-computer interfaces, virtual reality, and videogames. Computer 41, 66–72. doi:10.1109/MC.2008.410

Leeb, R., Friedman, D., Muller-Putz, G. R., Scherer, R., Slater, M., and Pfurtscheller, G. (2007). Self-paced (asynchronous) BCI control of a wheelchair in virtual environments: a case study with a tetraplegic. Comput. Intell. Neurosci. 2007:79642. doi:10.1155/2007/79642

Leeb, R., Keinrath, C., Friedman, D., Guger, C., Scherer, R., Neuper, C., et al. (2006). Walking by thinking: the brainwaves are crucial, not the muscles! Presence 15, 500–514. doi:10.1162/pres.15.5.500

Leinen, P., Green, M. F., Esat, T., Wagner, C., Tautz, F. S., and Temirov, R. (2015). Virtual reality visual feedback for hand-controlled scanning probe microscopy manipulation of single molecules. Beilstein J. Nanotechnol. 6, 2148–2153. doi:10.3762/bjnano.6.220

Lenggenhager, B., Tadi, T., Metzinger, T., and Blanke, O. (2007). Video ergo sum: manipulating bodily self-consciousness. Science 317, 1096–1099. doi:10.1126/science.1143439

Leonardis, D., Frisoli, A., Barsotti, M., Carrozzino, M., and Bergamasco, M. (2014). Multisensory feedback can enhance embodiment within an enriched virtual walking scenario. Presence 23, 253–266. doi:10.1162/PRES_a_00190

Leonardis, D., Frisoli, A., Solazzi, M., and Bergamasco, M. (2012). “Illusory perception of arm movement induced by visuo-proprioceptive sensory stimulation and controlled by motor imagery,” in Haptics Symposium (HAPTICS), 2012 IEEE (Vancouver, BC: IEEE), 421–424.

Levine, M., and Crowther, S. (2008). The responsive bystander: how social group membership and group size can encourage as well as inhibit bystander intervention. J. Pers. Soc. Psychol. 95, 1429–1439. doi:10.1037/a0012634

Levine, M., Prosser, A., Evans, D., and Reicher, S. (2005). Identity and emergency intervention: how social group membership and inclusiveness of group boundaries shape helping behavior. Pers. Soc. Psychol. Bull. 31, 443–453. doi:10.1177/0146167204271651

Levoy, M., and Hanrahan, P. (1996). “Light field rendering,” in Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques (New York, NY: ACM), 31–42.

Levoy, M., Pulli, K., Curless, B., Rusinkiewicz, S., Koller, D., Pereira, L., et al. (2000). “The digital Michelangelo project: 3D scanning of large statues,” in Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques (New Orleans: ACM Press/Addison-Wesley Publishing Co.), 131–144.

Li, Y., Shark, L.-K., Hobb, S. J., and Ingham, J. (2010). “Real-time immersive table tennis game for two players with motion tracking,” in Information Visualisation (IV), 2010 14th International Conference (New York, NY: IEEE), 500–505.

Lin, H., Chen, M., Lu, G., Zhu, Q., Gong, J., You, X., et al. (2013). Virtual Geographic Environments (VGEs): a new generation of geographic analysis tool. Earth Sci. Rev. 126, 74–84. doi:10.1016/j.earscirev.2013.08.001

Lincoln, P., Welch, G., Nashel, A., Ilie, A., and Fuchs, H. (2009). “Animatronic shader lamps avatars,” in ISMAR 2009. 8th IEEE International Symposium on Mixed and Augmented Reality, 2009 (Orlando, FL), 27–33.

Llobera, J., Sanchez-Vives, M. V., and Slater, M. (2013). The relationship between virtual body ownership and temperature sensitivity. J. R. Soc. Interface 10, 1742–5662. doi:10.1098/rsif.2013.0300

Llobera, J., Spanlang, B., Ruffini, G., and Slater, M. (2010). Proxemics with multiple dynamic characters in an immersive virtual environment. ACM Trans. Appl. Percept. 8, 3. doi:10.1145/1857893.1857896

Loizides, F., El Kater, A., Terlikas, C., Lanitis, A., and Michael, D. (2014). “Presenting Cypriot cultural heritage in virtual reality: a user evaluation,” in Digital Heritage. Progress in Cultural Heritage: Documentation, Preservation, and Protection (Limassol: Springer), 572–579.

Loomis, J. M. (1992). Distal attribution and presence. Presence 1, 113–119. doi:10.1162/pres.1992.1.1.113

Loomis, J. M., Blascovich, J. J., and Beall, A. C. (1999). Immersive virtual environment technology as a basic research tool in psychology. Behav. Res. Methods Instrum. Comput. 31, 557–564. doi:10.3758/BF03200735

Lorello, G., Cook, D., Johnson, R., and Brydges, R. (2014). Simulation-based training in anaesthesiology: a systematic review and meta-analysis. Br. J. Anaesth. 112, 231–245. doi:10.1093/bja/aet414

Lovden, M., Schaefer, S., Noack, H., Bodammer, N. C., Kuhn, S., Heinze, H. J., et al. (2012). Spatial navigation training protects the hippocampus against age-related changes during early and late adulthood. Neurobiol. Aging 33, e629–e620. doi:10.1016/j.neurobiolaging.2011.02.013

Madary, M., and Metzinger, T. (2016). Real virtuality: a code of ethical conduct recommendations for good scientific practice and the consumers of VR-technology. Front. Robot. AI 3:3. doi:10.3389/frobt.2016.00003

Magnenat-Thalmann, N., Kevelham, B., Volino, P., Kasap, M., and Lyard, E. (2011). 3d web-based virtual try on of physically simulated clothes. Comput. Aided Des. Appl. 8, 163–174. doi:10.3722/cadaps.2011.163-174




Is Virtual Reality Bad for Our Health? Studies Point to Physical and Mental Impacts of VR Usage


During the early iterations of virtual reality headsets, there were cases of users reporting headaches, eye strain, and dizziness. Can VR have a long-term health impact?

The well-documented arrival of the Apple Vision Pro has heralded a new frontier for virtual reality technology. But could this brave new world of VR come at a cost to our physical and mental health? 

There’s plenty of impressive technology packed into the Apple Vision Pro, which the leading tech firm calls its “first spatial computer.” 1 With ultra-high-resolution displays boasting 23 million pixels in total, the Vision Pro represents the most sophisticated iteration of consumer virtual reality to date.

With Apple intent on setting a new standard in how consumers use VR in their everyday lives, it’s once again important to look at prolonged exposure to virtual reality and how it can impact our physical and mental well-being.

When the first consumer virtual reality headsets entered the market in the 2010s, there were cases of users reporting headaches, eye strain, dizziness, and nausea after use, as the devices attempted to offer immersive experiences that closely replicate our real-world senses. 2

These symptoms can be triggered by a response to the ‘VR illusion’, which makes users’ eyes focus on objects perceived to be far away when in actual fact they’re projected a matter of millimeters away on a display.

With this in mind, let’s take a look at what the emergence of the latest generation of virtual reality headsets may mean for the health of wearers, based on data recorded throughout global studies:

Measuring the Physical Impact of Virtual Reality

A 2020 research paper from the UK Department for Business, Energy & Industrial Strategy found that much existing research into the use of domestic VR systems focuses on cybersickness as an adverse effect of usage. 3

Cybersickness, which is acknowledged as a form of motion sickness, can generate physiological effects such as a loss of spatial awareness, nausea, dizziness, and disorientation. The paper also notes short-term effects including eye soreness and trouble focusing, impaired hand-eye coordination, altered depth perception, slowed reaction times, and a loss of balance.

When it comes to measuring the duration and strength of cybersickness, the paper states that such side effects can vary based on both the individual and the VR stimuli that they’re exposed to. 
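As an aside on how that strength is typically quantified, researchers often use the Simulator Sickness Questionnaire (SSQ) of Kennedy and colleagues (1993), in which participants rate 16 symptoms from 0 to 3 and the ratings are combined into weighted nausea, oculomotor, and disorientation subscores plus a total severity score. The SSQ is not mentioned in the report itself; the minimal Python sketch below of the standard scoring is included purely as an illustration, and the raw cluster sums in the example are hypothetical.

    def ssq_scores(nausea_raw, oculomotor_raw, disorientation_raw):
        # Weighted Simulator Sickness Questionnaire subscores and total severity,
        # using the standard Kennedy et al. (1993) weights. Inputs are the raw
        # (unweighted) sums of the 0-3 symptom ratings in each symptom cluster.
        return {
            "nausea": nausea_raw * 9.54,
            "oculomotor": oculomotor_raw * 7.58,
            "disorientation": disorientation_raw * 13.92,
            "total_severity": (nausea_raw + oculomotor_raw + disorientation_raw) * 3.74,
        }

    # Hypothetical raw cluster sums for a mildly affected user after a session:
    print(ssq_scores(nausea_raw=3, oculomotor_raw=4, disorientation_raw=2))

Higher scores indicate stronger cybersickness, which is why the same VR stimulus can produce very different scores across individuals.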

The notion that cybersickness can be triggered by the VR illusion appears to be corroborated by the 2022 paper ‘Augmented Reality and Mixed Reality Measurement Under Different Environments: A Survey on Head-Mounted Devices’, in which Hung-Jui Guo et al. noted that, in mixed reality environments, “they found both overestimation (at 25 to 200 meters) and underestimation (at 300 to 500 meters) of distances.” 4

In exploring the phenomenon of the VR illusion, researchers at the University of Leeds in the UK found that just 20 minutes of exposure to virtual reality could begin to impair children’s ability to judge the distance of objects. 5

These concerning early impacts on children’s physiological functioning have led organizations like the NSPCC to offer parents guidance on their children’s VR usage. 6

Addressing Cybersickness

Cybersickness was a particularly prevalent problem with older generations of virtual reality hardware.

“Older virtual reality technology had significant impacts on users and often the after-effects lasted up to 24 hours,” notes Shamus Smith, a researcher from the University of Newcastle, Australia, who had been exploring the impact of VR on the cognitive function of individuals. 7

In their 2020 research paper on the impact of virtual reality experiences on people’s reaction times, Smith and co-author Liz Burd found that many manufacturers of the time offered only very general information about cybersickness with their products, and little evidential data to support their guidelines. However, there’s some evidence that this may be changing.

Commenting on the launch of the Vision Pro, David Reid, professor of AI and spatial computing at Liverpool Hope University, highlighted that Apple has made some key adjustments to counter instances of cybersickness.

“It is better, but it still isn’t ideal. The main problem with VR motion sickness is Vergence-accommodation conflict (VAC),” Reid explained. “With Apple, they’ve tried to reduce the motion sickness as much as possible. By reducing lag and delay and utilizing high-quality displays, Apple has made a headset that is still best in class for motion sickness.” 8
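To make vergence-accommodation conflict concrete: the eyes converge on wherever a virtual object is rendered, but they must focus at the headset’s fixed optical focal plane, and the mismatch between those two distances is what strains the visual system. The short Python sketch below illustrates the geometry; the 63 mm interpupillary distance and the 1.3 m focal plane are assumptions chosen only for illustration, not specifications of any particular headset.

    import math

    def vergence_angle_deg(object_distance_m, ipd_m=0.063):
        # Angle between the two eyes' lines of sight when fixating an object
        # at the given distance (0.063 m is an assumed, typical interpupillary distance).
        return math.degrees(2 * math.atan((ipd_m / 2) / object_distance_m))

    def vac_diopters(object_distance_m, focal_distance_m=1.3):
        # Vergence demand (1 / distance to the virtual object) minus accommodation
        # demand (1 / distance to the fixed focal plane), in diopters. The 1.3 m
        # focal plane is an assumption; real headsets vary.
        return abs(1.0 / object_distance_m - 1.0 / focal_distance_m)

    # A virtual object rendered 0.4 m away on a headset with an assumed 1.3 m focal plane:
    print(f"{vergence_angle_deg(0.4):.1f} degrees of vergence")   # roughly 9.0
    print(f"{vac_diopters(0.4):.2f} diopters of conflict")        # roughly 1.73

The larger the diopter mismatch for nearby virtual objects, the greater the strain, which is one reason reducing lag and improving displays can ease motion sickness without removing VAC entirely.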


With Apple treating cybersickness and vergence-accommodation conflict (VAC) as issues to tackle head-on, consumers may be more emboldened to enjoy virtual reality with fewer short-term physiological effects, but it remains to be seen just how effective the Vision Pro will be in cutting out motion sickness among users altogether.

Addressing the Threat of Injury and Long-Term Ailments

While there’s little data available that addresses the long-term effects of virtual reality, there are concerns that prolonged exposure to screens at such short distances could cause more cases of myopia among wearers, with forecasts already anticipating that as much as 50% of the global population could be affected by the condition by 2050. 9

Myopia, or nearsightedness, is a vision condition that can worsen with sustained close-range screen exposure, and with limited options for halting its progression, it could reach epidemic proportions if VR devices emerge that give little consideration to the health of wearers. 10

It’s also essential to consider the impact that accidents can have on wearers of virtual reality headsets. “I see more falling than anything else,” noted Marientina Gotsis, associate professor of research at the Interactive Media and Games Division of the University of Southern California. “You can trip and hit your head or break a limb and get seriously hurt, so someone needs to watch over you when you are using VR. That’s mandatory.” 11

Analyzing the Psychological Impact of Exposure to VR

Worryingly, neurological studies in rats have shown that virtual reality can lead to significant alterations in brain activity.

According to a 2014 University of California study that tested rats in virtual reality environments, neurons in the hippocampus, a brain region associated with spatial learning, behaved entirely differently in virtual reality compared to the real world. Results showed that more than half of the neurons in that region shut down while in virtual reality. 12

While the study did not address the implications for humans, it pointed to a clear need for more research on the long-term effects of VR on human users.

The need for such research is pressing. According to 2019 figures from the French agency ANSES, the average virtual reality session lasts for over an hour, 13 and the same findings suggest that 12 to 13 is the most common age range for children’s VR exposure, with video games becoming a dominant pastime among users.

Could Virtual Reality Amplify Our Emotions?

The topic of video games as a popular use case of VR is significant. While we’re used to playing video games on television or computer screens, evidence suggests that the added immersive layers offered by virtual reality could lead to a far stronger imprint on our brains. 

A 1994 study on the impact of early virtual reality systems found that addictive gaming and virtual reality were leading to ‘elevated levels of aggression or hostile thoughts’, though at the time there was little evidence that VR amplified such emotions beyond what the 2D games of the era produced. 14

However, a 2020 journal article entitled ‘Virtual experience, real consequences: the potential negative emotional consequences of virtual reality gameplay’ found a correlation between virtual reality and amplified negative emotions among players. 15

In comparing virtual reality users and laptop users during gameplay scenarios, the study found that “intensified negative emotions resulting from VR had a significant positive correlation with negative rumination (i.e., harmful self-related thoughts related to distress).”

Given the all-encompassing immersive qualities of virtual reality, this may call for video game developers to reconsider how they adapt existing titles for VR environments, in light of the vastly different sensory reactions those titles could create among users.

In one case documented by Tech Monitor author Greg Noone, a user who frequently played extended sessions of the post-apocalyptic game Fallout 4 in virtual reality became so accustomed to his virtual surroundings that the boundary between VR and reality began to blur. 16

In discussing leaving the house, Noone’s interviewee claims that he would act as though he was still in a simulation. “I’m just saying things to myself like, ‘Oh, these graphics are really good,’” said the anonymized individual. “And, I’m pantomiming these things in VR, like hovering my hand over something to learn more about it.”

The individual also described problems reintegrating with the ‘real world’ after long gaming sessions, recalling of one meeting with friends that he was “just completely unable to hold a conversation.”

With the virtual reality market expected to nearly double to a value of $22 billion by 2025, we’re likely to see levels of public adoption for virtual reality headsets that bring the negative effects of such immersive products under far greater scrutiny. 17

As the market has been invigorated by the entrance of Apple into the race for VR dominance, it’s imperative for manufacturers to cater for the physical and psychological wellbeing of their customers. 

VR as a Force for Mindfulness?

Although there are certainly psychological impacts associated with the emergence of more immersive VR solutions that must be explored and addressed, it’s worth highlighting that research shows virtual reality can also be a force for good in fighting against depression in some users. 

In their research article ‘Social Virtual Reality (VR) Involvement Affects Depression When Social Connectedness and Self-Esteem Are Low: A Moderated Mediation on Well-Being’, Lee Hyun-Woo et al. acknowledged that excessive play can lead to adverse psychological effects, but the article also highlights that playing social games can be a means of boosting mental health and combating stress. 18

It’s the prospect of connecting with people online in social environments that could help virtual reality to become a force for mindfulness, particularly among users who live away from friends or family.

In a 2018 study, 19 77% of VR users said they wanted more social engagement in virtual reality. This clear desire to socialize and interact with others in VR environments represents an opportunity for developers to improve how users interact with one another and to mitigate the negative emotions that may arise from more complex and immersive video games.

The Road Ahead for Virtual Reality

Apple’s Vision Pro is unlikely to remain the tech giant’s most sophisticated effort in the virtual reality space, and over the coming years consumers can expect a range of high-end, high-performance virtual reality headsets that offer more immersive experiences and better measures to tackle cybersickness.

With these more intuitive products, the emphasis will switch to ensuring that users avoid the negative psychological effects of isolation and the impact of hyper-realistic gaming experiences. 

Although more social tools can help to promote healthy, meaningful engagement with friends and family in VR, the biggest challenge of the future will be striking a balance between healthy gaming habits and the intense sensory experiences that games leverage.

Walking the line between realism and customer wellness will become the balancing act that virtual reality firms must undertake to keep users safe and happy. For all the power of the Vision Pro, it may well be the welfare of customers that decides which VR device rules the market of the future.

1 “Introducing Apple Vision Pro.” 2023. Apple. https://www.apple.com/uk/newsroom/2023/06/introducing-apple-vision-pro/ .

2 Shields, Jon. n.d. “Are VR headsets bad for your health?” BBC Science Focus. Accessed July 11, 2023. https://www.sciencefocus.com/future-technology/are-vr-headsets-bad-for-your-health/ .

3 “The safety of domestic virtual reality systems.” 2020. GOV.UK. https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/923616/safety-domestic-vr-systems.pdf .

4 Guo, Hung-Jui. 2022. “Augmented Reality and Mixed Reality Measurement Under Different Environments: A Survey on Head-Mounted Devices.” arXiv:2210.16463.

5 Shields, Jon. n.d. “Are VR headsets bad for your health?” BBC Science Focus. Accessed July 11, 2023. https://www.sciencefocus.com/future-technology/are-vr-headsets-bad-for-your-health/ .

6 “Virtual Reality Headsets.” n.d. NSPCC. Accessed July 11, 2023. https://www.nspcc.org.uk/keeping-children-safe/online-safety/virtual-reality-headsets/ .

7 Smith, S.P., and E.L. Burd. 2020. “Response activation and inhibition after exposure to virtual reality.” Array.

8 Adorno, José, Chris Smith, Joe Wituschek, Andy Meek, Joshua Hawkins, and Jacob Siegal. 2023. “How Apple Vision Pro will prevent motion sickness.” BGR. https://bgr.com/tech/how-apple-vision-pro-will-prevent-motion-sickness/ .

9 Capogna, Laurie. 2023. “Is VR Bad for Your Eyes? | Niagara Falls.” Eye Wellness. https://myeyewellness.com/is-vr-bad-for-your-eyes/ .

10 “Low Vision Causes and Treatments.” n.d. Eyeglasses. Accessed July 11, 2023. https://blog.eyeglasses.com/vision-magazine/low-vision-causes-and-treatments/ .

11 LaMotte, Sandee. 2017. “Virtual reality has some very real health dangers.” CNN. https://edition.cnn.com/2017/12/13/health/virtual-reality-vr-dangers-safety/index.html .

12 Gent, Edd. 2016. “Are Virtual Reality Headsets Safe for Children?” Scientific American. https://www.scientificamerican.com/article/are-virtual-reality-headsets-safe-for-children/ .

13 “What are the risks of virtual reality and augmented reality, and what good practices does ANSES recommend?” 2021. Anses. https://www.anses.fr/en/content/what-are-risks-virtual-reality-and-augmented-reality-and-what-good-practices-does-anses .

14 Berger, Bennat. 2021. “The Psychological Implications of Virtual Reality.” HackerNoon. https://hackernoon.com/the-psychological-implications-of-virtual-reality .

15 Lavoie, R., Main, K., King, C. et al. Virtual experience, real consequences: the potential negative emotional consequences of virtual reality gameplay. Virtual Reality 25, 69–81 (2021). https://doi.org/10.1007/s10055-020-00440-y

16 Noone, Greg. 2022. “Is virtual reality bad for our mental health?” Tech Monitor. https://techmonitor.ai/technology/emerging-technology/is-virtual-reality-bad-for-mental-health .

17 Alsop, Thomas. 2023. “Virtual reality (VR) - statistics & facts.” Statista. https://www.statista.com/topics/2532/virtual-reality-vr/#topicOverview .

18 Lee, Hyun-Woo, Sanghoon Kim, and Jun-Phil Uhm. 2021. “Social Virtual Reality (VR) Involvement Affects Depression When Social Connectedness and Self-Esteem Are Low: A Moderated Mediation on Well-Being.” Frontiers in Psychology 12. https://www.frontiersin.org/articles/10.3389/fpsyg.2021.753019

19 Koetsier, John. 2018. “VR Needs More Social: 77% of Virtual Reality Users Want More Social Engagement.” Forbes. https://www.forbes.com/sites/johnkoetsier/2018/04/30/virtual-reality-77-of-vr-users-want-more-social-engagement-67-use-weekly-28-use-daily/?sh=72f676c818fc .

Contributors

Dmytro Spilka, CEO & Founder, Solvid (contributing author)



Unveiling the Evolution of Virtual Reality in Medicine: A Bibliometric Analysis of Research Hotspots and Trends over the Past 12 Years


1. Introduction

2. Materials and Methods
3.1. Distribution of Articles Using Publication Year
3.2. Countries or Regions
3.3. Institutions
3.4. Research Categories
3.5. Keywords
3.6. High-Impact Papers
4. Discussion
4.1. Principal Results
4.2. Research Hotspots
4.2.1. Clusters of Categories
4.2.2. Burst Keywords
4.3. Present Limitations of VR in Medicine
4.4. Limitations of This Bibliometric Study
5. Conclusions
Author Contributions
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest

  • Khemani, S.; Arora, A.; Singh, A.; Tolley, N.; Darzi, A. Objective Skills Assessment and Construct Validation of a Virtual Reality Temporal Bone Simulator. Otol. Neurotol. 2012 , 33 , 1225–1231. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Wang, J.; Suenaga, H.; Liao, H.; Hoshi, K.; Yang, L.; Kobayashi, E.; Sakuma, I. Real-time computer-generated integral imaging and 3D image calibration for augmented reality surgical navigation. Comput. Med. Imaging Graph. 2015 , 40 , 147–159. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Wang, J.; Suenaga, H.; Yang, L.; Kobayashi, E.; Sakuma, I. Video see-through augmented reality for oral and maxillofacial surgery. Int. J. Med. Robot. Comput. Assist. Surg. 2017 , 13 , e1754. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Gibby, J.T.; Swenson, S.A.; Cvetko, S.; Rao, R.; Javan, R. Head-mounted display augmented reality to guide pedicle screw placement utilizing computed tomography. Int. J. Comput. Assist. Radiol. Surg. 2019 , 14 , 525–535. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Léger, É.; Drouin, S.; Collins, D.L.; Popa, T.; Kersten-Oertel, M. Quantifying attention shifts in augmented reality image-guided neurosurgery. Healthc. Technol. Lett. 2017 , 4 , 188–192. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Elmi-Terander, A.; Skulason, H.; Söderman, M.; Racadio, J.; Homan, R.; Babic, D.; van der Vaart, N.; Nachabe, R. Surgical Navigation Technology Based on Augmented Reality and Integrated 3D Intraoperative Imaging: A Spine Cadaveric Feasibility and Accuracy Study. Spine 2016 , 41 , E1303–E1311. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Deng, X.-Y.; Liu, H.; Zhang, Z.-X.; Li, H.-X.; Wang, J.; Chen, Y.-Q.; Mao, J.-B.; Sun, M.-Z.; Shen, L.-J. Retinal vascular morphological characteristics in diabetic retinopathy: An artificial intelligence study using a transfer learning system to analyze ultra-wide field images. Int. J. Ophthalmol. 2024 , 17 , 1001–1006. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Xiao, L.; Wang, C.-W.; Deng, Y.; Yang, Y.-J.; Lu, J.; Yan, J.-F.; Peng, Q.-H. HHO optimized support vector machine classifier for traditional Chinese medicine syndrome differentiation of diabetic retinopathy. Int. J. Ophthalmol. 2024 , 17 , 991–1000. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Seo, J.; Laine, T.H.; Oh, G.; Sohn, K.-A. EEG-Based Emotion Classification for Alzheimer’s Disease Patients Using Conventional Machine Learning and Recurrent Neural Network Models. Sensors 2020 , 20 , 7212. [ Google Scholar ] [ CrossRef ]
  • Delvigne, V.; Wannous, H.; Dutoit, T.; Ris, L.; Vandeborre, J.-P. PhyDAA: Physiological Dataset Assessing Attention. IEEE Trans. Circuits Syst. Video Technol. 2022 , 32 , 2612–2623. [ Google Scholar ] [ CrossRef ]
  • Tai, Y.; Qian, K.; Huang, X.; Zhang, J.; Jan, M.A.; Yu, Z. Intelligent Intraoperative Haptic-AR Navigation for COVID-19 Lung Biopsy Using Deep Hybrid Model. IEEE Trans. Ind. Inform. 2021 , 17 , 6519–6527. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Tai, Y.; Zhang, L.; Li, Q.; Zhu, C.; Chang, V.; Rodrigues, J.J.P.C.; Guizani, M. Digital-Twin-Enabled IoMT System for Surgical Simulation Using rAC-GAN. IEEE Internet Things J. 2022 , 9 , 20918–20931. [ Google Scholar ] [ CrossRef ]
  • Minopoulos, G.M.; Memos, V.A.; Stergiou, C.L.; Stergiou, K.D.; Plageras, A.P.; Koidou, M.P.; Psannis, K.E. Exploitation of Emerging Technologies and Advanced Networks for a Smart Healthcare System. Appl. Sci. 2022 , 12 , 5859. [ Google Scholar ] [ CrossRef ]
  • Rus, G.; Andras, I.; Vaida, C.; Crisan, N.; Gherman, B.; Radu, C.; Tucan, P.; Iakab, S.; Al Hajjar, N.; Pisla, D. Artificial Intelligence-Based Hazard Detection in Robotic-Assisted Single-Incision Oncologic Surgery. Cancers 2023 , 15 , 3387. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Memos, V.A.; Minopoulos, G.; Stergiou, K.D.; Psannis, K.E. Internet-of-Things-Enabled Infrastructure Against Infectious Diseases. IEEE Internet Things Mag. 2021 , 4 , 20–25. [ Google Scholar ] [ CrossRef ]
  • Qu, J.; Zhang, Y.; Tang, W.; Cheng, W.; Zhang, Y.; Bu, L. Developing a virtual reality healthcare product based on data-driven concepts: A case study. Adv. Eng. Inform. 2023 , 57 , 102118. [ Google Scholar ] [ CrossRef ]
  • Winkler-Schwartz, A.; Bissonnette, V.; Mirchi, N.; Ponnudurai, N.; Yilmaz, R.; Ledwos, N.; Siyar, S.; Azarnoush, H.; Karlik, B.; Del Maestro, R.F. Artificial Intelligence in Medical Education: Best Practices Using Machine Learning to Assess Surgical Expertise in Virtual Reality Simulation. J. Surg. Educ. 2019 , 76 , 1681–1690. [ Google Scholar ] [ CrossRef ]
  • Zhao, J.; Lu, Y.; Zhou, F.; Mao, R.; Fei, F. Systematic Bibliometric Analysis of Research Hotspots and Trends on the Application of Virtual Reality in Nursing. Front. Public Health 2022 , 10 , 906715. [ Google Scholar ] [ CrossRef ]


| Rank | Country or Region | Counts | Centrality | H-Index |
| --- | --- | --- | --- | --- |
| 1 | USA | 461 | 0.41 | 43 |
| 2 | China | 329 | 0.08 | 26 |
| 3 | UK | 190 | 0.39 | 26 |
| 4 | Italy | 189 | 0.23 | 29 |
| 5 | Germany | 166 | 0.04 | 25 |
| 6 | Canada | 144 | 0.10 | 28 |
| 7 | Spain | 116 | 0.07 | 24 |
| 8 | France | 111 | 0.08 | 20 |
| 9 | Republic of Korea | 93 | 0.01 | 15 |
| 10 | Japan | 92 | 0.03 | 16 |
| Rank | Institution | Counts | H-Index | Country or Region |
| --- | --- | --- | --- | --- |
| 1 | University of London | 43 | 13 | UK |
| 2 | University of California System | 36 | 15 | USA |
| 3 | University of Toronto | 35 | 17 | Canada |
| 4 | Imperial College London | 33 | 12 | UK |
| 5 | University of Copenhagen | 33 | 13 | Denmark |
| 6 | Centre National de la Recherche Scientifique (CNRS) | 31 | 11 | France |
| 7 | Rigshospitalet | 27 | 11 | Denmark |
| 8 | University College London | 26 | 11 | UK |
| 9 | Harvard University | 26 | 13 | USA |
| 10 | Chinese Academy of Sciences | 26 | 9 | China |
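The country and institution rankings above report an H-index alongside raw publication counts and centrality. As a brief illustration of how that metric is derived from per-publication citation counts, here is a minimal Python sketch; the citation counts in the example are invented purely for illustration and do not correspond to any entry in the tables.

```python
# Minimal illustration of how an h-index is derived from per-publication
# citation counts. The counts below are invented purely for illustration.

def h_index(citation_counts):
    """Return the largest h such that h publications have at least h citations each."""
    ranked = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # -> 4: four papers have at least 4 citations each
```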
| Rank | Title of Citing Document | DOI | Times Cited | Subject | Interpretation of the Findings | Research Limitations |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | Using Technology to Maintain the Education of Residents During the COVID-19 Pandemic | 10.1016/j.jsurg.2020.03.018 | 502 | Training and education | Several innovative solutions using technology may help bridge the educational gap for surgical residents during the COVID-19 pandemic. | Further clinical study is needed to adequately demonstrate the usefulness of these techniques in maintaining resident education. |
| 2 | The Effectiveness of Virtual and Augmented Reality in Health Sciences and Medical Anatomy | 10.1002/ase.1696 | 421 | Training and education | This paper introduces both VR and AR as potent teaching tools, where student learning is as fruitful as using tablet-based applications. | 1. There is no significant difference in test scores between students learning with VR and with traditional methods. 2. Participants experienced side effects such as blurred vision and disorientation during the process, which affected the results. |
| 3 | The virtual reality head-mounted display Oculus Rift induces motion sickness and is sexist in its effects | 10.1007/s00221-016-4846-7 | 330 | Technology assessment | The results suggest that users of contemporary head-mounted display systems are at significant risk of motion sickness, and that these systems may be sexist in their effects. | The guiding significance of this study for clinical work is limited. |
| 4 | Extending the technology acceptance model to explore the intention to use Second Life for enhancing healthcare education | 10.1109/TMECH.2010.2090353 | 203 | Training and education | This study constructs a virtual ward for nursing students to practice rapid sequence intubation. | There are still many differences between the virtual simulation of intubation and practical clinical situations. |
| 5 | Learning Anatomy through Mobile Augmented Reality: Effects on Achievement and Cognitive Load | 10.1002/ase.1603 | 187 | Training and education | An AR-based anatomical learning program has been shown to be efficient in improving medical students’ learning performance and reducing cognitive load. | The sample size of this study is insufficient. |
| 6 | Direct manipulation is better than passive viewing for learning anatomy in a three-dimensional virtual reality environment | 10.1016/j.compedu.2016.12.009 | 186 | Training and education | Direct manipulation of an anatomical structure in a 3D VR program benefits anatomy learning, especially for participants with low spatial ability. | Participants with greater background knowledge were more capable of effectively using the system to learn, which exerts a great influence on experimental results. |
| 7 | Using virtual reality to characterize episodic memory profiles in amnestic mild cognitive impairment and Alzheimer’s disease: Influence of active and passive encoding | 10.1016/j.compedu.2012.05.011 | 184 | Clinical use | A VR test is more suitable for the early diagnosis and rehabilitation of patients with suspected Alzheimer’s disease than traditional (oral) memory tools currently available. | 1. The scope of the verified data is limited. 2. The clinical value of the VR test remains to be validated by clinical trials. |
| 8 | Development of a Hand-Assist Robot With Multi-Degrees-of-Freedom for Rehabilitation Therapy | 10.1016/j.neuropsychologia.2011.12.013 | 178 | Clinical use | A virtual reality (VR)-enhanced hand rehabilitation support system that enables patients to exercise alone. | There are still many unresolved issues regarding the further clinical application of this rehabilitation support system. |
| 9 | Augmented reality in medical education? | 10.2196/mental.7387 | 157 | Training and education | AR technology-supported learning makes ubiquitous, collaborative, and contextual learning possible. It provides a sense of presence, immediacy, and immersion that may benefit the learning process. | Further research with larger samples and valid measurements remains to be conducted. |
| 10 | Basic endovascular skills for trauma course: Bridging the gap between endovascular techniques and the acute care surgeon | 10.1097/TA.0000000000000310 | 147 | Training and education | A VR-based simulation for endovascular skill training. | Only 13 participants attended the test; the small sample size makes the results less convincing. |

Zuo, G.; Wang, R.; Wan, C.; Zhang, Z.; Zhang, S.; Yang, W. Unveiling the Evolution of Virtual Reality in Medicine: A Bibliometric Analysis of Research Hotspots and Trends over the Past 12 Years. Healthcare 2024 , 12 , 1266. https://doi.org/10.3390/healthcare12131266



Visions of the Internet in 2035

6. Altering “reality”


A considerable number of these experts focused their answers on the transformative potential of artificial intelligence (AI), virtual reality (VR) and augmented reality (AR). They say these digital enhancements or alternatives will have growing impact on everything online and in the physical world. This, they believe, is the real “metaverse” that indisputably lies ahead. They salute the possibilities inherent in the advancement of these assistive and immersive technologies, but also worry they can be abused – often in ways yet to be discovered. A number of respondents also predict that yet-to-be envisioned realms will arise.

Andrew Tutt, an expert in law and author of “ An FDA for Algorithms ,” predicted, “Digital spaces in the future will be so widely varied that there will not be any canonical digital space, just as there is no canonical physical space. A multitude of new digital spaces using augmented reality and virtual reality to create new ways for people to interact online in ways that feel more personal, intimate and like the physical world will likely arise. These spaces will provide opportunities to experience the world and society in new and exciting ways. One imagines, for example, that in the future, digital classrooms could involve students sitting at virtual desks with a virtual teacher giving a lesson at the front of the room.

“There will be future digital concerts with virtually limitless capacity that allow people to watch their favorite bands in venues. And through AR and VR, people will take future ‘trips’ to museums that can only be visited today by buying plane tickets to fly half a world away. Unlike the experiences of today, which are tightly constrained by limitations of physical distance and space, these offer an opportunity to create a more engaged and interactive civil, political and artistic discourse. People are no longer prevented from meaningfully taking advantage of opportunities for education, entertainment and civic and political discourse. These opportunities will not eliminate the problems that digital spaces today confront.


“A fundamental reorientation around these new types of spaces – one in which we impose shared values for these types of shared spaces – will be necessary, but the multiplication of opportunities to interact with friends, neighbors and strangers across the world may have the salutary effect of helping us to be better citizens of these digital spaces and thereby improve them without necessarily changing the fundamental technologies and structures on which they rely.”

Mark Lemley, professor of law and director of the Stanford University program in Law, Science and Technology, said, “We will live more of our lives in more – and more realistic – virtual spaces.”

Mark Deuze, professor of media studies at the University of Amsterdam, the Netherlands, wrote, “The foundation of digital life in 2035 will be lived in a mixed or cross-reality in which the ‘real’ is intersected with and interdependent with multiple forms of augmented and virtual realities. This will make our experience of the world and ourselves in it much more malleable than it already is, with one significant difference: By that time, almost all users will have grown up with this experience of plasticity, and we will be much more likely to commit to making it work together.”

Shel Israel, author of seven books on disruptive technologies and communications strategist to tech CEOs, responded, “By 2035 AR and VR are likely to fit into fashionable headsets that look like everyday eyeglasses and will be the center of our digital lives. Nearly all digital activities will occur through them rather than desktop computers or mobile devices. We will use them for shopping and to virtually visit destinations before we book travel. They will scan our eyes for warnings of biologic anomalies and health concerns. We will see the news and attend classes and communicate through them. By 2045, these glasses will be contact lenses and by 2050, they will be nanotech implants. This will be mostly good, but there will be considerable problems and social issues caused by them. They will likely destroy our privacy, they will be vulnerable to hacking and, by then, they could possibly be used for mind control attempts.

“Positives for 2035:

  • Medical technology will prolong and improve human life.
  • Immersive technology will allow us to communicate with each other through holograms that we can touch and feel, beyond simple Zoom chatting or phoning.
  • Most transportation will be emissions-free.
  • Robots will do most of our unpleasant work, including the fighting of wars.
  • Tech will improve the experience of learning.

“On the dark side of 2035:

  • Personal privacy will be eradicated.
  • The cost of cybercrime will be many times worse than it is today.
  • Global warming will be worse.
  • The computing experience will bombard us with an increasing barrage of unwanted messages.”

Jamais Cascio, distinguished fellow at the Institute for the Future, shared this first-person 2035 scenario: “Today, I felt like a frog so I became one. Well, virtually, of course. I adjusted my presentation avatar (my ‘toon’) to give me recognizably ranidaean features. Anyone who saw me through mixed-reality lenses – that is, pretty much everybody at this point – would see this froggy version of me. I got a few laughs at the taqueria I went to for lunch. It felt good, man. My partner, conversely, had a meeting in which she had to deal with a major problem and (worse) she had to attend physically. To fit her mood, she pulled on the flaming ballgown I had purchased for her a few years ago. The designer went all-out for that one, adding in ray-tracing and color sampling to make sure the flames that composed the dress properly illuminated the world around her from both her point of view and the perspective of observers. She said that she felt as terrifying as she looked. When mixed reality glasses took off late in the 2020s, most pundits paid attention to the opportunity they would give people to remix and recreate the world around them. Would people block out things they didn’t want to see? Would they create imaginary environments and ignore the climate chaos around them? Turned out that what people really wanted to do was wear elaborate only-possible-in-the-virtual-world fashion. Think about it: what was the big draw for real money transactions in online games? Skins – that is, alternative looks for your characters. It’s not hard to see how that could translate into the non-game world. You want to be a frog? Here are five dozen different designs under Creative Commons and another several hundred for prices ranging from a dollar to a thousand dollars. You want to look serious and professional? This outfit includes a new virtual hairstyle, too. We sometimes get so busy trying to deal with the chaos of reality that we sometimes forget that the best way to handle chaos is to play.”

The founder and director of a digital consultancy predicted, “AR and VR technologies will do more to bring us together, teach us about distant places, cultures and experiences and help us become healthier through virtual diagnostics and digital wellness tools. I suppose what I’m really envisioning is a future where the entities that provide digital social services are reoriented to serve users rather than shareholders; a new class of not-for-profit digital utilities regulated by an international network of civic-minded experts. I would like to envision a digital future where we assemble around communities – geographical or interest-based – that provide real support and a plethora of viewpoints. This is really more of a return to the days before Facebook took over the social web and development from there.”

A leading professor of legal studies and business ethics responded, “The expectation that persistent metaverse experiences will be more widespread by 2035 isn’t a prediction, it is a certainty given current development and investment trends. I have wonderful experiences in the digital space of World of Warcraft, which started in 2004. With the huge investment in metaverse platforms, I expect that more people will have that kind of social experience, extending beyond it simply being used for gaming. But that doesn’t mean that digital life will be better or worse on average for 8 billion people in the world.”

A distinguished scientist and data management expert who works at Microsoft said, “In 2035 there will be more ‘face-to-face’ (‘virtual,’ but with a real feel) discussion in digital spaces that opens people’s minds to alternative viewpoints.”

Sam Punnett, retired owner of FAD Research, said, “A better world online would involve authenticated participants. It isn’t too far-fetched to imagine that 15 years from now we will have seen a broad adoption of VR interfaces with a combination of gesture and voice control. After many years of two-dimensional video representation and its interfaces, technology and bandwidth will advance to a point where the VR gesture/voice interface will represent ‘new and improved.’ Watch the gaming environments for more such advances in interface and interaction, as gaming most always leads invention and adoption.”

Seth Finkelstein, principal at Finkelstein Consulting and Electronic Frontier Foundation Pioneer Award winner, commented, “If virtual reality improves akin to the way that video conferencing has improved, VR gaming will be awesome. We have the ‘Star Trek’ communicator now (with mobile phones). If better sensing of body movement was combined with additional advances in head-mounted display and audio, we’d have something like a primitive ‘Star Trek’ holodeck.”

A professor of computer science and entrepreneur wrote a hopeful, homey vignette: “Wearing augmented-reality hardware, a child is learning by doing while moving – launching a rocket, planting a tree, solving an animal-enclosure puzzle in a virtual zoo. In the next room, a sitting parent is teaming with colleagues across the globe to design the next version of a flying car. Grandpa downstairs is baking cookies from the porch Adirondack chair by controlling – via a tablet and instrumented gloves – a couple of chef-robots in the kitchen. While Grandma, from an adjacent chair, is interacting with a granddaughter who lives across the country via virtual-reality goggles.”

Victor Dupuis, managing partner at the UFinancial Group, shared a shopping scenario, writing, “You are buying a new car. You browse cars by using a personal Zoom-type video tech, then switch into a VR mode to take a test drive. After testing several EV cars from different manufacturers, you simultaneously negotiate the best possible price from many of them. You settle on a choice, handle a much more briefly-structured financial transaction, your car is delivered to your front door by drone truck and your trade-in vehicle leaves in the same way.

“Between now and 2035, digital spaces will continue to improve the methods and efficiencies of how we transact life. Financial decision making, information interpretation, major personal and home purchases, all will be handled more efficiently, resulting in reduced unit costs for consumers and the need for companies to plan on higher sales volume to thrive. On the negative side, we are eroding relationally because of an increased dependence on digital space for building relationships and fostering long-term connections. This will continue to erode the relationship aspect of human nature, resulting in more divorces and fractured relationships, and fewer deep and abiding relationships among us.”

Advances in AI can be crucial to achieving human goals

Alexa Raad, chief purpose and policy officer at Human Security and host of the TechSequences podcast, said, “In 2035 AI will increase access to a basic level of medical diagnostic care, mental health counseling, training and education for the average user. Advances in augmented and virtual reality will make access to anything from entertainment to ‘hands-on’ medical training on innovative procedures possible without restrictions imposed by our physical environment (i.e., geography). Advances in the Internet of Things and robotics will enable the average user to control many aspects of their physical lives online by directing robots (routine living tasks like cleaning, shopping, cooking, etc.). Advances in biometrics will help us manage and secure our digital identities.”

A foresight strategist based in Washington, D.C., predicted, “Probably the most significant change in ‘digital life’ in the next 14 years will be the geometric expansion of the power and ubiquity of artificial intelligence. I consider it likely that bots (writ large) will be responsible for generating an increasing portion of our cultural and social information, from entertainment media to news media to autonomous agents that attend our medical and psychosocial needs. Obviously, a lot can go right or wrong in this scenario, and it’s incumbent upon those of us who work in and with digital tech to anticipate these challenges and to help center human dignity and agency as AI becomes more pervasive and powerful.”

Peter B. Reiner, professor and co-founder of the National Core for Neuroethics at the University of British Columbia, proposed the creation of “Loyal AI,” writing, “As artificial intelligence comes to encroach upon more and more aspects of our lives, we need to ensure that our interests as humans are being well-served. The best way for this to happen would be the advent of ‘Loyal AI’ – artificially intelligent agents that put the interests of users first rather than those of the corporations that are developing the technology. This will require wholesale reinvention of the current rapacious business model of surveillance capitalism that pervades our digital lives, whether through innovation or government regulation or both. Such trustworthy AI might foster increased trust in institutions, paving the way for a society in which we can all flourish.”


Paul Epping, chairman and co-founder of XponentialEQ, predicted, “The way we think and communicate will change. Politics, as we know it today, will disappear because we will all be hyperconnected in a hybrid fashion: physically and virtually. Governments and politics have, in essence, been all about control. That will be different. Things will most likely be ‘governed’ by AI. Therefore, our focus should be on developing ‘good’ AI. The way we solve things today will not be possible in that new society. It has been said that first we create technology and then technology creates us. At that point, tech will operate on a direct cognitive level. Radical ‘neuroconnectivity’ has exponentially more possibilities than we can imagine today; our old brains will not be able to solve new problems anymore. Technology will create the science that we need to evolve.

“Digital spaces will live in us. Direct connectivity with the digital world and thus with each other will drive us to new dimensions of discovery of ourselves, our species and life in general (thus not only digital life). And it will be needed to survive as a species. Since I think that the technologies being used for that purpose are cheaper, faster, smaller and safer, everyone can benefit from it. A lot of the problems along the way will be solved and will have been solved, although new unknowns will brace us for unexpected challenges. E.g.: how will we filter information and what defines the ownership of data/information in that new digital space? Such things must be solved with the future capabilities of thinking in the framework of that time; we can’t solve them with our current way of thinking.”

Heather D. Benoit, a senior managing director of strategic foresight, wrote, “I imagine a world in which information is more useful, more accessible and more relevant. By 2035, AIs should be able to vet information against other sources to verify its accuracy. They should also be able to provide this information to consumers at the times that make the most sense based on time of day, activity and location. Furthermore, some information would be restricted and presented to each individual based on their preferences and communication style. I imagine we’ll all have our own personal AIs that carry out these functions for us, that we trust and that we consider companions of a sort.”

Sam Lehman-Wilzig, professor and former chair of communications at Bar-Ilan University, Israel, said, “I envision greater use of artificial intelligence by social media in ‘censoring’ out particularly egregious speech. Another possibility: Social media that divides itself into ‘modules’ in which some disallow patently political speech or other types of subject matter, i.e., social media modules that are subject-specific.”

An expert on the future of software engineering presented the following scenario: “A political operative writes a misleading story and attempts to circulate it via social media. By means of a carefully engineered network topology, it reaches trusted community members representing diverse views, and – with the assistance of sophisticated AI that helps to find and evaluate the provenance of the story and related information – the network determines that the story is likely a fabrication and damps its tendency to spread. The process and technology are very reliable and trusted across the political spectrum.”

Jerome Glenn, co-founder and CEO of The Millennium Project, predicted, “Personal AI/avatars will search the internet while we are asleep and later wake us up in the morning with all kinds of interesting things to do. They will have filtered out information pollution, distilling just the best for our own unique self-actualization by using blockchain and smart contracts.”

Dweep Chand Singh, professor and director/head of clinical psychology at Aibhas Amity University in India, said, “Communication via digital mode is here to stay, with an eventual addition of brain-to-brain transmission and exchange of information. Biological chips will be prepared and inserted in brains of human beings to facilitate communication without external devices. In addition, artificial neurotransmitters will be developed in neuroscience labs for an alternative mode of brain-to-brain communication.”

An ICANN and IEEE leader based in India proposed a potential future in which everything is connected to everything, writing, “Our lives, the lives of other humans linked to us and the lives of non-human entities (pets, garden plants, homes, devices and household appliances) will all be connected in ways that enhance the sharing of information in order for people to have more meaningful lives. We will be able to upload our thoughts directly to the internet and others will be able to download and experience them. The ‘thoughts’ (experiences, sensory information, states of mind) of other non-human entities will also be uploaded. Among these online thought-objects, there will also be ‘bots’ (AI thought entities), and the internet will become a bus for thoughts and awareness. This will lead to stunning emergent properties that could transform the human experience.”

A futures strategist and consultant warned, “Within the next 15 years, the AI singularity could happen. Humanity can only hope that the optimistic beliefs of Isaac Asimov will hold true. Even in the present day, some AI platforms that were developed in research settings have evolved into somewhat psychopathic personalities, for lack of a better description. We might, in the future, see AI forecasting events based on accumulated information and making decisions that could limit humanity in some facets of life. Many more jobs than present will be run and controlled by this AI, and major companies will literally jump at the nearly free workforce that AI will provide, but at what cost for humanity? We can only hope as we wait and see how this technology will play out. AI lacks the human element that makes us who we are: the ability to dream, to be illogical, to make decisions based on a ‘gut feeling.’ Society could become logic-based, as this is the perception that AI will base its decisions on. Humanity could lose its ability for compassion and, with that, for understanding.”



The reality of virtual reality

Benjamin Schöne

1 Experimental Psychology I, Institute of Psychology, Osnabrück University, Osnabrück, Germany

Joanna Kisker

2 Differential Psychology and Personality Research, Institute of Psychology, Osnabrück University, Osnabrück, Germany

Thomas Gruber

Sophia Sylvester

3 Semantic Information Systems Research Group, Institute of Computer Science, Osnabrück University, Osnabrück, Germany

Roman Osinsky

Associated Data

The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found below: The data sets generated and analyzed during the current study are available in the Open Science Framework (OSF) repository: https://osf.io/cu2pz/?view_only=6763bdce7219449184a0c25ee8f95055 .

Virtual reality (VR) has become a popular tool for investigating human behavior and brain functions. Nevertheless, it is unclear whether VR constitutes an actual form of reality or is more like an advanced simulation. Determining the nature of VR has mostly been achieved by self-reported presence measurements, defined as the feeling of being submerged in the experience. However, subjective measurements might be prone to bias and, most importantly, do not allow for a comparison with real-life experiences. Here, we show that real-life and VR height exposures using 3D-360° videos are mostly indistinguishable on a psychophysiological level (EEG and HRV), while both differ from a conventional 2D laboratory setting. Using a fire truck, three groups of participants experienced a real-life ( N = 25), a virtual ( N = 24), or a 2D laboratory ( N = 25) height exposure. Behavioral and psychophysiological results suggest that identical exogenous and endogenous cognitive as well as emotional mechanisms are deployed to process the real-life and virtual experience. Specifically, alpha- and theta-band oscillations, in line with heart rate variability, indexing vigilance and anxiety, were barely distinguishable between those two conditions, while they differed significantly from the laboratory setup. Sensory processing, as reflected by beta-band oscillations, exhibits a different pattern for all conditions, indicating further room for improving VR on a haptic level. In conclusion, the study shows that contemporary photorealistic VR setups are technologically capable of mimicking reality, thus paving the way for the investigation of real-world cognitive and emotional processes under controlled laboratory conditions. For a video summary, see https://youtu.be/fPIrIajpfiA .

1. Introduction

When first getting in touch with virtual reality (VR), one fundamental question arises immediately: How real is VR? The answer to that question will decide whether VR is merely an advanced form of computer technology, a next-generation PC, or whether there is a categorical difference between VR and conventional (immersive) media experiences. The answer to this question is crucial for the application of VR for scientific purposes and as a tool for learning and therapy.

However, at first glance, the question of whether VR is real seems redundant: the technology is labeled a type of “reality,” and most users describe an immersive experience. Surprisingly, there is little objective scientific evidence to support these introspective reports, and little reason to believe that mounting a sophisticated monitor to the forehead indeed leads to an experience that can be considered a form of reality. By contrast, 3D cinema, albeit highly immersive, is not considered to create a form of reality. Whether VR can actually be considered “real” is particularly difficult to answer, as there is no technique or experiment to directly measure its reality or to derive its ability to create a feeling of reality from its technical properties. The degree to which a VR experience is perceived as “real” cannot be quantified precisely, as this is a subjective question that depends on the individual’s perceptions. While various methods and techniques can be used to measure brain activity and behavior in relation to VR experiences, these do not directly provide insights into an individual’s subjective experience of reality per se . However, comparing neural patterns and behavior during VR experiences to those during real-life experiences can potentially provide insights into the nature of VR and how it is processed by the brain as opposed to real-life experiences.

Consequently, the benchmark for assessing VR simulation quality can only be the user’s phenomenal consciousness, i.e., the subjective experience of being aware of one’s thoughts, feelings, and perceptions. Specifically, the crucial point is whether the brain can create an alternative reality from the sensory input provided by the VR device so that the brain itself regards this reality as sufficiently real overruling and replacing the physical reality it is actually situated in. To put it differently, the question is whether the sensory input provided by a VR headset is sufficient to create a phenomenal consciousness that is similar to or even indistinguishable from one constructed from real-life sensory input. Contemporary approaches determining the validity of virtual experiences follow this line of thought. One key aspect of VR is the feeling of being physically present in a scene, called “presence” ( Sheridan, 1992 ; Slater et al., 1994 ).

Defining presence as the sensation of “being there” is the most common understanding of the concept, although its exact definition is widely debated ( Murphy and Skarbez, 2022 ). Finding a definitive definition of the concept is beyond the scope of this study; for more extensive elaborations on the topic, see the studies by Schubert et al. (2001) , Slater (2009) , Nilsson et al. (2016) , Slater (2018) , and Murphy and Skarbez (2022) . Nonetheless, three central factors are identified as particularly important in defining the feeling of reality in the present study. First, the sensory feed provided by the VR hardware comprises enough modalities of adequate quality to relocate the user to the virtual environment and to shield them from physical reality—a technical ability often referred to as “immersion” ( Nilsson et al., 2016 ). Especially regarding the visual sensory stream, VR offers a surrounding, stereoscopic, and thus enveloping representation of space ( Murphy and Skarbez, 2022 ). Second, to create a feeling of presence, the artificial virtual environment has to present itself as a coherent, plausible world following similar constraints as the real physical world in terms of realism ( Welch et al., 1996 ; Slater, 2009 ; Kothgassner and Felnhofer, 2020 ). Third, presence is closely tied to agency ( Piccione et al., 2019 ). A minimum requirement is that real head movement changes the viewing perspective in the virtual environment, thus physically locating the user in VR. The latter in particular invites users to visually explore the virtual scene, triggers sensations of awe ( Chirico et al., 2018 ), and lets them become immersed in the environment’s atmosphere ( Felnhofer et al., 2015 ). This self-reinforcing phenomenon allows the physical world to fade away.

Presence is typically a self-reported judgment, surveyed after a VR session using standardized questionnaires ( IJsselsteijn et al., 2000 ; Riva et al., 2003 ), and is thus highly prone to individual differences that arise from, e.g., prior experience with computer games. Although presence gives a hint about how realistic an environment might have felt to the user, the questionnaires do not allow conclusions about how real the experience is compared to physical reality. Presence measures solely serve the purpose of comparing the qualities of different virtual environments to each other ( Usoh et al., 2000 ), not of assessing the simulation’s degree of reality.

Overcoming these limitations of subjective post-induction measurements, the behavior during the virtual experience provides valid insights into the simulation’s realism [ Freeman et al., 2000 ; Kisker et al., 2021a , see also Stoffregen et al. (2003) ]. Conscious, deliberate responses to a virtual environment [e.g., refusal to cross great heights, see Kisker et al. (2021a) ] and the instinctive reactions and automatic reflexes corresponding to real-life behavior [e.g., evasive action to physical objects or adaptations of posture, see Freeman et al. (2000) ] imply that the underlying psychological mechanisms in VR resemble those under real-life conditions. This holds true not only for observable behavior but also for sympathetic responses such as enhanced skin conductivity ( Gromer et al., 2019 ), heart rate ( Gorini et al., 2010 ; Chittaro et al., 2017 ; Peterson et al., 2018 ; Kisker et al., 2021a ), or any other way intense emotional responses might manifest [e.g., Kisker et al., 2021c , for a review on emotion elicitation using VR, see Bernardo et al. (2021) ]. Although behavior and psychophysiological reactions observed under immersive VR conditions have been interpreted as resembling real behavior ( Blascovich et al., 2002 ; Slater, 2009 ; Bohil et al., 2011 ; Lin, 2017 ; Kothgassner and Felnhofer, 2020 ; Kisker et al., 2021a ), scientific certainty can only be reached when the behavioral and physiological reactions to a virtual scene are directly compared to those markers observed in the real environment that the scene mimics.

Determining VR’s degree of reality requires a triad: not only does the potential overlap between virtual and physical reality need to be scientifically quantified, but both also need to be distinguished from conventional 2D computer simulations. Otherwise, it remains unclear which behavioral and physiological markers are hallmarks of realistic psychological functioning (see, e.g., Stoffregen et al., 2004 ). Only a few studies compare either VR or conventional setups with physical reality, or even compare all of them in a triad. Furthermore, most studies that investigate VR in direct comparison to physical reality have a practical background and are based on clinical applications, foremost exposure therapies ( Krijn et al., 2004 ; Gorini et al., 2010 ; Oing and Prescott, 2018 ; Boeldt et al., 2019 ). Although the use of VR as a therapeutic tool provides strong evidence that VR applications result in effective changes in real-life behavior and cognition even over the long term ( Opriş et al., 2012 ), it needs to be considered that such findings are based on samples that tend to respond extremely to the respective aversive stimuli ( Cisler et al., 2010 ). Only recently have researchers ventured into classifying VR with regard to its degree of reality based on norm samples. For example, emotional reactions to 360° footage of an enjoyable landscape resemble reactions to its real-life counterpart ( Chirico and Gaggioli, 2019 ). Moreover, the perception of virtual height sufficiently evoked anxiety and reduced standing balance, not significantly differing from outcomes obtained from the equivalent real-life setup ( Simeonov et al., 2005 ). However, to the best of our knowledge, no previous study has put an immersive 3D-360° VR setup into perspective by comparing it to both an equivalent real-world setup and a conventional PC setup at the same time. Hence, the reality of VR currently occupies a theoretical “no-man’s land” that has to be marked out between the border areas of physical reality and conventional computer simulations.

1.1. Current study and hypothesis

The study at hand aims to identify reliable electrophysiological, physiological, and subjective markers that distinguish between different degrees of reality in an environment that was experienced in physical reality, virtual reality by means of photorealistic 3D-360° videos, or conventional media. As a result, the study allows us to conclude to what degree virtual reality can be regarded as real. A height exposure paradigm was chosen to juxtapose these levels of reality for the following reasons: Reactions to virtual height exposure have been of interest from the very beginning of VR research and can to some extent be considered a classical application ( Hodges et al., 1995 ; Rothbaum et al., 1995 ). Height exposure leverages the affordances of VR. The three-dimensionality instantly creates a strong sensation of height, triggering appropriate emotional and associated psychophysiological responses in most individuals. In other words, it is a proven, simple, almost classical way of eliciting stress responses in VR ( Asjad et al., 2018 ; Gromer et al., 2018 , 2019 ; Wolf et al., 2020 ; Kisker et al., 2021a ). Previous attempts to directly compare VR and physical reality used rather calming nature scenes with mixed results. Although some studies concluded that VR elicits the same emotions as their real counterparts ( Higuera-Trujillo et al., 2017 ; Chirico and Gaggioli, 2019 ), the latest meta-analysis states that reality surpasses virtual reality at least for positive emotions ( Browning et al., 2020 ).

To maintain experimental control under realistic conditions as much as possible and to minimize motion artifacts that could significantly affect the quality of the EEG data, the method of choice was a fire truck bringing the participants up to a height of 33 m ( Figure 1 ). The experimental conditions vary only slightly among the participants: movements and the viewing gaze are restricted by the firefighter basket, and a fixed time course consisting of an ascending phase, a maximum height, and a descending phase ensures high comparability of the experiences and thus high data quality. To recreate this experience in VR and under conventional screen conditions, the ride in the firefighter basket was recorded several times with a VR video camera capable of stereoscopic 360° images (for details see Section “2. Materials and methods”). Stereoscopic 360° videos have previously been proven to be a viable option for creating believable virtual environments ( Serino and Repetto, 2018 ) and were successfully utilized in scientific experiments ( Higuera-Trujillo et al., 2017 ; Serino and Repetto, 2018 ; Schöne et al., 2019 , 2020 ; Kisker et al., 2021b ). Although the users’ location is fixed within the environment, it can be explored via head movements. Both the VR and the conventional PC setups were designed identically except for the form of visual presentation: in the VR version, the height exposure was mediated by a head-mounted display (HMD) depicting the fire basket video, whereas the very same video was mapped onto a 2D screen for the conventional PC experience. To render the virtual experience as authentic as possible with respect to the haptic and proprioceptive dimensions, a replica of the fire basket was built and complemented by unsteady ground and wind simulation. This mixed reality approach, aligning the visual impression of the height exposure with stereoscopic sound and haptics, is hypothesized to enhance immersion.

Figure 1. Photographs from the real-life setting: (1) Basket on the ground at the starting point. (2) Basket with firefighter and participant declining. (3) Basket at a maximum height of 33 m/108 feet. (4) Basket on the ground with firefighter and Insta 360 Pro VR camera instead of a participant. Photos taken by BS.

The rationale of this approach might at first glance seem at odds with the study’s aim to delimit VR settings from conventional screen experiences, but it takes up a central thought about the nature of VR. In a devil’s advocate approach, all aspects of the experience which might factor into the sensation of reality were held constant—except the visual VR impression. The PC condition’s big screen occupying most of the participant’s field of view leads to low-level immersion [sometimes even referred to as desktop-VR, see Smith (2019) ], and looking around the video via the keyboard, in turn, facilitates agency. Alternatively, the participants would sit in front of a much smaller monitor and passively watch the video. Yet, it would then remain uncertain which of the many differences between VR and 2D screen presentation would account for the differences in the perception of reality. The proposed setup, however, allows us to pinpoint the difference to the HMD. Conversely, if no measurable difference between VR and 2D were to arise, which given this devil’s advocate setup is much more likely, it would be hard to argue for a categorical difference between VR and conventional computer simulations with respect to their level of reality.

Following the general narrative of VR, we hypothesized that the difference in perceived realism should primarily manifest in a feeling of physical and emotional involvement, with reality being the most stressful experience and 2D the least stressful one. VR was hypothesized to lie somewhere between the two poles of this continuous scale of reality.

On a subjective level, self-reporting questionnaires should indicate stronger presence and emotional involvement along this scale. Cardiovascular and electrophysiological measurements serve as objective markers. Heart rate variability (HRV) denotes the time variance between heartbeats ( Shaffer and Ginsberg, 2017 ), and as it is controlled by the autonomic nervous system (ANS), it is a reliable indicator of resilience to stress and of behavioral and emotional adaptability. Whereas a low HRV is interpreted as experiencing distress, a higher HRV indicates states of relaxation. The ANS modulates the intervals between two consecutive R-peaks of the QRS complex (RR interval) through its parasympathetic and sympathetic branches. Specifically, during mental stress, a shift in the autonomic balance toward sympathetic activation and parasympathetic withdrawal occurs ( Castaldo et al., 2015 ). The HRV is predominantly vagally controlled; on the physiological side, high-frequency oscillations reflect respiratory influences, low-frequency HRV indexes blood pressure control mechanisms, and very low-frequency HRV relates to kidney functions ( Reyes del Paso et al., 2013 ).

Among other biomarkers such as hormone secretion, vasoconstriction of blood vessels, and increased blood pressure, HRV reflects the “fight-or-flight” reaction ( Taelman et al., 2008 ). High heart rate variability is associated with resilience to stress and behavioral flexibility, while low HRV is associated with an increased stress response ( Castaldo et al., 2015 ; An et al., 2020 ). Within the context of this paradigm, HRV should indicate whether the level of stress elicited by the VR setting more closely resembles the stress elicited by the real setting or the feeling of safety of the 2D setting.

The power of cortical alpha- (8–12 Hz), theta- (4–7 Hz), and lower beta-band (15–20 Hz) oscillations was selected as an electrophysiological marker for further investigation. Alpha and theta power are sensitive to stress, arousal, and anxiety ( Schlink et al., 2017 ; Horvat et al., 2018 ; Klotzsche et al., 2018 ; Hofmann et al., 2019 ), albeit with different functional properties. Alpha-band activity is associated with tonic alertness ( Sadaghiani et al., 2010 ) and the maintenance of attentional resources ( Klimesch, 2012 ); it is thus believed to reflect unspecific alertness and general vigilance ( Davies and Krkovic, 1965 ) and correlates with anxiety ( Knyazev et al., 2004 ). Overall, alpha functions as a promising marker for emotional arousal processing ( Luft and Bhattacharya, 2015 ; Klotzsche et al., 2018 ). Whereas variations in alpha indicate exogenously oriented processing, i.e., of threat-related external stimuli, theta serves as a marker for endogenous processes. Theta is mostly associated with cognitively demanding processes ( Klimesch, 1999 ), primarily encoding and related working memory processes ( Jensen and Tesche, 2002 ; Sauseng et al., 2004 ). However, theta also varies as a function of emotional or affective involvement and indicates the cognitive processing of emotional stimuli. It might thus index emotion regulation ( Ertl et al., 2013 ; Uusberg et al., 2014 ) or cognitive anxiety, i.e., perseverative worrying thoughts and feelings ( Suetsugi et al., 2000 ; Gold et al., 2013 ; Moran et al., 2014 ; Putman et al., 2014 ; Lee et al., 2017 ). As an increase in theta has specifically been associated with the ability to reduce emotional reactions ( Ertl et al., 2013 ), it might reflect active emotion regulation during the height exposure (see Grunwald et al., 2014 ). Whereas the alpha-band reflects the exogenous cortical response to the emotional arousal of the height exposure, theta-band oscillations index the endogenous regulation of the experience. As mentioned earlier, VR generally has the ability to elicit strong emotions and especially reflex-like fear responses. It is therefore all the more important to assess whether emotional reactions in VR, here negative arousal induced by height, are regulated like emotions elicited under real-life conditions or like emotions elicited by screen presentations.

Both alpha- and theta-band oscillations have proven effective for evaluating emotional arousal and stress in VR setups ( Schlink et al., 2017 ; Klotzsche et al., 2018 ; Hofmann et al., 2019 ). Furthermore, beta-band oscillations have also been associated with numerous emotional processes ( Bos, 2006 ; Schubring and Schupp, 2021 ). A considerably less recognized functional property, more relevant for this study, is that the beta-band also reflects somatosensory processing ( Cheyne et al., 2003 ), sensory gating ( Buchholz et al., 2014 ), and motor planning and execution ( Campus et al., 2012 ). The beta-band even distinguishes between pleasant and unpleasant tactile stimulation on a single-trial basis with high accuracy ( Singh et al., 2014 ). Thus, it might effectively contribute to gaining insights into the quality and valence of the somatosensory information within this study’s context. In particular, it comes close to an objective marker of the otherwise subjective feeling of physical presence, one of the vital factors underlying VR.

In summary, our study aims at determining to what degree virtual reality simulates real-life experiences by comparing a virtual height exposure to a corresponding real-life setup and a 2D monitor setup. The foremost benchmark was whether the cognitive and emotional processes deployed by the brain to process the virtual experience resemble those in the real-life or the monitor condition. Specifically, we investigated objective markers of emotional arousal (i.e., tonic alertness or vigilance), its regulation, and haptic processing by means of EEG and HRV.

2. Materials and methods

2.1. Participants

Seventy-nine participants were recruited from Osnabrück University. The study was conducted in accordance with the Declaration of Helsinki and approved by the local ethics committee of Osnabrück University. Participants gave their written informed consent. They were screened for psychological and neurological disorders using a standard screening (anamnesis) and were excluded from participation if they suffered from a respective condition (e.g., affective disorders or epilepsy). All had normal or corrected-to-normal vision. Five participants were excluded from the analysis due to insufficient data quality ( n = 2) or technical problems during the ride in the fire truck’s basket ( n = 3), resulting in a final sample size of N = 74 participants [Real-Life (RL) condition: n = 25, M age = 25.32, SD age = 4.28, 36.0% male, none diverse, 92.0% right-handed; Virtual Reality (VR) condition: n = 24, M age = 22.29, SD age = 2.82, 20.8% male, none diverse, 87.5% right-handed; and Computer (PC) condition: n = 25, M age = 22.52, SD age = 2.96, 12.0% male, none diverse, all right-handed]. Participants were rewarded with either partial course credit or 10€ for participation. As we recruited a random sample from the local student population, most of them psychology students, the sample is characterized by a high proportion of female students. It is well known that gender imbalance might cause systematic biases (see, e.g., Perez, 2019 ). For example, women are more likely than men to suffer from anxiety disorders or general experiences of fear (e.g., McLean and Anderson, 2009 ). Some studies report that women are more prone to motion sickness (e.g., Chattha et al., 2020 ) and cybersickness ( Shafer et al., 2017 ), while others do not find evidence for a gender bias in these factors [e.g., Wilson and Kinsela, 2017 ; Fulvio and Rokers, 2018 ; for review, see MacArthur et al. (2021) ]. Yet, we assume that the gender imbalance did not affect the results obtained from group comparisons, as we found no significant differences between groups before the height exposure, e.g., concerning trait anxiety, fear of height, avoidance of height, or positive and negative affect (see Section “3. Results”).

2.2. Conditions and setup

Participants were randomly assigned to one of three conditions: real-life (RL), virtual reality (VR), or computer screen (PC). The conditions were measured in a block-wise manner, as the measurements of the RL condition depended on the cooperating fire department’s schedule. However, the participants did not know in advance in which condition they would participate.

2.2.1. Real-life condition (RL)

Participants were elevated in the firefighter’s basket of a fire truck up to 33 m height at a relatively remote and quiet part of the university’s campus (see Figure 1 ; Coordinates 52°16′57.5″N 8°01′12.9″E; viewing direction north). To ensure the participants’ safety, they were accompanied by a firefighter in the basket and the fire truck was operated by another firefighter from the ground. Both firefighters wore their full uniforms. The firefighter who accompanied the participants in the basket was instructed to turn his back on the participants throughout the ride and not to speak to them. Participants were instructed not to speak to the firefighter unless they wanted to quit the experiment early. The measurements of the RL condition were performed on 4 days in total.

2.2.2. Virtual reality condition (VR)

The 3D-360° videos used for the VR condition were recorded with the Insta360 Pro VR camera (Insta360, Shenzhen, China) at a frame rate of 60 fps, 4k resolution, and with spatial sound. To cope with different weather conditions, wind force, changes in the environment, and time of day, the camera was set up for recording two to three times on each day of the RL condition. The elevation of the camera was performed exactly as with the participants: the camera was accompanied by the same firefighter in full uniform, who turned his back on the camera and did not speak at all. A total of 10 videos of the ride in the fire truck’s basket were recorded, two of which were not used further due to disturbed audio recording.

Participants in the VR condition were equipped with an HTC Vive Cosmos (HTC, Taoyuan, Taiwan) head-mounted display (HMD) which allows for a 3D-360° view and head tracking. The Cosmos HMD provides a resolution of 1440 × 1700 pixels per eye and spatial sound.

The videos were presented using the Simple VR video player (Simplevr.pro, Los Angeles, CA, USA). To enhance immersion and provide haptic feedback as similar as possible to the RL condition, participants were standing in a replica of the firefighter’s basket during the VR ride in the fire truck’s basket. The replica of the firefighter’s basket corresponded to the real basket in appearance and size and was positioned on unsteady ground. A fan was used to simulate wind.

2.2.3. Computer condition (PC)

The very same video recordings as for the VR condition were used but presented in 2D instead of 3D-360°. The videos were presented in full-screen mode using GoPro VR Player (GoPro Inc, San Mateo, CA, USA). Participants were able to look around the video using the arrow keys on a conventional keyboard. Participants were positioned in front of an 86″ wide smartboard (SMART SBID-7086-v2; SMART Technologies ULC, Calgary, AB, Canada). The smartboard provided a resolution of 4k (3840 × 2160 pixels) and a frame rate of 60 fps. Equivalent to the VR condition, participants were standing in the same replica of the firefighter’s basket on unsteady ground, and a fan was used for wind simulation. The replica was positioned at a distance of 150 cm from the smartboard, resulting in a horizontal viewing angle of 2 × 32.33°.
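The reported viewing angle can be reproduced approximately from the stated geometry. The snippet below is only a rough plausibility check, assuming a 16:9 aspect ratio for the 86″ (diagonal) smartboard; the exact panel dimensions were not reported, so the result is illustrative.

```python
import math

# Assumed geometry: 16:9 panel, 86-inch diagonal, 150 cm viewing distance.
diagonal_cm = 86 * 2.54                            # ~218.4 cm
width_cm = diagonal_cm * 16 / math.hypot(16, 9)    # ~190.4 cm for a 16:9 panel
distance_cm = 150

half_angle = math.degrees(math.atan((width_cm / 2) / distance_cm))
print(f"horizontal viewing angle ~ 2 x {half_angle:.1f} degrees")  # ~2 x 32.4 degrees
```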

2.3. Procedure

The experiment consisted of two appointments with a duration of approximately 75 min at the first and approximately 15 min at the second appointment. The ride in the fire truck’s basket was carried out during the first appointment. During the second appointment, the participants’ mood regarding the past ride in the fire truck’s basket was surveyed.

Before the ride in the fire truck’s basket (t0), participants filled in the German versions of the Positive and Negative Affect Schedule (PANAS, Krohne et al., 1996 ), the State-Trait-Anxiety-Inventory, trait version (STAI-T, Laux et al., 1981 ), the Acrophobia Questionnaire (AQ, Cohen, 1977 ), and the Sensation Seeking Scale Form-V (SSS-V, Zuckerman, 1996 ). The selection of the questionnaires was based on two previous studies investigating height in VR settings ( Biedermann et al., 2017 ; Kisker et al., 2021a ) and aimed to cover general affective responses, anxiety, and sensation seeking on the subjective level.

The mobile EEG and the ECG were applied by the experimenters (see Section “2.3.1. Electrophysiological recordings and preprocessing”). Afterward, the participants were led to the fire truck’s basket (real or replica). The participants had not seen either the real basket or the replica at any earlier point in time.

Participants were asked to stand still for 30 s facing the firetruck’s basket for ECG baseline measurement. The baseline of 30 s was chosen to determine the participants’ current state immediately before the ride in the basket. HRV is conventionally determined beginning at 5 min intervals, especially with respect to the diagnosis of cardiovascular disease ( Salahuddin et al., 2007 ). Yet, previous studies showed that 30 s of measurement are sufficient to determine current mental stress using HRV in mobile setups [ Cho et al., 2002 ; Salahuddin et al., 2007 ; for review, see Shaffer and Ginsberg (2017) ]. Even much shorter baselines have been sufficient for EEG frequency analyses in conventional EEG (e.g., Gruber et al., 2008 ; Friese et al., 2013 ), mobile EEG (e.g., Packheiser et al., 2020 , 2021 ; Lange and Osinsky, 2021 ), and combined VR-EEG studies (e.g., Kisker et al., 2021c ; Schöne et al., 2021 ). After the baseline interval, participants entered the basket and were asked not to walk around in the basket during the ride. All participants were facing the same direction and were asked to lay their hands on the handrail. For the VR condition, participants were equipped with the Vive Cosmos HMD, which showed the still image of the starting position of the ride in the fire truck’s basket similar to the RL condition. For the PC condition, the same still image of the starting position was projected onto the smartboard. Afterward, the ascend of the basket started (real, VR, or video). When the highest point of 33 m was reached, the firefighter stopped the ascend and held the position for 63 s on average before descending again. The duration of the total ride and the timing of significant events (e.g., start of ascending, see Section “2.3.1. Electrophysiological recordings and preprocessing”) were individually recorded for each participant. The ride took 7.26 min on average (ascend: 2.6 min; descend: 3.7 min; see Section “3. Results”).

Directly after the ride in the fire truck’s basket (t1), participants filled in the German versions of the Igroup Presence Questionnaire (IPQ, Schubert et al., 2001 ) and the PANAS.

On the third day after the ride in the fire truck’s basket (t2, second appointment), the participants were asked to return to the laboratory to fill in the PANAS again and were debriefed.

2.3.1. Electrophysiological recordings and preprocessing

For the EEG data acquisition, the mobile EEG system LiveAmp32 by Brain Products (Gilching, Germany) with active electrodes was used. The electrodes were applied in accordance with the international 10–20 system. An online reference (FCz) and a ground electrode (AFz) were applied. A threshold of 25 kΩ is recommended for the EEG system used, and thresholds of 25, 20, and 15 kΩ have been applied successfully in previous studies using the same mobile EEG equipment ( Burani et al., 2021 ; Lange and Osinsky, 2021 ; Liebherr et al., 2021 ; Visser et al., 2022 ). To ensure a good balance between signal quality and preparation time in our elaborate setting, the impedance of all electrodes was kept below 15 kΩ (see, e.g., Lange and Osinsky, 2021 ). The data were recorded with a sampling rate of 500 Hz and online band-pass filtered at 0.016–250 Hz. The following significant events of the ride in the fire truck’s basket were inserted as markers into the data based on the individual time measurement per participant: getting into the basket, starting the ascend, arriving at the highest point, starting the descend, and arriving at the starting position.

The EEG data were pre-processed using MATLAB (version R2020b, MathWorks Inc) and EEGLAB ( Delorme and Makeig, 2004 ). The data were segmented into four epochs based on the individual time measurement, covering baseline, ascending phase, staying at the highest point, and descending phase. Per segment, each electrode was detrended separately. The continuous data were filtered between 1 and 30 Hz to reduce slow drifts and high-frequency artifacts. The electrodes were re-referenced to the average reference for further off-line analysis as recommended for large electrode sets (see, e.g., Cohen, 2014 ).
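Purely as an illustration of these steps (filtering, average re-referencing, and segmentation into the four phases), an analogous pipeline can be sketched in Python with MNE; the study itself used MATLAB and EEGLAB, so this is not the authors' implementation. File names and the phase boundaries below are placeholders, not the study's actual values.

```python
import mne

# Placeholder file name; the study recorded with a Brain Products LiveAmp32 (BrainVision format).
raw = mne.io.read_raw_brainvision("participant01.vhdr", preload=True)

# Band-pass filter 1-30 Hz to attenuate slow drifts and high-frequency artifacts.
raw.filter(l_freq=1.0, h_freq=30.0)

# Re-reference to the common average, as recommended for larger electrode sets.
raw.set_eeg_reference("average")

# Hypothetical phase boundaries in seconds (baseline, ascend, highest point, descend);
# in the study these were derived from the individually recorded event markers.
phases = {"baseline": (0, 30), "ascend": (30, 180), "high": (180, 243), "descend": (243, 465)}
segments = {name: raw.copy().crop(tmin=t0, tmax=t1) for name, (t0, t1) in phases.items()}
```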

Artifact correction was performed per epoch by means of the “Fully Automated Statistical Thresholding for EEG artifact Reduction” (FASTER, Nolan et al., 2010 ). In brief, this procedure automatically detects and removes artifacts, such as blinks and white noise, based on statistical estimates for various aspects of the data, e.g., channel variance. FASTER has a high sensitivity and specificity for the detection of various artifacts and is described in more detail elsewhere ( Nolan et al., 2010 ). Due to the recommendations for the use of FASTER with 32 channels, independent component analysis (ICA) and channel interpolation were applied, whereas channel rejection and epoch interpolation were not applied.

To isolate theta-band (4–7 Hz), alpha-band (8–13 Hz), and lower beta-band (15–20 Hz) specific activity, each band power was calculated using a windowed fast Fourier transform (FFT). A Hamming window with a length of 1 s and 50% overlap between windows was applied. The mean FFT scores were calculated per electrode, logarithmized, and squared to determine the respective band power [ln(μV²)].
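Conceptually, averaging the spectra of overlapping 1 s Hamming windows corresponds to Welch's method. The sketch below (Python/NumPy/SciPy) illustrates that logic with the band limits from the text; the study's MATLAB implementation may differ in detail, e.g., in scaling or in the exact order of the logarithm and squaring steps.

```python
import numpy as np
from scipy.signal import welch

FS = 500  # sampling rate in Hz
BANDS = {"theta": (4, 7), "alpha": (8, 13), "beta": (15, 20)}

def band_power(eeg, fs=FS):
    """eeg: array of shape (n_channels, n_samples) in microvolts.
    Returns the log band power per channel for each frequency band."""
    # 1 s Hamming windows with 50% overlap, averaged across windows (Welch's method).
    freqs, psd = welch(eeg, fs=fs, window="hamming", nperseg=fs, noverlap=fs // 2, axis=-1)
    out = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs <= hi)
        out[name] = np.log(psd[:, mask].mean(axis=-1))  # log of the mean power within the band
    return out
```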

2.3.2. Cardiovascular measurements and preprocessing

A three-channel ECG was applied and transmitted to the mobile EEG system via a BIP2AUX adapter by Brain Products (Gilching, Germany). The electrodes were placed at the left collarbone, the right collarbone, and the lowest left costal arch. The ECG was recorded synchronously with the EEG data.

The ECG data were segmented into epochs corresponding to the EEG epochs using MATLAB and further preprocessed using Brain Vision Analyzer. The data were filtered between 5 and 45 Hz, and a notch filter (50 Hz) was applied. Automatic R-peak detection was applied and cross-checked by visual inspection. The classical HRV parameters standard deviation of RR intervals (SDRR) and root mean square of successive differences (rmSSD) were calculated per phase (baseline, ascend, highest point, and descend) using MATLAB. The HRV parameters differed significantly between groups during baseline (see Section “3. Results”). To cope with these baseline differences, the individual changes in SDRR and rmSSD relative to baseline were calculated for further analysis (delta = phase value − baseline value).
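SDRR and rmSSD follow directly from the detected R-peaks. A minimal sketch (Python/NumPy; RR intervals in milliseconds assumed, function names illustrative only):

```python
import numpy as np

def sdrr(rr_ms):
    """Standard deviation of RR intervals (ms)."""
    return np.std(rr_ms, ddof=1)

def rmssd(rr_ms):
    """Root mean square of successive RR-interval differences (ms)."""
    return np.sqrt(np.mean(np.diff(rr_ms) ** 2))

def baseline_delta(rr_phase_ms, rr_baseline_ms, metric):
    """Baseline correction as described in the text: delta = phase value - baseline value."""
    return metric(rr_phase_ms) - metric(rr_baseline_ms)
```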

2.4. Statistical analysis

The statistical analyses were carried out using SPSS version 26. All variables were tested for normal distribution regarding the separate groups using the Shapiro–Wilk test. All further statistical tests were chosen accordingly (see Supplementary Table 6 for a detailed report of the Shapiro–Wilk test).

2.4.1. Subjective measures

The scales of the questionnaires were calculated as sum scores. For the PANAS, the scores for positive and negative affect were calculated for the time points before the ride in the fire truck’s basket (t0), directly after the ride (t1), and 3 days after (t2). In addition, the change in affect was calculated as the difference between the pre-measurement and both post-measurements (change t1 = t1 − t0; change t2 = t2 − t0). PANAS, STAI, AQ, and IPQ were analyzed using the Kruskal–Wallis test, complemented by post hoc Mann–Whitney U-tests. The SSS-V was analyzed using a one-way ANOVA, complemented by post hoc t-tests.
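Purely as an illustration of this analysis logic (the study itself used SPSS), the omnibus test and the pairwise follow-ups can be sketched with SciPy; the group labels and data below are placeholders:

```python
from itertools import combinations
from scipy.stats import kruskal, mannwhitneyu

def omnibus_and_posthoc(groups):
    """groups: dict mapping condition name ('RL', 'VR', 'PC') to a 1-D array of scores."""
    h, p = kruskal(*groups.values())
    posthoc = {}
    if p < 0.05:  # follow up a significant omnibus test with pairwise comparisons
        for a, b in combinations(groups, 2):
            u, p_pair = mannwhitneyu(groups[a], groups[b], alternative="two-sided")
            posthoc[(a, b)] = (u, p_pair)
    return (h, p), posthoc
```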

2.4.2. Dependent measures

Duration of the ride: To ensure that all participants experienced the ride in the fire truck’s basket for the same duration, the timing of the ride was compared using the Kruskal–Wallis test, separately per phase as well as for the total ride.

Electrophysiological and cardiovascular measures: The EEG data and the HRV parameters were analyzed using the Kruskal–Wallis test, complemented by post hoc Mann–Whitney U-tests. The HRV parameters were analyzed with respect to baseline and three separate phases (ascend, highest point, and descend) for an in-depth analysis. For the EEG data, the topographies were visually inspected and the electrodes of interest were determined using a false discovery rate (FDR, Benjamini and Yekutieli, 2001 ) based approach: the Kruskal–Wallis test and the post hoc Mann–Whitney U-tests were carried out per electrode, frequency band, and phase of the ride. The individual critical p-value indicating significant differences between groups was determined for each electrode. For the further analysis, a whole-brain analysis was performed, i.e., all electrodes were included in the mean values for statistical analysis, as the FDR approach indicated significant differences between groups at the vast majority of electrodes. We successfully implemented a comparable analysis protocol for combined VR/EEG analysis, yielding robust and meaningful data, in previous studies ( Kisker et al., 2021c ; Schöne et al., 2021 ).
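As a hedged illustration of the per-electrode screening step, the Benjamini–Yekutieli correction is available in statsmodels; the p-values below merely stand in for per-electrode Kruskal–Wallis results and are not the study's values:

```python
import numpy as np
from statsmodels.stats.multitest import multipletests

# Hypothetical per-electrode p-values from the Kruskal-Wallis tests (one per channel).
p_per_electrode = np.array([0.001, 0.004, 0.03, 0.20, 0.01, 0.002])

# Benjamini-Yekutieli FDR control (valid under arbitrary dependency between tests).
reject, p_adjusted, _, _ = multipletests(p_per_electrode, alpha=0.05, method="fdr_by")
electrodes_of_interest = np.flatnonzero(reject)
```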

To provide statistical evidence for the equality of measures, we used the Wilcoxon two one-sided tests (TOST) procedure as a robust equivalence test for all non-significant comparisons ( Caldwell, 2022 ). The TOST procedure allows us to provide support for the null hypothesis, i.e., the absence of effects large enough to be considered worthwhile. In the TOST procedure, the smallest effect size of interest (SESOI) is determined to set upper and lower equivalence bounds. Two composite null hypotheses are tested: first, that the effect is equal to or smaller than the lower bound, and second, that it is equal to or greater than the upper bound. If both can be statistically rejected, it can be concluded that the observed effect lies within the equivalence bounds and is practically equivalent to zero ( Lakens, 2017 ).

We calculated the largest difference between groups in the baseline to determine the smallest SESOI for the comparisons in the alpha, theta, and beta bands. The rationale behind this approach is that for a difference between groups to be meaningful, i.e., being based on more than the simple baseline differences between the conditions, it has to surpass the largest differences present in the baseline. Hence, for alpha, we used 0.5199 ln(μV²), for theta 0.7455 ln(μV²), and for beta 0.4555 ln(μV²) (see also Figure 2 ).
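For the non-parametric case, the TOST logic amounts to two one-sided Mann–Whitney/Wilcoxon tests against the equivalence bounds. The sketch below illustrates the principle only; the study used the implementation described by Caldwell (2022), which may differ in detail.

```python
from scipy.stats import mannwhitneyu

def wilcoxon_tost(x, y, sesoi):
    """Two one-sided tests for equivalence of two independent samples (NumPy arrays)
    within +/- sesoi. Rejecting both one-sided nulls supports the conclusion that the
    location difference between x and y lies inside the equivalence bounds."""
    # H1 (lower bound): the shift x - y is greater than -sesoi.
    _, p_lower = mannwhitneyu(x + sesoi, y, alternative="greater")
    # H1 (upper bound): the shift x - y is smaller than +sesoi.
    _, p_upper = mannwhitneyu(x - sesoi, y, alternative="less")
    return max(p_lower, p_upper)  # TOST p-value: both one-sided tests must be significant

# Example call with the alpha-band SESOI from the text (0.5199 ln(uV^2)):
# p_tost = wilcoxon_tost(alpha_rl, alpha_vr, sesoi=0.5199)
```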

Figure 2. Average topographic oscillatory power amplitude [ln(μV²)] distribution across participants by frequency band (rows) and conditions (columns) while staying at the highest point of the ride in the fire truck’s basket. The pictograms in the last row depict the experimental setup per condition. See Supplementary Figure 11 for the topographical distribution separately per phase and frequency band power.

3.1. Duration of the ride

The duration of the separate phases as well as of the total ride did not differ between groups. The ascend took 2.5 min on average [ H (2) = 2.76, p = 0.25; M RL = 2.49, SD RL = 0.30; M VR = 2.54, SD VR = 0.19; M PC = 2.53, SD PC = 0.19], and all participants stayed for 63 s at the highest point [ H (2) = 0.6, p = 0.97; M RL = 1.06, SD RL = 0.05; M VR = 1.06, SD VR = 0.04; M PC = 1.06, SD PC = 0.04] and descended for 3.7 min on average [ H (2) = 0.886, p = 0.65; M RL = 3.75, SD RL = 0.30; M VR = 3.67, SD VR = 0.33; M PC = 3.66, SD PC = 0.36]. The total ride took an average of 7.26 min [ H (2) = 0.24, p = 0.89; M RL = 7.27, SD RL = 0.55; M VR = 7.27, SD VR = 0.22; M PC = 7.25, SD PC = 0.29].

3.2. Subjective measures

All groups were equal with respect to trait anxiety, fear of height, avoidance of height, as well as positive and negative affects before the ride in the fire truck’s basket [all H s(2) < 20, all p s > 0.05; see Supplementary Table 2 ], and sensation seeking [ F (2, 68) = 0.25, p = 0.78]. Furthermore, participants did not differ with respect to negative affect, neither directly after nor 3 days after the ride in the fire truck’s basket [all H s(2) < 50, all p s > 0.05; see Supplementary Table 2 ].

However, participants reported different sensations of presence, positive affect, and change in positive affect directly after the ride in the fire truck’s basket as well as 3 days later as a function of the experimental condition [all H s(2) > 24.00, all p s < 0.05, see Supplementary Table 2 ].

In particular, the RL group reported higher levels of general presence, spatial presence, and realness compared to both, VR and PC groups (all U s ≤ 151.0, all p s ≤ 0.003). Counterintuitively, the RL group reported lower levels of involvement compared to the VR group ( U = 32.5, p < 0.001) and did not differ significantly from the PC group ( U = 241.5, p = 0.64). Notably, the IPQ scale involvement compares the experienced environment with the real-life environment, which might be very ambiguous for the RL group to answer. The VR group reported significantly higher levels of presence regarding all IPQ scales compared to the PC group (all U s > 92.00, all p s ≤ 0.003, see Supplementary Table 3 ).

Moreover, the RL group experienced a stronger positive affect and reported higher changes in positive affect at t1 and t2 compared to both other groups (all U s ≤ 45.00, all p s < 0.05, see Supplementary Table 3 ). Furthermore, the VR group exhibited higher levels of positive affect and change in positive affect compared to the PC group (all U s ≤ 152.0, all p s < 0.03), with exception of the change in positive affect at t1 ( U = 191.5, p = 0.074). See Supplementary Tables 2 , 3 , 8 for a detailed report of all statistics.

3.3. Alpha, theta, and beta powers

Overall, the Kruskal–Wallis test indicated significant differences between groups regarding each phase and each aforementioned frequency range [all H s(2) > 6.00; p s < 0.05; see Supplementary Table 9 ].

The RL group and the VR group exhibited similar levels of alpha and theta power during all phases of the ride in the fire truck’s basket (all p s > 0.09, see Supplementary Table 4 ), differing significantly only in beta power and, during baseline, in theta power. In contrast, the PC group differed significantly from the RL group for each frequency band and each phase of the ride (all p s < 0.045, see Supplementary Table 4 ). In line with this, differences between the VR and PC conditions were found for all phases and frequency bands as well, with the exception of the baseline phases (see Supplementary Table 4 ).

In more detail, the RL group exhibited considerably greater alpha- and theta-band power than the PC group during all phases (all p s < 0.025), but only slightly and non-significantly different levels compared to the VR group (see Figures 2 , 3 ; Supplementary Table 4 ). Diverging from this trend, beta power was highest for the VR group, lower for the RL group, and lowest for the PC group (see Figures 2 , 3 ; Supplementary Table 4 ). Overall, the results suggest that the alpha- and theta-band effects in the RL and VR groups are largely similar, as indicated by the TOST procedure ( Table 1 ), while both groups differ significantly from the PC condition ( Figure 2 ).

Figure 3. Median band power [ln(μV²)] per group, phase, and frequency band. Negative power values result from the logarithmic transformation during preprocessing: The logarithm of values greater than zero and smaller than one is negative. Hence, negative power values are to be read as smaller power compared to positive power values. The respective tables indicate the statistical characteristics per comparison in a reduced overview. Significant differences between groups are marked respectively (* p < 0.05, ** p < 0.01; *** p ≤ 0.001), and the effect size r is interpreted as follows: a, small effect; b, medium effect; c, large effect. Find a detailed report of all statistics in the Supplementary material .

Table 1. Wilcoxon TOST results for non-significant comparisons.

Band    Phase      Comparison   W     p        Rank-biserial r
Alpha   Baseline   RL vs. VR    207   0.032    0.277
Alpha   Baseline   VR vs. PC    209   0.035    0.237
Alpha   Ascend     RL vs. VR    84    <0.001   −0.023
Alpha   High       RL vs. VR    125   <0.001   0.027
Alpha   Descend    RL vs. PC    219   0.036    0.331
Alpha   Descend    VR vs. PC    190   0.014    0.38
Theta   Baseline   VR vs. PC    109   <0.001   0.297
Theta   Ascend     RL vs. VR    50    <0.001   0.067
Theta   High       RL vs. VR    75    <0.001   0.167
Theta   Descend    RL vs. VR    100   <0.001   0.27
Beta    Baseline   VR vs. PC    182   0.009    0.08

Wilcoxon TOST (W) of the upper equivalence bound and rank-biserial correlations as effect size. In rank-biserial correlations, zero indicates no relationship between the variables; positive values indicate the dominance of the first sample, with 1 meaning that all values of the first sample are larger than all values of the second sample, and negative values indicate the dominance of the second sample, with −1 being total dominance. Except for this table, all tables reporting detailed statistics can be found in the Supplementary material . They are sorted by their relevance for the comprehension of the results.

See Supplementary Tables 4 , 8 , 9 for a detailed report of all statistics.

3.4. HRV parameters

The groups exhibited significant differences in SDRR [ H (2) = 18.64, p < 0.001] and rmSSD [ H (2) = 17.17, p < 0.001] during baseline. The baseline was recorded while participants stood in front of either the real fire truck’s basket (RL condition) or its replica (VR and PC conditions). Hence, instead of the uncorrected SDRR and rmSSD values, the changes in SDRR and rmSSD compared to the baseline (see Section “2. Materials and methods”) were further analyzed.

For an in-depth analysis, the HRV parameters were calculated and analyzed for the separate phases and corrected for baseline differences (delta = HRV during the respective phase minus HRV during baseline; see also Figure 4 ). The Kruskal–Wallis test revealed significant differences in the changes of SDRR and rmSSD as a function of group regarding all separate phases [all H s(2) > 14.00, all p s ≤ 0.001; see Supplementary Table 10 ]. Specifically, the RL and VR conditions did not differ for any phase or HRV parameter, with the exception of the change in rmSSD during the phase at which participants stayed at the highest point of the ride. However, the PC condition differed significantly from both other conditions during all phases for both HRV parameters (see Figure 4 ; Supplementary Table 5 ). Notably, for the PC condition, SDRR and rmSSD decreased during all phases compared to baseline and increased only slightly during the stay at the highest point (SDRR only, see Figure 4 ). In contrast, for the RL condition, SDRR and rmSSD increased during the ascend and the stay at the highest point and decreased during the descend. The VR condition deviated from this trend only regarding the rmSSD during the stay at the highest point (see Figure 4 ; Supplementary Table 5 ). See Supplementary Tables 5 , 10 for a detailed report of all statistics.

Figure 4. Changes in the HRV parameters SDRR (A) and rmSSD (B) for the separate phases of the ride in the fire truck’s basket compared to baseline per condition: The phases were corrected for the baseline (delta = HRV during the respective phase minus HRV during the baseline). The zero line thus marks the baseline, which was therefore not depicted separately. The error bars depict the standard deviation. Significant differences between groups are marked respectively (** p < 0.01, *** p < 0.001).

4. Discussion

The study aimed to determine, using objective markers, to what degree emotional and cognitive processing in virtual reality resembles the corresponding mechanisms deployed in a real-life setting as opposed to conventional laboratory conditions. In turn, this allows estimating to what degree VR provides adequate sensory input from which the brain can construct a form of reality that it itself regards as sufficient.

To this end, we set up a height exposure paradigm in which we shifted the degree of reality from a 2D monitor presentation to a VR simulation to a real experience in a parametric fashion. The premise by which it was assessed whether the VR experience can be considered sufficiently real to mimic a real-life experience was the simulation’s emotional potency. The real and the 2D monitor experience serve as the two opposite poles of a reality scale with VR located somewhere in between. The participants’ emotional experiences were indexed by several subjective measurements and further complemented by an objective measure of the ANS response (HRV), as well as electrophysiological correlates of arousal (alpha-band), emotion regulation (theta-band), and somatosensory processes (beta-band). The results shed new light on virtual experiences and confirm the introspective (or anecdotal) reports of VR users and scientific studies presuming that VR elicits real emotions, while leaving a margin for further technological improvements on the somatosensory level.

Statistically speaking, the real-life and the virtual condition exhibit numerous non-significant differences across questionnaires and psychophysiological measurements, while both differ from the PC condition. The overall pattern supports the notion that participants in the real-life and virtual reality conditions generally exhibit the same levels of alpha and theta power (see below). Thus, when discussing the outcome of the study, the overall pattern of results is taken into consideration rather than whether any single result is significant or non-significant. Splitting the data into several phases makes the data more difficult to interpret and leads to a loss of statistical power; however, investigating the dynamic unfolding of cognitive and emotional processes with respect to the phases of the experiment (baseline, ascend, peak, and descend) seemed preferable to us from a scientific perspective. It is foremost the baseline measurements that yield the weakest equivalence between VR and RL, implying that with increasing arousal from the height exposure, the differences between VR and RL seem to diminish. The degree of reality that VR can evoke is therefore closely related to its ability to evoke situational emotional responses.

4.1. Presence and interaction fidelity

In line with previous research, the VR condition generally elicited a stronger feeling of presence than the monitor condition. While immersive media like VR might invoke presence primarily through the perception of an enveloping space, they nevertheless invoke realistic, most likely bottom-up mediated, responses ( Murphy and Skarbez, 2022 ). A higher sense of presence is a crucial hallmark of VR, especially in this experiment, as it facilitates real-life behavior ( Blascovich et al., 2002 ; Slater, 2009 ; Kisker et al., 2021a ) and is in turn positively correlated with the emergence and strength of emotions, although the causal dynamics remain unspecified ( Riva et al., 2007 ; Diemer et al., 2015 ; Felnhofer et al., 2015 ). A higher sense of presence in the VR condition as opposed to the monitor condition is thus a first indicator of, as well as a necessary precondition or accompaniment for, real-life processing of VR content.

For the sake of completeness, we also obtained presence data for the real-life condition, showing that presence was highest for this condition. However, we suggest discarding these data as uninterpretable, as the questionnaire is not intended for real-life application and some questions cannot be answered meaningfully ( Usoh et al., 2000 ). This is reflected in an apparent measurement artifact: self-reported involvement does not differ between the real and the monitor condition. We consider it essential to tackle this problem in the medium term. Immersion and presence are vital features of VR and thus need to be measured adequately to compare scores across different types of (simulated) reality. Notwithstanding these difficulties, the conventional questionnaire approach seems to fall short, especially as it constitutes a post-induction reappraisal method ( IJsselsteijn et al., 2000 ; Usoh et al., 2000 ; Riva et al., 2003 ), and should be complemented, e.g., by objective and behavioral measures to estimate the fidelity of the simulation [e.g., Kisker et al., 2021a , see also Stoffregen et al. (2003) ].

Since our setting was only interactive to a limited degree, it restricted the possibility of expressing behavior. However, this restriction, especially of body movements, was intended to interfere as little as possible with the EEG measurement. Yet, the participants were not completely passive in this setting: they were able to control their field of view either by head motion (RL and VR), which offered high interaction fidelity between these two conditions ( Shafer, 2021 ; see also, e.g., Riva, 2006 ; Cipresso et al., 2018 ; Kisker et al., 2021c ), or by keystrokes (PC). Moreover, they were surrounded by a photorealistic 3D environment in the two former conditions, while the PC condition was limited to a large 2D screen. Considering this, involvement should clearly differ between the former two conditions and the latter, as high interaction fidelity increases presence, interactivity, and realism ( Shafer, 2021 ).

As Stoffregen et al. (2004) pointed out, direct comparisons of real-life and simulated environments are needed to investigate whether perception and (inter-)actions under both conditions are similar. However, these comparisons can only be realized to a limited extent based on subjective assessments. Moreover, presence is neither necessarily nor fully dependent on the knowledge that the experience is mediated by an immersive medium. For example, emotional experiences trigger coherent reactions despite the knowledge that the experience cannot be real ( Kisker et al., 2021c ) or the knowledge that one is wearing a VR headset (see Murphy and Skarbez, 2022 ). The feeling of presence is therefore not to be equated with the illusion of non-mediation ( Murphy and Skarbez, 2022 ) and can be evoked despite the knowledge of mediation ( Slater and Sanchez-Vives, 2016 ).

4.2. Emotions during and after the experiment

The comparison of the emotional responses between the three groups revealed an ambiguous pattern. Whereas positive affect differed substantially between all three conditions over time, negative affect did not differ at all. The ride thus was thrilling but was perceived as a safe experience.

Interestingly, the pattern is reversed for the measurements obtained 3 days later. In retrospect, participants in the real and the VR condition rated their experience as equally positive, whereas both groups differed from the PC condition. In contrast to the first pattern, this could indeed be interpreted as an indicator of VR being able to evoke real-life emotional responses. The PC condition, however, deviates. These contradictory results can be resolved by taking recruitment into account. To ensure that the collected data would come from a homogeneous sample, especially with respect to fear of heights, all participants were told that they would experience a real trip. Two groups of participants had to be disappointed, which might decrease the validity of the t1 data. The meaningfulness of the data regarding the perception of reality could be overshadowed by the disappointment of having missed a real ride.

Against the background of VR memory studies ( Harman et al., 2017 ; Ernstsen et al., 2019 ; Krokos et al., 2019 ; Schöne et al., 2019 ; Smith, 2019 ; Kisker et al., 2020 , 2021b ), the comparison 3 days later is more informative. A common finding when comparing retrieval success for VR and conventional PC experiences is that immersive experiences are remembered with higher accuracy. Furthermore, behavioral ( Schöne et al., 2019 ; Kisker et al., 2021b ) and electrophysiological ( Kisker et al., 2020 ) evidence suggests that different mnemonic processes are employed. Taken together, VR experiences seem to be woven into a person’s narrative or autobiographical memory just like real experiences ( Schöne et al., 2019 ; Kisker et al., 2020 ). Like episodic memory, autobiographical memory is a subsystem of declarative memory ( Renoult et al., 2012 ). Both are characterized by a unique first-person perspective and their encoding context. However, autobiographical memory comprises a larger set of operations, of which especially self-referential processing, personal relevance, and self-involvement are susceptible to virtual experiences ( Greenberg and Rubin, 2003 ). Remembering VR experiences means remembering having been at a place at a particular time, and not merely having seen it on a screen. Although the post-induction affect differed between the real and the VR condition (presumably due to disappointment), the event is remembered emotionally in a highly similar way. This finding seems more conclusive within the context of VR applications than the immediate post-induction measurement: although VR is perhaps less exciting than a real-life ride, it is remembered as exciting. The efficacy of, e.g., therapeutic and educational applications depends on this aspect of VR. Long-term success can only be achieved when the virtual application substitutes for the corresponding real-life scenario not only on a factual but also on an emotional level: virtual treatment of acrophobia ( Renoult et al., 2012 ), bulimia ( De Carvalho et al., 2017 ), or social anxiety ( Bouchard et al., 2017 ) is effective if patients remember the exposure emotionally as if it were a real experience.

4.3. Heart rate variability

Although emotional memory provides a good starting point for evaluating VR’s realness, online measures are needed to draw a clear picture of the participants’ mental stress during the ride. To assess the level of reality, VR, and PC simulations, the HRV functions as a marker for stress imposed by the conditions on the autonomic nervous system (ANS, Shaffer and Ginsberg, 2017 ).

Two measures are used to evaluate the variance of the RR intervals: the standard deviation (SDRR) and the root mean square of successive RR interval differences (rmSSD). Both measures are commonly employed and appropriate for ultra-short-term measurements (<5 min) ( Baek et al., 2015 ; Shaffer and Ginsberg, 2017 ). The basis for short-term HRV is the aforementioned interplay between the sympathetic and parasympathetic branches, but also respiration-driven speeding and slowing of the heart, the baroreceptor reflex, and rhythmic changes in vascular tone [for a comprehensive overview, see Shaffer and Ginsberg (2017) ]. SDRR and rmSSD reflect slightly different ANS properties, with rmSSD reflecting the vagally mediated changes in variance ( Shaffer and Ginsberg, 2017 ). Since there is a debate, predominantly arising from a lack of research on their physiological origin, about whether one measure should be favored over the other ( Shaffer and Ginsberg, 2017 ), both measures were used in this study to estimate the ANS stress response.

The reported data are baseline corrected; however, a different baseline was chosen than for the questionnaires in order to circumvent the anticipated problem of disappointment. To assess the ride’s emotional efficacy, the baseline was measured while participants stood in front of the fire truck basket or the replica, making it clear to them what to expect. While this approach avoids the problems already mentioned, on the downside it does not provide an entirely emotionally neutral baseline. The participants would either joyfully expect a real ride or be aware that they would participate in an unconventional but neutral experiment. The very nature of this unconventional experiment makes it impossible to find an ideal baseline.

As higher HRV scores indicate relaxation and lower scores indicate stress, a negative baseline-corrected score indicates an increase in stress and a positive score indicates relaxation. The responses in all phases are statistically indistinguishable for the SDRR, showing that the stress response did not differ between the real and the VR condition. The rmSSD followed the same trend, except for the response at the highest point. Both groups differed significantly from the PC condition in all phases and both measures: the baseline-corrected data indicate an elevated stress response in the PC condition and a likewise light to neutral stress response in the real and virtual conditions.

This overall pattern reveals the temporal dynamics of the emotional experiences. The significance pattern of the SDRR phase analysis confirms the first impression that the real and virtual conditions are indistinguishable in terms of ANS responses. Positive scores for both conditions during the ascend, which increase further in the peak phase, correspond to the overall positive PANAS ratings and indicate a vagotonic state of the ANS, associated with relaxation or a positive mood ( Shiota et al., 2011 ). During the descent, the scores lean back toward the baseline. The PC condition follows a similar course; however, its peak resembles the baseline, indicating an overall dissatisfaction and an ergotropic state of the ANS ( Gellhorn, 1970 ). The rmSSD pattern follows the same course with a deviation at the peak: reality surpasses VR significantly. Since the significance patterns otherwise support the equivalence of reality and VR, the difference might be explained physiologically. Both measures reflect slightly different ANS properties, with the rmSSD reflecting the vagally mediated, respiration-driven speeding and slowing of the heart ( Schipke, 1999 ). Hence, it might be a different respiratory pattern that accounts for the differences. Being outside, experiencing a fresh breeze at 33 m when the temperature on the ground was above 30°C, and feeling a sensation of awe ( Chirico et al., 2017 ) could trigger a deep breath and thus technically increase the score.

As an interim conclusion, the HRV data correspond to the self-reported emotional memory, revealing the same pattern of equivalence between the real and virtual domains, in distinction to the PC condition, at t2. They thus confirm a previous VR video study providing first evidence that heart rate, and thereby the visceral ANS stress response to VR environments, corresponds much more to real-life responses than to conventional laboratory settings ( Higuera-Trujillo et al., 2017 ).

4.4. Electrophysiological correlates

While the HRV analysis provides robust evidence for a somatic level of arousal, the EEG analysis is more conclusive over a wider variety of cognitive, emotional, and somatosensory processes. Interestingly, the overall picture indicates that the EEG result parallels what a naive VR user would have suspected: The emotional responses elicited by the real-life and the virtual experience are indistinguishable on an electrophysiological level as the alpha- and theta-band oscillations do not yield significant differences. However, the participants’ somatosensory experiences differed between all three conditions as indexed by beta-band oscillations.

First of all, it should be noted that we employed a robust methodological approach at the expense of the EEG’s already limited ability to identify the neural underpinnings of the obtained signal. Thus, a distinct functional attribution of the respective oscillations remains speculative to a certain degree and leaves a margin for further improvements. Importantly, a more precise classification of the functional properties would not necessarily augment the epistemological value of this particular study. The brain’s oscillatory response to VR by means of 3D-360° videos resembles the response to a real-life event in two of the three selected frequency bands. Regarding alpha- and theta-band power, the equivalence tests (TOST) strongly indicate that the effects for RL and VR are equivalent. On the other hand, the differences between RL and PC as well as between VR and PC mainly exhibit strong effects.

Generally, as alpha-band oscillations correlate with emotional arousal ( Klotzsche et al., 2018 ; Schubring et al., 2020 ), the data indicate that VR comes with the same sense of arousal (or thrill) as the real-life ride. An advantage of the study’s experimental setup is that even the real-life situation, with a professional firefighter crew, imparted a feeling of safety and assurance. Aside from these safety considerations, it should be noted that alpha power decreased with increasing altitude up to the highest point and then returned closer to zero. As the alpha-band is inversely correlated with cortical activity ( Schöne et al., 2016 ), both measures, HRV and alpha power, likewise imply heightened stress or arousal with increasing altitude. In other words, the 3D-360° VR video and the real-life condition conveyed the same level of hazard and, as pointed out in the Introduction section, likewise facilitated the same level of tonic, unspecific alertness and anxiety, but also general vigilance or sustained attention ( Dockree et al., 2007 ). Interestingly, theta oscillations are most positive in the baseline for the RL condition as opposed to VR and PC: an increase in spectral theta has been interpreted as reflecting an integrative regulatory function ( Cruikshank et al., 2012 ). Whereas positive theta oscillations indicate the downregulation of negative emotions, as would be expected when standing in front of the fire truck ready for departure, negative theta values reflect diminished or even lost emotional control ( Grunwald et al., 2014 ). This can either be a deliberate process, e.g., due to an overwhelming experience, or reflect a lack of necessity for regulation. The latter seems to apply to the PC condition, where theta is most negative: emotion regulation appears to be of minimal importance in this context, as the experience is only displayed on a screen. Conventional media elicit emotions that are less impactful and are accordingly managed differently. Emotions are a response to the environment in which they arise and prompt appropriate actions. This link between emotion and environment is dissolved, or at least weakened, when the current emotional state is triggered by a monitor experience. For example, anxiety elicited by an event on a screen does not provide information about the hazardousness of the environment; often it is even elicited for entertainment purposes, as when watching a horror movie. In a corresponding real-life situation, anxiety facilitates appropriate behavior and has to be regulated properly. Such processes also occur in VR ( Kisker et al., 2021a , c ); it is thus in line with previous literature that, in the study at hand, VR and real life do not exhibit significant differences with respect to the brain’s theta-band response. The emotions elicited by the height exposure need to be evaluated and regulated appropriately. This cascade makes a case for the realism of VR: VR not only seems to be sufficiently real to elicit genuine emotions (first response), but once provoked, the brain processes them like real emotions and seems to deploy the same emotion regulation mechanisms as if they were evoked under real-life conditions (second response). Denying the authenticity of an experience or an emotion is a well-known way to regulate, e.g., fear ( Cantor, 2006 ), and it would be a possible strategy in the VR condition to discard VR as being unreal; a feeling is denoted as imagination or fiction only to alleviate it (e.g., cognitive avoidance, Lin, 2017 ). Given the premise that the theta signal obtained in our study possesses sufficient discriminatory power for regulatory processes, the data imply that this demotion to imagination was not applied in our experiment.

As an intermediate conclusion, the PANAS questionnaire (at t2), the HRV measurements, and the alpha and theta oscillations indicate—individually and in conjunction—that emotions elicited in VR resemble real-life emotions. The study thus fills a gap in VR research: simulations are known to elicit strong realistic emotions ( Gorini et al., 2010 ; Felnhofer et al., 2015 ; Chirico and Gaggioli, 2019 ; Gromer et al., 2019 ; Bernardo et al., 2021 ), and our findings support the assumption that these reflect real-life emotional processing ( Kisker et al., 2021a , c ).

However, the beta-band results exhibit a different pattern: all three conditions differ from each other. As mentioned in the introduction, the beta-band has also been associated with emotional processes, but as all previously discussed metrics imply emotional equivalence between VR and the real-life scenario, the somatosensory account seems preferable. Technically speaking, the experiment was a mixed reality (MR) setup, as the study aimed to minimize the perceptual differences between all three conditions by including physical cues, e.g., a wobbly basket replica and wind simulation. Still, all three conditions exhibited unique somatosensory characteristics; the HMD especially stands out. Only the participants in the VR condition had additional weight mounted on the head, influencing proprioceptive perception. It is unclear to what extent VR (or MR) environments need to physically resemble a corresponding real-world scenario to mitigate such somatosensory differences. Lighter HMDs, synthetic skin mimicking touch sensation ( Lucarotti et al., 2013 ), and other technical advances might further enhance the realism of simulations. Furthermore, the RL condition was the only one that actually involved vertical acceleration. Nevertheless, according to the data, the study’s setup was sufficiently real to elicit real-life emotions, leaving open the question of to what degree physical cues play a role at all. Notably, methodological examinations demonstrated that wearing common VR headsets like the HTC Vive does not impact the EEG signal quality regarding frequency bands below 50 Hz ( Hertweck et al., 2019 ). In a competition for cognitive and especially attentional resources, sensory information might play only a subordinate role relative to emotional cues, especially in a scenario with a fixed body position and limited interaction. In conclusion, the relevance of somatosensory information for the emergence of a sense of realism seems to depend highly on the purpose of the application.

Alternatively, ongoing beta activity is associated with subsequent memory formation success, independent of stimulus modality ( Scholz et al., 2017 ). As the ride was a unique experience, the beta power might also reflect a memory-promoting state that, albeit modality independent, might be modulated by attentional or emotional states, accounting for the differences between the conditions.

5. Conclusion

The study has shown that today’s VR setups using photorealistic 3D-360° experiences fulfill the essential prerequisites for the emergence of a feeling of reality, and it paves the way for a more in-depth examination of the relevant cognitive and emotional processes as well as of the technological features of VR that give rise to them. Furthermore, the study provides a scientific framework for developing recreational, educational, and therapeutic VR applications. Scientists, too, might benefit from the enhanced ecological validity achieved by VR: psychological processes can be studied under previously unattainable realistic conditions within a controlled laboratory setting ( Bohil et al., 2011 ; Parsons, 2015 ). However, as outlined in the introduction, the realism of VR simulations requires particular diligence in applying VR materials. The laws and guidelines according to which content is created and evaluated for an audience (e.g., the MPA rating system in the USA, scientific ethics committees) do not translate into the virtual domain without adjustment ( Slater et al., 2020 ), as VR can be considered sufficiently real to mimic reality. For a video summary, see https://youtu.be/fPIrIajpfiA .

6. Limitations

For practical reasons, our study investigated only one scenario under real-life, VR, and monitor conditions. It is yet unclear to what degree our results generalize to other scenarios and conditions. As mentioned in the Introduction section, height exposures are classic immersive VR experiences that leverage all of VR’s affordances. Future research should investigate less immersive and less arousing scenarios to determine a cutoff at which VR and PC might be on the same level, both differing from reality. However, the conclusions drawn from our study are in line with previous VR studies from various fields that infer the degree of reality of virtual reality from behavioral observations (see Section “1. Introduction”). The generalizability of our findings should thus be given and is presumably higher than that of monitor experiments.

The HRV baseline chosen for the experiment was not free of induction. Standing in front of the fire truck expecting to go up naturally led to increased arousal. The rationale behind this approach was to correct for any condition’s specific arousal by subtracting this baseline from the HRV during the ride. However, an induction-free baseline might shed a more differentiated light on the temporal dynamics of the arousal during the experience.

Participants in the experiment had no task but were passively exposed to the height. Tying the electrophysiological responses to mental states beyond very basic functions is therefore speculative. Given these very simple measurements, future research should include behavioral tasks and investigate task-specific neural oscillations.

Data availability statement

Ethics statement

The studies involving human participants were reviewed and approved by Local Ethics Committee of Osnabrück University. The participants provided their written informed consent to participate in this study.

Author contributions

JK, SS, and LL performed the testing and data collection. JK, SS, and LL performed the data analyses under the supervision of BS. BS performed the data interpretation and drafted the manuscript. JK, LL, TG, and RO provided the critical revisions. BS, JK, LL, TG, SS, and RO contributed to the study design based on an idea by BS. All authors approved the final version of the manuscript for submission.

Acknowledgments

We thank the voluntary fire brigade Osnabrück-Neustadt (Osnabrück, Germany) for making it possible to conduct this study. In particular, we thank Christoph Plogmann, Martin Brug, Niels Giebel, and Marcel Beste for their significant support. Furthermore, we also thank the members of the Department of Biology/Chemistry, Osnabrück University, especially Prof. Jacob Piehler and Dr. Stefan Walter, for offering their facilities for the duration of the real-life condition data collection.

Funding Statement

We acknowledge the support of the Deutsche Forschungsgemeinschaft (DFG) and the Open Access Publishing Fund of Osnabrück University.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Supplementary material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fpsyg.2023.1093014/full#supplementary-material

  • An E., Nolty A. A. T., Amano S. S., Rizzo A. A., Buckwalter J. G., Rensberger J. (2020). Heart rate variability as an index of resilience. Milit. Med. 185 363–369. 10.1093/milmed/usz325 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Asjad N. S., Paris R., Adams H., Bodenheimer B. (2018). “Perception of height in virtual reality – a study of climbing stairs,” in Proceedings of the SAP 2018: ACM symposium on applied perception , (Vancouver, BC: ), 1–8. 10.1145/3225153.3225171 [ CrossRef ] [ Google Scholar ]
  • Baek H. J., Cho C. H., Cho J., Woo J. M. (2015). Reliability of ultra-short-term analysis as a surrogate of standard 5-min analysis of heart rate variability. Telemed. e-Health 21 404–414. [ PubMed ] [ Google Scholar ]
  • Benjamini Y., Yekutieli D. (2001). The control of the false discovery rate in multiple testing under dependency. Ann. Stat. 29 1165–1188. 10.1214/aos/1013699998 [ CrossRef ] [ Google Scholar ]
  • Bernardo P. D., Bains A., Westwood S. (2021). Mood induction using virtual reality: A systematic review of recent findings . J. Technol. Behav. Sci. 6 , 3–24. 10.1007/s41347-020-00152-9 [ CrossRef ] [ Google Scholar ]
  • Biedermann S. V., Biedermann D. G., Wenzlaff F., Kurjak T., Nouri S., Auer M. K., et al. (2017). An elevated plus-maze in mixed reality for studying human anxiety-related behavior. BMC Biol. 15 : 125 . 10.1186/s12915-017-0463-6 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Blascovich J., Loomis J., Beall A. C., Swinth K. R., Hoyt C. L., Bailenson J. N. (2002). Immersive virtual environment technology as a methodological tool for social psychology. Psychol. Inq. 13 103–124. 10.1207/S15327965PLI1302_01 [ CrossRef ] [ Google Scholar ]
  • Boeldt D., McMahon E., McFaul M., Greenleaf W. (2019). Using virtual reality exposure therapy to enhance treatment of anxiety disorders: Identifying areas of clinical adoption and potential obstacles. Front. Psychiatry 10 : 773 . 10.3389/fpsyt.2019.00773 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Bohil C. J., Alicea B., Biocca F. A. (2011). Virtual reality in neuroscience research and therapy. Nat. Rev. Neurosci. 12 752–762. 10.1038/nrn3122 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Bos D. O. (2006). EEG-based emotion recognition . Influence Vis. Auditory. Stimuli 56 , 1–17. [ Google Scholar ]
  • Bouchard S., Dumoulin S., Robillard G., Guitard T., Klinger E., Forget H., et al. (2017). Virtual reality compared with in vivo exposure in the treatment of social anxiety disorder: A three-arm randomised controlled trial. Br. J. Psychiatry 210 276–283. 10.1192/bjp.bp.116.184234 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Browning M. H. E. M., Shipley N., McAnirlin O., Becker D., Yu C.-P., Hartig T., et al. (2020). An actual natural setting improves mood better than its virtual counterpart: A meta-analysis of experimental data. Front. Psychol. 11 : 2200 . 10.3389/fpsyg.2020.02200 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Buchholz V. N., Jensen O., Medendorp W. P. (2014). Different roles of alpha and beta-band oscillations in anticipatory sensorimotor gating. Front. Hum. Neurosci. 8 : 446 . 10.3389/fnhum.2014.00446 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Burani K., Gallyer A., Ryan J., Jordan C., Joiner T., Hajcak G. (2021). Acute stress reduces reward-related neural activity: Evidence from the reward positivity. Stress 24 833–839. 10.1080/10253890.2021.1929164 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Caldwell A. R. (2022). Exploring equivalence testing with the updated TOSTER R package . PsyArXiv [Preprint]. 10.31234/osf.io/ty8de [ CrossRef ] [ Google Scholar ]
  • Campus C., Brayda L., de Carli F., Chellali R., Famà F., Bruzzo C., et al. (2012). Tactile exploration of virtual objects for blind and sighted people: The role of beta 1 EEG band in sensory substitution and supramodal mental mapping. J. Neurophysiol. 107 2713–2729. 10.1152/jn.00624.2011 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Cantor J. (2006). “ Why horror doesn’t die: The enduring and paradoxical effects of frightening entertainment ,” in Psychology of entertainment , eds Bryant J., Vorderer P. (Hillsdale, NJ: Lawrence Erlbaum Associates Publishers; ), 315–327. [ Google Scholar ]
  • Castaldo R., Melillo P., Bracale U., Caserta M., Triassi M., Pecchia L. (2015). Acute mental stress assessment via short term HRV analysis in healthy adults: A systematic review with meta-analysis. Biomed. Sign. Proc. Cont. 18 370–377. 10.1016/j.bspc.2015.02.012 [ CrossRef ] [ Google Scholar ]
  • Chattha U. A., Janjua U. I., Anwar F., Madni T. M., Cheema M. F., Janjua S. I. (2020). Motion sickness in virtual reality: An empirical evaluation. IEEE Access 8 130486–130499. [ Google Scholar ]
  • Cheyne D., Gaetz W., Garnero L., Lachaux J.-P., Ducorps A., Schwartz D., et al. (2003). Neuromagnetic imaging of cortical oscillations accompanying tactile stimulation. Cogn. Brain Res. 17 599–611. 10.1016/S0926-6410(03)00173-3 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Chirico A., Gaggioli A. (2019). When virtual feels real: Comparing emotional responses and presence in virtual and natural environments. Cyberpsychol. Behav. Soc. Network. 22 220–226. 10.1089/cyber.2018.0393 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Chirico A., Cipresso P., Yaden D. B., Biassoni F., Riva G., Gaggioli A. (2017). Effectiveness of immersive videos in inducing awe: An experimental study. Sci. Rep. 7 : 1218 . 10.1038/s41598-017-01242-0 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Chirico A., Ferrise F., Cordella L., Gaggioli A. (2018). Designing awe in virtual reality: An experimental study. Front. Psychol. 8 : 2351 . 10.3389/fpsyg.2017.02351 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Chittaro L., Sioni R., Crescentini C., Fabbro F. (2017). Mortality salience in virtual reality experiences and its effects on users’ attitudes towards risk. Int. J. Hum. Comput. Stud. 101 10–22. 10.1016/j.ijhcs.2017.01.002 [ CrossRef ] [ Google Scholar ]
  • Cho B. H., Lee J.-M., Ku J. H., Jang D. P., Kim J. S., Kim I.-Y., et al. (2002). “Attention enhancement system using virtual reality and EEG biofeedback,” in Proceedings IEEE virtual reality 2002 , (Orlando, FL: IEEE; ), 156–163. [ Google Scholar ]
  • Cipresso P., Giglioli I. A. C., Raya M. A., Riva G. (2018). The past, present, and future of virtual and augmented reality research: A network and cluster analysis of the literature. Front. Psychol. 9 : 2086 . 10.3389/fpsyg.2018.02086 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Cisler J. M., Olatunji B. O., Feldner M. T., Forsyth J. P. (2010). Emotion regulation and the anxiety disorders: An integrative review. J. Psychopathol. Behav. Assess. 32 68–82. 10.1007/s10862-009-9161-1 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Cohen D. C. (1977). Comparison of self-report and overt-behavioral procedures for assessing acrophobia. Behav. Ther. 8 17–23. 10.1016/S0005-7894(77)80116-0 [ CrossRef ] [ Google Scholar ]
  • Cohen M. X. (2014). Analyzing neural time series data: Theory and practice. Cambridge, MA: MIT press, 10.7551/mitpress/9609.001.0001 [ CrossRef ] [ Google Scholar ]
  • Cruikshank L. C., Singhal A., Hueppelsheuser M., Caplan J. B. (2012). Theta oscillations reflect a putative neural mechanism for human sensorimotor integration. J. Neurophysiol. 107 65–77. 10.1152/jn.00893.2010 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Davies D. R., Krkovic A. (1965). Skin-conductance, alpha-acitivity, and vigilance. Am. J. Psychol. 78 304–306. 10.2307/1420507 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • De Carvalho M. R., De Santana Dias T. R., Duchesne M., Nardi A. E., Appolinario J. C. (2017). Virtual reality as a promising strategy in the assessment and treatment of bulimia nervosa and binge eating disorder: A systematic review. Behav. Sci. 7 : 43 . 10.3390/bs7030043 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Delorme A., Makeig S. (2004). EEGLAB: An open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. J. Neurosci. Methods 134 9–21. 10.1016/j.jneumeth.2003.10.009 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Diemer J., Alpers G. W., Peperkorn H. M., Shiban Y., Mühlberger A. (2015). The impact of perception and presence on emotional reactions: A review of research in virtual reality. Front. Psychol. 6 : 26 . 10.3389/fpsyg.2015.00026 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Dockree P. M., Kelly S. P., Foxe J. J., Reilly R. B., Robertson I. H. (2007). Optimal sustained attention is linked to the spectral content of background EEG activity: Greater ongoing tonic alpha (∼10 Hz) power supports successful phasic goal activation. Eur. J. Neurosci. 25 900–907. 10.1111/j.1460-9568.2007.05324.x [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Ernstsen J., Mallam S. C., Nazir S. (2019). Incidental memory recall in virtual reality: An empirical investigation. Proc. Hum. Fact. Ergon. Soc. Annu. Meet. 63 2277–2281. 10.1177/1071181319631411 [ CrossRef ] [ Google Scholar ]
  • Ertl M., Hildebrandt M., Ourina K., Leicht G., Mulert C. (2013). Emotion regulation by cognitive reappraisal - the role of frontal theta oscillations. NeuroImage 81 412–421. 10.1016/j.neuroimage.2013.05.044 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Felnhofer A., Kothgassner O. D., Schmidt M., Heinzle A. K., Beutl L., Hlavacs H., et al. (2015). Is virtual reality emotionally arousing? Investigating five emotion inducing virtual park scenarios. Int. J. Hum. Comput. Stud. 82 48–56. 10.1016/j.ijhcs.2015.05.004 [ CrossRef ] [ Google Scholar ]
  • Freeman J., Avons S. E., Meddis R., Pearson D. E., Ijsselsteijn W. (2000). Using behavioral realism to estimate presence: A study of the utility of postural responses to motion stimuli. Pres. Teleoperat. Virtual Environ. 9 149–164. 10.1162/105474600566691 [ CrossRef ] [ Google Scholar ]
  • Friese U., Köster M., Hassler U., Martens U., Trujillo-Barreto N., Gruber T. (2013). Successful memory encoding is associated with increased cross-frequency coupling between frontal theta and posterior gamma oscillations in human scalp-recorded EEG. Neuroimage 66 642–647. [ PubMed ] [ Google Scholar ]
  • Fulvio J., Rokers B. (2018). Sensitivity to sensory cues predicts motion sickness in virtual reality. J. Vis. 18 1066–1066. [ Google Scholar ]
  • Gellhorn E. (1970). The emotions and the ergotropic and trophotropic systems - part I. The physiological control of the emotions. Psychol. Forsch. 34 48–66. 10.1007/BF00422862 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Gold C., Fachner J., Erkkilä J. (2013). Validity and reliability of electroencephalographic frontal alpha asymmetry and frontal midline theta as biomarkers for depression. Scand. J. Psychol. 54 118–126. 10.1111/sjop.12022 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Gorini A., Griez E., Petrova A., Riva G. (2010). Assessment of the emotional responses produced by exposure to real food, virtual food and photographs of food in patients affected by eating disorders. Ann. Gen. Psychiatry 9 : 30 . 10.1186/1744-859X-9-30 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Greenberg D. L., Rubin D. C. (2003). The neuropsychology of autobiographical memory. Cortex 39 687–728. 10.1016/S0010-9452(08)70860-8 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Gromer D., Madeira O., Gast P., Nehfischer M., Jost M., Müller M., et al. (2018). Height simulation in a virtual reality cave system: Validity of fear responses and effects of an immersion manipulation. Front. Hum. Neurosci. 12 : 372 . 10.3389/fnhum.2018.00372 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Gromer D., Reinke M., Christner I., Pauli P. (2019). Causal interactive links between presence and fear in virtual reality height exposure. Front. Psychol. 10 : 141 . 10.3389/fpsyg.2019.00141 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Gruber T., Tsivilis D., Giabbiconi C.-M., Müller M. M. (2008). Induced electroencephalogram oscillations during source memory: Familiarity is reflected in the gamma band, recollection in the theta band. J. Cogn. Neurosci. 20 1043–1053. [ PubMed ] [ Google Scholar ]
  • Grunwald M., Weiss T., Mueller S., Rall L. (2014). EEG changes caused by spontaneous facial self-touch may represent emotion regulating processes and working memory maintenance. Brain Res. 1557 111–126. [ PubMed ] [ Google Scholar ]
  • Harman J., Brown R., Johnson D. (2017). “ Improved memory elicitation in virtual reality: New experimental results and insights ,” in IFIP conference on human-computer interaction , eds Bernhaupt R., Dalvi G., Joshi A., Balkrishan D., O’Neill J., Winckler M. (Cham: Springer; ), 128–146. 10.1007/978-3-319-67684-5_9 [ CrossRef ] [ Google Scholar ]
  • Hertweck S., Weber D., Alwanni H., Unruh F., Fischbach M., Latoschik M. E., et al. (2019). “ Brain activity in virtual reality: Assessing signal quality of high-resolution EEG while using head-mounted displays ,” in Proceedings of the 2019 26th IEEE conference on virtual reality and 3D user interfaces, VR , (Osaka: ), 970–971. 10.1109/VR.2019.8798369 [ CrossRef ] [ Google Scholar ]
  • Higuera-Trujillo J. L., López-Tarruella Maldonado J., Llinares Millán C. (2017). Psychological and physiological human responses to simulated and real environments: A comparison between photographs, 360 ° panoramas, and virtual reality. Appl. Ergon. 65 398–409. 10.1016/j.apergo.2017.05.006 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Hodges L. F., Kooper R., Meyer T. C., Rothbaum B. O., Opdyke D., de Graaff J. J., et al. (1995). Virtual environments for treating the fear of heights. Computer 28 27–33. 10.1109/2.391038 [ CrossRef ] [ Google Scholar ]
  • Hofmann S. M., Klotzsche F., Mariola A., Nikulin V. V., Villringer A., Gaebler M. (2019). “ Decoding subjective emotional arousal during a naturalistic VR experience from EEG using LSTMs ,” in Proceedings of the 2018 IEEE international conference on artificial intelligence and virtual reality, AIVR 2018 , (Taichung: ), 128–131. 10.1109/AIVR.2018.00026 [ CrossRef ] [ Google Scholar ]
  • Horvat M., Dobrinic M., Novosel M., Jercic P. (2018). “ Assessing emotional responses induced in virtual reality using a consumer EEG headset: A preliminary report ,” in Proceedings of the 2018 41st international convention on information and communication technology, electronics and microelectronics, MIPRO , (Opatija: ), 1006–1010. 10.23919/MIPRO.2018.8400184 [ CrossRef ] [ Google Scholar ]
  • IJsselsteijn W., de Ridder H., Freeman J., Avons S. (2000). Presence: Concept, determinants and measurement. Proc. SPIE Int. Soc. Opt. Eng. 3959 : 520 . 10.1117/12.387188 [ CrossRef ] [ Google Scholar ]
  • Jensen O., Tesche C. D. (2002). Frontal theta activity in humans increases with memory load in a working memory task. Eur. J. Neurosci. 15 1395–1399. 10.1046/j.1460-9568.2002.01975.x [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Kisker J., Gruber T., Schöne B. (2020). Virtual reality experiences promote autobiographical retrieval mechanisms: Electrophysiological correlates of laboratory and virtual experiences. Psychol. Res. 10.1007/s00426-020-01417-x [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Kisker J., Gruber T., Schöne B. (2021a). Behavioral realism and lifelike psychophysiological responses in virtual reality by the example of a height exposure. Psychol. Res. 85 68–81. [ PubMed ] [ Google Scholar ]
  • Kisker J., Gruber T., Schöne B. (2021b). Experiences in virtual reality entail different processes of retrieval as opposed to conventional laboratory settings: A study on human memory. Curr. Psychol. 40 3190–3197. [ Google Scholar ]
  • Kisker J., Lange L., Flinkenflügel K., Kaup M., Labersweiler N., Tetenborg F., et al. (2021c). Authentic fear responses in virtual reality: A mobile EEG study on affective, behavioral and electrophysiological correlates of fear. Front. Virt Real. 2 : 106 . 10.3389/frvir.2021.716318 [ CrossRef ] [ Google Scholar ]
  • Klimesch W. (1999). EEG alpha and theta oscillations reflect cognitive and memory performance: A review and analysis. Brain Res. Rev. 29 169–195. 10.1016/S0165-0173(98)00056-3 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Klimesch W. (2012). Alpha-band oscillations, attention, and controlled access to stored information. Trends Cogn. Sci. 16 606–617. 10.1016/j.tics.2012.10.007 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Klotzsche F., Mariola A., Hofmann S., Nikulin V. V., Villringer A., Gaebler M. (2018). “ Using EEG to decode subjective levels of emotional arousal during an immersive VR roller coaster ride ,” in Proceedings of the 2018 25th IEEE conference on virtual reality and 3D user interfaces, VR , (Tuebingen: ), 605–606. 10.1109/VR.2018.8446275 [ CrossRef ] [ Google Scholar ]
  • Knyazev G. G., Savostyanov A. N., Levin E. A. (2004). Alpha oscillations as a correlate of trait anxiety. Int. J. Psychophysiol. 53 147–160. 10.1016/j.ijpsycho.2004.03.001 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Kothgassner O. D., Felnhofer A. (2020). Does virtual reality help to cut the Gordian knot between ecological validity and experimental control? Ann. Int. Commun. Assoc. 44 210–218. 10.1080/23808985.2020.1792790 [ CrossRef ] [ Google Scholar ]
  • Krijn M., Emmelkamp P. M. G., Biemond R., de Wilde de Ligny C., Schuemie M. J., Van Der Mast C. A. P. G. (2004). Treatment of acrophobia in virtual reality: The role of immersion and presence. Behav. Res. Ther. 42 229–239. 10.1016/S0005-7967(03)00139-6 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Krohne H. W., Egloff B., Kohlmann C. W., Tausch A. (1996). Untersuchungen mit einer deutschen Version der “positive and negative affect schedule” (PANAS). Diagnostica 42 139–156. [ Google Scholar ]
  • Krokos E., Plaisant C., Varshney A. (2019). Virtual memory palaces: Immersion aids recall. Virt. Real. 23 1–15. 10.1007/s10055-018-0346-3 [ CrossRef ] [ Google Scholar ]
  • Lakens D. (2017). Equivalence tests: A practical primer for t tests, correlations, and meta-analyses. Soc. Psychol. Personal. Sci. 8 355–362. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Lange L., Osinsky R. (2021). Aiming at ecological validity—Midfrontal theta oscillations in a toy gun shooting task. Eur. J. Neurosci. 54 8214–8224. 10.1111/ejn.14977 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Laux L., Glanzmann P., Schaffner P., Spielberger C. (1981). State-trait-angstinventar [German version of State-Trait-Anxiety Inventory]. Göttingen: Hogrefe. [ Google Scholar ]
  • Lee S. M., Jang K. I., Chae J. H. (2017). Electroencephalographic correlates of suicidal ideation in the theta-band. Clin. EEG Neurosci. 48 316–321. 10.1177/1550059417692083 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Liebherr M., Corcoran A. W., Alday P. M., Coussens S., Bellan V., Howlett C., et al. (2021). EEG and behavioral correlates of attentional processing while walking and navigating naturalistic environments. Sci. Rep. 11 : 22325 . 10.1038/s41598-021-01772-8 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Lin J. H. T. (2017). Fear in virtual reality (VR): Fear elements, coping reactions, immediate and next-day fright responses toward a survival horror zombie virtual reality game. Comput. Hum. Behav. 72 350–361. 10.1016/j.chb.2017.02.057 [ CrossRef ] [ Google Scholar ]
  • Lucarotti C., Oddo C. M., Vitiello N., Carrozza M. C. (2013). Synthetic and bio-artificial tactile sensing: A review. Sensors (Switzerland) 13 1435–1466. 10.3390/s130201435 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Luft C. D. B., Bhattacharya J. (2015). Aroused with heart: Modulation of heartbeat evoked potential by arousal induction and its oscillatory correlates. Sci. Rep. 5 : 15717 . 10.1038/srep15717 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • MacArthur C., Grinberg A., Harley D., Hancock M. (2021). “ You’re making me sick: A systematic review of how virtual reality research considers gender & cybersickness ,” in Proceedings of the 2021 CHI conference on human factors in computing systems , (New York, NY: ), 1–15. [ Google Scholar ]
  • McLean C. P., Anderson E. R. (2009). Brave men and timid women? A review of the gender differences in fear and anxiety . Clin. Psychol. Rev. 29 , 496–505. 10.1016/j.cpr.2009.05.003 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Moran T. P., Bernat E. M., Aviyente S., Schroder H. S., Moser J. S. (2014). Sending mixed signals: Worry is associated with enhanced initial error processing but reduced call for subsequent cognitive control. Soc. Cogn. Affect. Neurosci. 10 1548–1556. 10.1093/scan/nsv046 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Murphy D., Skarbez R. (2022). What do we mean when we say “presence”? Pres. Virt. Augment Real. 29 171–190. [ Google Scholar ]
  • Nilsson N. C., Nordahl R., Serafin S. (2016). Immersion revisited: A review of existing definitions of immersion and their relation to different theories of presence. Hum. Technol. 12 108–134. 10.17011/ht/urn.201611174652 [ CrossRef ] [ Google Scholar ]
  • Nolan H., Whelan R., Reilly R. B. (2010). FASTER: Fully automated statistical thresholding for EEG artifact rejection. J. Neurosci. Methods 192 152–162. 10.1016/j.jneumeth.2010.07.015 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Oing T., Prescott J. (2018). Implementations of virtual reality for anxiety-related disorders: Systematic review. J. Med. Intern. Res. 6 : e10965 . 10.2196/10965 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Opriş D., Pintea S., García-Palacios A., Botella C., Szamosközi Ş, David D. (2012). Virtual reality exposure therapy in anxiety disorders: A quantitative meta-analysis. Depress. Anxiety 29 85–93. 10.1002/da.20910 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Packheiser J., Berretz G., Rook N., Bahr C., Schockenhoff L., Güntürkün O., et al. (2021). Investigating real-life emotions in romantic couples: A mobile EEG study. Sci. Rep. 11 : 1142 . [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Packheiser J., Schmitz J., Pan Y., El Basbasse Y., Friedrich P., Güntürkün O., et al. (2020). Using mobile EEG to investigate alpha and beta asymmetries during hand and foot use. Front. Neurosci. 14 : 109 . 10.3389/fnins.2020.00109 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Parsons T. D. (2015). Virtual reality for enhanced ecological validity and experimental control in the clinical, affective and social neurosciences. Front. Hum. Neurosci. 9 : 660 . 10.3389/fnhum.2015.00660 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Perez C. C. (2019). Invisible women: Exposing data bias in a world designed for men. London: Vintage Books. [ Google Scholar ]
  • Peterson S. M., Furuichi E., Ferris D. P. (2018). Effects of virtual reality high heights exposure during beam-walking on physiological stress and cognitive loading. PLoS One 13 : e0200306 . 10.1371/journal.pone.0200306 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Piccione J., Collett J., De Foe A. (2019). Virtual skills training: The role of presence and agency. Heliyon 5 : e02583 . 10.1016/j.heliyon.2019.e02583 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Putman P., Verkuil B., Arias-Garcia E., Pantazi I., Van Schie C. (2014). EEG theta/beta ratio as a potential biomarker for attentional control and resilience against deleterious effects of stress on attention. Cogn. Affect. Behav. Neurosci. 14 782–791. 10.3758/s13415-013-0238-7 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Renoult L., Davidson P. S. R., Palombo D. J., Moscovitch M., Levine B. (2012). Personal semantics: At the crossroads of semantic and episodic memory. Trends Cogn. Sci. 16 550–558. 10.1016/j.tics.2012.09.003 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Reyes del Paso G. A., Langewitz W., Mulder L. J. M., van Roon A., Duschek S. (2013). The utility of low frequency heart rate variability as an index of sympathetic cardiac tone: A review with emphasis on a reanalysis of previous studies. Psychophysiology 50 477–487. 10.1111/psyp.12027 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Riva G. (2006). “ Virtual reality ,” in Wiley encyclopedia of biomedical engineering , ed. Akay M. (New York, NY: John Wiley & Sons; ), 1–10. 10.1007/978-3-319-98390-5_34-1 [ CrossRef ] [ Google Scholar ]
  • Riva G., Davide F., Ijsselsteijn W. (2003). Being there: Concepts, effects and measurement of user presence in synthetic environments. Amsterdam: IOS Press, 110–118. [ Google Scholar ]
  • Riva G., Mantovani F., Capideville C. S., Preziosa A., Morganti F., Villani D., et al. (2007). Affective interactions using virtual reality: The link between presence and emotions. Cyberpsychol. Behav. 10 45–56. 10.1089/cpb.2006.9993 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Rothbaum B. O., Hodges L. F., Kooper R., Opdyke D., Williford J. S., North M. (1995). Virtual reality graded exposure in the treatment of acrophobia: A case report. Behav. Ther. 26 547–554. 10.1016/S0005-7894(05)80100-5 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Sadaghiani S., Scheeringa R., Lehongre K., Morillon B., Giraud A. L., Kleinschmidt A. (2010). Intrinsic connectivity networks, alpha oscillations, and tonic alertness: A simultaneous electroencephalography/functional magnetic resonance imaging study. J. Neurosci. 30 10243–10250. 10.1523/JNEUROSCI.1004-10.2010 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Salahuddin L., Cho J., Jeong M. G., Kim D. (2007). “ Ultra short term analysis of heart rate variability for monitoring mental stress in mobile settings ,” in Proceedings of the 2007 29th annual international conference of the ieee engineering in medicine and biology society , (Piscataway, NJ: IEEE; ), 4656–4659. [ PubMed ] [ Google Scholar ]
  • Sauseng P., Klimesch W., Doppelmayr M., Hanslmayr S., Schabus M., Gruber W. R. (2004). Theta coupling in the human electroencephalogram during a working memory task. Neurosci. Lett. 354 123–126. 10.1016/j.neulet.2003.10.002 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Schipke A. G. (1999). Effect of respiration rate on short-term heart rate variability. J. Clin. Basic Cardiol. 2 92–95. [ Google Scholar ]
  • Schlink B. R., Peterson S. M., Hairston W. D., König P., Kerick S. E., Ferris D. P. (2017). Independent component analysis and source localization on mobile EEG data can identify increased levels of acute stress. Front. Hum. Neurosci. 11 : 310 . 10.3389/fnhum.2017.00310 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Scholz S., Schneider S. L., Rose M. (2017). Differential effects of ongoing EEG beta and theta power on memory formation. PLoS One 12 : e0171913 . 10.1371/journal.pone.0171913 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Schöne B., Kisker J., Sylvester R. S., Radtke E. L., Gruber T. (2021). Library for universal virtual reality experiments (luVRe): A standardized immersive 3D/360 ° picture and video database for VR based research. Curr. Psychol. 1–19. 10.1007/s12144-021-01841-1 [ CrossRef ] [ Google Scholar ]
  • Schöne B., Schomberg J., Gruber T., Quirin M. (2016). Event-related frontal alpha asymmetries: Electrophysiological correlates of approach motivation. Exp. Brain Res. 234 559–567. 10.1007/s00221-015-4483-6 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Schöne B., Sylvester R. S., Radtke E. L., Gruber T. (2020). Sustained inattentional blindness in virtual reality and under conventional laboratory conditions. Virt. Real. 10.1007/s10055-020-00450-w [ CrossRef ] [ Google Scholar ]
  • Schöne B., Wessels M., Gruber T. (2019). Experiences in virtual reality: A window to autobiographical memory. Curr. Psychol. 38 715–719. 10.1007/s12144-017-9648-y [ CrossRef ] [ Google Scholar ]
  • Schubert T., Friedmann F., Regenbrecht H. (2001). The experience of presence: Factor analytic insights. Pres. Teleoperat. Virt. Environ. 10 266–281. 10.1162/105474601300343603 [ CrossRef ] [ Google Scholar ]
  • Schubring D., Schupp H. T. (2021). Emotion and brain oscillations: High arousal is associated with decreases in alpha- and lower beta-band power. Cereb. Cortex 31 1597–1608. 10.1093/cercor/bhaa312 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Schubring D., Kraus M., Stolz C., Weiler N., Keim D. A., Schupp H. (2020). Virtual reality potentiates emotion and task effects of alpha/beta brain oscillations. Brain Sci. 10 : 537 . 10.3390/brainsci10080537 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Serino S., Repetto C. (2018). New trends in episodic memory assessment: Immersive 360 ° ecological videos. Front. Psychol. 9 : 1878 . 10.3389/fpsyg.2018.01878 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Shafer D. M. (2021). The effects of interaction fidelity on game experience in virtual reality. Psychol. Popular Media 10 : 457 . [ Google Scholar ]
  • Shafer D. M., Carbonara C. P., Korpi M. F. (2017). Modern virtual reality technology: Cybersickness, sense of presence, and gender. Media Psychol. Rev. 11 : 1 . [ Google Scholar ]
  • Shaffer F., Ginsberg J. P. (2017). An overview of heart rate variability metrics and norms. Front. Public Health 5 : 258 . 10.3389/fpubh.2017.00258 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Sheridan T. B. (1992). Musings on telepresence and virtual presence. Pres. Teleoperat. Virt. Environ. 1 120–126. 10.1162/pres.1992.1.1.120 [ CrossRef ] [ Google Scholar ]
  • Shiota M. N., Neufeld S. L., Yeung W. H., Moser S. E., Perea E. F. (2011). Feeling good: Autonomic nervous system responding in five positive emotions. Emotion 11 1368–1378. 10.1037/a0024278 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Simeonov P. I., Hsiao H., Dotson B. W., Ammons D. E. (2005). Height effects in real and virtual environments. Hum. Fact. 47 430–438. 10.1518/0018720054679506 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Singh H., Bauer M., Chowanski W., Sui Y., Atkinson D., Baurley S., et al. (2014). The brain’s response to pleasant touch: An EEG investigation of tactile caressing. Front. Hum. Neurosci. 8 : 893 . 10.3389/fnhum.2014.00893 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Slater M. (2009). Place illusion and plausibility can lead to realistic behaviour in immersive virtual environments. Philos. Trans. R. Soc. B Biol. Sci. 364 3549–3557. 10.1098/rstb.2009.0138 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Slater M. (2018). Immersion and the illusion of presence in virtual reality. Br. J. Psychol. 109 431–433. [ PubMed ] [ Google Scholar ]
  • Slater M., Sanchez-Vives M. V. (2016). Enhancing our lives with immersive virtual reality. Front. Robot. AI 3 : 74 . 10.3389/frobt.2016.00074 [ CrossRef ] [ Google Scholar ]
  • Slater M., Gonzalez-Liencres C., Haggard P., Vinkers C., Gregory-Clarke R., Jelley S., et al. (2020). The ethics of realism in virtual and augmented reality. Front. Virt. Real. 1 : 1 . 10.3389/frvir.2020.00001 [ CrossRef ] [ Google Scholar ]
  • Slater M., Usoh M., Steed A. (1994). Depth of presence in virtual environments. Pres. Teleoperat. Virt. Environ. 3 130–144. 10.1162/pres.1994.3.2.130 [ CrossRef ] [ Google Scholar ]
  • Smith S. A. (2019). Virtual reality in episodic memory research: A review. Psychon. Bull. Rev. 26 1213–1237. 10.3758/s13423-019-01605-w [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Stoffregen T. A., Bardy B. G., Merhi O. A., Oullier O. (2004). Postural responses to two technologies for generating optical flow. Presence 13 601–615. [ Google Scholar ]
  • Stoffregen T. A., Bardy B. G., Smart L. J., Pagulayan R. J. (2003). “ On the nature and evaluation of fidelity in virtual environments ,” in Virtual and adaptive environments: Applications, implications, and human performance issues , eds Hettinger L. J., Haas M. W. (Hillsdale, NJ: Lawrence Erlbaum Associates Publishers; ), 111–128. [ Google Scholar ]
  • Suetsugi M., Mizuki Y., Ushijima I., Kobayashi T., Tsuchiya K., Aoki T., et al. (2000). Appearance of frontal midline theta activity in patients with generalized anxiety disorder. Neuropsychobiology 41 108–112. 10.1159/000026641 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Taelman J., Vandeput S., Spaepen A., Van Huffel S. (2008). Influence of mental stress on heart rate and heart rate variability. IFMBE Proc. 22 1366–1369. 10.1007/978-3-540-89208-3_324 [ CrossRef ] [ Google Scholar ]
  • Usoh M., Catena E., Arman S., Slater M. (2000). Using presence questionnaires in reality. Pres. Teleoperat. Virt. Environ. 9 497–503. 10.1162/105474600566989 [ CrossRef ] [ Google Scholar ]
  • Uusberg A., Thiruchselvam R., Gross J. J. (2014). Using distraction to regulate emotion: Insights from EEG theta dynamics. Int. J. Psychophysiol. 91 254–260. 10.1016/j.ijpsycho.2014.01.006 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Visser A., Büchel D., Lehmann T., Baumeister J. (2022). Continuous table tennis is associated with processing in frontal brain areas: An EEG approach. Exp. Brain Res. 240 1899–1909. 10.1007/s00221-022-06366-y [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Welch R. B., Blackmon T. T., Liu A., Mellers B. A., Welch R. B., Blackmon T. T., et al. (1996). The effects of pictorial realism, delay of visual feedback, and observer interactivity on the subjective sense of presence. Pres. Teleoperat. Visual Environ. 5 263–273. 10.1162/pres.1996.5.3.263 [ CrossRef ] [ Google Scholar ]
  • Wilson M. L., Kinsela A. J. (2017). “ Absence of gender differences in actual induced hmd motion sickness vs. pretrial susceptibility ratings ,” in Proceedings of the human factors and ergonomics society annual meeting , Vol. 61 (Los Angeles, CA: SAGE Publications; ), 1313–1316. [ Google Scholar ]
  • Wolf E., Schüler T., Morisse K. (2020). “ Impact of virtual embodiment on the perception of virtual heights ,” in Augmented reality and virtual reality , eds Jung T., tom Dieck M. C., Rauschnabel P. A. (Cham: Springer; ), 197–211. 10.1007/978-3-030-37869-1_17 [ CrossRef ] [ Google Scholar ]
  • Zuckerman M. (1996). Item revisions in the sensation seeking scale form V (SSS-V). Pers. Individ. Differ. 20 : 515 . 10.1016/0191-8869(95)00195-6 [ CrossRef ] [ Google Scholar ]

  • Systematic Review
  • Open access
  • Published: 06 December 2021

Virtual reality: a powerful technology to provide novel insight into treatment mechanisms of addiction

  • Massimiliano Mazza   ORCID: orcid.org/0000-0001-5143-7198 1 ,
  • Kornelius Kammler-Sücker   ORCID: orcid.org/0000-0002-6127-5468 2 , 3 ,
  • Tagrid Leménager 1 ,
  • Falk Kiefer 1 &
  • Bernd Lenz   ORCID: orcid.org/0000-0001-6086-0924 1  

Translational Psychiatry volume  11 , Article number:  617 ( 2021 ) Cite this article

10k Accesses

22 Citations

38 Altmetric

Metrics details

  • Pathogenesis
  • Scientific community

Due to its high ecological validity, virtual reality (VR) technology has emerged as a powerful tool for mental health research. Despite the wide use of VR simulations in research on mental illnesses, the study of addictive processes through VR environments is still in its infancy. In a systematic literature search, we identified 38 reports of research projects using highly immersive head-mounted displays, goggles, or CAVE technologies to provide insight into treatment mechanisms of addictive behaviors. So far, VR research has mainly addressed the roles of craving, psychophysiology, affective states, cognition, and brain activity in addiction. Computer-generated VR environments offer very realistic, dynamic, interactive, and complex real-life simulations requiring active participation. They create a high sense of immersion in users by combining stereoscopic three-dimensional visual, auditory, olfactory, and tactile perceptions, tracking systems responding to user movements, and social interactions. VR is an emerging tool for studying how proximal multisensorial cues, contextual environmental cues, and their interaction (complex cues) modulate addictive behaviors. VR allows for experimental designs under highly standardized, strictly controlled, predictable, and repeatable conditions. Moreover, VR simulations can be personalized, and they are currently being refined for psychotherapeutic interventions. Embodiment, eye-tracking, and neurobiological factors represent novel future directions. The progress of VR applications has opened promising avenues for advancing the understanding of treatment mechanisms underlying addictions, which researchers have only recently begun to exploit. VR methods promise to yield significant advances for the addiction field, which are necessary to develop more efficacious and efficient preventive and therapeutic strategies.

Similar content being viewed by others


Personalized brain circuit scores identify clinically distinct biotypes in depression and anxiety


A virtual rodent predicts the structure of neural activity across behaviors


Microdosing with psilocybin mushrooms: a double-blind placebo-controlled study

Introduction

Addictive disorders are highly prevalent and very complex conditions. They place a heavy burden on affected individuals, their relatives, and society in general. Many patients with addictions show a chronic relapsing course and cannot maintain long-term abstinence. So far, the mechanisms underlying addictions are poorly understood, and we lack reliable, sufficiently accurate, and objective biomarkers of addictive behavior. To diagnose addictive disorders, clinicians mainly rely on self-report measures, which are subject to diverse biases. A better understanding of the interaction between biopsychosocial and environmental factors in the development and maintenance of addictive disorders is needed to improve the therapeutic outcome of substance use disorders (SUD).

Cognitive, behavioral, and physiological reactivity to environmental cues is a well-established phenomenon in the pathophysiology of addictions [1, 2]. Craving is an important cue-elicited behavior, as it provokes drug-seeking behavior with preparatory physiological responses [3] and ultimately relapse in previously successful abstainers [4]. Besides being a central characteristic of addictions, craving is also an important diagnostic criterion and a major factor in the chronic relapsing course of addictive disorders [4, 5, 6]. It includes psychological, cognitive, emotional, physiological, and behavioral aspects related to desiring or “wanting” a drug or an experience, with subsequent seeking behavior [7].

Craving is elicited by a myriad of triggers including the location and situation (i.e., at home, in a bar or restaurant, at a party), social cues, time of day, day of the week, emotional state, gender, and age [8]. Relapses are usually preceded by complex and interindividually varying high-risk situations with powerful but often unknown cues. These involve visual, auditory, olfactory, and tactile perceptions, as well as social interactions. Many cue reactivity studies have investigated the effects of direct cues and neglected contextual triggers, as these are hard to simulate in laboratory and clinical settings. Thus, novel methods are necessary to gain deeper insight into how environmental factors trigger an individual's addictive behaviors.

The available therapeutic strategies to empower patients with addictions to deal with craving, physiological responses, and dysfunctional cognitions, with the goal of preventing relapse, have limited efficacy. The same is true for cue exposure therapy, which is used to extinguish the strongly linked stimulus-response association of an addictive behavior and which can be combined with biofeedback training [9, 10]. The traditional in vivo methods, using pictures and photographs, passive videos, olfactory scents, tactile materials such as wine glasses, bongs, and pipes, and guided imagery programs, offer only limited ecological validity. This helps explain the limited efficacy observed in today's clinical settings; e.g., lab bars have been shown to induce lower expectancies of alcohol effects than real bars [11]. Moreover, patients may not be able to apply behavioral strategies learned in cue-free clinical settings when they are in their natural, cue-laden environments.

Mainly due to their high ecological validity, virtual reality (VR) technologies have emerged as very powerful tools to overcome the limitations of classical laboratory and clinical settings. In VR environments, individuals are immersed in a computer-generated world that reacts to their motions and behaviors. In comparison to in vivo settings, VR enhances the breadth, salience, and standardizability of addiction-related cues. The use of VR environments in addiction research is still in its early stages, but it has a very promising future.

VR technologies facilitate the investigation of environmental triggers [12]. They make it possible to differentially assess the effects of cues on the behavior and biology of individuals with addictive behaviors. Cues are divided into multisensorial proximal cues, which refer to specific objects directly related to drug use or addictive behavior (e.g., the sight and smell of a preferred alcoholic beverage, a cigarette, syringes); contextual environmental cues (e.g., a convenience store), with or without social interaction; and complex cues, i.e., how proximal and contextual cues interact with each other to influence addictive behavior [12, 13, 14, 15]. VR technology provides an excellent basis to investigate the effects of various combinations of proximal and contextual cues, which is important because preclinical evidence suggests that contexts may influence drug-seeking behavior independently of direct cues [16]. Cue exposures in VR settings are more potent and reach higher ecological validity than traditional methods of evoking craving [17]. In particular, VR simulations applied via head-mounted displays (HMDs), goggles, or a CAVE, i.e., projection wall systems, are promising tools [18].
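
As an illustration of this cue taxonomy, the following is a minimal sketch of how a VR cue-exposure trial might be parameterized by cue type; the class names and example scene are hypothetical and not taken from any of the reviewed studies.

```python
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import List

class CueType(Enum):
    PROXIMAL = auto()    # object directly tied to use, e.g., a beer glass
    CONTEXTUAL = auto()  # environment, e.g., a bar or convenience store
    COMPLEX = auto()     # interaction of proximal and contextual cues

@dataclass
class Cue:
    label: str
    cue_type: CueType
    social: bool = False  # whether avatars / social pressure are involved

@dataclass
class ExposureTrial:
    scene: str
    cues: List[Cue] = field(default_factory=list)
    duration_s: float = 120.0

# hypothetical example: a party scene combining contextual and proximal cues
trial = ExposureTrial(
    scene="virtual_party",
    cues=[
        Cue("bar counter", CueType.CONTEXTUAL),
        Cue("glass of beer", CueType.PROXIMAL),
        Cue("avatar offering a drink", CueType.COMPLEX, social=True),
    ],
)
```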

VR simulations are already successfully employed in exposure-based treatments of other mental illnesses such as anxiety disorders [ 19 ] and posttraumatic stress disorder [ 20 ]. A recent review of literature from the addiction field has shown that VR simulations can elicit craving, although treatment effects have been found to be heterogeneous [ 21 ].

In this article, our aim is to review the available studies using VR technology via HMDs, goggles, or a CAVE system to elucidate the mechanistic underpinnings of addictive behaviors. Our systematic literature search identified 38 studies. We discuss the advantages of VR simulations over traditional laboratory and clinical settings, the challenges and potentials linked to the use of VR, and the shortcomings of the available literature. We also propose future directions for VR research on addictive behaviors.

The literature search was conducted on March 17, 2021, using the following search term in PubMed ( https://pubmed.ncbi.nlm.nih.gov/ ): (virtual[Title]) AND ((addictive[Title]) OR (addiction[Title]) OR (substance[Title]) OR (alcohol[Title]) OR (cocaine[Title]) OR (cannabis[Title]) OR (opioid[Title]) OR (tobacco[Title]) OR (nicotine[Title]) OR (methamphetamine[Title]) OR (gaming[Title]) OR (gambling[Title])). The eligibility criteria were addiction as a relevant target, original data, publication in English, clarification of mechanisms underlying addiction, and use of an HMD, goggles, or a CAVE. We excluded studies without evidence of HMD, goggle, or CAVE use because these immersive technologies provide higher ecological validity with stronger effects. The studies were screened and selected independently by two researchers (MM and BL); differences were discussed, and in all cases a compromise was agreed on. Of the initially identified 195 records, 38 studies could be included (see Fig. 1 for the PRISMA diagram and Table 1 for the selected studies), and the reported results were divided into the following categories: craving, psychophysiology and affective states, attention and cognition, brain activity, other treatment mechanisms, and study protocols. Table 1 also shows the country where each study was conducted, the sex composition, the average age, and whether a social component was included in the experiment. This systematic review was not pre-registered.
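
For readers who want to reproduce or update this query programmatically, the sketch below runs the same title-field search against the NCBI E-utilities esearch endpoint; the result count will naturally differ from the 195 records reported above because the database has grown since the search date, and the retmax value is an arbitrary assumption.

```python
import requests

ESEARCH_URL = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

term = (
    "(virtual[Title]) AND ((addictive[Title]) OR (addiction[Title]) OR "
    "(substance[Title]) OR (alcohol[Title]) OR (cocaine[Title]) OR "
    "(cannabis[Title]) OR (opioid[Title]) OR (tobacco[Title]) OR "
    "(nicotine[Title]) OR (methamphetamine[Title]) OR (gaming[Title]) OR "
    "(gambling[Title]))"
)

params = {"db": "pubmed", "term": term, "retmode": "json", "retmax": 500}
response = requests.get(ESEARCH_URL, params=params, timeout=30)
result = response.json()["esearchresult"]

print("Records found:", result["count"])
print("First PMIDs:", result["idlist"][:10])
```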

Figure 1

The diagram illustrates the numbers of records identified, excluded, and included, and the reasons for exclusion. HMD, head-mounted display.

A plethora of complex pathogenetic models, including affective and cognitive processing, conditioned learning, motivational processes, and the incentive sensitization theory, have been associated with craving [22, 23]. Environmental cues trigger craving, which increases the risk of relapse [24]. However, the complexity of this model should not be underestimated, as craving is elicited by an interaction of various individual and environmental factors including affective states, autobiographical information, and sensory input. VR simulations offer dynamic and complex exposure settings, which is a prerequisite for gaining a more in-depth understanding of which factors trigger craving [25]. This might result in novel treatment strategies. For example, cue reactivity in VR simulations might reliably predict who will relapse in the future or who will respond to cue exposure therapy.

VR simulations provoke alcohol craving in subjects with alcohol abuse or dependence [26]. Diverse VR environments such as a home, bar, restaurant, and pub elicit alcohol craving in patients with alcohol use disorder (AUD), but not significantly in social drinkers [27]. In VR, heavy alcohol drinkers exhibit higher craving scores than occasional drinkers, and the extent of ecological validity correlates with craving scores in heavy, but not in occasional, drinkers [28]. In patients with alcohol dependence, Lee et al. [29] investigated the interaction between cue-laden and cue-free environments, social pressure, and group affiliation (addicted patients vs. controls) on the induction of craving. In the cue-free VR environment, social pressure, i.e., persuasion to consume alcohol, increased craving in both patients and controls, whereas in the cue-laden VR environment, social pressure enhanced craving in the controls but not in the addicted group, as the latter already experienced high levels of craving due to the environmental cues. In a VR project on cross-cue reactivity, nicotine-dependent but non-alcohol-dependent subjects showed lower alcohol craving than nicotine- and alcohol-dependent subjects in an office environment, but not in a party simulation [12]. This observation suggests that non-alcohol-dependent smokers are more vulnerable to contextual cues. With regard to treatment effects, ten sessions of VR treatment reduced craving in alcohol-dependent patients more strongly than treatment as usual consisting of education and cognitive therapy [30]. In a single-blinded randomized trial, treatment as usual combined with six VR cue exposure therapy sessions over a period of approximately five weeks was superior to treatment as usual alone in reducing craving from admission to discharge in patients with AUD [31]. Notably, the study results also indicate that VR cue exposure therapy might be more efficacious in patients with higher levels of craving than in those with lower levels. The effectiveness of VR cue-exposure therapy has also been shown in a case report [32]. Taken together, alcohol craving is elicited in cue-laden VR environments; the effects differ between patients with AUD and social drinkers, depend on the social context, and are affected by cross-cue reactivity. Moreover, preliminary evidence suggests that VR cue exposure therapy might outperform traditional methods.

Similar to alcohol, VR simulations reliably induce craving for smoking [33]. Smoking cues in VR environments increase craving for cigarette smoking more strongly than cues presented in pictures [34] or than non-smoking or neutral cues [12, 25, 35, 36, 37]. Craving for smoking is induced by various environments including home, pub, restaurant, and street scenarios [38]. Interestingly, the subjective level of presence correlates positively with craving for smoking a cigarette [38]. Accordingly, craving is induced less strongly in a paraphernalia room containing sights, sounds, and scents linked to tobacco smoking than in a party room with similar smoking cues plus interacting avatars offering cigarettes, drinks being poured, conversations, and cigarettes being lit up [25]. Craving for cigarette smoking was found to be more persistent and to show a carry-over effect into the neutral condition in alcohol-dependent smokers, but not in non-alcohol-dependent smokers [12]. In treatment studies, VR cognitive behavioral therapy for cigarette smokers reduces craving [39, 40]. Six sessions of cue exposure therapy also reduced the morning smoking count [41]. Moreover, VR cue-exposure therapy and cognitive behavioral therapy are effective for smoking cessation, although the intervention effects on craving only showed a trend in one study [42]. However, null findings of cue-exposure therapy on craving have also been reported [43]. Altogether, VR simulations with smoking cues induce craving, and, similar to the alcohol field, repeated VR cue exposure reduces craving for smoking.

Illicit drugs

Drug-related cues also elicit craving in users of illicit drugs [44, 45]. VR simulations that depict the use of cocaine and allow for interacting with a dealer increase craving in cocaine-dependent individuals [46]. VR marijuana environments elicit craving in individuals with cannabis abuse or dependence, with even large effect sizes [47]. In patients with methamphetamine dependence, cue-induced craving is associated with cue-elicited changes in heart rate variability [48], and a counterconditioning approach pairing methamphetamine-related cues with aversive stimuli (e.g., contact with the police, hallucinations, infections, death) reduces craving for and liking of methamphetamine (in comparison to a waiting list) [49].

Internet gaming and gambling

In a VR Internet café simulation, patients with internet gaming disorder (IGD) develop higher craving than controls. Interestingly, the simulations of entering an internet café and being invited to game result in higher craving than observing a conversation about internet games. Moreover, craving was lower during a cognitive-behavioral-oriented refusal skill practice than during the invitation-to-game condition [50]. In gamblers, being on the sidewalk of a street and seeing a bar elicit craving, which highlights the importance of contextual stimuli [51]. The VR simulation also provokes craving more strongly in frequent than in infrequent gamblers, with effect sizes comparable to a real video lottery terminal [52]. Different VR casino environments increase craving also in recreational gamblers [53]. The effect was strongest in the scenario of playing a casino game, and the authors speculate that higher presence relative to the other scenarios (navigating the casino, choosing chips to be exchanged, witnessing a jackpot scene, and a discussion about gambling) might account for this observation. However, a single 20-minute exposure was not long enough to induce extinction [51]. These data show that VR simulations can induce craving also in individuals with non-substance-related addictions. Moreover, they highlight the diverse behavioral effects of complex cues and underline the need for experimental research using VR technologies.

Physiology and affective states

Perceived stress, depressed mood, and anxiety are important factors in the development and maintenance of SUD [54, 55], and anxiety is involved in eliciting craving [56]. VR studies have used skin conductance, which might serve as an objective mirror of affective states [57], and heart rate variability to investigate the role of the autonomic nervous system and emotions in addiction.

Cue exposure induces anxiety in patients with AUD, whereas it decreases anxiety in social drinkers, and these cue-elicited anxiety effects differentiate better between patients and social drinkers than cue-elicited craving levels [27]. In a case report of a male patient with long-lasting severe AUD, six cue-exposure sessions reduced cue-elicited anxiety levels by 95% [32].

In nicotine-dependent subjects, heart rate is increased by smoking cues (relative to neutral cues) such as an avatar lighting up a cigarette or offering a cigarette to the participant and a burning cigarette in an ashtray, but not by viewing a cigarette pack on the table [25]. This indicates that animated simulations outperform unanimated ones. Smoking cues also heighten skin conductance levels, and social situations increase skin conductance levels more than object cues related to cigarette smoking [33]. Moreover, the psychophysiological response in skin conductance decreases with repeated exposures [33].

Cocaine-related VR simulations increase heart rate and reduce happiness in cocaine-dependent individuals [46]. In male patients with methamphetamine use disorder (MUD), VR methamphetamine-cue exposure induces higher skin conductance than the neutral condition [44]. Furthermore, VR cue exposure increases skin conductance more strongly in male patients with MUD than in healthy controls. In this project, galvanic skin response was also superior to electroencephalogram (EEG) signals for classifying patients vs. controls [58]. In men with methamphetamine dependence, methamphetamine-cue VR exposure induces larger heart rate variability in affected subjects, while it reduces heart rate variability in healthy controls [48]. Also in a Taiwanese study, heart rate variability, galvanic skin response, and pupil size changed from pre to post VR stimulation in patients with MUD, but not or only weakly in controls. Patients differed significantly from controls after the VR simulation in heart rate variability and skin conductance [45]. These data are consistent with emotional arousal during VR simulation in patients with SUD, but not in controls. Concerning the potential therapeutic outlook, a counterconditioning procedure in a VR environment produced larger pre-to-post-intervention decreases in time-domain and non-linear heart rate variability indices than a waiting-list group [49].

In men with IGD, stronger anxiety and depression correlate with more severe IGD and a higher percentage of digital activities [ 59 ]. However, electromyography, skin conductance, and heart rate did not change significantly when recreational gamblers were exposed to VR casino scenarios [ 53 ]. The authors speculate that the absent effect was due to the specific study population, which did not fulfill dependence criteria but reported recreational use.

Taken together, addiction-related cues in VR environments increase skin conductance in subjects with addictions (relative to neutral conditions and relative to controls). Because skin conductance mirrors autonomic nervous system activity, these results point to a more stressful state under VR cue exposure. Moreover, addiction-related cues in VR environments also affect heart rate, which can give insight into the regulation of emotional load [ 60 ], as well as anxiety and depression levels. Finally, animated simulations and the inclusion of social situations may reinforce the effects on physiology and affective states, probably due to higher immersion and presence.

Attention and cognition

VR simulations with embedded neuropsychological assessments can reliably and sensitively measure attention, decision-making, and visual processing speed [ 61 ]. In patients with AUD, addiction-related cues in VR attract more attention to the sight of, the smell of, and thoughts about alcohol than neutral cues [ 26 ]. In a male patient with severe AUD, six VR cue-exposure sessions reduced interference from alcohol content in the Alcohol Stroop Task [ 32 ]. A treatment study applying cognitive behavioral therapy in a VR setting to tobacco smokers showed higher attention to visual and olfactory cigarette cues and increased thoughts about smoking in cue-enriched conditions than in a neutral condition [ 40 ]. Moreover, the intervention increased smoking abstinence self-efficacy and confidence to resist smoking, and reduced the number of cigarettes smoked during treatment and up to two months post-treatment [ 39 ]. In marijuana VR environments compared to neutral environments, individuals with cannabis abuse or dependence are more likely to pay attention to the sight, the smell, and thoughts of cannabis [ 47 ]. A study in pathological gamblers highlights the feasibility of using VR in cognitive behavioral therapy: in comparison to imaginal exercises, the VR setting facilitates the identification of risk situations for gambling and tends to detect more dysfunctional thoughts [ 52 ]. Altogether, the identified studies show that VR settings are suitable for investigating how substance use affects cognitive processes and how addiction-related cues attract attention and thinking. Therefore, VR may serve as an important tool to investigate cognitive dysfunctions in patients with addictions.

Brain activity

It is well established that dysfunctional brain activity plays an important role in addictive behaviors. VR technologies have been combined with EEG and functional magnetic resonance imaging (fMRI) to provide insight into treatment mechanisms of addiction.

In alcohol-dependent patients, ten sessions of VR therapy increased absolute EEG alpha power in Fp2-A2 and F8-A2. No such effects were observed after treatment as usual, which consisted of education and cognitive behavioral therapy elements [ 30 ].

In tobacco-smoking men, smoking cues elicit stronger fMRI brain activity than neutral cues in the prefrontal cortex (superior frontal gyrus, right medial frontal gyrus, left orbital gyri), left anterior cingulate gyrus, right superior temporal gyrus, left uncus, right fusiform gyrus, right lingual gyrus, and right precuneus. The authors also report a reduction of activity in the left superior and inferior frontal gyri from pre- to post-VR cue-exposure therapy [ 43 ].

In patients with MUD, low and high gamma EEG bands in the right dorsolateral prefrontal cortex are weaker in a methamphetamine-cue VR condition compared to a neutral condition, and the EEG bands predict skin conductance levels [ 44 ]. The alterations in dorsolateral prefrontal cortex function are related to reduced impulse control ability [ 62 ], which is of direct relevance to addictive behavior.
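
For readers unfamiliar with such analyses, the sketch below illustrates one common way to quantify gamma-band activity from an EEG segment using Welch's power spectral density estimate; the sampling rate, band boundaries, and synthetic signal are illustrative assumptions and do not reproduce the pipeline of the cited study.

```python
import numpy as np
from scipy.signal import welch

def band_power(eeg, fs, band):
    """Average power spectral density of a 1-D EEG segment within a frequency band (Hz)."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)  # 2-s windows
    lo, hi = band
    mask = (freqs >= lo) & (freqs <= hi)
    return float(psd[mask].mean())

# Hypothetical 10-s EEG segment sampled at 256 Hz with a 40 Hz (low gamma) oscillation added
fs = 256
t = np.arange(0, 10, 1 / fs)
eeg = 1e-6 * np.random.randn(len(t)) + 2e-6 * np.sin(2 * np.pi * 40 * t)

print("low gamma (30-60 Hz):", band_power(eeg, fs, (30, 60)))
print("high gamma (60-100 Hz):", band_power(eeg, fs, (60, 100)))
```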

Internet gaming

An 8-week VR therapy consisting of relaxation, simulation of high-risk situations, and sound-assisted cognitive reconstruction reduced the severity of online gaming addiction in young men, with an effect size similar to that of the cognitive behavioral therapy condition. VR therapy enhanced functional connectivity in the left middle frontal–posterior cingulate cortex–bilateral temporal lobe network and resulted in improved balance in the cortico-striatal–limbic circuit. This suggests that VR therapy counteracts habitual, emotionless gaming by facilitating limbic-regulated responses to rewarding stimuli [ 63 ].

Taken together, VR studies using EEG and fMRI technology show impaired activity of the prefrontal cortex in patients with addictive disorders in comparison to control subjects; also, these alterations are linked to psychophysiological parameters and change during VR therapy.

Other treatment mechanisms

Girard et al. [ 64 ] compared the interventions “crushing cigarettes” vs. “grasping balls” in a VR environment over a 12-week study period. Crushing cigarettes was superior in reducing Fagerström scores for severity of nicotine dependence, decreasing the number of cigarettes smoked, and reaching abstinence. In terms of behavioral economics, VR tobacco cues relative to neutral cues have been shown to increase the willingness to invest money in cigarettes and to continue smoking despite high cigarette prices, and to decrease sensitivity to cigarette prices [ 36 ].

Study protocols

The systematic literature search also identified three study protocols to investigate the effects (i) of cognitive behavioral therapy and VR exposure therapy on abstinence rates, anxiety, and depression in tobacco smokers [ 65 ], (ii) of a VR-based memory retrieval-extinction intervention on craving, psychophysiology, affective states, and brain activity in men and women with MUD [ 66 ], and (iii) of mindfulness and VR cue exposure on craving, anxiety, depression, and attention bias in MUD [ 67 ].

To our knowledge, this is the first review based on a systematic literature search of studies using VR HMDs, goggles, or CAVE technology to provide insight into treatment mechanisms underlying addictive disorders. The available body of research shows that VR environments are a reliable technology to provoke craving and physiological reactions and to influence affective states, attention, cognition, and brain activity. These effects have been shown for both substance-related and non-substance-related addictive behaviors. Our predefined search criteria led to the exclusion of studies on conditions such as eating disorders and morbid obesity. However, the existing literature suggests that the effects reported above are also present in food addiction [ 68 , 69 , 70 , 71 ], highlighting their generalizability to a broad range of addiction-related behaviors.

Furthermore, VR environments have the potential to outperform traditional laboratory and clinical methods in terms of effect size, as more immersive simulations with a higher sense of presence are more powerful. The effects also depend on the characteristics of the study populations (i.e., addicted individuals, heavy users, social users), the social context, and the integration of social interaction into the VR simulation. There is also preliminary evidence for cross-cue reactivity. Finally, repeated exposures appear to reduce psychophysiological reactivity. Various therapeutic strategies, including VR cue exposure, a VR counterconditioning approach, and VR cognitive behavioral therapy addressing self-confidence and coping skills, have been tested with mixed results. Study protocols describe ongoing projects using VR exposure therapy [ 65 ], a VR-based memory retrieval-extinction intervention [ 66 ], and mindfulness [ 67 ]. However, further experimental research using VR technologies is necessary.

Advantages of VR simulations over traditional laboratory, clinical, and real-life settings

Cue exposure in VR simulations surpasses cue exposure in traditional laboratory or clinical settings in several ways, including larger effect sizes, easier and more realistic bench-to-bedside translation, a higher level of standardization, easier repeatability, reduced distraction, higher immersion and presence, and greater feasibility of studies on complex cues and complex target behaviors. VR environments elicit stronger effects than traditional laboratory or clinical settings based on pictures [ 17 , 34 ]. fMRI data also show that visually three-dimensional conditions evoke more attention and visual balance than visually two-dimensional conditions [ 72 ]. Moreover, VR environments facilitate the translation from experimental research to patient care, as they offer highly standardized and repeatable procedures without variation between experimental research settings and patient care settings. This might be particularly relevant to stress paradigms, which play an important role in addictive disorders. Once the VR environments are established, costs are low and the training setups are widely accessible, even from the home setting. However, there is still a great need for research on how exactly novel VR environments can be effectively translated to the home environment.

VR environments can simulate reality without limitations of space and time. VR also makes it possible to create environments that would be unethical, illegal, or dangerous in real life. While environmental cues of interest can be specifically inserted, unexpected and unwanted distractions, such as surrounding light and sounds, can easily be filtered out.

Patients are more willing to accept exposure therapy, and show less avoidance behavior, when it is conducted in VR instead of a real-life setting, as VR exposure is perceived as less aversive. VR technology seems particularly attractive to individuals with SUD [ 46 ]. Addictive disorders are highly comorbid with attention-deficit hyperactivity disorder [ 73 ] and social phobia [ 74 ]. Subjects with attention deficits and/or discomfort in groups might particularly benefit from VR therapies, as these might immerse such patients in the therapy more easily than traditional cognitive behavioral therapy elements without VR simulations [ 75 ].

VR simulations facilitate the combination of different proximal cues with contextual cues, thus providing an excellent environment to study reactivity to complex cues. This includes not only visual and auditory cues but also actions and social factors, such as facial expressions and social interaction [ 48 ]. Investigating how proximal, contextual, and social factors interact is important, since, for example, in smokers greater craving is elicited by visual and olfactory cues [ 76 ], contextual cues [ 77 ], and interpersonal interaction [ 78 ]. Moreover, standardization of social interaction is a challenge in laboratory experiments. However, social situations related to substance use have been shown to elicit even stronger psychophysiological reactions than object exposure alone [ 33 ]. Lee et al. [ 29 ] have highlighted the relevance of studies exploring the interactions between different cues and social interactions. They established a ceiling effect: social pressure affects craving for alcohol in controls independently of whether environmental alcohol-related cues are present, whereas in addicted subjects, social pressure increases craving for alcohol in a cue-free environment, but not in a cue-laden environment. Thus, VR technology is an excellent tool to significantly advance our understanding of how social interactions contribute to the development and maintenance of addictive behaviors.

In traditional laboratory studies, addiction phenotypes are often assessed outside of the setting and after the exposures, e.g., after stress paradigms. In contrast, VR experiments make it possible to embed the recording of behavioral and psychophysiological endpoints directly in the VR display system, which allows for real-time, event-triggered data analysis. An example of an embedded VR tool is the CONVIRT test battery, which allows the neuropsychological functions of attention, decision-making, and visual processing speed to be assessed within the VR simulation [ 61 ]. This tool also enables the measurement of saccadic eye movements.
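
As a minimal sketch of what event-triggered analysis can look like, the code below aligns a continuous psychophysiological signal (e.g., skin conductance) to cue-onset events logged by the VR engine and extracts baseline-corrected peri-event windows; the sampling rate, window lengths, and synthetic data are hypothetical and not tied to any specific system described above.

```python
import numpy as np

def event_triggered_epochs(signal, timestamps, event_times, pre_s=2.0, post_s=8.0):
    """Extract baseline-corrected peri-event windows from a continuous signal.

    signal      : 1-D array of samples (e.g., skin conductance in microsiemens)
    timestamps  : sample times in seconds, same length as `signal`
    event_times : times (s) at which a VR cue (e.g., an avatar offering a drink) appeared
    Returns an array of shape (n_events, n_samples_per_window).
    """
    fs = 1.0 / np.median(np.diff(timestamps))       # sampling rate estimated from timestamps
    n_pre, n_post = int(pre_s * fs), int(post_s * fs)
    epochs = []
    for t in event_times:
        onset = np.searchsorted(timestamps, t)      # sample index of cue onset
        if onset - n_pre >= 0 and onset + n_post <= len(signal):
            window = signal[onset - n_pre : onset + n_post]
            epochs.append(window - window[:n_pre].mean())  # baseline-correct to the pre-cue period
    return np.array(epochs)

# Hypothetical 4 Hz skin conductance recording with two cue-onset events logged by the VR engine
ts = np.arange(0, 60, 0.25)
scl = 5 + 0.01 * ts + 0.2 * np.random.randn(len(ts))
epochs = event_triggered_epochs(scl, ts, event_times=[20.0, 40.0])
print(epochs.shape)
```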

Furthermore, in comparison to traditional laboratory studies that assess drug craving, drug effects, and self-administration [ 79 , 80 ], VR experiments allow researchers to modify environmental factors, such as the appearance of a bar, more easily. In addition, addictive substances are consumed not only in bars but also at home (e.g., a living room setting), in public locations (e.g., open-air settings), and with friends in a wide array of private locations or clubs. All of these settings can easily be simulated using VR, which allows cue-elicited reactivity to be studied in a wider range of settings than the bar laboratory paradigm alone. A limitation, however, is that we still lack empirical evidence on differences in effect size between VR settings and bar laboratory exposures. In the future, more direct comparisons are needed to determine how effects differ between VR environments and traditional laboratory, clinical, and real-life settings.

Challenges of using VR technology

Although VR technology excels in ecological validity, the effect sizes of VR simulations depend on their capability to immerse subjects and to create high levels of presence [ 81 ]. Thus, perceived ecological validity is a relevant factor when studying cue reactivity in VR simulations. For example, perceived realism predicted craving after VR stimulation in heavy alcohol drinkers [ 28 ]. The perception of presence also depends on functional fidelity [ 18 ], i.e., the ability of users to interact realistically with the environment. Presence has also been highlighted as an important factor for intervention efficacy [ 64 ]. In a study on IGD, conditions in which participants had to follow a given pathway (internet café entrance task) or interacted with avatars (gaming invitation task) evoked stronger craving than a passive condition (observing a conversation about internet games) [ 50 ]. Accordingly, a higher subjective level of presence in VR relates to more craving for smoking a cigarette [ 38 ]. These data encourage researchers to use VR environments that address multiple sensory systems, include social interaction, and require active participation. The implementation of verbal and non-verbal avatar reactions to the participants’ behavior and of complex interactivity is limited by technical constraints: programming such interactive simulations can be very time consuming and requires expert skills. Nevertheless, it is an important goal to create VR simulations with avatars that provide appropriate reactions. The establishment of multidimensional VR settings that address different sensory systems to create the highest possible realism and naturalism has only just begun, and the literature gives reason to believe that future efforts addressing this issue will generate important progress.

Another problem of VR simulations might be “cybersickness”, which refers to nausea, headaches, vomiting, and spatial disorientation and stems from conflicting visual, vestibular, and proprioceptive information. Such symptoms might limit ecological validity. However, one study found that VR was not significantly associated with relevant cybersickness [ 52 ], although the use of HMDs is more likely to cause cybersickness than VR presented in a CAVE [ 33 ]. Also, dropout rates in VR settings have been only slightly higher than in non-VR control environments [ 39 ], which indicates that VR settings pose only a low additional burden on participants. Taken together, there is so far no compelling evidence that cybersickness or safety concerns pose a significant problem for strategies based on VR technology. Nevertheless, future studies are needed to investigate the feasibility, acceptability, and safety of VR simulations in the light of adverse events. This is particularly important, as the potential for translation into clinical settings strongly depends on the ability of patients and their clinicians to use novel tools easily, effectively, and without harm.

To establish classification procedures, advanced artificial intelligence and machine learning techniques are needed to analyze the multidimensional and longitudinal data gathered in VR experiments [ 82 ]. Previous attempts have been successful. Using a split-dataset design, Ding et al. [ 58 ] trained several algorithms on EEG and skin conductance data to distinguish male patients with MUD from male controls and found good classification accuracies on a validation dataset (88.57% for random forest, 90.38% for support vector machine, and 90.68% for logistic regression).
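
The following sketch shows, under stated assumptions, what such a split-dataset classification pipeline can look like with the three classifier families mentioned above; the feature matrix and labels are synthetic stand-ins and do not reproduce the EEG and skin conductance features or results of the cited study.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 12))    # synthetic stand-in for EEG band powers + skin conductance features
y = rng.integers(0, 2, size=120)  # synthetic labels: 1 = patient, 0 = control

# Split-dataset design: hold out a validation set for reporting accuracy
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

models = {
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "support vector machine": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    "logistic regression": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    acc = accuracy_score(y_val, model.predict(X_val))
    print(f"{name}: validation accuracy = {acc:.2%}")
```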

Potential of VR technology for treatment

It is a well-known problem that psychotherapeutic treatments have only limited efficacy when applied in a clinical setting or therapy practice. Therapeutic interventions might be more effective in VR environments, as these offer more realistic high-risk situations than the relatively artificial clinic settings (which often do not allow bringing patients into high-risk environments such as bars) [ 39 ]. Therefore, VR interventions are likely to increase the effectiveness of conventional cue exposure therapies [ 21 ]. VR settings are also effective as a tool to address factors of cognitive behavioral therapy such as self-confidence and coping skills. For example, refusal skills practice in a VR internet café environment reduces craving in patients with IGD [ 50 ]. VR can create realistic environments for treatment that could not be used in real life due to safety or confidentiality concerns.

In clinical settings, it is often difficult to identify exact individual triggers, and dysfunctional behaviors can sometimes only be provoked with reduced severity. Moreover, health care professionals often have difficulty helping the patient transfer newly learned skills to the natural setting. VR technologies have the potential to overcome these hurdles, as they offer access to realistic, real-world-like environments that may make cue exposure training more effective. Hence, VR technology allows the therapist to provide advice in a naturalistic setting. Therapists have also reported other advantages of integrating VR into cognitive behavioral therapy, such as easier access to triggers, emotions, and dysfunctional thoughts, particularly in highly rational patients, more efficient validation of what has been learned during therapy, and stronger reinforcement of self-efficacy [ 52 ]. Moreover, specific settings can be repeated without any variance between training sessions to test (and retest) newly developed behaviors. Contrary to traditional settings, VR simulations also allow for easy switching between scenario difficulties. Due to the computer-controlled presentation and timing of stimuli, VR technology enables higher standardization than human-operated experimental paradigms. VR cue exposure-induced changes in cue reactivity (craving and anxiety) also generalize to daily life experiences, which is a very important goal when aiming to tailor effective treatment strategies [ 32 ].

VR settings enable precisely defined exposures to stereoscopic, three-dimensional, and interactive stimuli with high ecological validity under strictly controlled conditions. Therapeutic sessions can be repeated in a standardized manner as often as feasible, which is an important factor for cue-exposure training sessions in a confidential therapeutic setting. VR settings can also be easily adapted to individual needs, such as combinations of drug-related cues and multi-sensory contextual cues. For example, custom-made VR environments may allow therapists and patients to select individual types of alcoholic beverages and drinking situations during the sessions. This aspect might also be particularly relevant to the generalization of exposure effects and to reducing the risk of renewal, given the context dependency of extinction [ 83 ]. VR simulations might also contribute to personalized medicine, as they make it easy to provide additional input or adapt the cues to the needs of the individual (e.g., embodiment projects). VR training sessions that include interactive behavior may be used to reveal and modify automatized addictive behavior [ 64 ].

The costs of building and operating a VR system are a major issue. However, once established, treatments based on VR simulations are simple to use and incur only low costs, so such therapeutic methods can be readily implemented in daily clinical routine. In the last decade, software and hardware have advanced considerably, and hardware companies now intend to increase production for the mass market. Costs have been decreasing, and the availability of HMDs will increase in the near future. As a result, these methods may also be used to practice psychotherapeutic techniques at home.

Limitations

The available literature is limited in several ways. So far, the investigated samples are mostly small. Future research is needed to study different contexts, sexes, ethnicities, and age groups to test for generalizability. Most studies focus on men; women are strongly neglected, which is a problem of addiction research in general [ 84 ]. The studies also tend to include young cohorts, as digital natives are more enthusiastic about using modern digital technology. However, treatments in VR might be especially interesting to older people who are less mobile. Further open questions are which of the effects found in VR studies can be generalized and which are restricted to subgroups of substance-related and non-substance-related addictive behaviors. Moreover, the available studies have mostly focused on self-reported outcomes such as craving and affect. It would be important for future research to integrate additional objective measures, such as physiological parameters and neuroimaging phenotypes, more regularly. Studies using VR in addiction research have mostly addressed VR environments with pleasant connotations (such as enjoying drug use together with the peer group); however, stressful situations are also well-known reasons for substance use and can trigger addictive behaviors. Thus, future studies should also investigate negative situations, e.g., stress paradigms.

The first VR technologies were developed in the 1960s, and by the 1980s some VR systems were already highly sophisticated, with full stereoscopic display, head tracking, and at least hand tracking [ 85 ]. Since then, however, VR technologies have made strong progress in aspects such as graphics quality, computing power, and accessibility. As higher levels of presence produced by VR environments with greater capabilities for immersion are thought to produce stronger effects [ 81 ], direct comparisons of early and more recent VR studies need to be interpreted carefully due to potential biases induced by technological progress.

The Proteus effect: embodiment

The “Proteus effect” refers to a phenomenon in which one’s behavior and cognitions change depending on the avatar that one embodies. Embodiment can be achieved by visually substituting one’s own body with a virtual one through visuomotor synchrony (real-time motion capture applied to the virtual body) as well as through visuotactile stimulation, in which a virtual object seen to touch the body corresponds to a synchronous touch on the real body [ 86 ]. For instance, white participants who embody dark-skinned virtual bodies show a reduction in implicit bias against black people [ 87 ]. Embodiment techniques can also change temperature sensitivity [ 88 ]. Finally, embodying self-compassion within VR may have considerable clinical potential in the treatment of depression and may decrease SUD risk [ 89 ]. Thus, avatars that induce a sense of self-confidence might promote desired therapeutic effects.

Eye-tracking

With the advent of eye-tracking technology incorporated into VR HMDs, a new avenue of research on gaze shifts and attention in addicted patients has opened. Indeed, studies on attentional bias in addiction have relied on tasks that are incongruent with the natural environment [ 28 ]; this problem might be overcome by novel eye-tracking technology. Moreover, specific background cues that induce craving can be assessed by tracking eye movements in virtual supermarkets, which have already been employed for marketing purposes [ 90 ]. Recently, Tsai et al. [ 45 ] developed a VR system to induce craving in patients with MUD and observed significant differences in eye tracking between pre- and post-VR stimulation in patients, but not in controls.
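
To illustrate one basic gaze-based measure, the sketch below computes dwell time on a rectangular area of interest (e.g., an alcohol cue on a virtual bar counter) from a stream of gaze samples; the coordinate system, sampling rate, and data are hypothetical assumptions rather than the pipeline of any cited study.

```python
import numpy as np

def dwell_time_on_aoi(gaze_xy, timestamps, aoi_bounds):
    """Total time (s) that gaze falls inside a rectangular area of interest (AOI).

    gaze_xy    : (n, 2) array of gaze coordinates in normalized viewport units
    timestamps : sample times in seconds, length n
    aoi_bounds : (x_min, y_min, x_max, y_max) of the cue region
    """
    x_min, y_min, x_max, y_max = aoi_bounds
    inside = (
        (gaze_xy[:, 0] >= x_min) & (gaze_xy[:, 0] <= x_max)
        & (gaze_xy[:, 1] >= y_min) & (gaze_xy[:, 1] <= y_max)
    )
    dt = np.diff(timestamps, append=timestamps[-1])  # duration attributed to each sample
    return float(np.sum(dt[inside]))

# Hypothetical 90 Hz gaze stream over 10 s and an AOI covering the cue
ts = np.arange(0, 10, 1 / 90)
gaze = np.column_stack([0.5 + 0.1 * np.sin(ts), 0.5 + 0.1 * np.cos(ts)])
print("dwell time on cue (s):", dwell_time_on_aoi(gaze, ts, (0.45, 0.45, 0.65, 0.65)))
```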

Neuropeptides, genetics, and epigenetics

Whether VR immersion affects neurobiological parameters involved in the pathogenesis of addictive behaviors, such as cortisol, oxytocin, and vasopressin, is yet to be determined. Moreover, research is needed on how genetics and epigenetics interact with behavior in VR. To our knowledge, only a few studies applying VR technology have observed changes in neuroendocrine parameters [ 91 , 92 ]. Investigating whether such neuroendocrine effects can also be observed in patients with addictions could help elucidate key neurobiological underpinnings of the etiopathogenesis of addiction.

VR is a powerful and promising tool for the study of cue-elicited craving in patients with addictions. It is a technology characterized by exponential growth, with important implications for the development of addiction research. VR can be combined with other technologies, such as eye tracking, to pinpoint specific cues that elicit craving in patients. It may also help differentiate cue-elicited craving among different types of addictions. Finally, changes in neuroendocrine parameters elicited by VR environments may aid in further elucidating the neurobiological underpinnings of addiction. VR is already complementing naturalistic settings in addiction research, increasing the feasibility and reproducibility of studies on cue-elicited craving. Future VR studies will also contribute significantly to the development of new therapeutic approaches.

Niaura RS, Rohsenow DJ, Binkoff JA, Monti PM, Pedraza M, Abrams DB. Relevance of cue reactivity to understanding alcohol and smoking relapse. J Abnorm Psychol. 1988;97:133–52.

Carter BL, Tiffany ST. Meta-analysis of cue-reactivity in addiction research. Addiction 1999;94:327–40.

Winkler MH, Weyers P, Mucha RF, Stippekohl B, Stark R, Pauli P. Conditioned cues for smoking elicit preparatory responses in healthy smokers. Psychopharmacology. 2011;213:781–9.

Weinland C, Mühle C, Kornhuber J, Lenz B. Body mass index and craving predict 24-month hospital readmissions of alcohol-dependent in-patients following withdrawal. Prog Neuropsychopharmacol Biol Psychiatry. 2019;90:300–7.

Stohs ME, Schneekloth TD, Geske JR, Biernacka JM, Karpyak VM. Alcohol craving predicts relapse after residential addiction treatment. Alcohol Alcohol. 2019;54:167–72.

Sliedrecht W, de Waart R, Witkiewitz K, Roozen HG. Alcohol use disorder relapse factors: a systematic review. Psychiatry Res. 2019;278:97–115.

Tiffany ST, Wray JM. The clinical significance of drug craving. Ann NY Acad Sci. 2012;1248:1–17.

Ghiţă A, Teixidor L, Monras M, Ortega L, Mondon S, Gual A, et al. Identifying triggers of alcohol craving to develop effective virtual environments for cue exposure therapy. Front Psychol. 2019;10:74.

Du J, Fan C, Jiang H, Sun H, Li X, Zhao M. Biofeedback combined with cue-exposure as a treatment for heroin addicts. Physiol Behav. 2014;130:34–39.

Schoenberg PL, David AS. Biofeedback for psychiatric disorders: a systematic review. Appl Psychophysiol Biofeedback. 2014;39:109–35.

Wall AM, McKee SA, Hinson RE, Goldstein A. Examining alcohol outcome expectancies in laboratory and naturalistic bar settings: a within-subject experimental analysis. Psychol Addict Behav. 2001;15:219–26.

Traylor AC, Parrish DE, Copp HL, Bordnick PS. Using virtual reality to investigate complex and contextual cue reactivity in nicotine dependent problem drinkers. Addict Behav. 2011;36:1068–75.

Conklin CA, Robin N, Perkins KA, Salkeld RP, McClernon FJ. Proximal versus distal cues to smoke: the effects of environments on smokers' cue-reactivity. Exp Clin Psychopharmacol. 2008;16:207–14.

Paris MM, Carter BL, Traylor AC, Bordnick PS, Day SX, Armsworth MW, et al. Cue reactivity in virtual reality: the role of context. Addict Behav. 2011;36:696–9.

Heilig M, Epstein DH, Nader MA, Shaham Y. Time to connect: bringing social context into addiction neuroscience. Nat Rev Neurosci. 2016;17:592–9.

Chesworth R, Corbit LH. Recent developments in the behavioural and pharmacological enhancement of extinction of drug seeking. Addict Biol. 2017;22:3–43.

Hone-Blanchet A, Wensing T, Fecteau S. The use of virtual reality in craving assessment and cue-exposure therapy in substance use disorders. Front Hum Neurosci. 2014;8:844.

Salomoni P, Prandi C, Roccetti M, Casanova L, Marchetti L, Marfia G. Diegetic user interfaces for virtual environments with HMDs: a user experience study with oculus rift. J Multimodal Use Interfaces. 2017;11:173–84.

Maples-Keller JL, Bunnell BE, Kim SJ, Rothbaum BO. The use of virtual reality technology in the treatment of anxiety and other psychiatric disorders. Harv Rev Psychiatry. 2017;25:103–13.

Kothgassner OD, Goreis A, Kafka JX, Van Eickels RL, Plener PL, Felnhofer A. Virtual reality exposure therapy for posttraumatic stress disorder (PTSD): a meta-analysis. Eur J Psychotraumatol. 2019;10:1654782.

Segawa T, Baudry T, Bourla A, Blanc JV, Peretti CS, Mouchabac S, et al. Virtual reality (VR) in assessment and treatment of addictive disorders: a systematic review. Front Neurosci. 2020;13:1409.

Drummond DC. Theories of drug craving, ancient and modern. Addiction 2001;96:33–46.

van Lier HG, Pieterse ME, Schraagen JMC, Postel MG, Vollenbroek-Hutten MMR, de Haan HA, et al. Identifying viable theoretical frameworks with essential parameters for real-time and real world alcohol craving research: a systematic review of craving models. Addiction Res Theory. 2018;26:35–51.

Hormes JM, Niemiec MA. Does culture create craving? Evidence from the case of menstrual chocolate craving. PLoS ONE. 2017;12:e0181445.

Thompson-Lake DG, Cooper KN, Mahoney JJ, Bordnick PS, Salas R, Kosten TR, et al. Withdrawal symptoms and nicotine dependence severity predict virtual reality craving in cigarette-deprived smokers. Nicotine Tob Res. 2015;17:796–802.

Bordnick PS, Traylor A, Copp HL, Graap KM, Carter B, Ferrer M, et al. Assessing reactivity to virtual reality alcohol based cues. Addict Behav. 2008;33:743–56.

Ghiţă A, Hernández-Serrano O, Fernández-Ruiz Y, Monras M, Ortega L, Mondon S, et al. Cue-elicited anxiety and alcohol craving as indicators of the validity of ALCO-VR software: a virtual reality study. J Clin Med. 2019;8:1153.

Simon J, Etienne AM, Bouchard S, Quertemont E. Alcohol craving in heavy and occasional alcohol drinkers after cue exposure in a virtual environment: the role of the sense of presence. Front Hum Neurosci. 2020;14:124.

Lee JS, Namkoong K, Ku J, Cho S, Park JY, Choi YK, et al. Social pressure-induced craving in patients with alcohol dependence: application of virtual reality to coping skill training. Psychiatry Investig. 2008;5:239–43.

Lee SH, Han DH, Oh S, Lyoo IK, Lee YS, Renshaw PF, et al. Quantitative electroencephalographic (qEEG) correlates of craving during virtual reality therapy in alcohol-dependent patients. Pharm Biochem Behav. 2009;91:393–7.

Hernández-Serrano O, Ghiţă A, Figueras-Puigderrajols N, Fernández-Ruiz J, Monras M, Ortega L, et al. Predictors of changes in alcohol craving levels during a virtual reality cue exposure treatment among patients with alcohol use disorder. J Clin Med. 2020;9:3018

Ghiţă A, Hernández-Serrano O, Fernández-Ruiz J, Moreno M, Monras M, Ortega L, et al. Attentional bias, alcohol craving, and anxiety implications of the virtual reality cue-exposure therapy in severe alcohol use disorder: a case report. Front Psychol. 2021;12:543586.

Choi JS, Park S, Lee JY, Jung HY, Lee HW, Jin CH, et al. The effect of repeated virtual nicotine cue exposure therapy on the psychophysiological responses: a preliminary study. Psychiatry Investig. 2011;8:155–60.

Lee JH, Ku J, Kim K, Kim B, Kim IY, Yang BH, et al. Experimental application of virtual reality for nicotine craving through cue exposure. Cyberpsychol Behav. 2003;6:275–80.

Bordnick PS, Graap KM, Copp H, Brooks J, Ferrer M, Logue B. Utilizing virtual reality to standardize nicotine craving research: a pilot study. Addict Behav. 2004;29:1889–94.

Acker J, MacKillop J. Behavioral economic analysis of cue-elicited craving for tobacco: a virtual reality study. Nicotine Tob Res. 2013;15:1409–16.

Carter BL, Bordnick P, Traylor A, Day SX, Paris M. Location and longing: the nicotine craving experience in virtual reality. Drug Alcohol Depend. 2008;95:73–80.

Ferrer-García M, García-Rodríguez O, Gutiérrez-Maldonado J, Pericot-Valverde I, Secades-Villa R. Efficacy of virtual reality in triggering the craving to smoke: its relation to level of presence and nicotine dependence. Stud Health Technol Inf. 2010;154:123–7.

Bordnick PS, Traylor AC, Carter BL, Graap KM. A feasibility study of virtual reality-based coping skills training for nicotine dependence. Res Soc Work Pract. 2012;22:293–300.

Kaganoff E, Bordnick PS, Carter BL. Feasibility of using virtual reality to assess nicotine cue reactivity during treatment. Res Soc Work Pract. 2012;22:159–65.

Lee J, Lim Y, Graham SJ, Kim G, Wiederhold BK, Wiederhold MD, et al. Nicotine craving and cue exposure therapy by using virtual environments. Cyberpsychol Behav. 2004;7:705–13.

Park CB, Choi JS, Park SM, Lee JY, Jung HY, Seol JM, et al. Comparison of the effectiveness of virtual cue exposure therapy and cognitive behavioral therapy for nicotine dependence. Cyberpsychol Behav Soc Netw. 2014;17:262–7.

Moon J, Lee JH. Cue exposure treatment in a virtual environment to reduce nicotine craving: a functional MRI study. Cyberpsychol Behav. 2009;12:43–45.

Tan H, Chen T, Du J, Li R, Jiang H, Deng CL, et al. Drug-related virtual reality cue reactivity is associated with gamma activity in reward and executive control circuit in methamphetamine use disorders. Arch Med Res. 2019;50:509–17.

Tsai MC, Chung CR, Chen CC, Yeh SC, Chen JY, Lin CH, et al. An intelligent virtual-reality system with multi-model sensing for cue-elicited craving in patients with methamphetamine use disorder. IEEE Trans Biomed Eng. 2021;68:2270–80.

Saladin ME, Brady KT, Graap K, Rothbaum BO. A preliminary report on the use of virtual reality technology to elicit craving and cue reactivity in cocaine dependent individuals. Addict Behav. 2006;31:1881–94.

Bordnick PS, Copp HL, Traylor A, Graap KM, Carter BL, Walton A, et al. Reactivity to cannabis cues in virtual reality environments. J Psychoact Drugs. 2009;41:105–12.

Wang YG, Shen ZH, Wu XC. Detection of patients with methamphetamine dependence with cue-elicited heart rate variability in a virtual social environment. Psychiatry Res. 2018;270:382–8.

Wang YG, Liu MH, Shen ZH. A virtual reality counterconditioning procedure to reduce methamphetamine cue-induced craving. J Psychiatr Res. 2019;116:88–94.

Shin YB, Kim JJ, Kim MK, Kyeong S, Jung YH, Eom H, et al. Development of an effective virtual environment in eliciting craving in adolescents and young adults with internet gaming disorder. PLoS ONE. 2018;13:e0195677.

Giroux I, Faucher-Gravel A, St-Hilaire A, Boudreault C, Jacques C, Bouchard S. Gambling exposure in virtual reality and modification of urge to gamble. Cyberpsychol Behav Soc Netw. 2013;16:224–31.

Bouchard S, Robillard G, Giroux I, Jacques C, Loranger C, St-Pierre M, et al. Using virtual reality in the treatment of gambling disorder: the development of a new tool for cognitive behavior therapy. Front Psychiatry. 2017;8:27.

Park CB, Park SM, Gwak AR, Sohn BK, Lee JY, Jung HY, et al. The effect of repeated exposure to virtual gambling cues on the urge to gamble. Addict Behav. 2015;41:61–4.

Müller CP, Mühle C, Kornhuber J, Lenz B. Sex-dependent alcohol instrumentalization goals in non-addicted alcohol consumers versus patients with alcohol use disorder: longitudinal change and outcome prediction. Alcohol Clin Exp Res. 2021;45:577–86.

Wheeler RA, Twining RC, Jones JL, Slater JM, Grigson PS, Carelli RM. Behavioral and electrophysiological indices of negative affect predict cocaine self-administration. Neuron 2008;57:774–85.

Anker JJ, Kummerfeld E, Rix A, Burwell SJ, Kushner MG. Causal network modeling of the determinants of drinking behavior in comorbid alcohol use and anxiety disorder. Alcohol Clin Exp Res. 2019;43:91–97.

Liu Y, Jiang C. Recognition of shooter’s emotions under stress based on affective computing. IEEE Access. 2019;7:62338–43.

Ding X, Li Y, Li D, Li L, Liu X. Using machine-learning approach to distinguish patients with methamphetamine dependence from healthy subjects in a virtual reality environment. Brain Behav. 2020;10:e01814.

Lee N, Kim JJ, Shin YB, Eom H, Kim MK, Kyeong S, et al. Choice of leisure activities by adolescents and adults with internet gaming disorder: development and feasibility study of a virtual reality program. JMIR Serious Games. 2020;8:e18473.

Appelhans BM, Luecken LJ. Heart rate variability as an index of regulated emotional responding. Rev Gen Psychol. 2006;10:229–40.

Amato I, Nanev A, Piantella S, Wilson KE, Bicknell R, Heckenberg R, et al. Assessing the utility of a virtual-reality neuropsychological test battery, 'CONVIRT', in detecting alcohol-induced cognitive impairment. Behav Res Methods. 2021;53:1115–23.

Hayashi T, Ko JH, Strafella AP, Dagher A. Dorsolateral prefrontal and orbitofrontal cortex interactions during self-control of cigarette craving. Proc Natl Acad Sci USA. 2013;110:4422–7.

Park SY, Kim SM, Roh S, Soh MA, Lee SH, Kim H, et al. The effects of a virtual reality treatment program for online gaming addiction. Comput Methods Prog Biomed. 2016;129:99–108.

Girard B, Turcotte V, Bouchard S, Girard B. Crushing virtual cigarettes reduces tobacco addiction and treatment discontinuation. Cyberpsychol Behav. 2009;12:477–83.

Giovancarli C, Malbos E, Baumstarck K, Parola N, Pelissier MF, Lancon C, et al. Virtual reality cue exposure for the relapse prevention of tobacco consumption: a study protocol for a randomized controlled trial. Trials. 2016;17:96

Liu W, Chen XJ, Wen YT, Winkler MH, Paul P, He YL, et al. Memory retrieval-extinction combined with virtual reality reducing drug craving for methamphetamine: study protocol for a randomized controlled trial. Front Psychiatry. 2020;11:322.

Chen XJ, Wang DM, Zhou LD, Winkler M, Pauli P, Sui N, et al. Mindfulness-based relapse prevention combined with virtual reality cue exposure for methamphetamine use disorder: Study protocol for a randomized controlled trial. Contemp Clin Trials. 2018;70:99–105.

Stramba-Badiale C, Mancuso V, Cavedoni S, Pedroli E, Cipresso P, Riva G. Transcranial magnetic stimulation meets virtual reality: the potential of integrating brain stimulation with a simulative technology for food addiction. Front Neurosci. 2020;14:720.

Ferrer-Garcia M, Pla-Sanjuanelo J, Dakanalis A, Vilalta-Abella F, Riva G, Fernandez-Aranda F, et al. Eating behavior style predicts craving and anxiety experienced in food-related virtual environments by patients with eating disorders and healthy controls. Appetite 2017;117:284–93.

Manzoni GM, Cesa GL, Bacchetta M, Castelnuovo G, Conti S, Gaggioli A, et al. Virtual reality-enhanced cognitive-behavioral therapy for morbid obesity: a randomized controlled study with 1 year follow-up. Cyberpsychol Behav Soc Netw. 2016;19:134–40.

Pla-Sanjuanelo J, Ferrer-Garcia M, Gutierrez-Maldonado J, Riva G, Andreu-Gracia A, Dakanalis A, et al. Identifying specific cues and contexts related to bingeing behavior for the development of effective virtual environments. Appetite. 2015;87:81–89.

Lee JH, Lim Y, Wiederhold BK, Graham SJ. A functional magnetic resonance imaging (FMRI) study of cue-induced smoking craving in virtual environments. Appl Psychophysiol Biofeedback. 2005;30:195–204.

Dirks H, Scherbaum N, Kis B, Mette C. [ADHD in adults and comorbid substance use disorder: prevalence, clinical diagnostics and integrated therapy]. Fortschr Neurol Psychiatr. 2017;85:336–44.

Lemyre A, Gauthier-Légaré A, Bélanger RE. Shyness, social anxiety, social anxiety disorder, and substance use among normative adolescent populations: a systematic review. Am J Drug Alcohol Abus. 2019;45:230–47.

Shema-Shiratzky S, Brozgol M, Cornejo-Thumm P, Geva-Dayan K, Rotstein M, Leitner Y, et al. Virtual reality training to enhance behavior and cognitive function among children with attention-deficit/hyperactivity disorder: brief report. Dev Neurorehabil. 2019;22:431–6.

Droungas A, Ehrman RN, Childress AR, O'Brien CP. Effect of smoking cues and cigarette availability on craving and smoking behavior. Addict Behav. 1995;20:657–73.

Niaura R, Abrams D, Demuth B, Pinto R, Monti P. Responses to smoking-related stimuli and early relapse to smoking. Addict Behav. 1989;14:419–28.

Niaura R, Abrams DB, Pedraza M, Monti PM, Rohsenow DJ. Smokers' reactions to interpersonal interaction and presentation of smoking cues. Addict Behav. 1992;17:557–66.

Farokhnia M, Schwandt ML, Lee MR, Bollinger JW, Farinelli LA, Amodio JP, et al. Biobehavioral effects of baclofen in anxious alcohol-dependent individuals: a randomized, double-blind, placebo-controlled, laboratory study. Transl Psychiatry. 2017;7:e1108.

Bujarski S, Ray LA. Experimental psychopathology paradigms for alcohol use disorders: applications for translational research. Behav Res Ther. 2016;86:11–22.

Schuemie MJ, van der Straaten P, Krijn M, van der Mast CA. Research on presence in virtual reality: a survey. Cyberpsychol Behav. 2001;4:183–201.

Sakr S, Elshawi R, Ahmed AM, Qureshi WT, Brawner CA, Keteyian SJ, et al. Comparison of machine learning techniques to predict all-cause mortality using fitness data: the Henry ford exercIse testing (FIT) project. BMC Med Inf Decis Mak. 2017;17:174.

Shiban Y, Pauli P, Muhlberger A. Effect of multiple context exposure on renewal in spider phobia. Behav Res Ther. 2013;51:68–74.

Lenz B, Müller CP, Stoessel C, Sperling W, Biermann T, Hillemacher T, et al. Sex hormone activity in alcohol addiction: integrating organizational and activational effects. Prog Neurobiol. 2012;96:136–63.

Blanchard C, Burgess S, Harvill Y, Lanier J, Lasko A, Oberman M, et al. Reality built for two: a virtual reality tool. ACM SIGGRAPH Computer Graph. 1990;24:35–36.

Slater M, Neyret S, Johnston T, Iruretagoyena G, Crespo MAC, Alabernia-Segura M, et al. An experimental study of a virtual reality counselling paradigm using embodied self-dialogue. Sci Rep. 2019;9:10903.

Peck TC, Seinfeld S, Aglioti SM, Slater M. Putting yourself in the skin of a black avatar reduces implicit racial bias. Conscious Cogn. 2013;22:779–87.

Llobera J, Sanchez-Vives MV, Slater M. The relationship between virtual body ownership and temperature sensitivity. J R Soc Interface. 2013;10:20130300.

Falconer CJ, Slater M, Rovira A, King JA, Gilbert P, Antley A, et al. Embodying compassion: a virtual reality paradigm for overcoming excessive self-criticism. PLoS ONE. 2014;9:e111933.

Nonnemaker J, Kim A, Shafer P, Loomis B, Hill E, Holloway J, et al. Influence of point-of-sale tobacco displays and plain black and white cigarette packaging and advertisements on adults: evidence from a virtual store experimental study. Addict Behav. 2016;56:15–22.

Martens MA, Antley A, Freeman D, Slater M, Harrison PJ, Tunbridge EM. It feels real: physiological responses to a stressful virtual reality environment and its impact on working memory. J Psychopharmacol. 2019;33:1264–73.

Zimmer P, Buttlar B, Halbeisen G, Walther E, Domes G. Virtually stressed? A refined virtual reality adaptation of the Trier Social Stress Test (TSST) induces robust endocrine responses. Psychoneuroendocrinology 2019;101:186–92.

Heinz A, Kiefer F, Smolka MN, Endrass T, Beste C, Beck A, et al. Addiction Research Consortium: losing and regaining control over drug intake (ReCoDe)—from trajectories to mechanisms and interventions. Addict Biol. 2020;25:e12866.

Acknowledgements

We thank the reviewers for their constructive and helpful comments and suggestions.

This work was funded in part by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation)—Project-ID 402170461 – TRR 265 [ 93 ]. The funder had no role in the preparation of the manuscript. Open Access funding enabled and organized by Projekt DEAL.

Author information

Authors and affiliations

Department of Addictive Behavior and Addiction Medicine, Central Institute of Mental Health (CIMH), Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany

Massimiliano Mazza, Tagrid Leménager, Falk Kiefer & Bernd Lenz

Center for Innovative Psychiatric and Psychotherapeutic Research, Central Institute of Mental Health (CIMH), Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany

Kornelius Kammler-Sücker

Department of Cognitive and Clinical Neuroscience, Central Institute of Mental Health (CIMH), Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany

Contributions

Conducted the systematic literature search and wrote the first draft of the paper: MM, BL. Commented on the manuscript and provided intellectual input: KK-S, TL, FK.

Corresponding author

Correspondence to Massimiliano Mazza.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Cite this article

Mazza, M., Kammler-Sücker, K., Leménager, T. et al. Virtual reality: a powerful technology to provide novel insight into treatment mechanisms of addiction. Transl Psychiatry 11 , 617 (2021). https://doi.org/10.1038/s41398-021-01739-3

Received : 29 July 2021

Revised : 20 October 2021

Accepted : 01 November 2021

Published : 06 December 2021

DOI : https://doi.org/10.1038/s41398-021-01739-3

Share this article

Anyone you share the following link with will be able to read this content:

Sorry, a shareable link is not currently available for this article.

Provided by the Springer Nature SharedIt content-sharing initiative

This article is cited by

Cigarette craving in virtual reality cue exposure in abstainers and relapsed smokers.

  • Benedikt Schröder
  • Agnes Kroczek
  • Andreas Mühlberger

Scientific Reports (2024)

Alcohol does not influence trust in others or oxytocin, but increases positive affect and risk-taking: a randomized, controlled, within-subject trial

  • Leonard P. Wenger
  • Oliver Hamm

European Archives of Psychiatry and Clinical Neuroscience (2024)

Fully immersive virtual reality exergames with dual-task components for patients with Parkinson’s disease: a feasibility study

  • Seo Jung Yun
  • Sung Eun Hyun
  • Han Gil Seo

Journal of NeuroEngineering and Rehabilitation (2023)

Quick links

  • Explore articles by subject
  • Guide to authors
  • Editorial policies

research on virtual reality

artificial intelligence

human-machine interaction

learning + teaching

architecture

human-computer interaction

consumer electronics

wearable computing

bioengineering

machine learning

environment

social science

entertainment

computer science

storytelling

engineering

prosthetics

developing countries

civic technology

social robotics

social media

computer vision

communications

augmented reality

neurobiology

public health

virtual reality

urban planning

synthetic biology

biotechnology

social networks

affective computing

biomechanics

climate change

transportation

data visualization

behavioral science

social change

fabrication

zero gravity

data science

cognitive science

agriculture

prosthetic design

manufacturing

racial justice

sustainability

neural interfacing and control

3d printing

banking and finance

electrical engineering

cryptocurrency

human augmentation

civic action

construction

microfabrication

performance

open source

language learning

marginalized communities

natural language processing

autonomous vehicles

microbiology

interactive

visualization

internet of things

social justice

collective intelligence

mental health

mechanical engineering

clinical science

nanoscience

nonverbal behavior

long-term interaction

biomedical imaging

sports and fitness

gender studies

orthotic design

assistive technology

pharmaceuticals

mechatronics

soft-tissue biomechanics

open access

autism research

member company

digital currency

real estate

womens health

decision-making

Building intelligent personified technologies that collaborate with people to help them learn, thrive, and flourish.

To Lunar Gravity and Back: What reduced gravity flights can teach us about our future in space

The Space Exploration Initiative’s 2022 microgravity flight features lunar-specific payloads in advance of an upcoming moon mission

Stochastic Self-Assembly via Magnetically Programmed Materials

By Martin NisserThe ability to deploy large space structures is key to enabling long-duration and long-distance space missions, supporting …

Augmenting and mediating human experience, interaction, and perception with sensor networks

Life with AI | Designing the future of smart systems to improve the human experience

Co-creating climate futures with real-time data and spatial storytelling

WORLDING workshop organizers collaborated with research scientist Dr. Rachel Connolly to co-design their first-ever in-person event.

Space Exploration Initiative

Connected Mind + Body | Revolutionizing the future of mental and physical wellbeing

Designing Systems for Cognitive Support

research on virtual reality

Humanizing Agent-Based Models (h - ABM)

Humanizing Agent-Based Models (h-ABM) emerges as a pioneering technique to simulate real-world behaviors, aiming to seamlessly bridge …

People and intelligent machines in a creative, symbiotic loop

Future Worlds | Design and action for the future we want to live in

Earth Mission Control (EMC)

Advanced Data Visualizations for Climate IntelligenceMarked by heightened awareness and global concern over the climate crisis, the MI…

The Gravity Loading Countermeasure Skinsuit

By Rachel BellisleOverview:The Gravity Loading Countermeasure Skinsuit (GLCS or “Skinsuit”) is an intravehicular activity suit for astronau…

Invent new tangible and embodied interactions that inspire and engage people

research on virtual reality

Essence Wearables: Biometric Olfactory Interfaces for Day and Night

Human-computer interaction (HCI) has traditionally focused on designing and investigating interfaces that provide explicit visual, auditory…

Extending expression, learning, and health through innovations in musical composition, performance, and participation

research on virtual reality

KnitworkVR is an immersive virtual experience designed to simulate the Living Knitwork Pavilion showcased at Burning Man, through the use o…

Capturing the Moon: Assessing virtual reality for remote Lunar geological fieldwork

Cody Paige, MIT AeroAstro Contributors: Ferrous Ward, MIT AeroAstro; Don Derek Haddad, ResEnv; Jess Todd, MIT…

Personalized Performance-Optimization Platform (AttentivU - P-POP)

​The environmental conditions of prolonged spaceflight pose significant psychological risks for astronauts. In particular, crews of future …

Cultivating Creativity | Catalyzing a global movement enabling everyone to unlock and unleash their individual and collective creativity

MIT Time Travel VR Installation

A virtual, outdoor time-travel experience.

Inventing, building, and deploying wireless sensor technologies to address complex problems in society, industry, and ecology

Curiosity Unbounded Episode 4: Build your own superpower, then share it with the world

On Curiosity Unbounded, Professor Fadel Adib, head of the Signal Kinetics group, talks to MIT President Sally Kornbluth about his work.

How x-ray vision is becoming a reality

Tara Boroushaki, a PhD student in the Signal Kinetics group, explains how she's combined AR and wireless signals to find hidden objects.

Dr. Mary Lou Jepsen: Curing cancer and human telepathy

Media Lab alum and former professor Mary Lou Jepsen talks to Danielle Newnham about her journey and inspirations.

Interplanetary Gastronomy

The  Interplanetary Gastronomy research  aims to address the unique challenges and opportunities associated with eating in s…

Tasting Menu in Zero G

 A multi-course tasting menu was flown on a zero gravity flight in August 2019.  Five specially crafted dishes were consumed…

Molecular Gastronomy in Zero G

A molecular gastronomy experiment was flown on a zero gravity flight in August 2019, using spherification techniques to create recipes in z…

Mental Machine: Labour in the Self Economy

​Mental Machine: Labour in the Self Economy, 2022, is a live performance by Kawita Vatanajyankur made in collaboration with Pat Patara…

research on virtual reality

Battery-free wireless underwater camera

More than 95% of the ocean has never been observed by humans, even though the ocean plays the largest role in the world's climate system, h…

Augmented Reality with X-Ray Vision

X-AR is an augmented reality (AR) system that gives humans "X-Ray Vision" X-AR is a new AR headset that enables users to see things th…

research on virtual reality

Deep Reinforcement Learning for Autonomous UAVs (AUAVs)

Many tasks are not easily defined and/or too complex for supervised machine learning approaches. For these reasons, a technique known as&nb…

Domain Randomization & Synthetic Data Generation for AUAVs - Reducing Perceptual Uncertainty of Sim2Real Navigation

Navigation for autonomous UAVS (unmanned aerial vehicles) is a complex problem and physical field testing of associated tasks introduces a …

Fiducial Marker-Based Navigation for Autonomous UAVs (AUAVs)

Depending on the operational environment of an autonomous system, a great deal of perceptual uncertainty may be introduced to an object det…

Unmanned Aerial Vehicle (UAV) Pilot Simulator

With the advent of real-time photorealism (RTPR), virtual environments are now able to achieve higher degrees of engagement than ever befor…

Oceans Internet of Things

Our Oceans IoT technologies enable new applications in climate and ecological monitoring, aquaculture, energy, and robotic navigation. …

Virtual reality system lets you stop and smell the roses

Media Lab alum Judith Amores talks to Scientific American about a lightweight, wireless interface to deliver scents to VR users.

New research from the Signal Kinetics research group is featured on "Science Without the Gobbledygook"

Physicist Sabine Hossenfelder discusses X-AR, the new research from the Media Lab’s Signal Kinetics group.

MIT researchers develop augmented reality headset to help users see hidden objects

Using augmented reality, the X-AR system developed by the Signal Kinetics research group can guide users to find hidden objects.

Towards a Congruent Hybrid Reality without Experiential Artifacts

Situated VR: Towards a Congruent Hybrid Reality. The vision of Extended Reality (XR) systems is living in a hybrid reality or "Metaverse" wh…

Adaptive Virtual Neuroarchitecture

Jain, A., Maes, P., Sra, M. (2023). Adaptive Virtual Neuroarchitecture. In: Simeone, A., Weyers, B., Bialkova, S., Lindeman, R.W. (eds) Everyday Virtual and Augmented Reality. Human–Computer Interaction Series. Springer, Cham. https://doi.org/10.1007/978-3-031-05804-2_9

Augmented reality headset enables users to see hidden objects

The device could help workers locate objects for fulfilling e-commerce orders or identify parts for assembling products.

Transforming data into knowledge

The One Earth Model: Geographic interoperability in the real-world metaverse

Michael Naimark considers the potential of the "real-world metaverse," which combines virtual experiences with specific physical places.

Unveiling the Invisible

Tamiko Thiel, an innovator in VR/AR who studied computer graphics and visual imaging, talks about her work and her career.

Brelyon secures $15 million for Ultra Reality immersive technology

Media Lab spinoff Brelyon has raised $15 million in Series A financing to accelerate development of headset-free immersive displays.

ML Learning

Changing storytelling, communication, and everyday life through sensing, understanding, and new interface technologies


Bird: a 3D Cursor for 3D Interaction in Virtual Reality

Bird is a hand-controlled pointing system that translates a user's finger movements and positions into the motion of a 3D pointer in a virt…

Virtual and augmented reality headsets are unique as they have access to our facial area, an area that presents an excellent opportunity fo…

EmotionalBeasts

With advances in virtual reality (VR) and physiological sensing technology, even more immersive computer-mediated communication through lif…

Body-Borne Reality: Risky Disembodiment in VR

AI-generated characters are here, they're just not evenly distributed.

The Fluid Interfaces group + NTT DATA hosted Virtual Beings + Being Virtual, a workshop exploring positive uses of AI-generated characters


Hacking Reality

XR technology is becoming increasingly important in a world where the need to know intersects with the need to experience.

Living Observatory: Sensor networks for documenting and experiencing ecology

Living Observatory is an initiative for documenting and interpreting ecological change that will allow people, individually and collectivel…

E-maki: XR-enabled Scroll for Interactive Museum Artifacts

E-maki is a collection of ancient, painted scrolls accompanied by a cross-reality app that give contextual information and interactive expe…

The Interplanetary Cookbook

The Interplanetary Cookbook, developed by Maggie Coblentz, presents thought-provoking recipes and zero-g kitchenware for the future of life…

Increasing Embodiment for Desktop-Based Participants in Mixed Desktop and Immersive Collaborative Virtual Environments

Simonson, Aubrey and Pattie Maes. “Increasing Embodiment for Desktop-Based Participants in Mixed Desktop and Immersive Collaborative Virtual Environments.” (May 7, 2020).

What’s it like to design a meal that floats?

Taking a taste of the sensory research of Space Exploration Initiative’s Maggie Coblentz

Battery-Free Underwater GPS

Can we build a battery-free underwater GPS? While underwater localization is a long-studied problem,  we seek to bring it to batt…

Deep Reality: An underwater VR experience to promote relaxation by unconscious HR, EDA, and brain activity biofeedback

We present an interactive virtual reality (VR) experience that uses biometric information for reflection and relaxation. We monit…

Technology & Telecommunications

Virtual reality (VR) - statistics & facts

Key insights: Could Apple's entrance to the market be a gamechanger? VR adoption is set to rise as use cases develop.

Selected statistics (market overview):

  • Extended reality (XR) market size worldwide from 2021 to 2026 (in billion U.S. dollars)
  • Consumer and enterprise virtual reality (VR) market revenue worldwide from 2021 to 2026 (in billion U.S. dollars)
  • Virtual reality (VR) B2C market revenue worldwide from 2019 to 2029 (in billion U.S. dollars)
  • Virtual reality (VR) B2C market revenue growth worldwide from 2020 to 2029, by segment
  • Virtual reality (VR) hardware B2C market revenue worldwide from 2019 to 2029 (in billion U.S. dollars)
  • Virtual reality (VR) hardware B2C market revenue worldwide from 2017 to 2028, by segment (in billion U.S. dollars)
  • Virtual reality (VR) software B2C market revenue worldwide from 2019 to 2029 (in billion U.S. dollars)
  • Virtual reality (VR) software B2C market revenue worldwide from 2017 to 2028, by segment (in billion U.S. dollars)
  • Virtual reality (VR) advertising B2C market revenue worldwide from 2019 to 2029 (in million U.S. dollars)
  • Augmented reality (AR) and virtual reality (VR) headset shipments worldwide from 2022 to 2027, by segment (in million units)
  • Augmented reality (AR) and virtual reality (VR) headset shipment year-over-year growth worldwide from 2021 to 2026
  • Augmented reality (AR) and virtual reality (VR) headset shipments worldwide from 2021 to 2026, by market (in million units)

VR headsets:

  • Revenue of the VR headsets market worldwide from 2018 to 2028 (in billion U.S. dollars)
  • Revenue growth of the VR headsets market worldwide from 2019 to 2028
  • Volume of the VR headsets market worldwide from 2018 to 2028 (in millions)
  • Sales volume change of VR headsets worldwide from 2019 to 2028
  • Virtual reality (VR) headset average price worldwide from 2018 to 2028 (in U.S. dollars)
  • VR headset unit sales worldwide 2019-2024
  • Extended reality (XR) headset vendor shipment share worldwide from 2020 to 2023, by quarter
  • Augmented reality (AR) and virtual reality (VR) headset companies shipment share worldwide from 2022 to 2023, by quarter
  • Comparison of virtual reality (VR) headsets worldwide in 2024, by price (in U.S. dollars)
  • Comparison of virtual reality (VR) headsets worldwide in 2023, by weight (in grams)
  • Apple Vision Pro shipment forecast worldwide from 2024 to 2028 (in thousands)
  • Share of Steam users with a virtual reality (VR) headset worldwide as of September 2023, by device
  • Share of game developers worldwide working on game projects for select VR/AR platforms in 2024

Further reports

Get the best reports to understand your industry:

  • Online gaming
  • XR in Europe
  • Steam - gaming platform
  • XR in the United States


The Mona Lisa in virtual reality in your own home


23 February 2021


‘Mona Lisa: Beyond the Glass’ – the Louvre’s first virtual reality project – uses the latest scientific research on Leonardo da Vinci, his creative processes and his painting techniques.

The first VR experience of the Mona Lisa

When a painting is as famous as the Mona Lisa, how can you engage with it on a personal level – get through the barrier of fame to discover its inner secrets? This VR experience is a means of doing just that. ‘The Mona Lisa is fated never to be seen again the way she should be, i.e. face to face. That’s the price of success; like any celebrity, as soon as she appears, everyone wants to see her!’ says Vincent Delieuvin, co-curator of the 2020 Leonardo da Vinci show.

This immersive VR experience, part of the Leonardo da Vinci exhibition in 2020, is also available on smartphone.  


The woman behind the painting

What remains to be said about the Mona Lisa? How can we move beyond the myths about this ultra-famous artwork? ‘Mona Lisa: Beyond the Glass’ sets out to dispel the folklore and tell the real story. This eight-minute VR experience is based on the knowledge compiled by exhibition curators Louis Frank and Vincent Delieuvin after a decade of research in preparation for the landmark 2020 exhibition. 

The experience begins in the Salle des États in today’s Louvre, face to face with the painting of the Mona Lisa. It then takes us on a journey back in time to the original setting, where we meet the real woman da Vinci painted! Mona Lisa – or Lisa Gherardini, the wife of Francesco del Giocondo – comes to life, and shows us how her outfit was made, how her hair was styled... 


The secrets of ‘sfumato’

Leonardo da Vinci used some specific techniques that have contributed to his fame but are not necessarily understood. The VR experience gives a detailed view of his painting processes and shows how they brought his work to life. We also find ourselves in the loggia where Mona Lisa might have been sitting when she was painted. ‘We took our inspiration for the loggia from a drawing by Leonardo, an extraordinary villa with a belvedere [and placed it] above the large landscape in the painting. And a surprise awaits you at the end!’ says Louis Frank, co-curator of the Leonardo da Vinci exhibition in 2020.



University of Miami Virtual Reality Learning Initiative (VRLI): Faculty Led Projects

Effectiveness of Instructing Anatomy Topics via Virtual Reality


PIs: Dr. Daniel Serravite (School of Education and Human Development) and Dr. Magda Aldousany (School of Education and Human Development). Student Team: TBD

Project Description: This pilot study evaluates the effectiveness of VR-based immersive lectures versus traditional in-person instruction for teaching anatomy. Outcomes will assess the impact on student learning and engagement, with the aim of developing new instructional methods and justifying the integration of immersive technologies in anatomy education.



Hearing voices is common. Virtual reality might help us meet and ‘treat’ them

UniSC News | 26 Jun 2024

Have you ever heard something that others cannot – such as your name being called? Hearing voices or other noises that aren’t there is very common. About 10% of people report experiencing auditory hallucinations at some point in their life.

For some people these experiences are positive. They might represent a spiritual or supernatural experience they welcome or a comforting presence. But for others these experiences are distressing. Voices can be intrusive, negative, critical or threatening.

Difficult voices can make a person feel worried, frightened, embarrassed or frustrated. They can also make it hard to concentrate, be around other people and get in the way of day-to-day activities.

Although not everyone who hears voices has a mental health problem, these experiences are much more common in people who do. They have been considered a hallmark symptom of schizophrenia, which affects about 24 million people worldwide.

However, such experiences are also common in other mental health problems, particularly in mood- and trauma-related disorders (such as bipolar disorder or depression and post-traumatic stress disorder), where as many as half of people may experience them.


Why do people hear voices?

It is unclear exactly why people hear voices, but exposure to prolonged stress, trauma or depression can increase the chances.

Some research suggests people who hear voices might have brains that are “wired” differently, particularly between the hearing and speaking parts of the brain. This may mean parts of our inner speech can be experienced as external voices. So, having the thought “you are useless” when something goes wrong might be experienced as an external person speaking the words.

Other research suggests it may relate to how our brains use past experiences as a template to make sense of and make predictions about the world. Sometimes those templates can be so strong they lead to errors in how we experience what is going on around us, including hearing things our brain is “expecting” rather than what is really happening.

What is clear is that when people tell us they are hearing voices, they really are! Their brain perceives voice experiences as if someone were talking in the room. We could think of this "mistake" as working a bit like being susceptible to common optical tricks or visual illusions.

Coping with hearing voices

When hearing voices is getting in the way of life, treatment guidelines recommend the use of medications. But roughly a third of people will experience ongoing distress. As such, treatment guidelines also recommend the use of psychological therapies such as cognitive behavioural therapy.

The next generation of psychological therapies are beginning to use digital technologies and virtual reality offers a promising new medium.

Avatar therapy allows a person to create a virtual representation of the voice or voices, which looks and sounds like what they are experiencing. This can help people regain power in the "relationship" as they interact with the voice character, supported by a therapist.

Jason’s experience

Aged 53, Jason (not his real name) had struggled with persistent voices since his early 20s. Antipsychotic medication had helped him to some extent over the years, but he was still living with distressing voices. Jason tried out avatar therapy as part of a research trial.

He was initially unable to stand up to the voices, but he slowly gained confidence and tested out different ways of responding to the avatar and voices with his therapist’s support.

Jason became more able to set boundaries, such as not listening to them for periods throughout the day. He also felt more able to challenge what they said and make his own choices.

Over a couple of months, Jason started to experience some breaks from the voices each day and his relationship with them started to change. They were no longer like bullies, but more like critical friends pointing out things he could consider or be aware of.

Gaining recognition

Following promising results overseas and its recommendation by the United Kingdom’s National Institute for Health and Care Excellence, our team has begun adapting the therapy for an Australian context.

We are trialling delivering avatar therapy from our specialist voices clinic via telehealth. We are also testing whether avatar therapy is more effective than the current standard therapy for hearing voices, based on cognitive behavioural therapy.

As only a minority of people with psychosis receive specialist psychological therapy for hearing voices, we hope our trial will support scaling up these new treatments to be available more routinely across the country.

Leila Jameel, Trial Co-ordinator and Research Therapist, Swinburne University of Technology; Imogen Bell, Senior Research Fellow and Psychologist, The University of Melbourne; Neil Thomas, Professor of Clinical Psychology, Swinburne University of Technology; and Rachel Brand, Senior Lecturer in Clinical Psychology, University of the Sunshine Coast

This article is republished from The Conversation under a Creative Commons license. Read the original article.



COMMENTS

  1. How Virtual Reality Technology Has Changed Our Lives: An Overview of

    Virtual reality (VR) refers to a computer-generated, three-dimensional virtual environment that users can interact with, typically accessed via a computer that is capable of projecting 3D information via a display, which can be isolated screens or a wearable display, e.g., a head-mounted display (HMD), along with user identification sensors.

  2. Why scientists are delving into the virtual world

    Why scientists are delving into the virtual world. Virtual-reality software and headsets are increasingly being used by researchers to form deeper collaborations or work remotely. By Rachael ...

  3. Home

    Overview. Virtual Reality is a multidisciplinary journal that publishes original research on Virtual, Augmented, and Mixed Reality. Established in 1995, the journal accepts submissions on real-time visualization, graphics, and applications as well as the development and evaluation of systems, tools, techniques, and software that advance the field.

  4. The Past, Present, and Future of Virtual and Augmented Reality Research

    Virtual reality research is very young and changing with time, but the top-10 authors in this field have made fundamentally significant contributions as pioneers in VR and taking it beyond a mere technological development. The purpose of the following highlights is not to rank researchers; rather, the purpose is to identify the most active ...

  5. Better, Virtually: the Past, Present, and Future of Virtual Reality

    Virtual reality (VR) is an immersive technology capable of creating a powerful, perceptual illusion of being present in a virtual environment. VR technology has been used in cognitive behavior therapy since the 1990s and accumulated an impressive evidence base, yet with the recent release of consumer VR platforms came a true paradigm shift in the capabilities and scalability of VR for mental ...

  6. Recent advances in virtual reality and psychology: Introduction to the

    This editorial introduces the current Special Issue of Translational Issues in Psychological Science which provides novel research findings and perspectives on the use of Virtual Reality (VR) for psychological applications. The variety of topics presented in this themed issue underscore the broad applicability (and potential value) of VR technology for addressing the diverse challenges that ...

  7. Augmented reality and virtual reality displays: emerging ...

    With rapid advances in high-speed communication and computation, augmented reality (AR) and virtual reality (VR) are emerging as next-generation display platforms for deeper human-digital ...

  8. Frontiers in Virtual Reality

    Submit your research: start your submission and get more impact for your research by publishing with us. Author guidelines. ... Specialty Chief Editor for Virtual Reality in Industry: Sean Follmer, Stanford University, Stanford, United States. Haptics articles: see all (596).

  9. Enhancing learning and retention with distinctive virtual reality

    Considerable research has documented that human memory is inherently context-dependent. During learning, contextual cues—whether environmental (e.g., a specific room) or internal (e.g., an ...

  10. Virtual reality: A review and a new framework for integrated adoption

    Scholarly research on virtual reality (VR) is characterized by a dynamic tension between VR's potential and the challenges impeding its adoption. Grounded in a mixed-methods systematic review, this research examines the drivers influencing consumer VR adoption by rigorously combining qualitative and quantitative analyses of 158 scholarly ...

  11. Full article: Past, Present, and Future: Editorial on Virtual Reality

    This editorial provides an overview of the state of virtual reality applications in human service provision, potential gaps to be addressed by research in the future, and the development of AI-based interactive sequences that may boost user presence. Keywords: virtual reality; human services.

  12. Virtual reality (VR)

    The term virtual reality was coined in 1987 by Jaron Lanier, whose research and engineering contributed a number of products to the nascent VR industry. A common thread linking early VR research and technology development in the United States was the role of the federal government, particularly the Department of Defense, the National Science ...

  13. Virtual Human Interaction Lab

    Our Mission. Since its founding in 2003, researchers at the Virtual Human Interaction Lab (VHIL) have sought to better understand the psychological and behavioral effects of Virtual Reality (VR) and Augmented Reality (AR). VR is finally widely available for consumers, and every day we are seeing new innovations.

  14. Frontiers

    Enhancing Our Lives with Immersive Virtual Reality. Mel Slater and Maria V. Sanchez-Vives. Event Lab, Department of Clinical Psychology and Psychobiology, University of Barcelona, Barcelona, Spain; Institució Catalana de Recerca i Estudis Avançats (ICREA), Barcelona, Spain; Department of Computer Science, University College ...

  15. Is Virtual Reality Bad for Our Health? Studies Point to Physical and

    While in their research article 'Social Virtual Reality (VR) Involvement Affects Depression When Social Connectedness and Self-Esteem Are Low: A Moderated Mediation on Well-Being', Lee Hyun-Woo et al acknowledged that excessive play can lead to adverse psychological effects, the article also highlights that playing social games can be a ...

  16. Integrating virtual reality in qualitative research methods: Making a

    Integrating virtual environments in more 'traditional' research methods can prove very valuable: in our research, we simulated a real-world setting using photorealistic 360° images (instead of computer generated environments) that can be viewed through a head-mounted display and collected data on how respondents react to it by conducting ...

  17. Virtual Reality Clinical Research: Promises and Challenges

    Introduction. Contemporary research on computer-based virtual reality (VR) dates back to the early 1980s, although devices for presenting stereoscopic imagery (i.e., using a slightly different image for each eye) such as the stereoscope started in the 1830s. The exploration of VR use in clinical applications is accelerating rapidly with the advent of more powerful computer and graphics ...

  18. (PDF) A Study of Virtual Reality

    Abstract. Virtual reality (VR) is a powerful and interactive technology that changes our life unlike any other. Virtual reality, which can also be termed as immersive multimedia, is the art of ...

  19. Unveiling the Evolution of Virtual Reality in Medicine: A Bibliometric

    Background: Virtual reality (VR), widely used in the medical field, may affect future medical training and treatment. Therefore, this study examined VR's potential uses and research directions in medicine. Methods: Citation data were downloaded from the Web of Science Core Collection database (WoSCC) to evaluate VR in medicine in articles published between 1 January 2012 and 31 December 2023.

  20. 6. Altering "reality"

    6. Altering "reality". A considerable number of these experts focused their answers on the transformative potential of artificial intelligence (AI), virtual reality (VR) and augmented reality (AR). They say these digital enhancements or alternatives will have growing impact on everything online and in the physical world.

  21. The reality of virtual reality

    The measurements of the RL condition were performed on 4 days in total. 2.2.2. Virtual reality condition (VR): The 3D-360° videos used for the VR condition were recorded with the Insta360Pro VR camera (Insta360, Shenzhen, China), at a frame rate of 60 fps, 4K resolution, and spatial sound.

  22. Virtual reality: a powerful technology to provide novel ...

    Due to its high ecological validity, virtual reality (VR) technology has emerged as a powerful tool for mental health research. Despite the wide use of VR simulations in research on mental ...

  23. Research

    The MIT Media Lab is an interdisciplinary research lab that encourages the unconventional mixing and matching of seemingly disparate research areas. ... With advances in virtual reality (VR) and physiological sensing technology, even more immersive computer-mediated communication through ...

  24. Virtual reality (VR)

    Virtual reality (VR) is a simulated experience similar to or completely different from the real world. VR aims to create a sensory experience for the user, sometimes including sight ...

  25. Learning in the spherical video-based virtual reality context: effects

    ABSTRACT. Spherical video-based virtual reality (SVVR) is an emerging technology in the creation of virtual reality learning contexts. Existing studies have shown that immersion level and spatial ability affect learning performance and experience.

  26. The Mona Lisa in virtual reality in your own home

    'Mona Lisa: Beyond the Glass' - the Louvre's first virtual reality project - uses the latest scientific research on Leonardo da Vinci, his creative processes and his painting techniques. ... is based on the knowledge compiled by exhibition curators Louis Frank and Vincent Delieuvin after a decade of research in preparation for the ...

  27. CLAS PBS professor receives NIH grant to research how to improve

    By Alice Eberhart. Cathleen Moore, professor in the Department of Psychological and Brain Sciences in the College of Liberal Arts and Sciences and Starch Faculty Fellow, received a grant from the National Institutes of Health for $413,267 to study how lifeguard training can be improved using virtual reality.

  28. An Experimental Study on Reading in High-Immersion Virtual Reality

    High-immersion virtual reality (VR) is an increasingly valued environment for language learners. Although reading constitutes a core language skill, practicing reading in VR has received little attention. In this between-subject, quantitative study, 79 intermediate learners of English at a German university were randomly assigned to view an interactive, multimedia-rich story under two conditions.

  29. Faculty Led Projects

    Effectiveness of Instructing Anatomy Topics via Virtual Reality. PIs: Dr. Daniel Serravite (School of Education and Human Development), Dr. Magda Aldousany (School of Education and Human Development) Student Team: TBD Project Description: This pilot study evaluates the effectiveness of virtual reality versus traditional in-person instruction in teaching anatomy using VR for immersive lectures.

  30. Hearing voices is common. Virtual reality might help us meet and 'treat

    Hearing voices is common and can be distressing. Virtual reality might help us meet and 'treat' them, according to research co-authored by UniSC's Dr Rachel Brand, published in The Conversation.