Open access peer-reviewed chapter - ONLINE FIRST

Data-Mediated Environments: Reality after the Metaverse

Written By

Luke Heemsbergen

Submitted: 04 June 2024 Reviewed: 15 July 2024 Published: 10 September 2024

DOI: 10.5772/intechopen.1006622


From the Edited Volume

Navigating the Metaverse - A Comprehensive Guide to the Future of Digital Interaction [Working Title]

Dr. Yu Chen and Dr. Erik Blasch


Abstract

This chapter examines augmented reality (AR) not as a mere visual overlay of the virtual on the real but as fundamentally relational media between compute and environment. It does so to build toward a definition of the metaverse as media that perceive and persistently relate the physical world with computed data. Tracing mediation technologies (from print to metaverse) that continually shape our reality, this chapter critiques current electro-atomic divides tied to a continuum of virtuality and reality. This chapter explores how a relational understanding of compute and environment constructs and mediates perceivable reality by drawing from foundational concepts like Milgram and Kishino’s continuum, modern works of art and product-science that Extend Reality, and the philosophy of Karen Barad. Understanding the augmentation of reality through this relational, integrative perspective is crucial not only for developing a precise account of what the metaverse is but also for accurately experiencing and regulating the futures of connection that the metaverse offers.

Keywords

  • augmented reality
  • metaverse
  • spatial computing
  • intra-actions
  • reality

1. Introduction

This chapter shows how Augmented Reality (AR) is real. It offers a different way of thinking about AR media and other forms of Extended Reality (XR), one that speaks to the future of the metaverse in ways that differ from having to bridge the virtuality-reality divide. To summarise common assumptions in XR research, a divide persists between reality-based media and virtuality-based media [1] that signal atoms on the one hand and electrons on the other. Relatedly, I argue it is not enough to say AR overlays the ‘virtual’ onto real life; we must understand that AR relates the ‘digital’ into real life. To provoke the point with a simplistic example, the printing press flicks ink on paper, and this is real. The newspaper or a billboard exists as a real thing. But if pixels flicker on a liquid crystal substrate, we say a virtual experience sits apart from our physical life. Pixels are transient and speak to the non-material virtuality jointly imagined by individuals and computers. Virtuality and reality spectra offer a clear atomic-electronic divide when we conceptualise technology that mediates reality for us. But does this divide remain useful as a way to describe and explain modern life? If the giant billboards that adorn the buildings in our environment with advertisements are crafted in pantones or pixels, which is more real? What are the consequences of pretending one has more reality than the other as we navigate a world saturated in media and data? As spatial computing maps, reacts to, and mediates our personal and shared environments, will we continue to insist these experiences are virtual?

To answer some of those concerns, and explain where an accurate definition of the metaverse needs to come from and why it matters, the remainder of the chapter is structured to first map the history of some common definitions of how computers have extended Reality. Then the chapter works through some concrete examples of prior art and science that uproot the virtual-real and electronic-atomic versions of reality that are baked into display science and have come to define the industry. By questioning both the science and experience of ‘virtuality’ I can then apply a new relational logic to how AR mediates life. To do this I discuss two products not usually associated with AR or the Metaverse: the Apple Watch and Humane’s AR-AI Pin. Discussing these products as available in 2024 allows a final section of the chapter to consider social and regulatory consequences of my proposed virtuality-reality reconfiguration, and to flesh out a philosophy attuned to the radically relational physical environment where the digital and physical mediate our real-time shared environments.


2. Defining VR, AR, and what may come of them in a metaverse

1994 saw Milgram and Kishino publish their taxonomy that classified mixed reality displays across a continuum strung between virtuality and reality. The majority of AR and VR scholarship cites this work in ways that have contributed to the boundary work of the field itself, offering useful ways to decide what is and is not AR [2]. This includes a visuality bias to AR-VR work—the continuum was about display technologies. Milgram and Kishino [1] begin by defining objects at either end of their spectrum as either real: ‘any objects that have an actual objective existence’ or virtual: ‘objects [that] exist in essence or effect, but not formally or actually’. The work then goes on to consider whether the perception of these objects is synthesised or directly observed (think pass-through video vs. translucent lenses), and finally whether images are real or virtual based on whether there is ‘luminosity at the location at which it appears to be located’. Luminosity here refers to the ability of the image to reflect or emanate photons (i.e. radiate light), or being see-through—like a hologram or other optical trickery that does not occlude photons. It is important to note the virtuality-reality continuum is not just the single axis usually communicated when citing Milgram and Kishino (see Figure 1). Mixing realities according to Milgram and Kishino requires considering how much ‘world knowledge’ technologies can model, the fidelity of the ‘virtual’ additions, and the overall ‘presence’ of the experience, usually equated with screen immersion. At the high end of each of these three continua are references to Naimark’s [3] idea of real-space imaging, where ‘the observer’s sensations are ideally no different from those of unmediated reality’ [1]. An unmediated reality is seemingly based on the biological perceptions of the physical environment.

Figure 1.

Virtuality Continuum by Russel Freeman, as first crafted in [1].

Previously, Mann and Wyckoff [4] had defined eXtended Reality [XR] technologies that expanded visual perception spectrally to mediate infrared, ultraviolet, and other waveforms like radio or sound in novel ways. XR here took ‘real’ things that were imperceptible and remediated them for human visual perception. In this way, Mann and Wyckoff defined XR as ‘any kind of sensing [technology] + sensory [human] interaction with reality’. Notably, this depends on the idea that reality exists outside of human sensory limits. Reality can include ‘natural’ phenomena like infrared or man-made phenomena like radio waves that existed out of sight. Mann originated his techniques in the 1970s by building devices to create video sensing feedback loops for human perception. His ‘displays’ made of LEDs or other technologies would sense what cameras could see or radios emitted, and show those spectra of sensing back to humans, who otherwise could not see them [5].

As we will see, while the building blocks of the virtuality continuum have held up well for a typology of display science, and in turn for approaches to building AR, MR, VR, and XR products, the continuum itself holds up less well for explaining mediated reality. This is not surprising, as the continuum was meant to help hardware researchers consider the ways that technical designs were ‘juxtaposing “real” entities together with “virtual” ones’ in order to guide a path to ‘distinguish among the various technological requirements necessary for realising, and researching, mixed reality displays’ [6].

However, words matter. As Haraway [7] writes, ‘Technologies and scientific discourses can be partially understood as formalizations… but they should also be viewed as instruments for enforcing meanings’. The meanings of virtual and real, and the potential consequences of each in ‘real life’, have become entrenched. This dyad does not serve what Haraway reflected on through a metaphor of cyborg relations, obfuscating potentials and consequences of life for humans with machines, and the resulting implications for identity, society, and politics. For clarity, a helpful technical taxonomy for display science might not set out to create an epistemological rift between electronic and atomic life. Yet it manifests one nonetheless, in ways that often discredit electronic realness. In this vein, I am trying to reboot definitions of AR and the metaverse that refrain from a dichotomy, divide, or even continuum between virtuality and reality. To consider the data-mediated environments that are reality, we need new ways of thinking about the metaverse.

I define the metaverse here as forms of connective digital media that perceive and mediate persistent relations between our biospatial world and digital data. This mediation Augments Reality. Biospatial surveillance is the capture of data from, or inferred from, humans as well as their spatial surrounds [8], and is required to augment perceptions of reality. However, unlike most technical definitions of AR, XR, or the metaverse, I would not stress that this mediation happens in ‘real time’: while computed data might be relayed to humans in biological ‘real time’, the temporal provenance of the data relation can speak to past events or compute, present feedback loops, or generative future predictions, showing the four-dimensional generative affordances of spatial computing [9]. Without acknowledging physical-digital relations, the ‘metaverse’ becomes merely a marketing term. At the most basic level, biospatial surveillance is required for the immersion that many corporate understandings of the metaverse promise. You cannot create perceptions of immersion without monitoring and leveraging environmental signals from the body and physical objects. AR media are then better described as relations between computing-data and environment-data made perceptibly real. That is AR: divides between physical media and non-physical media are subsumed by radically relational accounts of what is perceptible. Here, it is the physical-digital relation of perception that is novel, and it speaks past new platforms, content, or UI expectations. AR engenders a data-mediated environment to be reality.

To define terms like the metaverse, and those associated with it such as AR, XR, or spatial computing, or to consider the economic or social ‘worlds’ tied to persistent computing, is as much a political task as it is a technical one [10, 11]. Who is making the definitions, and why, is an important question while these technologies remain malleable and are socialised in evolving ways. Yet despite evolutions in technology and society, some definitional themes remain pertinent. Consider that during the first wave of VR hype, Nicola Green defined ‘becoming virtual’ not merely as having ‘access [to] a wholly “other” space and becom[ing] digital’ but rather as the processes of ‘making connections between programmed and nonprogrammed spaces in specific locales, and power-laden social, cultural, and economic relationships’ [12]. A metaverse shows how those relations can be made perceivable not in the lab, or in a window onto the WWW, or even on screen, but out and about in the lived world. This chapter offers a pathway to reconsider how we come to definitions of AR and the metaverse by considering the consequences of data-mediated reality as what defines the metaverse. One consequence that loops back to the virtuality continuum is acknowledging how misconstrued technical assumptions about perceiving reality, and about displaying digital mediations of that reality, limit our toolkits for defining and controlling reality. Yet the evolving definitions of AR, and how these tie into grander plans for the metaverse, are also instructive.

Comparing our working definition of the metaverse and AR to the majority of AR literature, we see that while the virtual and real signifiers persist in the latter, a relational realisation is starting to become more visible and performative in academia and common usage. The ‘overlay’ of the not-real is fading into the acceptance of real life. Azuma’s [13] well-known survey of AR techniques defines the field as those technologies in ‘which 3D virtual objects are integrated into a 3D real environment in real time’ [emphasis added]. The integration, as opposed to layering or superimposition, speaks to more potential for relations between digital and physical. Billinghurst et al. [14] considered making these relations seamless to be the goal, as in a ‘larger context, Augmented Reality is the latest effort by scientists and engineers to make computer interfaces invisible and enhance user interaction with the real world’. The invisibility speaks to an occlusion of the ‘virtual’ itself and a nascent rebuttal to dualisms of cyberspace and real-space. Yet makers of these interactions still contested specific future visions of AR, with headworn and mobile systems charting unique material designs, policies surrounding the technology, and stakeholder perceptions of the technology [15]. Finally, Apple’s 2024 foray into a headworn product they define as ‘spatial computing’ was explained by The Verge’s Nilay Patel as AR: ‘virtual projections that are directly related to objects in the physical world’ [Patel 2024, emphasis added]. Here, while the virtual still serves to differentiate from a physical environment, there is explicit recognition that their relations make ‘reality’.

In this way we can start to see an understanding of AR that is not based on layering the unreal onto the real, or superimposing virtuality over reality. Instead, what defines the possible experiences and interactions is relating the digital to the physical, with less bias toward what is ‘real’. Defining it this way also arrests visual biases, opening other senses and perceptions to augmentation, like auditory and haptic feedback loops. Regardless of fidelity or levels of immersion, the metaverse will be defined through this pattern of breaking down the boundaries—or squeezing the continuum—of ‘virtuality’ and ‘reality’. This definitional move would disassociate the underlying notions of electronic-atomic divides to recognise a more cyborgian existence, in which augmenting the physical world with digital data defines real life in and of the metaverse. This is not meant as a science-fiction inspired take on digital life—where atoms are sucked into the machine (see Tron), or cyber-worlds exist outside our own (see early William Gibson novels), or even the digital-physiological relations present in the novel Snow Crash. Instead, it starts with what is, how we perceive that, and how digital mediation augments this. To show the path to these claims, we now turn to pencils, lasers, and quantum energy making art.


3. Some prior art in extending reality (before the metaverse)

This chapter was in part inspired by two pieces of art that seem to falsify the distinction between electron-based and atom-based reality—or at least media. This falsification serves to question the construction of categories of displaying, perceiving, and acting in atom-based reality and electron-based reality. That discussion helps shake the assumptions that designers, users, and policy makers might leverage for the metaverse, as AR shifts from technological niche into a more complex sociocultural regime [16, 17] that comes to define the metaverse. I explicitly picked two pieces of art that do not directly reference a metaverse. They are not connected to the internet. They are not particularly interactive for users. Instead, they show in isolation how distinctions between the virtual and real can fail even in the physical sense via digital mediation, before more subtle arguments around interactivity and the socialisation of our digital selves [18] or the changing nature of the internet [19] are explored in relation to the metaverse. Reconfiguring the technical perceptions of virtuality and reality through these pieces of art helps draw a new baseline of digital mediation, foregrounding a path for pertinent revision of how we should conceptualise, experience, and govern media that relate the physical to the digital. Thus, this section first offers physical evidence to inspire reframing electric-atomic divides when conceptualising the futures of real digital interaction. And it does so through an unusual technical approach: art critique.

On a warm spring night the sun set on Austin, Texas, and patrons shuffled in to dadaLab to watch lasers dance away distinctions between the virtual and real. The lasers were the art of Alberto Novello, in a work called ‘blacklight’ [premiered 2023] that questions the difference between photon, pixel, and pigment. His work can be described as laser sculpture; it leverages computer-controlled pattern precision to bounce laser light off a special phosphor-laced canvas. It is not just the laser’s light that makes the show. The canvas extends space and time for the laser’s trace in novel physical form. After their ephemeral existence, intricate paths of laser light hold on the canvas, and then slowly fade away in an atomic dance that opens questions about the physical reality of digital-photonic forms of real-time display technologies.

Here, digital information in a light beam, while maybe a ‘virtual’ sculpture, also actually physically resonates with and shifts the atoms of a canvas in real time for the observer—offering something between the ‘permanence’ of physical ink and the ephemeral nature of digital patterns. Neither a projection nor a pigment, ‘blacklight’ shows, through its ability to mediate the physical with the digital, how ‘reality’ is always constructed through mediation. And not in a sociological sense, but via hard physics and chemistry, as digitally directed photon emitters materially relate their information to a canvas that emits its own glow in its own time.

Phosphorescent materials work through the absorption and re-emission of light energy at an atomic level. Novello’s digital laser display technology offers photons that excite the electrons within the quantum structure of the canvas. They do not overlay the physical with pigment [ink]. Nor is their display the ephemeral electron flow of circuits that render for an instant as pixels on a screen or in your eye. Photons excite the electrons of the canvas’ atoms and move them to a higher metastable energy state, due to the specific chemical structure of the material. Over time, the electrons slowly release the stored energy as photons, resulting in a prolonged glow. This emitted light is what we perceive as the material glowing in the dark.
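The storage-and-release dynamic described above can be approximated with a simple first-order decay model: excitation stores a population of electrons in the metastable state, and the re-emitted glow fades exponentially with a material-specific time constant. The sketch below is an illustrative toy model only; the function names and the 60-second time constant are my own assumptions, not measurements of Novello's canvas.

```python
import math

# Toy model of a phosphorescent canvas: a laser pulse promotes electrons
# to a metastable state; the stored population then decays exponentially,
# re-emitting photons as a slowly fading glow. (Illustrative values only.)

def glow_intensity(i0: float, t: float, tau: float) -> float:
    """Emitted glow intensity at time t seconds after excitation.

    i0  -- intensity immediately after the laser pulse
    tau -- decay time constant of the metastable state (seconds)
    """
    return i0 * math.exp(-t / tau)

def half_life(tau: float) -> float:
    """Time for the glow to fade to half its initial intensity."""
    return tau * math.log(2)

# A long-persistence phosphor (tau ~ 60 s), versus a pixel whose light
# vanishes the instant its circuit stops driving it.
for t in (0, 30, 60, 120):
    print(f"t={t:>3}s  glow={glow_intensity(100.0, t, tau=60.0):6.2f}")
```

The point of the model is the contrast it makes visible: a pixel's intensity is gated entirely by its driving electronics, while the phosphor holds the digital pattern in matter and releases it on the material's own clock.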

The intricacies of quantum-level transitions of electrons, specifically the interaction between electronic states, energy levels, and the probabilities of various transitions, allow various timeframes for phosphorescent ‘displays’ to relate the photonic to the physical. Note that the painter Anders Knutsson has previously experimented with these timings in his luminous paintings, using pigments with variable energy states to create a physically changing reality of pigment-based art. Knutsson’s art reveals its very real ‘luminous state’ on the canvas when excited. Novello’s work also incorporates the digital, via precise laser control, to create luminous states of exquisite patterns, and explicitly shows the malleable relation of pigments and real-time displays. The work makes explicit the digital’s capacity to change the physical world. Photons do not ‘overlay’ these atoms; they excite them in digital patterns, spinning electrons to new states in ways that anticipate their decay and generation of light. Novello’s work engages photons and atoms to deconstruct a spectrum between virtual and real. It offers the quantum reality of laser-media as both particle and wave, emitting and mediating the reality of the canvas back to us in a state of semi-permanent controlled decay.

It might be tempting to map this example onto a spectrum similar to virtuality-reality, with phosphorescent displays in the middle of a line from pigment to pixels. However, doing so creates division through category, where we should instead be focussed on how the atomic and the electric combine. And this is why a deep dive into the physics of phosphorescent laser light art is merited. It shows that it is the relation between elements that mediates perceptions of reality. It is not a digital display overlaid on physical reality; it is photons interacting with atoms to spin up their electrons, relating digital information into quantum states that shift the surrounding environment, so that humans might perceive reality, augmented. Reality is ‘extended’ in this sense not by virtual tricks or overlays, but by the relationship between digital information and physical information; quantum energies exchanged and exchanged again to produce perceptions of what reality is. Here the digital is made physical, and perceived as luminous, emanating from physical reality, rather than as a layer of virtuality superimposed.

The second art example to consider is Yamagami Yukihiro’s installation titled Shinjuku Calling [2014]. The installation of ‘mixed media’ shows how a layered divide between analogue reality and digital virtuality offers a poor explanation of our lived—and mediated—experience.

Yukihiro’s work cleverly integrates hitherto divided atomic and electronic information. Yukihiro painstakingly pencilled a streetscape of Shinjuku Station across about 2.5 meters of white plywood. Onto these physical graphite pigments, pixels are projected that give a virtual ‘real life’ video of pedestrians and cars navigating the canvas; neon glows augment the stencilled pencil crossings, and sunset makes the buildings glow. The ‘virtual’ imbues life as we expect and experience it, past the monotone and material sketch. The size, detail, and movement make it a breathtaking work that shifts viewers into a real-time perception of a distant space as they physically walk along the two-dimensional—physical but unreal—pencilled detail of a static streetscape.

One way to consider the work is the technical ‘magic’ of graphite layered on plywood, with photons dancing on top, which makes something new through layers of virtual and material representations. Yet the work is not powerful because it overlays the virtual on the physical.

Its power is to make real by relating electrons to atoms: we perceive it as a real-space in full, through the analogue line art and pixelated ghosts of movement and light. Further, we might imagine the ‘real’ inhabitants of Shinjuku, walking across the road, seeing both pigments and pixels via the advertisements around them—would they consider which of these signals is virtual or real? The experience of Yukihiro’s work opens up how pixels relate to pigments, and electrons to atoms, to make reality. It also helps focus discussion on how we might think about AR as mediating the real, instead of a confusing layering of terms around virtuality, reality, the physical, and the digital.

The ‘display technologies’ of these pieces of art allow us to use inductive reasoning to quickly list different ways that humans can perceive reality, for our sample of n = 2. These are not meant to describe reality, merely to show how we can perceive it through linked ‘display technologies’ of atoms and photons that mediate our physical existence with the world. None of the categories below speaks to virtual or real; they speak to mediating relations between electrons and photons, both very much a part of reality, whether directed at the environment, from screens, or to eyeballs, etc. (Table 1).

Name       | Event                                               | Perception
Pigment    | Atoms reflect ambient photons                       | ‘Physical media’
Phosphoric | Atoms store and release directed or ambient photons | ‘Conceptual art’
Projected  | Atoms reflect directed photons                      | ‘Screen media’
Pixel      | Atoms direct photons                                | ‘Screen media’

Table 1.

Inductive typology of displays of reality, per relations between atoms and photons as observed across two art projects.

These two distinct art forms—a gallery-hung installation and a laser-DJ experience—were selected to show how common conceptions of the physical are not synonymous with the real. The environments we live in do not suffer from stark digital-analogue divides. Their physics do not suffer from virtuality-reality divides. At play in these forms of art is the relationship between photons and atoms, including how one directs the other, and what that allows us to perceive.

While artwork—laser or pencil based—might seem far from offering conceptual tools to explain data-mediated environments, the point is that reality is sprung in these works not across a spectrum of what is virtuality and what is reality, but from how data comes to be mediated via the environment: pigment, pixel, or otherwise. There is a relationship between data and the physical environment that will come to define the metaverse in ways that differ from how data and the environment define the world wide web, newspapers, laser art, or other media.

Past the physics of display, the science of perception itself can also be considered as we contemplate reality after the metaverse. Various disciplines outside display or computer science are happy to break common assumptions of what makes reality. For instance, a divide between atoms and electrons, equating to the physical and the non-physical, is less than accurate for understanding the human experience from neuro-evolutionary perspectives. Hoffman [20] argues that our physical reality is only perceived in particularly useful ways to keep life going. Neuro-biological reality is mediated by receptor cells, neural pathways, and bodies in ways that have provided the best ‘fit’ to succeed, even if this means physical reality itself is not perceived directly or accurately by human minds. Instead, our perceptions are shaped in ways that are particularly useful for survival and reproduction, rather than providing a true representation of an external world. Away from biological imperatives on reality, Harari [21] suggests that sapiens are unique in their capacity to ‘transmit information about things that do not really exist, such as … nations, limited liability companies, and human rights’. This has been done through verbal language, written documents, social institutions, etc. The argument here is that we have never been merely physical. Or, putting it another way, Real Life as human experience must be mediated. Key is that media do not just carry or translate information; they create. Reality is media that we consensually hallucinate into being. Where we share these hallucinations, society exists.


4. Some new products in extending reality (after the metaverse)

The classic societal hallucination of the digital age comes from William Gibson. He describes cyberspace via a 1984 lens that engendered a whole genre of cyberpunk futures:

‘Cyberspace. A consensual hallucination experienced daily by billions of legitimate operators, in every nation…Lines of light ranged in the nonspace of the mind, clusters and constellations of data.’ [22]

‘Real’ iterations of cyberspace—through the world wide web, a corporatised ‘Web 2.0’ that enabled social media, and, more recently, questioning of the duality of cyber as separate from other ‘spaces’ of interaction [18, 19]—have opened concerns around the technologies that mediate our existence in socio-technical categories. While mirroring Green’s [12] political economy critiques of VR, the ‘technology’ of cyberspace is receding into the background via terms like ‘spatial computing’ and ‘wearables’ that create embedded, embodied, and everyday [19] experiences of connectivity that define human perception and cognition. Even without terms like spatial computing or the metaverse, we can understand how contextually relevant data [computed in geospatial or temporal relations] are already endlessly integrated into our everyday by, for instance, mobile phone messages and the use of GPS-enabled screen maps. In this sense, real and virtual are not accurately defined through mirrors of themselves [23] in terms of what they do or how they relate. They instead form complex intra-relations [24] where the physical is infiltrated by computational surveillance of space and bodies in generative feedback loops. Below we unpack intra-relations, but for now consider how current ‘non-metaverse’ technologies already support a good life: users finding a good restaurant from their car, feeling reminders on their wrists, or having authorities automatically alerted to their car crash.

The latter two haptic feedback loops [triggering alerts to authorities after your car crash, or a gentle tapping on the wrist as a reminder of an appointment] are both features of the consumer wearable Apple Watch. Note this watch also ‘connects’ your own computed probability of an irregular heart rhythm and other biofeedback to the network.

The argument could be made that the Apple Watch is the first consumer product to understand reality ‘after the metaverse’. Gone are concerns directed toward how virtuality and reality overlay each other by designing for fidelity and ‘presence’ in a VR-like product. Instead, this wrist-worn, internet-connected consumer device witnesses how data-mediated environments can be perceived in real physical life—it extends our perceivable reality into a metaverse of perceptual and persistent data-body-environment compute. Past the latent biospatial surveillance of rapid deceleration or heart irregularities—what is also called bio-inferred data—there is an intentionality of physical-digital feedback loops that are not unidirectional. Apple Watch now uses what the company calls a ‘double tap gesture’ that registers a ‘touch’ of the formerly screen-based interface by double tapping your finger to your thumb in the air—in a space previously known to be outside the screen interface. In their language:

‘[The device] processes data from the accelerometer, gyroscope, and optical heart sensor [and] detects the unique signature of tiny wrist movements and changes in blood flow when the index finger and thumb perform a double tap’ [Apple 2023].
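The sensor-fusion logic Apple describes can be caricatured in a few lines of code. The sketch below is emphatically not Apple's algorithm, which is unpublished; it is a minimal illustration of the shape of the problem, looking for two brief spikes in a wrist-accelerometer magnitude stream within a plausible time window. All thresholds and function names are my own assumptions.

```python
# Illustrative sketch only: a toy detector that flags a 'double tap'
# when two above-threshold spikes in accelerometer magnitude occur
# an appropriate interval apart. The real device fuses accelerometer,
# gyroscope, and optical heart-sensor data with a learned model.

def detect_double_tap(samples, rate_hz=100, threshold=2.0,
                      min_gap_s=0.08, max_gap_s=0.40):
    """Return True if two above-threshold spikes occur between
    min_gap_s and max_gap_s apart in the sample stream."""
    spike_times = []
    prev_above = False
    for i, a in enumerate(samples):
        above = a > threshold
        if above and not prev_above:          # rising edge = one spike
            spike_times.append(i / rate_hz)
        prev_above = above
    for t1, t2 in zip(spike_times, spike_times[1:]):
        if min_gap_s <= (t2 - t1) <= max_gap_s:
            return True
    return False

# Synthetic trace: quiet wrist, one spike, then a second spike 0.2 s
# later at a 100 Hz sample rate.
trace = [0.1] * 100
trace[20] = 3.0   # first finger-thumb tap
trace[40] = 3.1   # second tap
print(detect_double_tap(trace))  # prints True
```

Even in caricature, the sketch makes the chapter's point concrete: the 'interface' here is the body and its physical surrounds, continuously surveilled for an intentional signal, with no screen touch involved.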

Apple Watch shows how physical digitality exists in the metaverse in ways that present physical manifestations of digital logics, economies, and programmable media in complex media flows [25]. In this sense, Apple Watch shows the metaverse is already here, if very unevenly distributed into discrete packages of accelerometers and linear actuators that connect your wrist to automated emergency services and other networks, as it monitors your bio-signals for intentional input. The metaverse requires media that relate the physical world with persistent computed data, and that is what Apple is providing to your wrist.

Moreover, current auditory mediation technologies have a space in the metaverse that is often overlooked. Auditory Augmented Reality [AAR] [26] was an early descriptor of digital-physical interactions, but conceptualisation of how sound can relate computation to the physical environment remains scarce, with some exceptions [see 27]. Nevertheless, audio mediation makes an interesting case study of how reality is ‘augmented’. If information is presented aurally, we assume such media are very much part of reality—lest we consider ‘virtual’ the Doppler shift of a mechanical siren passing on the street, or, for that matter, the music in your ears from noise-cancelling/adaptive headphones. The audio company Bose experimented with Bose AR, which offered an audio-first approach to augmented reality but was later discontinued. The industry of other hearables that augment the perceived environment [28] is growing, while binaural beats are marketed as ‘digital drugs’ [29] that hack the brain’s perceptions and neural feedback loops. To what extent these experiences are tied to networks of spatial computing, or to the economic or social ‘worlds’ aligned with persistent computing, will remain a political task as much as a technical one.
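A binaural beat is technically simple: a slightly different pure tone is played to each ear, and the brain perceives a 'beat' at the difference frequency. The sketch below generates a stereo WAV file of a 10 Hz binaural beat using only the Python standard library; the file name, tone frequencies, and amplitude are arbitrary choices for illustration, not a claim about any marketed 'digital drug' product.

```python
import math
import struct
import wave

RATE = 44100  # CD-quality sample rate

def binaural_samples(f_left, f_right, seconds, rate=RATE):
    """Interleaved stereo samples: f_left tone in the left ear,
    f_right tone in the right; the perceived beat is |f_left - f_right|."""
    frames = []
    for n in range(int(seconds * rate)):
        t = n / rate
        left = int(32767 * 0.5 * math.sin(2 * math.pi * f_left * t))
        right = int(32767 * 0.5 * math.sin(2 * math.pi * f_right * t))
        frames.append((left, right))
    return frames

def write_wav(path, frames, rate=RATE):
    with wave.open(path, "wb") as w:
        w.setnchannels(2)   # stereo: one tone per ear
        w.setsampwidth(2)   # 16-bit samples
        w.setframerate(rate)
        w.writeframes(b"".join(struct.pack("<hh", l, r) for l, r in frames))

# 200 Hz in the left ear, 210 Hz in the right: a perceived 10 Hz beat
# exists only in the listener's neural processing, not in the air.
write_wav("beat.wav", binaural_samples(200.0, 210.0, seconds=1.0))
```

Note what the example shows for the chapter's argument: the 10 Hz 'beat' is never present in the physical signal at all; it is produced in the relation between two physical waveforms and the listener's auditory system, which makes it hard to place anywhere on a virtuality-reality continuum.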

While spatial computing products are mostly thought about visually, haptic, auditory, and neurological cues can all augment reality and mediate computer-generated data into our perceptions. It is not, then, sufficient to describe the metaverse through new forms of a virtuality-reality continuum originally based on display technology constraints. The metaverse offers a sustained relation of digital information to physical information that can be multifarious. So while in some regards novel or immersive display technologies might be useful to explain or continue research on the metaverse, they are useful only insofar as they offer novel experiences that relate digital data and the physical environment in ways that make the virtual 'disappear', to borrow from Billinghurst et al. [14]. Whether through haptics and auditory stimuli or the visuality that most AR technologies are defined through, we can then suggest that layering the virtual onto the physical is a category error in explaining AR and opening discussion about the metaverse. It is less about managing the visibility of 'virtual' and 'real' objects and environments and more about relating computed environmental information back into the biospatial—our biological and physical environments—and vice versa. Specifically, AR generates novel relations through computational surveillance of space and bodies in generative feedback loops—whether presented via 'display technology', haptic sensations, auditory cues, or otherwise. Augmenting each of these perceived senses again shows how a 'layer' of digital information might not be the best metaphor for understanding augmented reality. For clarity, the novel ways AR 'layers' onto extant environments in the display-science sense remain an interesting field of research, and one that continues to be innovated on.

So the point this chapter will keep coming back to is that life is not lived in layers, or in categories of virtuality and reality, but in what these connect. Otherwise, our descriptions of life suffer a latent digital dualism [18], which cleaves our mediated perceptions of life into atomic and electronic domains to the detriment of understanding the perceived environment, and thereby to the detriment of future metaverse research and the regulation that serves it. So, the remainder of this chapter offers some initial thoughts on what can be done to correct virtual-real divides in AR research and products, opening promising avenues in both technical and social terms.

To show one final example of work that reconstructs how AR is real and creates data-mediated environments, consider the work of Humane in the mid-2020s. Humane produced a wearable 'AI Pin' in 2024 that allows us to consider what it might mean if AR media, instead of focusing on screens, fidelity, and measures of virtuality, were considered in terms of radically relational accounts of perceptible life. For context, note that Humane's AR product 'pivoted' to AI as the near-real-time generative interactions that LLMs provide accelerated through 2023. Among other issues, this pivot seemed to muddy the product's launch and reception. Nevertheless, Humane's 'AI Pin' is worth bringing up as an entry point to the metaverse other than classic imaginings of VR, AR, or other screen-based media.

Humane offered an alternative imagined materialisation of how the metaverse actually works and what AR media offer. Humane's patents, pitches, and product were proud to ditch the technology of 'displays' and directly present symbols onto surfaces in and as part of the environment. While it is clear their technology happens to flick out particles of light rather than ink, as above, the point is not whether photons are any more 'real' than pigments, but that each illuminates perceptions that users get to 'hallucinate' into meaning.

Humane’s product and patents offer a unique way to consider data-mediated environments ‘after the metaverse’ that is not aligned to the virtuality continuum. Here is how Humane describes its own use of lasers and the environment: ‘The laser projection can label objects, provide text or instructions related to the objects and provide an ephemeral user interface… [to among other things] share and discuss content with others’ [30]. In the patent image (Figure 2), hands glow with instructions or interfaces. After the machine’s vision recognises a thermostat on the wall, the hand shows relevant controls. These types of interaction also require biospatial surveillance [8] of users’ bodies and environments to infer cues, and radicalise relations between the physical and digital data flows.

Figure 2.

Patent WO 2020/257506 A1; Chaudhri et al. [30].

Considering how Humane mediates relations of data-objects in the world shows the new relations available through AR's mediation of environment with data: ephemeral and real time, it offers a different reality to life than one dictated by newsprint and billboards. I would not describe it as virtual, however, or as layering information on the physical. It is mediating environmental features together [data and form] that we previously could not perceive. Their work considers ways AR can mediate that other media cannot. It derives from a conception of the metaverse that is not caught up in janky avatars, or display 'screen doors' that ruin immersion. Instead, it relates digital information to the environment in radical ways. Its machine vision responds to the environment and user, and luminesces the pigments of users' own hands to bridge the electronic-atomic divide.

Considering how technologies like Apple Watch, Humane's AI Pin, and even advanced headphones mediate and extend reality offers a new path to understand what is real, and how we can create radical relations of data and environment. To drive the point home, let us reconfigure how Gibson [22] described cyberspace. Classically, we have 'A consensual hallucination experienced daily by billions of legitimate operators, in every nation…Lines of light ranged in the nonspace of the mind, clusters and constellations of data'.

To capture what is really different about AR media, we might remix the definition to Augmented Reality: consensual hallucinations experienced daily by billions '…[where] Lines of light are ranged in what we perceive, opening the mind to clusters and constellations of our data-shared environment', though even here the visual bias remains.

AR media are then better described as relations between computing-data and environment made perceptibly real. Divides across physical media and non-physical media are subsumed with radically relational accounts of what is perceptible in our mediated reality.


5. So what? Some thoughts on regulation and theory

This section considers the regulatory and theoretical implications of rebooting the metaverse as data-mediated environments. Simply put, understanding AR as mediating the real is crucial when forming relevant governing regimes now and for bringing about a just future. Regulating augmented reality is not a matter of setting limits on a virtual layer over life; it addresses 'Real Life', considered through mediation. On the one hand, this flip helps focus regulatory power where state institutions like courts, parliaments, and regulators are comfortable, removing the 'cyber' layer that might misdirect attention away from what is real.

For instance, this would mean that traditional regulations that separate physical and digital realms may become obsolete. Policies should be designed to govern a unified AR environment where digital augmentations are as impactful as physical objects, and where harms enacted through biospatial data are treated as seriously as their physical equivalents, much as 'cyber-bullying' is recognised as bullying. This would mean data privacy and security regimes that understand the data collected, processed, and displayed in and by metaverse environments must be protected under robust privacy laws akin to medical information rather than consumer information. The XRSI Privacy and Safety Framework 2.0 is indicative of how to frame such questions and will hopefully be of use in guiding practitioners and politicians into the future.

At the least, such work needs to address how data are gathered, who owns them, and how they can be used, when we think of data not as separate from, but indicative of, our real life. The normative guides from which future recommendations can be built need to be positioned to support those who create XR [31] to enhance accountability, as much as legal frameworks need to be targeted to policy opportunities that offer control. Such policy also needs to remain technologically neutral, meaning applicable consistently regardless of the specific interfaces, economies, and products used to create AR experiences that relate digital to physical. Technological neutrality is also useful when considering holistic approaches that encompass the physical and digital as interconnected parts of our single reality. This will require effective governance with the involvement of multiple stakeholders, including technologists, ethicists, policymakers, market actors, and the public—with this last category inclusive of individuals and communities. Any specific recommendations must come from this holistic environment, lest industry or government mis-define the envelope of possibility and responsibility.

Finally, we can ask on more theoretical grounds what happens when we conceptualise AR as something other than display technologies that are often based on virtual/real definitions. Here, I am less concerned with product or organisational critique than with considering the 'augmented subjectivity' [32] that references the co-production of the physical and the digital to define [post-]human experience. Such intellectual concern is, for example, focussed on how AR media offer real-time computationally mediated perception [33].

We can take these social concerns and come full circle via the quantum nuance of what was an erroneous electronic-atomic divide explored above. We begin with the philosophy of physicist Karen Barad. With apologies to Barad's [24] insights in quantum mechanics, my theoretical interest here is in the radically relational accounts of perceptible life that AR can make visible and knowable after a metaverse that relates data to the environment in ways that previous media could not. To simplify a bit, reality for Barad is a dynamic process of becoming, where relations and entities continuously co-emerge and co-constitute each other through intra-actions. It is not that a virtual and a real exist and then interact. Instead, it is more productive to borrow from Barad's view of intra-action and describe the way entities emerge through their relationships with one another. This is opposed to understanding entities as existing independently of each other and then interacting. In our example, it is not that the virtual affects the real and vice versa; rather, reality comes into being through mutual entanglement—measured via relating photons and atoms and the mediation of that environment. Philosophically, questions then centre on a shift away from categories of entities and toward the processes and relationships that create entities. AR is relational.

While Barad's work on intra-action and radically relational existence provides frameworks for rethinking how we understand the nature of reality, it can, when considering how reality is mediated, also emphasise the co-constitutive nature of our data-mediated environments. In this sense, media mediate social reality away from categories of 'virtual' or 'real' and instead serve as 'knowledge objects' that mediate what was previously perceptually inaccessible to humans [34]. This mediation creates and is constrained not just by technical factors but also by the imagined publics [or networked publics, or refractions therein] that emerge and the economic systems that grind along in ways that might or might not produce a corporate 'metaverse' of experiencing reality. As Couldry [35] points out, media offer an ecology of 'infrastructures' that make and distribute content in forms that carry particular contexts with them. Maybe this is a tap on the wrist, or maybe a platform for immersive attention seeking designed by Meta. Regardless, as these contexts surveil and distribute biospatial environments to data flows, and then re-constitute them into reality, we see [and hear and feel] how reality after the metaverse will be.


6. Conclusion

This chapter redefines Augmented Reality [AR] not as an overlay of virtual elements onto the real world but as a relational medium that integrates digital data with our physical environment, ultimately challenging the traditional virtuality-reality continuum. Doing so allows us to reconsider what the metaverse does, and how it comes to be. Key points included critiquing the longstanding dichotomy between the virtual and the real in mediating the world through quantum art critique, and the related limitations of existing typologies like Milgram and Kishino's continuum. Beyond critique, the chapter proposes a relational understanding that focuses on how digital and physical elements co-constitute our experience of reality as a starting point to understand, experience, and regulate the metaverse. This perspective reveals that AR and the metaverse are already part of our lived environment, as seen through current case studies of Apple Watch and Humane's AI Pin, which seamlessly integrate digital data into our daily lives.

Like any claims to knowledge, this work has several limitations. Conceptually, while redefining AR challenges entrenched perspectives, it may oversimplify the nuanced experiences of virtuality and reality, as well as the technical work that makes these experiences possible. The relational approach does not seek to fully capture the diversity of user interactions, nor to empirically validate the varying degrees of immersion and presence experienced in various AR environments, nor their effects. Furthermore, the theoretical basis, while potentially innovative, would benefit from empirical validations that substantiate claims across different contexts and applications. Specifically, future research should address these limitations by conducting studies that test the proposed relational framework outside the lab and in lived experiences across visual, audio, and haptic mediation. This will require interdisciplinary approaches that combine insights from physics, philosophy, and social sciences—not to mention interaction design and display science—to provide a more holistic understanding of AR's place in mediating reality. By addressing these avenues, we can better understand and navigate the evolving landscape of data-mediated environments and their impact on our perception and regulation of reality.

References

1. Milgram P, Kishino F. A taxonomy of mixed reality visual displays. IEICE Transactions on Information and Systems. 1994;77(12):1321-1329
2. Heemsbergen L, Cadman S. Commercial and research clustering of augmented reality: Discourses and divides between AR apps and applications. AoIR Selected Papers of Internet Research. 2021
3. Naimark M. Elements of real-space imaging: A proposed taxonomy. In: Stereoscopic Displays and Applications II. International Society for Optics and Photonics; 1991
4. Mann S, Wyckoff C. Extended Reality. Cambridge, Massachusetts: Massachusetts Institute of Technology; 1991. pp. 4-405
5. Mann S. Phenomenological augmented reality with the sequential wave imprinting machine [SWIM]. In: 2018 IEEE Games, Entertainment, Media Conference [GEM]. New York: IEEE; 2018
6. Milgram P, Takemura H, Utsumi A, Kishino F. Augmented reality: A class of displays on the reality-virtuality continuum. In: Telemanipulator and Telepresence Technologies. International Society for Optics and Photonics; 1995
7. Haraway D. A manifesto for cyborgs: Science, technology, and socialist feminism in the 1980s. Socialist Review. 1985;15(2):65-107
8. Heemsbergen L, Bowtell G, Vincent J. Conceptualising augmented reality: From virtual divides to mediated dynamics. Convergence. 2021;27(3):830-846
9. Heemsbergen L, Bowtell G, Vincent J. Making climate change tangible in augmented reality media: Hello my black balloon. Environmental Communication. 2022;16(8):1003-1009
10. Liao T. Is it ‘augmented reality’? Contesting boundary work over the definitions and organizing visions for an emerging technology across field-configuring events. Information and Organization. 2016;26(3):45-62
11. Liao T. Definitional Realities. Sydney: CAVRN; 2023. p. 2
12. Green N. Disrupting the field: Virtual reality technologies and “multisited” ethnographic methods. American Behavioral Scientist. 1999;43(3):409-421
13. Azuma RT. A survey of augmented reality. Presence: Teleoperators and Virtual Environments. 1997;6(4):355-385
14. Billinghurst M, Clark A, Lee G. A survey of augmented reality. Foundations and Trends® in Human–Computer Interaction. 2015;8(2-3):73-272
15. Liao T. Mobile versus headworn augmented reality: How visions of the future shape, contest, and stabilize an emerging technology. New Media and Society. 2018;20(2):796-814
16. Geels FW. Regime resistance against low-carbon transitions: Introducing politics and power into the multi-level perspective. Theory, Culture and Society. 2014;31(5):21-40
17. Geels FW. Technological transitions as evolutionary reconfiguration processes: A multi-level perspective and a case-study. Research Policy. 2002;31(8-9):1257-1274
18. Jurgenson N. Digital dualism versus augmented reality. Cyborgology. 2011. Available from: https://thesocietypages.org/cyborgology/2011/02/24/digital-dualism-versus-augmented-reality/
19. Hine C. Ethnography for the Internet: Embedded, Embodied and Everyday. London: Bloomsbury Publishing; 2015
20. Hoffman D. The Case against Reality: Why Evolution Hid the Truth from Our Eyes. New York: WW Norton and Company; 2019
21. Harari YN. Sapiens: A Brief History of Humankind. New York: Random House; 2014
22. Gibson W. Neuromancer. New York: Ace Science Fiction Books; 1984
23. Kelly K. AR will spark the next big tech platform—Call it mirrorworld. Wired. 2019
24. Barad K. Meeting the Universe Halfway: Quantum Physics and the Entanglement of Matter and Meaning. Durham: Duke University Press; 2007
25. Heemsbergen L, Bowtell G, Vincent JB. Physical digitality: Making reality visible through multimodal digital affordances for human perception. In: Materializing Digital Futures: Touch, Movement, Sound and Vision. London: Bloomsbury; 2022. p. 187
26. Bederson BB. Audio augmented reality: A prototype automated tour guide. In: Conference Companion on Human Factors in Computing Systems. New York: ACM; 1995
27. Dam A, Siddiqui A, Leclercq C, Jeon M. Taxonomy and definition of audio augmented reality [AAR]: A grounded theory study. International Journal of Human-Computer Studies. 2024;182:103179
28. Boisvert I, Dunn AG, Lundmark E, Smith-Merry J, Lipworth W, Willink A, et al. Disruptions to the hearing health sector. Nature Medicine. 2023;29(1):19-21
29. Barratt MJ, Maddox A, Smith N, Davis JL, Goold L, Winstock AR, et al. Who uses digital drugs? An international survey of ‘binaural beat’ consumers. Drug and Alcohol Review. 2022;41(5):1126-1130
30. Chaudhri IA, Gates P, Relova M, Bongiorno B, Huppi B, Chaudhri S. Wearable multimedia device and cloud computing platform with laser projection system. Google Patents. 2021
31. Norval C, Cloete R, Singh J. Navigating the audit landscape: A framework for developing transparent and auditable XR. In: Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency. New York: ACM; 2023
32. Rey P, Boesel WE. The web, digital prostheses, and augmented subjectivity. In: Rey PJ, Boesel WE, editors. Routledge Handbook of Science, Technology, and Society. New York: Routledge; 2014. pp. 173-188
33. Chevalier C, Kiefer C. What does augmented reality mean as a medium of expression for computational artists? Leonardo. 2020;53(3):263-267
34. Bleeker M, Verhoeff N, Werning S. Sensing data: Encountering data sonifications, materializations, and interactives as knowledge objects. Convergence. 2020;26(5):1088-1107
35. Couldry N. Media, Society, World: Social Theory and Digital Media Practice. Cambridge; Malden, MA: Polity; 2012
