3 How It’s Made
This section of the book is less about foundational principles and more a guide to how things practically work. How do we actually capture images? How does a digital movie projector differ from a giant TV? Can we build a smell-o-vision? This is the section of the book where the present-future focus will be most apparent and where your instructor is likely to add supplemental materials for the systems that your department uses. The primary medium described in this section is not film or even television, but the graphical user interface.
Before we get into the technical details, we should consider the nature of experience.
Most media systems require some means of recording perceptible reality. A system could, of course, produce synthetic video or some other output without a recording in the first instance; even so, such systems maintain an internal record that can be reproduced as if a recording had existed all along. This section begins with technologies for recording and then moves to transport and reproduction. Paper has been common for many years, although the paper of the past may differ from what you are expecting—the “rags” of nineteenth-century journalism were printed on paper made from fabric rags. The means of recording leave a critical imprint on the text.
This is a horizon for future technology. Our current modes for storing experiences are for the most part literary. Walter Benjamin read Proust as attempting to store the experience of a thing through rich description.[1]
And suddenly the memory returns. The taste was that of the little crumb of madeleine which on Sunday mornings at Combray . . . when I went to say good day to her in her bedroom, my aunt Léonie used to give me, dipping it first in her own cup of real or of lime-flower tea. The sight of the little madeleine had recalled nothing to my mind before I tasted it; perhaps because I had so often seen such things in the interval . . . But when from a long-distant past nothing subsists, after the people are dead, after the things are broken and scattered . . . the smell and taste of things remain poised a long time, like souls, ready to remind us, waiting and hoping for their moment.[2]
The quotation above from Proust came from the blog of Z. Stein, a language librarian at the University of Illinois, who proposed an interesting way of experiencing the Madeleines: they wrote a recipe that should produce the cookie in question. That said, we can’t truly speak to the exact cookies that Proust had in mind, and as we have learned from the concept of terroir in the production of wine, bread, and cheese, location matters. And surely time matters as well. The space/time of a happy childhood memory is far different from the crushing reality of today (whenever the today that you read this finds you). Madeline also refers to a device for capturing smells in a small box of resin, which could then either be heated for one to sniff or subjected to further chemical analysis so that the underlying fragrance could be reproduced in greater volume.
It would be easy to take the cookie recipe provided by Stein and stop (the recipe is adapted from a website called Sally’s Baking Addiction), as if we had truly discovered how to simulate the experience that Proust had in mind. Unfortunately, the original draft of Remembrance of Things Past did not feature the cookie at all but instead something closer to toast.[3] The choice to switch to the cookie was his editor’s. Readers don’t seem to mind the switch; rather than reading as out of place, it seems aligned with the sense of interiority produced in the story. It likely matters not, as the aesthetics of the account were always what was at stake. Benjamin’s collection of fragments and descriptions in the Arcades Project reminds me of how I would proceed on the question of the cookie: straight to my partner’s library of cooking texts (my partner is a quantity food specialist), with a special eye toward Dorie Greenspan. Her blend of anthropological fieldwork and technical know-how would help me feel that I was close to the truth of the cookie—if any such truth exists, except for the voracious appetite of the Sesame Street Muppet named for this very monstering. This section of the book is committed to the same quest: to honor the aesthetic, historicize the technology, and be technical all at once. Perhaps this book could be called An Attempt to Produce More Time, or Let’s Build a Time Machine.
3.1. Image Recording
The story of image recording starts with cave paintings. Although interesting, these are far more the province of an anthropology course than one on the evolution of media technology. Images were recorded using the human imagination and some kind of inscription surface. The history of painting is a history of media and one of abstraction, which is also of great interest for readers of this book. In one particularly interesting interlude, impressionist painters’ works were initially seen as unmarketable in the European market given their lack of realism and emotional context; the movement itself was made possible by tubed paint, which let painters work outdoors.[4]
Film is an emulsion on a thin, transparent, flexible strip. Light is collected by the film, whose emulsion includes a number of filtering layers stacked into an assemblage. Film is the physical record of where light was present in a scene, which speaks to the need, in staging, to deliver adequate light to the subjects to be photographed. Film that could be exposed rapidly and did not require extensive lighting (and thus the likely consent of the photographic subject) was the driving force in the genesis of privacy law. Of course, there is a great deal of content that is not well suited for heavy-handed lighting. Consider the lighting scheme of the traditional multicamera situation comedy: everything is utterly flooded with light so that multiple cameras at different angles can capture a full view of the scene without the need for multiple takes. Film is profoundly linear, a chemical medium that is the expression of an industrial technological society. The chemical qualities of the strip improved over time, becoming less volatile. One mile of film from the set is less than one hour of footage; with shooting ratios like 10:1, the sheer process of managing the recording medium would be a considerable challenge in itself.
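To get a feel for the logistics, here is a back-of-the-envelope sketch in Python, assuming standard 4-perf 35mm stock (16 frames per foot) running at 24 frames per second:

```python
# Back-of-the-envelope film-handling arithmetic, assuming 4-perf 35mm stock.

FRAMES_PER_FOOT = 16      # 4-perf 35mm
FPS = 24                  # sound speed
FEET_PER_MILE = 5280

feet_per_minute = FPS / FRAMES_PER_FOOT * 60          # 90 feet of film per minute
minutes_per_mile = FEET_PER_MILE / feet_per_minute    # ~58.7 minutes: under an hour

# With a 10:1 shooting ratio, a 90-minute feature consumes:
shot_minutes = 90 * 10
miles_of_film = shot_minutes * feet_per_minute / FEET_PER_MILE  # ~15.3 miles

print(f"{minutes_per_mile:.1f} minutes per mile; {miles_of_film:.1f} miles shot")
```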
Early television broadcasts were recorded using film systems, as videotape did not exist. Even the underlying tape technology was not fully developed until after World War II, when German magnetic tape technology (recorded on a cellulose acetate base) was introduced elsewhere. Early efforts focused on filmic broadcast recordings using a technology called the hot kinescope.
Early television cameras used tubes, which were replaced by the CCD (charge-coupled device) camera, particularly the TK-76 Hawkeye, which offered a superior mode of television recording.[5] A number of CCD cameras produced for electronic news gathering used a prism to split the inbound light among three distinct sensor chips. Early complementary metal-oxide-semiconductor (CMOS) sensors had substantially lower quality than CCDs and suffered from rolling-shutter issues; today, both are high quality. CCD chips have high quantum efficiency (the signal closely tracks the photons received); the main camera on the Hubble telescope (as of this writing) is a large-scale CCD.[6] Single-chip cameras, CMOS included, use a Bayer filter, in which half the photosites are green and a quarter each are red and blue, to produce a full-color image.
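The Bayer arrangement is easy to see in code. Here is a minimal sketch, assuming a simple GRBG layout; real sensors add optical filtering and far more sophisticated demosaicing:

```python
import numpy as np

# A minimal sketch of the Bayer arrangement: in each 2x2 block of photosites,
# two are filtered green, one red, one blue. Each photosite records only one
# color; the full-color image is later interpolated (demosaiced).

def bayer_mosaic(rgb):
    """Sample an H x W x 3 image the way a single-chip Bayer sensor would."""
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w))
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 1]  # green
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 0]  # red
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 2]  # blue
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 1]  # green
    return mosaic

# Half of all photosites are green, matching the eye's luminance sensitivity.
test = np.random.rand(4, 4, 3)
print(bayer_mosaic(test))
```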

Mirrorless technologies that bring the sensor closer to the glass and eliminate moving parts are increasingly popular.[7] Likely changes in the future of the camera include longer battery life and more sensitive chips that work better in low light. Less likely to change are the lenses that organize the light focused on the sensor.
Lorna Roth described the problem of film color standardization: getting film developers to produce emulsions that could adequately render all people, across the wide variety of human skin tones, was difficult.[8] Even the basic idea of color standardization depended on checking against Shirley cards, which took light skin as the reference point and so privileged certain ways of looking at film.
Key Takeaways
- Image recording has evolved from film capture to digital capture.
- The standardization of image processing is political.
- Sensor and lens technology is relatively stable.
3.2. Sound Recording
The basic technology of sound recording is the microphone, which is fundamentally similar to the speaker. Before the digital transition, a signal chain connected the two directly: the recorded electrical signal passed through the wire to the speaker, which reproduced the sound with no reprocessing. If you played with a crystal radio set as a youth, you know that analog radio feels like magic: the broadcast itself carries enough energy both to convey the signal and to power the earphone.
The traditional types of microphones include those that produce an electrical signal, including:[9]
- Condenser: A vibrating diaphragm forms one plate of a capacitor; its motion changes the capacitance, producing a signal.
- Moving coil (dynamic): A coil attached to a diaphragm vibrates within the field of a permanent magnet, producing a signal in the coil.
- Ribbon: A thin conductive ribbon suspended in the magnetic field takes the place of the coil.
- Crystal: Certain (piezoelectric) crystals produce a voltage in response to vibration.
Optical microphones measure vibration using a fiber-optic lead or a laser; changes in the reflected light drive the detection of the signal. The key to reproducing a sound is the production of a document of the frequencies present at the time of recording. It makes sense, then, that someone could simulate those waveforms directly, allowing entirely synthetic sounds to be created with far greater control and precision than anything that came before.
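Here is a minimal sketch of that idea in Python: a two-second tone is specified directly as frequencies and then quantized to 16-bit samples, the same step an analog-to-digital converter performs on a microphone signal. The file name and the mix of overtones are arbitrary choices for illustration:

```python
import numpy as np
import wave

# Synthesize a waveform from scratch: no microphone involved.

RATE = 44100                         # samples per second
t = np.arange(RATE * 2) / RATE       # two seconds of time points

# An A (440 Hz) with a softer octave overtone.
signal = 0.6 * np.sin(2 * np.pi * 440 * t) + 0.3 * np.sin(2 * np.pi * 880 * t)
samples = (signal * 32767).astype(np.int16)   # quantize to 16-bit integers

with wave.open("tone.wav", "wb") as f:
    f.setnchannels(1)                # mono
    f.setsampwidth(2)                # 2 bytes = 16 bits per sample
    f.setframerate(RATE)
    f.writeframes(samples.tobytes())
```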
Key Takeaways
- Microphones convert physical motion into an electrical signal.
- Sound is either recorded as an analog signal or quantized by an analog-to-digital converter and stored as a digital audio file; once encoded digitally, it can be edited with ease.
3.3. Storage and Codecs
All recorded information is stored in some format. On a film strip, the image was retained as a discrete cell that moved as the strip crossed the light source. For the images processed by digital sensors, the raw volume of data can be striking. Television cameras were long far more capable than storage systems; the TK-76 (discussed above) was used with a live truck to transmit its stream live. Early digital field production cameras used costly cards (like Panasonic’s P2) that allowed high-resolution footage to be captured on an array of flash chips. Those early cards could store gigabytes of data, which meant about 10 minutes of raw footage. At this level, streaming video would be bandwidth prohibitive. For transmission over the Internet, the information must be encoded differently. The H.264 codec allowed much lower bit rates, the VP9 codec came later, and more recently the AV1 codec has allowed Netflix to further reduce the total volume of data flowing through the Internet.[10]
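To see why the codec matters, here is a back-of-the-envelope sketch assuming an 8 GB card and three illustrative data rates; the exact figures vary by format, but the orders of magnitude are the point:

```python
# Minutes of footage on an 8 GB card at three illustrative data rates:
# fully uncompressed 1080p, a ~100 Mbit/s production codec (e.g., DVCPRO HD),
# and a ~5 Mbit/s H.264 streaming encode.

CARD_BITS = 8 * 8e9  # 8 gigabytes expressed in bits

uncompressed = 1920 * 1080 * 24 * 30      # pixels x bits/pixel x frames/sec
production   = 100e6                      # ~100 Mbit/s intraframe codec
streaming    = 5e6                        # ~5 Mbit/s H.264

for name, rate in [("uncompressed", uncompressed),
                   ("production", production),
                   ("streaming H.264", streaming)]:
    print(f"{name:>16}: {CARD_BITS / rate / 60:6.1f} minutes on the card")
# uncompressed: ~0.7 min; production: ~10.7 min; streaming: ~213 min
```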
The drawback to encoding the video differently is the loss of data. Courts may not accept transcoded video because it is too easily faked. When data are transcoded, much of what is lost is internal structure; color data discarded through chroma subsampling may not be visible to the viewer, but it is gone all the same. If one then tries to do substantial editing after the file has been transcoded from, for example, Apple ProRes 422 to H.264, the corrections will not look as good as if they were made on the original file.
Codecs are the medium of the future: new media experiences exist in the negotiation between ultra-high-capacity sensors and display systems.
In some ways, this discussion of codecs and storage feels obsolete for students today—abundant storage and ubiquitous fast connectivity have replaced the scarcity of the recent past. The truth of that abundance is an industrial process, complete with power plants, render farms, and increasingly sophisticated production automation systems. There is also the historical problem of reproduction and retention in these purely digital days: there is no referent material to return to, no physical film strip or paper book. Friedrich Kittler did not even include the storage of optical media in his formulation of the technology, focusing instead on the production, transmission, and reproduction of images.
Recording other experiences is tricky. Recording the aroma of a place is difficult because so many overlapping chemicals waft on the breeze, and the shape of the room, the ventilation, and the smell of the other people present matter quite a bit as well. Consider the device one might use to record a smell, described in the introduction to this section: the Madeline.[11] The smells such a system might capture are only those that can be caught in the resin of the container. This is an example of dimensionality reduction, which is inherent in all media storage.
Key Takeaways
- Compression schemes are essential for the efficient movement of media.
- Compression is a form of abstraction.
- Schemes for compression show the ways that signal processing is essential for understanding communication.
3.4. Editing Systems
The editing of information is one of the essential properties of the production of new media. Editing before the advent of digital technology was often destructive; a film strip would be cut and taped back together. With the advent of the digital nonlinear editor, the capacity for the production of new media dramatically increased. There are a few types of digital editors that we need to consider.
3.4.1. Image Editing (Orthogonal Layer Interfaces)
If you are editing a single image, you have likely used an orthogonal editing interface, which allows you to create a series of layers that may modify an image or mask parts of that image from modification. The most familiar editor in this space is Adobe Photoshop. The layers metaphor is present in many different products, providing a high level of control and simplicity for image development. The most recent developments in this field are tools that produce new content by sampling the underlying image, such as content-aware fill. Although this interface metaphor is unlikely to change, the new element here is the likelihood that these generative elements will appear in other systems.
This model presumes the painter’s algorithm, where the view of an image is composited along a z axis, the stack viewed from the top down. The distinction between raster- and vector-based editing systems is falling away.
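Here is a minimal sketch of that compositing model, assuming simple “over” blending of RGB layers with alpha masks; production systems add blend modes, color management, and much else:

```python
import numpy as np

# Painter's algorithm: layers are composited from the bottom up along the
# z axis, each layer's alpha mask deciding how much of what lies beneath
# shows through.

def composite(layers):
    """layers: list of (rgb, alpha) from bottom to top; rgb is H x W x 3,
    alpha is H x W in [0, 1]. Returns the flattened image."""
    out = np.zeros_like(layers[0][0])
    for rgb, alpha in layers:                    # bottom layer first
        a = alpha[..., None]                     # broadcast over channels
        out = rgb * a + out * (1 - a)            # new paint covers old
    return out

h, w = 2, 2
background = (np.ones((h, w, 3)) * 0.2, np.ones((h, w)))   # opaque gray
red_wash = (np.zeros((h, w, 3)), np.full((h, w), 0.5))     # 50% wash layer
red_wash[0][..., 0] = 1.0                                  # make it red
print(composite([background, red_wash]))
```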

These systems have for many years integrated elements that could be seen as AI—the magnetic lasso tool, for example. For many of us with less-than-perfect dexterity, the idea of automatically selecting the background for removal is a game changer. Historically, these are desktop publishing systems: they replaced highly physical processes with digital ones. Beyond the increase in accessibility for the technical arts, there are other benefits, such as decreasing the use of toxic chemicals. Symbolically, this extends Benjamin’s genealogy of lithography. In this case, however, it is not merely the replication of the image but the idea that the image then begins to flip back on the indexicality of reality.[12]
Key Takeaways
- Skills developed for one layer-based tool are easily ported to other software.
- Layer-based editors may be challenged by entirely generative, prompt-based systems; then again, some layer-based editors already include specialized generative tools and were among the most aggressive early adopters.
3.4.2. Digital Nonlinear Editing
The fourth dimension of video and audio is time. Music, after all, is the organization of tones in time. When we organize video or sound clips, the combinations can produce rich results that vastly exceed the sum of the parts. The timeline provides a well-structured way of apprehending the project itself and controlling the dynamic state of the product as the viewer encounters it. Before 2002, this technology was unproven. It did not take long for digital nonlinear editing (DNLE) to prove that Hollywood films could also use these techniques to produce powerful libraries of clips and rapid plastic changes.
As of 2017, only 31 major motion pictures were shot on traditional film.[13] Now only a handful of films use the actual strip, Oppenheimer being a recent example. Conventional workflows use a DNLE system to produce a list of edits to be made to the film proper. Typically, this takes the form of an edit decision list (EDL), a form of metadata.[14] Sound manipulation relies on the same organizational theory: music is produced by organizing tones in time.
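To make the metadata point concrete, here is a sketch of a CMX 3600-style EDL event, a common interchange format; the reel name and timecodes below are invented for illustration:

```python
# A sketch of what an edit decision list (EDL) event looks like in the common
# CMX 3600 style. The edit exists as metadata, separate from the media itself.

def edl_event(num, reel, src_in, src_out, rec_in, rec_out):
    """Format one video cut ('V  C') as a CMX 3600-style event line."""
    return (f"{num:03d}  {reel:<8} V     C        "
            f"{src_in} {src_out} {rec_in} {rec_out}")

print("TITLE: SAMPLE_SEQUENCE")
print("FCM: NON-DROP FRAME")
print(edl_event(1, "TAPE01", "01:00:10:00", "01:00:14:00",
                "00:00:00:00", "00:00:04:00"))
# 001  TAPE01   V     C        01:00:10:00 01:00:14:00 00:00:00:00 00:00:04:00
```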

Once situated in a stream of time, cinema becomes possible. The perception of time is the key to understanding cinematic experience. The cinematic itself is unstable; the presumption to this point has been that the frame order of the time axis would drive the experience. Interactive systems challenge this presumption in profound ways. Time-axis plasticity was of course physically possible with adequate duplication and ample scissors, tape, and steady hands—as discussed in the introduction of this book, many of the cinematic forms we enjoy today were possible before but so impractical as to be effectively impossible. Hartmut Winkler argues in Geometry of Time that time-axis plasticity substitutes storage for transmission, which seems increasingly apparent over time.[15] Today, AI image-creation systems no longer even ask that we store information for manipulation on the strip. Automated advertising production systems may eliminate the idea that human intent enters the process in the first place. The loss of the ritual value of communication would suggest that increasingly advanced video editors will not only displace the strip as a form for understanding the time axis but also eliminate the ritual of editing entirely.
Key Takeaways
- Editing systems map physical processes like cutting and taping film strips or layering paper in a collage.
- Time-axis plasticity is an essential affordance of digital editing systems, which is not to say that these processes could not be accomplished with optical printers and considerable labor.
- For a student, the most important thing is to learn editorial thinking; the key skills on any software platform are portable.
3.4.3. Integrated Development Environment
The interactive element of media depends on the design of systems that can respond to user intervention in the state of that system. Keyboards allow users to provide sophisticated text strings to the system. Pointing devices allow users to experience the graphical user interface. These systems are manipulated by users and produce a variety of different states, depending on the input. Such editing systems range from basic text editors (where code can be written) to sophisticated development platforms like Xcode.
What these systems provide is the capacity for a variety of abstract representations to produce results with limited labor. The complex structures produced with these systems could include autopoietic systems, which organize and produce content for users as an ongoing flow.

Each of these approaches to editing will continue, and it is likely that interfaces for each will continue to have an appropriate use. It is in this sense that we find the impact of cultural analytics. It is not controversial to argue that students in a media program should learn basic production skills, like using a DNLE platform and basic design software (Illustrator, Figma, etc.). The claim here is not that all students must minor in computer science, but that the basic basket of skills we expect of a liberally educated person now includes one level of computational abstraction lower than Microsoft Office. It is not unreasonable to ask students to have some knowledge of how application programming interface (API) calls function, as sketched below, or to accomplish basic data science tasks, even if just in Excel for now; in just a few years Python or R will be standard.
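As a concrete instance of that literacy, here is a minimal sketch of an API call using only the Python standard library; the endpoint URL and the shape of the response are hypothetical:

```python
import json
import urllib.request

# The portable skill: request a URL, receive structured data, pull out a field.
# The endpoint and the "posts"/"title" fields below are invented for
# illustration; any real API documents its own URL and response shape.

url = "https://api.example.com/v1/posts?limit=10"   # hypothetical endpoint
with urllib.request.urlopen(url) as response:
    data = json.loads(response.read())              # JSON -> Python objects

for post in data["posts"]:                          # assumed response shape
    print(post["title"])
```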
The challenge to this line of thinking would hold that poor programming education and a legacy of exclusionary behavior in programming communities preclude the arrival of a positive era of scripting. There is a great deal of truth to this argument. There is no reason, aside from protecting wages in the sector, that programming tasks should be taught as anything other than what they are—procedures for mixing ingredients together. Coding, like rhetoric, is cookery. Some of the greatest appeal of prompt-controlled AI systems is that they might offer an easier way to access tools, like topic models, that would otherwise be inaccessible.
Key Takeaways
- Computer code is an abstract representation of information; like other abstractions, these codes/scripts are media.
- Learning any coding scheme is a good idea, and the skills are highly portable; packages often allow R and Python users to move interchangeably across platforms.
- Computational methods are the future, and integrating computational approaches into media studies generally is essential for future relevance (both for students and programs).
3.5. Input/Output Devices
In this section, we will consider the devices that allow us to access the information encoded. Some of these devices are output only, like movie screens, and some continuously mix input and output, like cell phone screens.
3.5.1. Screens
Let us begin with an inventory of screens of interest to the future media professional. The most obvious is the silver screen, the reflective surface of the cinema. These screens were painted silver (with aluminum paint) to increase the reflection of light. Ideal presentations on these screens would be the brightest and best. Screens today tend to be white (more reflective materials may enhance polarizing stereoscopy for the RealD 3D system; more on this later); the key upgrades today come in more advanced systems for the organization of light, as most films are now delivered digitally. The light source for cinema projection is an important dimension as well. Mercury arc lamps have given way to light-emitting diodes, xenon lamps, and now lasers.
The fundamental change that lasers present is at the level of the model of image production. With all historical formats, a white light source was shone through a filmstrip, whose multiple layers masked light.[16] Physically the difference is clear: the physical filmstrip was replaced by a digital representation, produced by splitting the lamp’s light with a prism and modulating it with three chips responsible for red, green, and blue; if this reminds you of the description of a three-chip CCD camera system, you are on point.[17] As of this edition of the book, a suitably advanced digital cinema base technology (Texas Instruments’ DMD) requires three micromirror chips, each with a base mirror array of 1920 × 1080, with 4,096 possible brightness levels per mirror at 0.0054 mm pitch.[18] What does that mean? Each of the three chips in the projection assembly carries roughly two million tiny, highly adjustable mirrors, one per pixel, that reflect (or don’t reflect) light in one of the three primary colors from a single powerful white light source. This light source is a fundamentally limiting technology, as it dims over time. Laser projectors need no central lamp, and the laser light source offers 50 times the lifespan of a xenon digital system, with dramatically lower operating temperatures and energy use.[19] Lasers offer superior images at very large sizes, better contrast, and a superior color gamut.[20] Much like the earlier relationship between the CCD camera and the digital cinema projector, the laser projector echoes analog television. Analog television relied on a scanning beam designed for backward compatibility with monochrome sets. These systems scanned at roughly the rate of the US power grid, just under 60 fields per second, with the beam tracing every other line and some lines masked, the two interlaced fields integrating into just under 30 frames per second (29.97) at about 480 visible lines of resolution. The most advanced laser systems use beam multiplex technology, and then multiple packs of those laser systems.[21]
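A quick check of the arithmetic in that paragraph, in Python:

```python
# The mirror count on a three-chip DMD assembly, plus the NTSC field/frame math.

mirrors_per_chip = 1920 * 1080          # one micromirror per pixel
total_mirrors = mirrors_per_chip * 3    # one chip each for R, G, B
print(f"{total_mirrors:,} micromirrors")          # 6,220,800

field_rate = 60 / 1.001                 # NTSC color: just under 60 fields/s
frame_rate = field_rate / 2             # two interlaced fields per frame
print(f"{field_rate:.2f} fields/s -> {frame_rate:.2f} frames/s")  # 29.97
```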
If we situate this large screen in an ecology of devices, the differentiation of the cinema product would require special presentation. But these moving images (film strips) were not the first moving images. Before cinema, magic lantern shows were common, including a type of particular interest known as the phantasmagoria.
Haunting audiences relied on a simple image projection system whose image was shown on, behind, and through a scrim. A scrim is a special piece of fabric used in theatrical productions that can appear translucent or solid depending on the direction from which light is applied. Light thus is projected both through and onto the medium. This is in many ways similar to early efforts at photography, with images made visible through silver collecting on a pane of glass. These are ghostly media, as the images have a sense of physical depth.
More solid and intimate are the everyday screens. Historical cathode ray tube (CRT) screens were limited in size, the largest ever produced being a 45-inch, 440-pound colossus.[22] For comparison, the average television display sold today is around 50 inches.[23] Historical use cases for television would instead involve a monitor of around 20 inches, which would be reasonably priced and physically possible to move. The television experience today has vastly higher resolution and is simply much larger. Historical research in home decorating focused on the ways that the 20-inch machine could be integrated into everyday life, most famously by Lynn Spigel in Make Room for TV.[24] Giant screens would change the underlying advice and eventually replace a key axiom in interior design, that the television in a middle-class home should be hidden.[25] More on this later in the book. Today’s television systems typically rely on organic light-emitting diode (OLED) or quantum-dot light-emitting diode (QLED) technologies. OLED relies on an organic emissive layer between electrodes for each color. Future technologies in this space include stacked or transparent pixels, which could further increase resolution. QLED relies on a bright backlight filtered through quantum dots, nanocrystals whose size determines the color of light they re-emit, with the image modulated by local electric field conditions. QLED is thus very bright, offering pure colors, but it lacks the deep black created by an OLED monitor simply turning a pixel off (the QLED backlight is always on). The most important use cases for QLED technology are where there are extreme brightness and longevity needs: gaming, surgery, and security.[26] In terms of economics and consumer preferences (there is evidence that high dynamic range is increasingly a factor for viewers rather than total brightness), the challenge for screen producers comes in making enough of the screens that viewers want to buy.[27]

The next major evolution is already under way as of this writing: substantial improvements to blue emitters and microlens integration to better focus light. The other two important developments likely for the home market are microLED, which could offer dramatic increases in resolution (but is quite expensive), and stacked OLEDs, which would allow more brightness and sophistication in image production. Black is the fundamental challenge, not bright white light, as lasers give us abundant bright light. The dynamic range of possible screen images, and how grays and blacks appear, likely drives the evaluation of images.
Key Takeaways
- Screen technology has changed dramatically; pure bright light sources are key innovations.
- Screen technologies include both things that emit light and complex reflectors.
- Future innovations will come in the dynamic range of images and color space.
A. Very Large Screens
Expo ’74 in Spokane featured the world’s largest movie screen, measuring 65 × 90 feet.[28] This early giant screen was developed by the company we know as IMAX and showed an environmental film during the expo. Projecting onto such a massive screen was a challenge: a single standard film cell simply can’t be projected elegantly onto so large a surface. There are two solutions: multiple projectors or a single projector with substantially larger and better film. Multiple projectors are a challenge, as the synchronization must be nearly perfect. Highly synchronized digital systems (for certain kinds of stereoscopy) exist today, but without digital synchronization, keeping the projectors in sync is a major challenge, especially when multiple platters of film must be rotated through the projectors. The IMAX format was the solution: better film traveling laterally through a single projector. A handful of these cameras remain in use today for developing films for these very large screens; the film Oppenheimer, for example, has its distinctive look because of the IMAX camera. For your reference, most IMAX screens in contemporary cinemas are not nearly as large as what was used in Spokane, and the projectors in use are not high-capacity laser systems but standard digital cinema projectors.

The Sphere in Las Vegas has two different screens. The internal stage screen consists of acoustically transparent light-emitting diode (LED) panels suspended in a wraparound grid with a resolution of 16,000 × 16,000 pixels. This massive, curving screen results in an enjoyable display that projects toward the audience. The exterior is a convex display with approximately 16 pixels per foot, which at a distance appears to be, as David Pierce described it, “a pointillist painting.”[29] Of course, the Sphere is intended to be an event venue; it is not first and foremost a movie theater. It would make sense that the total size of the panel would be similar to the maximum performance of a contemporary laser-based projector, although the actual perception of the pixel space could vary greatly.
B. Very Small Screens
On the far side from the giant screen are the smallest, including the tiny half-inch CRTs for specialized uses and production screens of around 3 inches.[30] These displays were limited in portability given that they still had depth; the small black-and-white CRT of my youth, with its hard-stopping analog dials, was not practical to move. Small flat-panel technologies, first liquid crystal and later OLED, allowed the smartphone revolution, and later wearables, particularly watches.

These small OLED screens have changed how the public enjoys media. Of particular interest with these very small screens is not so much the image quality, or how people might enjoy a dance video on their favorite short-form service, but the bundle of sensors and information systems on the phone, as well as its integration into everyday life. Wearable screens on the wrist offer access both to a watch face and to metadata about the user’s various phone apps; the watch itself is not a primary vector for content consumption. Virtual reality headsets bring the projection surface close to the eye; here, the screen is about content. Heads-up displays allow a surface in front of someone (a special windscreen or visor) to catch an image projected in a particular color of light. The utility of a heads-up display comes from the transparency of the screen itself, which allows the floating image to be integrated into the world without requiring the eye to refocus. Of similar utility are integrated image elements, as in Google Glass. The everyday use of heads-up displays is a distinct challenge, as is the design of tasks and symbol systems that actually make use of the display capacity. Consider some of the newer wearable products, Snapchat Spectacles and Ray-Ban Meta frames. Snapchat Spectacles include a camera and substantial augmented reality (AR) integration for particularly placed experiences; the visual element is not continuous but framed around an existing product (AR filters). In contrast, Ray-Ban Meta frames have no display capacity at all and are instead a way of anchoring a camera and providing a ubiquitous voice interface. Many of the most important upgrades to these technologies add content-ingest automation—they are ways of making material for the phone interface, not providing a unique visual display.[31]
Finally, there is the camera obscura, a darkened chamber in which a small aperture or lens, often with a carefully positioned mirror, projects the scene outside onto a surface, allowing Renaissance artists to trace images onto their canvases. Instead of seeing the old masters as superhumanly skilled, it makes sense that they had sensory extension technologies. Augmented reality is as old as the mirror.
C. Stereoptical
The mid-2000s saw a surge of interest in stereoptical films. These were the next big thing, and films were being reprocessed for display on stereoptical systems. The hype was short lived, as many viewers did not enjoy stereoptical films, and the quality of such experiences was poor. Stylistically, stereo films seem to insist upon simplistic jump scares and crude objects reaching out of the screen. On a technical level, the shortcomings of stereo film were discussed in section 2.7.2 on vision—most visual cues are in fact mono-optical. Depth can be interesting, but it needs to be aligned with the other codes of the filmic experience. Thus a well-produced and well-designed Avatar could be a spectacular stereo epic, while a buddy comedy run through an algorithm would not be.
There are two primary stereoptical cues: stereopsis and convergence. Stereopsis refers to each eye receiving a slightly different signal. Convergence is the slight crossing of the eyes to focus on an object at short or intermediate range. The addition of these factors in a scene can produce the perception of depth.

There are three major technologies for this display:
- Anaglyph: A colored filter is positioned in front of each eye, corresponding to a matching filter on one of two projectors. Typically, this is done with red/blue or magenta/green pairs that tend to correspond with the color response of the retina (discussed in section 2.7).
- Polarizing: The same basic design as anaglyph but with an angled, polarizing filter and lens combination for the glasses and projectors.
- Lenticular: Lens layers attached to a flat image, whereby the lens produces a level of stereopsis.
Polarizing systems can work either through the dual projection of polarized light or, as in RealD 3D, through a single projector that alternates the polarization of the images at very high speed.
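Anaglyph compositing is simple enough to sketch directly; this minimal version assumes two same-sized RGB views:

```python
import numpy as np

# Anaglyph compositing: the red channel of the combined frame comes from the
# left eye's view, the green and blue channels from the right, so colored
# lenses route a different image to each eye (stereopsis).

def anaglyph(left_rgb, right_rgb):
    """Combine two H x W x 3 views into one red/cyan anaglyph frame."""
    out = right_rgb.copy()
    out[..., 0] = left_rgb[..., 0]   # red channel from the left-eye image
    return out

left = np.random.rand(2, 2, 3)
right = np.random.rand(2, 2, 3)
print(anaglyph(left, right))
```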
Demonstration 1: EYE DOMINANCE
With both eyes open, point at a spot somewhere in the room, like a clock. Now close each eye in turn. You will notice that one of your eyes likely steered your hand, despite both eyes being open while you were pointing. This is your dominant eye—you now have proof that the integration of your visual field is not perfect.
Demonstration 2: POLARIZING FILTER
Find a pair of polarizing sunglasses (anything from Warby Parker will work). Then find a cell phone—an older iPhone 5S will work beautifully, but newer phones work as well. Place the sunglass lens between you and the phone and rotate the glasses. You will see the phone screen darken or even disappear. When the filter and the screen’s polarization are aligned, the light comes through; when they are crossed, the light is blocked.
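The physics behind this demonstration is Malus’s law: a polarizing filter passes cos²θ of already-polarized light, where θ is the angle between the light’s polarization and the filter’s axis. A quick check of the curve:

```python
import math

# Malus's law: an LCD screen emits polarized light, so rotating a polarizing
# lens in front of it sweeps the transmitted fraction from full to zero.

for theta_deg in [0, 30, 45, 60, 90]:
    fraction = math.cos(math.radians(theta_deg)) ** 2
    print(f"{theta_deg:>2} degrees: {fraction:.0%} of the light passes")
# 0: 100%, 30: 75%, 45: 50%, 60: 25%, 90: 0% (the screen "disappears")
```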
Instead of recording the image itself, a hologram records the interference pattern produced when two beams of highly focused light intersect.[32] The film becomes an interference recorder. When illuminated with another laser, the film reproduces the wavefront captured from the original scanning laser. The advantage of this method is that many possible points and angles are recorded on the film.
Holography is not a new technology. The necessary elements for holograms are decades old, but unlike many forms of contemporary stereoptical photography, holograms are technically difficult to produce. Contemporary flagship phones, by contrast, can produce impressive depth effects by cueing interesting images to the movement of the phone relative to a static image source.
Other 3D display systems, like AR representations on phone screens and volumetric displays, have displaced holograms. Like the seemingly small market for virtual reality (VR) headsets, the failure of holography suggests that the technical evolution of imaging alone does not drive media forward.
Key Takeaways
- Screens intersect with social logics and cultural motives, from the very large to the very small, and they mean different things to different people.
- Screens are essential spaces in performance, from the scrim (behind the performers) to the special sense of depth in stereoptical film.
- Screens may function as perceptual expansion technologies, as in the case of the camera obscura or the visualization of the electrical signals in a smartwatch.
3.5.2. Rendering
Bob Ross’s painting programs are enjoyable. When you watch him paint, you notice that he has a precise method for rendering objects: he starts with those farthest away and layers over them. His assumptions about light and color follow a clear painter’s algorithm, where that which is closest to the foreground is rendered last and in the most detail; distant objects are rendered with a broader brush and less detail. This is much the same way that most computer graphics are rendered.
In a method called ray tracing, objects are treated as existing in the world, with light (shadows and reflections) determined not deductively but inductively: rays are traced through the scene to discover what they strike. Cars was the first major film to use this method for rendering, and it offers superior results for complex scenes. The methods of rendering work through the objects once again.[33] With increasing power in graphics, the capacity for real-time ray tracing in games is at hand. Whether this will increase the degree to which games are meaningful or impactful is another question.
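Here is a minimal sketch of the idea: one sphere, one light, and a ray cast from the eye through each character-sized “pixel,” with diffuse shading only. Real renderers recurse from the hit point for reflections and shadows, but the logic is the same:

```python
import numpy as np

# A toy ray tracer: instead of painting objects back to front, cast a ray
# from the eye through each pixel and ask what it hits.

W, H = 40, 20
center, radius = np.array([0.0, 0.0, 3.0]), 1.0
light = np.array([2.0, 2.0, 0.0])

for j in range(H):
    row = ""
    for i in range(W):
        # Ray from the origin through the pixel on a virtual image plane.
        d = np.array([(i - W / 2) / H, (H / 2 - j) / H, 1.0])
        d /= np.linalg.norm(d)
        # Solve |t*d - center|^2 = radius^2 for the nearest hit t.
        b = d @ center
        disc = b * b - (center @ center - radius * radius)
        if disc < 0:
            row += " "                     # ray misses the sphere
        else:
            t = b - np.sqrt(disc)
            n = (t * d - center) / radius  # surface normal at the hit
            l = light - t * d
            l /= np.linalg.norm(l)
            shade = max(n @ l, 0.0)        # Lambertian diffuse term
            row += ".:-=+*#%@"[int(shade * 8.99)]
    print(row)
```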
Photorealism is the end goal of all current rendering technology, which makes sense given that the end perception point is the human eye. Realism, however, is a far more flexible concept. There is in some sense an assumption that the future of graphics technology hinges on the production of increasingly photorealistic images. At the same time, some particularly powerful experiences, such as in the video game That Dragon, Cancer, have demonstrated that a lower level of detail in some parts of the envelope can be offset by enhancements in others. This game, which you may be assigned to play for class, removes critical macro details like faces while emphasizing small details like the textures of surfaces and real sounds.
Key Takeaways
- Rendering processes once were computationally expensive; one would start a render and come back the next day.
- The same technologies that drove faster rendering are those that make AI possible.
- The evolution of rendering algorithms has perceptible effects in user experience (seeing reflections in games).
3.5.3. Sonic
Speakers use magnets to convert an electrical signal into physical motion. Sound is a vibration. For the most part, these sounds are produced by vibrating a cone; other technologies use surface exciters to vibrate larger panels.
The key is that the underlying physical properties of the system do not change. Smaller objects cannot resonate at the low frequencies of a great bass performance. An earbud will not rock your body. Yes, woofers do real work.
Innovation in this space will likely come with the use of larger fields of speakers, more precise tuning, and careful, meaningful sound design. Sound is a case where the stability of the underlying technology (what a speaker is) allows further refinement of the experience of that thing (sometimes paradigm-shifting innovation is less robust than simple improvement). Systems in the future may detect the locations of paired devices and the space around those devices to optimize how many speakers operate and in which frequency bands.
The most profound dimension of sonic development is not in the entertainment space but in the creation of adaptive devices. Mara Mills’s history of the miniaturization of the hearing aid is powerful: it was both the original use case for the integrated circuit and a form of technical miniaturization that transformed everyday life for many people.[34] It is the social role and image of technology, rather than the elements of the system, that drive reality. At the same time, the demand for invisibility of the device and continued stigma are powerful factors:
Today, the imperative of invisibility largely persists as a design standard for hearing aids, with the demand for miniaturization often limiting device functionality. Recent examples of fashionable earpieces compete with new models of “completely-in-canal” invisible aids. As a long view of hearing aids makes plain, hearing loss has been stigmatized despite the increasing commonness of the diagnosis, and despite the fact that moderate hearing loss can be remedied by technical means. Just as inexplicable is the obduracy of the stigma that adheres to the technology itself—when hearing aids have otherwise represented the leading edge of personal electronics, and when they exist as one configuration of the same components found in so many other appliances.[35]
This is a powerful example of the benefits and expansion of communication enabled by the device, yet the coding of culture continues to dramatically shape how sensation is produced. Speaker technology is relatively stagnant. When we expand our considerations of what hearing and listening technologies could be, the realm of possibilities and representations dramatically expands.
Key Takeaways
- Sound technologies depend on the ability both to encode physical motion and to reproduce it with magnets.
- Miniaturization of acoustic technologies enables both entertainment and access through hearing assistance technologies.
- On a macro-level, speaker technology is stable.
3.5.4. Haptics
Sensations are produced primarily by small electric motors and electric charges. When these are mapped to other stimuli, a full-faceted haptic experience may be produced. At the same time, the dimensions of perception tied to the position of the body and the perception of relative space may not be fully simulated by the system. More important, the actual kinematics of a human body are not effectively reproduced by an electric motor bar, point, or puck. Consider your perception of a brush against the skin of your forearm: there is the friction of skin on skin, but also warmth, pressure, and variation across the stroke. There are at least five dimensions to plot. It would make sense that scandalous applications of the technology have been dominant so far: these are the low-hanging fruit of sensation production, with a simplistic criterion for success.
David Parisi, a major theorist of haptic communication, raised an important question: Are touch screens haptic? His answer: to a degree.[36] What is important to understand about touchscreen systems is that they are not fully haptic—they are not the entire enfolding of sensation but a limited slice of that envelope. Force reactions on a Nintendo DS or a cell phone screen are intended for but one patch of skin and one set of interactions. The rhetoric of the touch screen is instructive here: the image always features a finger touching the screen; it does not move through the screen to form a contact point with the world beyond.[37]
Among the most common presentations of haptic interfaces are gloves. An early example was the Power Glove for the Nintendo Entertainment System, which struggled to gain popularity because it was difficult to map the controls into the mental space of the player (more on this concept later), and it required special programming for use as a controller.[38] Consider for a moment the dimensionality reduction of the hand. When used for signing, the shape of the fingers, the position of the hand, and the facial expression of the signer all come together as gestures that communicate far more information than is encoded in the movement of any individual finger.[39] Even if high dimensionality is not required on the side of the software, there is no particularly good reason why a gross motor task (pointing the hand on the arm) should offer higher performance than moving the wrist and the fingers (mouse or stylus input). Iron Chef America for the Nintendo Wii suffered from extremely repetitive haptic inputs; the slice, stir, and flip motions are only entertaining for so many mini-games.[40]

Vestlike technologies could synchronize vibrations to actions in games—a rumble upon an impact, for example. Among commercially produced options, the HugShirt offers twelve points of haptic engagement in an easy-to-wear shirt interface, ostensibly to allow a number of everyday touching sensations, like a hug.[41] The SoundShirt uses similar technology to reproduce the feeling of being in a loud place like a concert or sports event.[42] More advanced suits could use vibration in conjunction with electrical muscle stimulation to produce complex sensations.[43]
What the HugShirt and related technologies offer is, first and foremost, a good piece of clothing. The fundamental limit of most wearables is the wearing—they are uncomfortable or not washable. Considering their extremely low dimensionality, the future of haptic interfaces is bleak. What is especially striking from an interface studies perspective is the degree to which the ostensibly haptic interface of the touchscreen displaces what would have been richer interaction points with old-fashioned buttons. Particular touch experiences mapped to meaningful controls on a small screen seem far more important to the average user, and increasingly, via the refactoring of the social, those other controls seem to be disappearing. In some ways, this is the revenge of dimensionality. Once again, we have a consistent ontology for optical and sonic media, less so for touch.
Key Takeaways
- Haptics face fundamentally physical challenges, and these should not be easily dismissed.
- Mapping physical controls to actions in software systems is particularly challenging.
- Small muscle-control systems like keyboards have been especially successful (more on this in the coming pages).
3.5.5. Aroma and Flavor
Devices for the production of virtual smells and tastes were discussed in section 2.7. For the most part, the strategy is simply to load a handful of relevant chemicals into a system that can then release them on command. What makes these senses so difficult is that they lack the deep similarity of basic inputs that the first three output systems share: there is no electromagnetic spectrum of spicy. There are technologies for taste that use electrodes in the mouth to produce an electrical signal that tastes like something; an 800-Hz signal in the mouth tastes like lemon.[44]
It is possible that our best technologies in this area are in fact barred from use. Increasingly, consumers are interested in natural foods. The taste of strawberry must come from the shattered cell walls of a morsel, not from a bottle. This is an interesting case where the purely refined sign is not what people really want, as if the indication of cherry were not so much its taste but something else entirely.
Smell and taste will not change, but the ways that we feel about particular aromas and flavors will. The new media of the future in these spaces does not look like the simulation of an entire enfolding, but the production of new experiences and technologies that would be consumed in the world as we know it. This is another important point to remind you that the virtual does not depend on goggles—you already inhabit that virtual world.
Additional material on perfumery and cookery can be found in specialized texts on these matters, despite the love of the author for these deeper sensorial considerations. Perhaps this is the closest you will come to looking outside discourse in this book: in the cookbooks, or across campus (assuming you are at a state university with a full agriculture college), or in your food perception labs, you might find something closer to the technical and historical knowledge needed to reconstruct the truth. Of note, the course for which this book was designed was taught for several years in the lecture hall of the food perception building.
Key Takeaways
- Sensory experiences are difficult to construct.
- This book does not contain recipes or gardening tips.
3.6. User Interface Environments
The study of human-computer interaction (HCI) has become a robust field unto itself. Like social media algorithm advice, where influencers muse about what they think the Instagram algorithm might be, there is a rich world of academic research, consulting firm work product, and practitioner lore.
The usability paradigm developed by Jakob Nielsen contends that the primary focus for understanding interaction should be the user task.[45] The designer of a system has something the user needs to be aware of and able to manipulate. Usability research deploys a number of different social science strategies for the analysis of user tasks, especially those that can work across the life cycle of the project. The drawback to this approach is that it is involved in problem solving, not necessarily problem finding: tasks are provided by the client, and the goal is to make the thing work so that the task can be completed. Usability, especially as developed by the Nielsen Norman Group, is taken as the baseline for undergraduate education and is not developed further here, as it is the basis of the design theory in section 4, on methods.
The challenge of these approaches to affect and HCI is the question of ends. Consider the four major paradigms for the study of affect in HCI laid out in the introduction to a major research volume: emotional design, hedonomics, Kansei engineering, and affective computing.[46] Emotional design is best described in this book as being tied to usability studies: the aesthetics and configuration of a system are matched to its design through a number of approaches to the evaluation of both the users and the system. Hedonomics supposes that a system must be designed to maximize the pleasure of the user, while Kansei engineering is an example of a framework that supposes a strong ontological typology of users. Consider for a moment the continuing popularity of personality types: What if we designed systems to match the results of the Myers-Briggs inventory? Finally, affective computing is a cybernetic model describing a situation where the computer might continuously adapt to, and modulate, the affect of the user. The differences between these approaches are useful for our theory: each must theorize who the user is, how many different types of users there are, whether they can be designed for in advance, and whether a universal design is possible.
Human-computer interaction is not a single concept that one simply learns as a set of best practices, but rather an entire domain that fuses an ongoing trajectory of research and development. Every time Facebook, Snapchat, or any of the other big social networks change their interface, you see this in action. They have particular ends that they must satisfy—keeping you engaged with great content while inducing clicks on advertisements. The balancing act between those goals is not solved directly by an equation. Furthermore, the aesthetic progression produced by these interfaces changes the conditions by which they are designed.
Key Takeaways
- You are not the user.
- User experience research requires a complex sense of the human and the social.
- Meaningful user interface/user experience (UI/UX) work is an ongoing process; it is not a single nomothetic rule set.
3.6.1. Graphical User Interfaces
Among the most powerful abstractions for new media is the graphical user interface (GUI). Instead of asking that users compose lines of code to access their information or perform operations, the GUI produces a continuous visual state in which a pointing device allows users to move blocks of information or select items in a metaphorical world of positions and objects. Users’ intentions are then mapped onto visual properties that become the interface with the complex system.

In 2005, Jeremy Reimer wrote a strong history of the graphical user interface (to that point) for Ars Technica.[47] It is linked here and is suggested reading for everyone. A key detail in this history is the discussion of the Mother of All Demos, the occasion on December 9, 1968, when Douglas Engelbart demonstrated the first computing system with a GUI (a mouse moving abstract blocks on a screen), an implementation of Vannevar Bush’s ideas (as you learned in section 1.2 of this book). You may notice that aside from a few differences, such as his round screen, the basic elements are unchanged (screens, pointers, keyboards).
GUIs hinge on the use of a pointing device. Typically, this has meant a mouse, trackball, touch screen, or touch pad. These devices allow the user to direct the system’s attention to a functional point in virtual space. More recently, the role of the dedicated pointing device has been supplemented by touching the screen directly. This is a complex interaction problem, as expert users prefer the mouse for interaction with the graphical environment, although there are some tasks for which a touchscreen is effective.[48]
3.6.2. Keyboards
Keyboards are relatively static but powerful interface systems. There are a number of ergonomic alternatives to the current layout. If the hands are positioned slightly differently, repetitive stress injuries may be mitigated. Some new designs suppose that a projection of a keyboard, paired with sensors for where the fingers move over that light, will replace the physical interface.
A great deal of change can be expected in this area, within some boundaries. It seems that the underlying trajectory toward less typing will continue. Typing is noisy and error prone; people misspell things all the time. Further, it is reasonable to expect pointing devices to become more sophisticated, but not so sophisticated that they become cognitively taxing. Beyond the cognitive, another set of limits comes into play: the structures of the human wrist. The same goes for keyboards. Is it really cognitively worth it for people who use a keyboard to learn a layout other than the standard QWERTY? For this author, the answer is decidedly no. The future of the keyboard hinges less on creativity than on cognitive and physical limits. Any new keyboard innovation will need to replace the functions of the existing keyboard for many users in a way that provides some very real benefit. A change in pointing devices is more likely than a change in keyboards.
It is highly likely that all three of these frameworks will change. You will also notice that this list does not include the standards proposed by Facebook, Snapchat, or any number of other firms. Approaches to the graphical user interface will be various and important for understanding interaction on the whole.
3.6.3. Phone Screens
For a time, Adobe produced a platform for mobile UI/UX development called XD. The primary purpose of XD was to develop common elements for phone applications and to configure the ways that we might engage in a card sort to create a hierarchical view of information. This is a key consideration: compared with the technical ways of representing reality that we see with various cameras, sensors, and chemistry sets, the central concern in creating a UI is not the degree to which it might capture light or sound but the degree to which an interface might present the structure of information. Usability is the professional practice of making this information available, and the underlying paradigm is an attempt to balance the interests of designers and users. “You are not the user” is the mantra of user-focused design. It requires real, deep user research to get past problematic design codes (more on this research and design codes in section 4).
Foundationally, the “task” of the user orients the project—this is the thing that the user ostensibly wants to do with your interface. My task while writing this was to put on some quality smooth jazz, for which I switched from Microsoft Word to Spotify, and then first looked at a search box where I considered typing in “smooth jazz,” only to realize that I had likely already searched for such lovely music, moving my eyes to the playlists section. Once I focused my weary eyes there, I noticed that the thumbnails of similar lists included artists I would like to listen to. A recent picture of Marion Meadows caught my eye, and this text was written to his classic “Suede.” My task was clearly one of my choosing, and what I describe in the preceding text is called a pattern. When patterns are designed not to fit with the supposed tasks of the user, they can be called “dark patterns.”[49] Students find it helpful, in learning to recognize dark patterns, to adopt a left-handed phone grip.
The primary input method of the phone today is the thumb swipe. For those of you who are a bit older, you might remember the early days of Facebook or MySpace, where the primary interface was the keystroke. Thumb interfaces call for a much more thorough, prestructured set of options for the user. No more can one enter a selection of jokes about Steve Urkel or Alf into their personal information. At the same time, this structure removes dimensionality from the data: you no longer know what people want to say, only how they interact with the narrow interface you have presented to them. Returning to the idea of the dark pattern, the nefarious developer would put buttons in places where you might hit them by accident, and the friendly fraudster might help your child find a way to nab a few in-game playables. Managers could easily misunderstand second-level metrics from these interfaces: you dwelled on something not because you wanted more of it but because you simply couldn’t look away. As some folks have described the situation to me, once an interface starts to capture whatever it thinks will draw your eye (rather than what you want), you won’t be able to open that app in public again. More salacious content follows, of course, to deal with decreased usage of the app, which surely wasn’t caused by a dark pattern.
What does an interface even look like? A recursive loop of “simplicity,” in which the user must map their intentions onto a handful of different layouts. For the most part, we have undergone extensive social refactoring, arriving at a handful of interface patterns: buttons along the right side, buttons around the edges, buttons at the bottom, and buttons as a vignette. Although this makes practical sense, it is reasonable to say it plainly: the smaller the screen, the fewer the options. As André Brock argued in his work on the Blackbird web browser, we have seen fundamental changes in how we might understand the Internet as we have moved from web browsers to web viewers.[50]
3.6.4. Speed, Range, Mapping
There are control relationships for other systems as well. Regarding the haptic interfaces discussed earlier, the key may be that they simply reduce dimensionality too much. The joystick, either as a full-scale interface like an airplane’s or as a thumb stick, reduces dimensionality dramatically into four possible directions with multiple possible valences. An important heuristic for understanding interfaces includes speed, range, and mapping: speed refers to the rate at which user inputs are integrated into the system, range specifies the possible dimensions controlled by a single action, and mapping refers to the degree to which the inputs correspond to the world of the interface. Our swipe interfaces are fast, have a binary range, and are tightly mapped. A joystick has far greater range and is mapped to a more complex system. A keyboard, at the level of the individual key, presents fast, direct mapping, but the range of possibilities one might create with a keyboard is limited only by the imagination of the writer and the reader. Friedrich Kittler was particularly aware of this in his historical work on the arrival of typewriters, which he aligned with the Lacanian Real. For those of you unversed in psychoanalysis, the Real is the part of the self that is not symbolic (both not easily translated into the symbolic or imaginary and actively resisting translation) and where any number of interesting feelings and attachments form. In this sense the keyboard is not an older interface to be overcome but an essential one, serving a distinctive role in the ecosystem by offering a way of synthesizing reality.
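As a rough illustration of the heuristic, the sketch below encodes speed, range, and mapping as fields on a small data structure; the ratings are judgment calls made for this sketch, not measurements from the text.

```python
# Encoding the speed/range/mapping heuristic as comparable records.
# The ratings below are illustrative judgments, not measurements.
from dataclasses import dataclass

@dataclass
class Interface:
    name: str
    speed: str    # rate at which inputs are integrated into the system
    range: str    # dimensions controlled by a single action
    mapping: str  # how directly inputs correspond to the interface world

interfaces = [
    Interface("thumb swipe", "fast", "binary", "tight"),
    Interface("joystick", "fast", "four directions, variable valence", "complex"),
    Interface("keyboard key", "fast", "open-ended symbol stream", "direct"),
]

for i in interfaces:
    print(f"{i.name}: speed={i.speed}; range={i.range}; mapping={i.mapping}")
```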
Key Takeaways
- Users learn physical interfaces that allow them to manipulate system states.
- Interfaces have a meaning in themselves (swiping right, for instance).
- Mechanisms for interfacing can be judged on their relative psychological alignment with user needs (more on this in section 4) and their speed, range, and mapping.
3.6.5. Prompting
Jakob Nielsen has argued that the prompt-based interface may be the first new interface paradigm in generations.[51] Why is it so new? Because users simply provide their intentions to the computer and allow it to do the rest. This pronouncement may be a bit much, as it treats sound, stylus, and even touch screens as elements of the same system, but what it does capture is the sense of energy and excitement at the arrival of a new system. One dimension of that excitement is the prospect of what this new interface would afford: a comprehensive summary of knowledge and easily accessible tools for data classification and summary generation within an easy-to-use workflow. No more wrangling your Python environment. Just ask the classifier what you want to classify. For a fleeting moment in 2022, the hottest new job was going to be selling prompts or prompt coaching, as if the interfaces were mysterious rather than under continuous improvement. Consider the role of prompt coaching: prompts can decrease cognitive load and increase elaboration and trust, but they do not considerably enhance user experience.[52] Conversely, fostering genuine interaction with the AI system leads to better outcomes and avoids overreliance on the tool.[53] As the field evolves, users are beginning to prefer explainable and even contestable AI.[54]
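To show what “just ask the classifier” might look like, here is a hedged sketch in which the user’s intention becomes a prompt; the labels and the complete() stub are hypothetical stand-ins, since a real system would send the prompt to a model API.

```python
# A sketch of prompt-as-classifier: the intention is written as a prompt
# rather than built as a trained pipeline. complete() is a hypothetical
# stand-in for whatever model endpoint the system actually calls.
LABELS = ["complaint", "praise", "question"]

def build_prompt(text: str) -> str:
    return (
        "Classify the following message as one of: "
        + ", ".join(LABELS) + ".\n"
        f"Message: {text}\n"
        "Answer with the label only."
    )

def complete(prompt: str) -> str:
    # Stand-in for a model call; returns a canned answer for the sketch.
    return "question"

prompt = build_prompt("Where do I change my password?")
print(prompt)
print("Model says:", complete(prompt))
```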
As these interfaces develop, users expect more sophisticated outputs and even further interactions that clarify their intent, whether by asking follow-up questions, by searching the Internet to augment the request, or by otherwise strengthening the prompt. Prompt engineering can refer to the entire user experience of crafting both prompts and responses. Interestingly, the most important responses in the prompt stream mirror the processes by which answers are generated. Chain-of-thought prompting provides users with the step-by-step reasoning that went into producing their answer. Tree-of-thought prompting shows the multiple possible answer chains that are available. Seeing how one might reach a single answer is great for a math problem. Seeing an assembly of answers is much stronger when the ask is: When is violent revolution justified? A special version of this can be seen in AI interfaces where a document model is deployed (if you ask for sources for a set of arguments, for instance), which then offers a table of the topics, their relevant topic words, and author names: exactly the sort of output you produce when running a topic model via code, as in the sketch below.
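Since the passage above points to the topic model directly, here is a minimal sketch using scikit-learn’s LatentDirichletAllocation; the four documents are invented placeholders, and a real corpus would be far larger.

```python
# A minimal topic-model sketch: fit LDA on a tiny invented corpus and
# print each topic's top words, the rows of the familiar topics table.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "revolution justice political violence state",
    "jazz saxophone melody smooth album",
    "state power political theory justice",
    "album music melody listening jazz",
]

counts = CountVectorizer()
X = counts.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

words = counts.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top = [words[j] for j in topic.argsort()[-4:][::-1]]
    print(f"Topic {i}: {', '.join(top)}")
```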
Hallucinations are a major challenge for prompt response systems. Hallucinations are not a single phenomenon but distinct engineering challenges for every model.[55] The rhetorical frame of hallucination itself may miss the mark, as it is possible, and in the future likely, that AI systems will provide misinformation. Misinformation is effectively countered by high user diagnosticity, the idea that users actively evaluate the epistemic dimensions of the information they receive.[56] In other words, in a world of hallucinations, a powerful intervention is to cultivate epistemic attention: to consider why we know what we know.
Key Takeaways
- Prompt interfaces are turn-taking processes involving the user and the system.
- Intentions may replace specific inputs in some cases.
- Contestable AI and rich prompting patterns are key future developments.
3.6.6. Physiological Interfaces
All media technologies provide a poor substitute for telepathic communication. The goal for interface technology is the development of interfaces that would allow thoughts to be scanned directly from one brain and then delivered directly into another. This seems to be a long way off. The deepest problem here would seem to be neuroplasticity: the brain of each person is decidedly different. Consciousness, as much as it is understood, is not a mechanical property of the brain that can be located at a single point but an emergent property of a number of different processes blending together.[57] In 2008, Gary Small’s research provided evidence that Google searches prompted the brain to utilize more oxygen than reading. The problem, he maintained, was that greater oxygen utilization, by itself, means nothing in particular.[58]
N. Katherine Hayles argues this in the context of the development of literacy: there is no single part of the brain that produces the ability to read, and we know that the development of reading and writing appeared as a paradigm shift in human behavior.[59] The key seems to be that multiple parts of the brain were developing reading-like capacities at the same time; when these were linked, the possibilities were dramatic.
Consider the model necessary for full telepathy: thoughts must be extracted and transcoded into a meaningful form that can be reproduced through a decoding process at the receiver. Anything less than that, and we are once again dealing with a semiotic process in which signs are presented to the senses. Hence our concerns throughout this course with simulation and standardization. Singularity is unlikely.
This does not mean that interfaces involving the brain are not promising. Scanning methods have allowed researchers to read text from the brain, and it is possible that new systems will allow those who are locked in to rejoin the world of symbolic production. This is wonderful. Cognitive pupillometry is a well-established concept: as pupil dilation changes, we can detect shifts in the level of cognitive work.[60] Eye-focus scanning allows military helicopter gunners to track targets. Galvanic skin response and body-position detection can offer rich interface possibilities. Motion scanners offer great fun for headset games, artistic work, and software interfaces.
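A toy sketch of the pupillometry idea follows: flag the moments when smoothed pupil diameter rises well above its baseline. The sampling rate, window size, and threshold are all invented for illustration.

```python
# Toy cognitive pupillometry: detect dilation above a resting baseline.
# The signal, rates, and thresholds are simulated values for illustration.
import numpy as np

rng = np.random.default_rng(0)
diameter = 3.0 + 0.05 * rng.standard_normal(600)  # mm, ~60 Hz for 10 s
diameter[300:420] += 0.4                          # simulated effortful task

def moving_average(x, w=30):
    return np.convolve(x, np.ones(w) / w, mode="same")

smooth = moving_average(diameter)
baseline = np.median(smooth)
effort = smooth > baseline + 0.2                  # crude dilation threshold

print(f"Samples flagged as high cognitive load: {effort.sum()} of {len(effort)}")
```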

In a world where people elect to carry computers (cell phones), there is much data to be collected about the physical state of phones: their location, relative speed, accelerometer information (how the phone is moving in space), and other sensor inputs. People elect to store biometric data on these devices as well, meaning that all of these ambient inputs also provide a world of information for interfacing with virtual worlds. The challenge of these other interfaces is that they are not so neatly intention driven. Monitoring my phone and its motion is far more naturalistic than taping a few dozen electrodes to my face for fEMG (facial electromyography).
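In the same spirit, the sketch below infers whether a phone is “moving” or “still” from simulated accelerometer samples; a real application would read these values from the platform’s sensor API rather than generating them.

```python
# Inferring gross motion from accelerometer noise levels (simulated data).
import numpy as np

rng = np.random.default_rng(1)
still = rng.normal(0.0, 0.02, (100, 3))    # resting noise, in g units
walking = rng.normal(0.0, 0.3, (100, 3))   # jostling while walking
samples = np.vstack([still, walking])

# Magnitude of acceleration beyond gravity, averaged over short windows.
magnitude = np.linalg.norm(samples, axis=1)
windows = magnitude.reshape(-1, 20).mean(axis=1)
states = ["moving" if w > 0.1 else "still" for w in windows]

print("Window states:", states)
```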
There will be much change in the world of alternative interfaces. It is important to keep in mind that the change here is likely limited by the capacity for input into the human as well as by the interposition of semiotic code relationships. This returns to the metaphor of the enfolding of the virtual: it is not simply a matter of getting thoughts into or out of a person; the ways those thoughts refer to each other, and to the thoughts held by other people, are an equal dimension of the virtual experience.
Key Takeaways
- Measurements are taken directly of the user and their physiological responses.
- Physiological measurement is still mediated, not a direct link to reality.
- Naturalism is a key challenge.
3.7. Works Cited
“Advanced Camera for Surveys.” NASA Hubble Space Telescope, January 25, 2023. https://science.nasa.gov/mission/hubble/observatory/design/advanced-camera-for-surveys/.
Agrawal, Harsh, and Leif Pedersen. “Racing for Realism.” Renderman, December 6, 2017. https://renderman.pixar.com/stories/cars-3.
“AI: First New UI Paradigm in 60 Years.” Nielsen Norman Group, accessed March 10, 2025. https://www.nngroup.com/articles/ai-paradigm/.
Alekseenko, Artem. “The TV Installed Base in the U.S. Is Getting Bigger and Newer, According to NPD.” Display Daily, December 30, 2020. https://displaydaily.com/research-us-consumers-buying-larger-tvs/.
Benjamin, Walter. The Arcades Project. Edited by Rolf Tiedemann. Translated by Howard Eiland and Kevin McLaughlin. Belknap Press of Harvard University Press, 2002.
Benjamin, Walter. “The Work of Art in the Age of Its Technological Reproducibility.” In The Work of Art in the Age of Its Technological Reproducibility and Other Writings on Media, edited by Michael W. Jennings, translated by Howard Eiland. Harvard University Press, 2008.
Black Lantern Studios. Iron Chef America: Supreme Cuisine. Released 2008. Nintendo DS, Nintendo Wii.
Brock, André. Distributed Blackness. New York University Press, 2020. https://nyupress.org/9781479829965/distributed-blackness/.
Chandler, Nathan. “How the Nintendo Power Glove Worked.” HowStuffWorks, accessed May 9, 2025. https://electronics.howstuffworks.com/nintendo-power-glove.htm.
Chen, Cheng, Sangwook Lee, Eunchae Jang, and S. Shyam Sundar. “Is Your Prompt Detailed Enough? Exploring the Effects of Prompt Coaching on Users’ Perceptions, Engagement, and Trust in Text-to-Image Generative AI Tools.” In Proceedings of the Second International Symposium on Trustworthy Autonomous Systems, 1-12. Association for Computing Machinery, 2024. https://doi.org/10.1145/3686038.3686060.
Choudhury, Rizwan. “Ray Ban Smart Glasses Now Reads, Translates and Captions Photos.” Interesting Engineering, December 12, 2023. https://interestingengineering.com/culture/ray-ban-meta-glasses-now-reads-translates-and-captions-photos.
“Color TV Camera TK-76.” Radiomuseum, accessed May 23, 2025. https://www.radiomuseum.org/r/rca_color_tv_camera_tk_76.html.
Damasio, Antonio. Descartes’ Error: Emotion, Reason, and the Human Brain. Penguin, 2005.
“DLP472TP 0.47-Inch 4K UHD Digital Micromirror Device.” Texas Instruments, accessed May 30, 2025. https://www.ti.com/document-viewer/dlp472tp/datasheet.
Dretzin, Rachel, and Douglas Rushkoff, prods. “Digital Nation.” Frontline. PBS, February 2, 2010. https://www.pbs.org/wgbh/frontline/film/digitalnation/.
Finney, Clare. “More Than Cake: Unravelling the Mysteries of Proust’s Madeleine.” Penguin, July 14, 2020. https://www.penguin.co.uk/discover/articles/more-than-cake-unravelling-the-mysteries-of-proust-s-madeleine/.
Gray, Colin M., Y. Kou, B. Battles, J. Hoggatt, and Austin L. Toombs. “The Dark (Patterns) Side of UX Design.” In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, 1-14. Association for Computing Machinery, 2018. https://doi.org/10.1145/3173574.3174108.
Green, Phil. “DI – The Conform.” The Digital Intermediate Guide, 2006. https://web.archive.org/web/20240620014251/http://www.digital-intermediate.co.uk/DI/DIconform.htm.
Heger, Monica. “Laser Cinema, Coming Someday to a Theater Near You—Maybe.” IEEE Spectrum, October 1, 2008. https://spectrum.ieee.org/laser-cinema-coming-someday-to-a-theater-near-youmaybe.
Huff, Steve. “The Future of Cameras, Gear and Photography: The Mirror Is Dying.” Steve Huff Photo, June 7, 2018. http://www.stevehuffphoto.com/2018/06/07/the-future-of-cameras-gear-and-photography-the-mirror-is-dying/.
“HugShirt.” Cutecircuit, accessed October 2, 2025. https://cutecircuit.com/hugshirt/.
Hussain, Ibrar, Iftikhar Ahmed Khan, Waqas Jadoon, Rab Nawaz Jadoon, Abdul Nasir Khan, and Muhammad Shafi. “Touch or Click Friendly: Towards Adaptive User Interfaces for Complex Applications.” PLOS One 19, no. 2 (2024): e0297056. https://doi.org/10.1371/journal.pone.0297056.
Jeon, Myounghoon. “Emotions and Affect in Human Factors and Human–Computer Interaction: Taxonomy, Theories, Approaches, and Methods.” In Emotions and Affect in Human Factors and Human-Computer Interaction, edited by Myounghoon Jeon. Academic Press, 2017. https://doi.org/10.1016/B978-0-12-801851-4.00001-X.
Ji, Ziwei, Nayeon Lee, Rita Frieske, et al. “Survey of Hallucination in Natural Language Generation.” ACM Computing Surveys 55, no. 12 (2023): 1–38. https://doi.org/10.1145/3571730.
Kahneman, Daniel. Thinking, Fast and Slow. Farrar, Straus and Giroux, 2013.
Lee, Sangwook, Taenyun Kim, and Won-Ki Moon. “Does GenAI Deplete Us? The Effects of Using GenAI for Writing on Identity and Ego Depletion.” In Proceedings of the Extended Abstracts of the CHI Conference on Human Factors in Computing Systems, 1-8. Association for Computing Machinery, 2025. https://doi.org/10.1145/3706599.3719875.
“Meet the World’s Most Advanced Cinema Projector.” Christie Digital, August 3, 2023. https://www.christiedigital.com/spotlight/meet-the-worlds-most-advanced-cinema-projector/.
Mills, M. “Hearing Aids and the History of Electronics Miniaturization.” IEEE Annals of the History of Computing 33, no. 2 (2011): 24–45. https://doi.org/10.1109/MAHC.2011.43.
Nielsen, Jakob. Usability Engineering. Academic Press, 1993.
Orland, Kyle. “The Quest to Save the World’s Largest CRT TV from Destruction.” Ars Technica, December 23, 2024. https://arstechnica.com/gaming/2024/12/retro-gamers-save-one-of-the-last-45-inch-crt-tvs-in-existence/.
Parisi, David. Archaeologies of Touch: Interfacing with Haptics from Electricity to Computing. University of Minnesota Press, 2018.
Pennington, Adrian. “How New Laser Projection Technology Delivers Huge Energy Savings for Cinemas.” Screen Daily, April 21, 2023. https://www.screendaily.com/features/how-new-laser-projection-technology-delivers-huge-energy-savings-for-cinemas/5181171.article.
Pierce, David. “Pondering the Biggest Orb.” The Verge, December 20, 2023. https://www.theverge.com/tech/24008239/sphere-las-vegas-experience-u2-screen.
Raees, Muhammad, Inge Meijerink, Ioanna Lykourentzou, Vassilis-Javed Khan, and Konstantinos Papangelis. “From Explainable to Interactive AI: A Literature Review on Current Trends in Human-AI Interaction.” International Journal of Human-Computer Studies 189 (September 2024): 103301. https://doi.org/10.1016/j.ijhcs.2024.103301.
“Recording Studio Microphones: The Ultimate Beginner’s Guide.” E-Home Recording Studio, May 28, 2012. https://ehomerecordingstudio.com/types-of-microphones/.
Reimer, Jeremy. “A History of the GUI.” Ars Technica, May 5, 2005. https://arstechnica.com/features/2005/05/gui/.
Rizov, Vadim. “~31 Films Shot on 35mm Released in 2017.” Filmmaker Magazine, April 5, 2018. https://filmmakermagazine.com/105050-31-films-shot-on-35mm-released-in-2017/.
Roettgers, Janko. “Netflix’s Secrets to Success: Six Cell Towers, Dubbing and More.” Variety, March 8, 2018. https://variety.com/2018/digital/news/netflix-success-secrets-1202721847/.
Roth, Lorna. “Looking at Shirley, the Ultimate Norm: Colour Balance, Image Technologies, and Cognitive Equity.” Canadian Journal of Communication 34, no. 1 (2009). https://doi.org/10.22230/cjc.2009v34n1a2196.
Shin, Donghee, Amy Koerber, and Joon Soo Lim. “Impact of Misinformation from Generative AI on User Information Processing: How People Understand Misinformation from Generative AI.” New Media & Society 27, no. 7 (2024). https://doi.org/10.1177/14614448241234040.
“Sign Language 101: A Beginner’s Guide to Learning Sign Language.” Rhythm Languages, March 12, 2023. https://www.rhythmlanguages.com/post/sign-language-101.
Singh, Ankit. “Illuminating the Future: Exploring QLED Technology.” AZoOptics, May 6, 2024. https://www.azooptics.com/Article.aspx?ArticleID=2594.
Sommerfeld, Seth. “Expo ’74 Featured the Biggest Movie Screen on the Planet — IMAX.” The Inlander, May 1, 2024. https://www.inlander.com/culture/expo-74s-featured-the-biggest-movie-screen-on-the-planet-imax-27885953.
Sooke, Alastair. “The Man Who ‘Invented’ Impressionism.” BBC, March 11, 2015. https://www.bbc.com/culture/article/20150311-why-the-impressionists-were-hated.
“SoundShirt.” Cutecircuit, accessed October 2, 2025. https://cutecircuit.com/soundshirt/.
Spigel, Lynn. Make Room for TV. University of Chicago Press, 1992.
TeslaSuit. Accessed October 2, 2025. https://teslasuit.io/.
Treuhaft, Teshia. “Making Scents: Analog Smell Recording with Amy Radcliffe’s ‘Madeline.’” Core77, July 15, 2013. https://www.core77.com/posts/25167/Making-Scents-Analog-Smell-Recording-with-Amy-Radcliffes-Madeline.
“What Is Holography?” Holocenter, accessed October 31, 2018. http://holocenter.org/what-is-holography.
Whiteman, Honor. “‘Digital Taste Simulator’ Developed That Tickles the Tastebuds.” Medical News Today, accessed October 31, 2018. https://www.medicalnewstoday.com/articles/269324.php.
Winkler, Hartmut. “Geometry of Time: Media, Spatialization, and Reversibility.” Paper presented at Media Theory on the Move, Potsdam, Germany, May 21-24, 2009.
zstein3. “Marcel Proust & His Madeleines.” Literatures and Languages, August 10, 2020. https://publish.illinois.edu/litlanglibrary/2020/08/10/marcel-proust-his-madeleines/.
Media Attributions
- camera-rig by Daniel Faltesek is licensed under CC BY-NC
- layers by Daniel Faltesek is licensed under CC BY-NC
- sound-board by Daniel Faltesek is licensed under CC BY-NC
- integrated-development-environment by Daniel Faltesek is licensed under CC BY-NC
- suspended-classroom by Daniel Faltesek is licensed under CC BY-NC
- linc-panorama by Daniel Faltesek is licensed under CC BY-NC
- dan-watch by Daniel Faltesek is licensed under CC BY-NC
- 3D-imaging-systems by Daniel Faltesek is licensed under CC BY-NC
- bongos by Daniel Faltesek is licensed under CC BY-NC
- 11401229143_50139434 by Michael Hicks is licensed under CC BY
- dan-on-eFMG by Daniel Faltesek is licensed under CC BY-NC
- Benjamin, Arcades Project. ↵
- zstein3, “Marcel Proust and His Madeleines.” ↵
- Finney, “More Than Cake.” ↵
- Sooke, “The Man Who ‘Invented’ Impressionism.” ↵
- “Color TV Camera TK-76.” ↵
- “Advanced Camera for Surveys.” ↵
- Huff, “Future of Cameras, Gear and Photography.” ↵
- Roth, “Looking at Shirley, the Ultimate Norm.” ↵
- “Recording Studio Microphones.” ↵
- Roettgers, “Netflix’s Secrets to Success.” ↵
- Treuhaft, “Making Scents.” ↵
- Benjamin, “The Work of Art in the Age of Its Technological Reproducibility.” ↵
- Rizov, “~31 Films Shot on 35mm Released in 2017.” ↵
- Green, “DI—The Conform.” ↵
- Winkler, “Geometry of Time.” ↵
- Heger, “Laser Cinema.” ↵
- Heger, “Laser Cinema.” ↵
- “DLP472TP 0.47-Inch 4K UHD Digital Micromirror Device.” ↵
- “DLP472TP 0.47-Inch 4K UHD Digital Micromirror Device.” ↵
- Pennington, “How New Laser Projection Technology Delivers Huge Energy Savings for Cinemas.” ↵
- “Meet the World’s Most Advanced Cinema Projector.” ↵
- Orland, “Quest to Save the World’s Largest CRT TV from Destruction.” ↵
- Alekseenko, “TV Installed Base in the U.S. Is Getting Bigger and Newer.” ↵
- Spigel, Make Room for TV. ↵
- Alekseenko, “TV Installed Base in the U.S. Is Getting Bigger and Newer.” ↵
- Singh, “Illuminating the Future.” ↵
- Alekseenko, “TV Installed Base in the U.S. Is Getting Bigger and Newer.” ↵
- Sommerfeld, “Expo ’74 Featured the Biggest Movie Screen on the Planet.” ↵
- Pierce, “Pondering the Biggest Orb.” ↵
- For a photo, see supraguy88, “Trinitron VS. Indextron: Sony’s Smallest Color CRTs,” Reddit, R/Crtgaming, November 18, 2022, https://www.reddit.com/r/crtgaming/comments/yyu432/trinitron_vs_indextron_sonys_smallest_color_crts/. ↵
- Choudhury, “Ray Ban Smart Glasses.” ↵
- “What Is Holography?” ↵
- Agrawal and Pedersen, “Racing for Realism.” ↵
- Mills, “Hearing Aids and the History of Electronics Miniaturization.” ↵
- Mills, “Hearing Aids and the History of Electronics Miniaturization.” ↵
- The discussion of the touch screen genealogy as it intersects with the haptic is perhaps the strongest point. Parisi, Archaeologies of Touch. ↵
- Parisi, Archaeologies of Touch, 280. ↵
- Chandler, “How the Nintendo Power Glove Worked.” ↵
- “Sign Language 101.” ↵
- Black Lantern Studios, Iron Chef America. ↵
- “HugShirt,” Cutecircuit, accessed October 2, 2025, https://cutecircuit.com/hugshirt/. ↵
- “SoundShirt,” Cutecircuit, accessed October 2, 2025, https://cutecircuit.com/soundshirt/. ↵
- TeslaSuit website, accessed October 2, 2025, https://teslasuit.io/. ↵
- Whiteman, “‘Digital Taste Simulator’ Developed That Tickles the Tastebuds.” ↵
- Nielsen, Usability Engineering. ↵
- Jeon, “Emotions and Affect in Human Factors and Human–Computer Interaction.” ↵
- Reimer, “History of the GUI.” ↵
- Hussain et al., “Touch or Click Friendly.” ↵
- Gray et al., “The Dark (Patterns) Side of UX Design.” ↵
- Brock, Distributed Blackness. ↵
- “AI: First New UI Paradigm in 60 Years.” ↵
- Chen et al., “Is Your Prompt Detailed Enough?” ↵
- Lee et al., “Does GenAI Deplete Us?” ↵
- Raees et al., “From Explainable to Interactive AI.” ↵
- Ji et al., “Survey of Hallucination in Natural Language Generation.” ↵
- Shin et al., “Impact of Misinformation from Generative AI on User Information Processing.” ↵
- Damasio, Descartes’ Error. ↵
- Dretzin and Rushkoff, “Digital Nation.” ↵
- Kahneman, Thinking, Fast and Slow. ↵
- Kahneman, Thinking, Fast and Slow, 33. ↵