4 Knowing Things
This section of the book is concerned with the ways that we come to know new things. Knowledge is constantly evolving: the facts of the world are in flux, new facts are being found, and old facts are demoted (hopefully facts that are old and wrong, not just old). The methods tend to be clustered into design, qualitative, quantitative, and computational groups. There is an unfortunate tendency to become obsessed with methodology. Academics advance in their fields when they publish research; clear, predictable methods drive smooth publication processes; and these publications can be widely cited. If methods eclipse their use cases, progress becomes difficult because the world is complex and not easily explained by a single methodological or theoretical construct. Rejecting, on epistemic grounds, scholarship that is on its face reasonable does not advance the cause of knowledge. These sections consider many different methods, hopefully presenting a provocative case for and against each.
Future media students need to have robust skills to consider the knowledge claims they encounter. This section of the book is about how to produce, and consume, future knowledge.
4.1. Semiotics and Design
Semiotics refers to the study of symbol systems. This is independent of information theory, which considers the capacity of a channel and the capacity of a message to overcome entropy or noise in the channel.[1] Semiotics is concerned with meaning. We can think of meaning on three levels: semantic (Did the communicators reach agreement about meaning?), pragmatic (Did the meaning coordinate action?), and alterity (What meanings were precluded by the presentation of the sign?). An important problem comes in the degree to which meaning can be communicated at all. Communication is constitutive.[2] The following statement is rich with mysticism: you are the meaning that you use and are used by. This does not mean that you are dominated by language or that you have no decisions, but that meaning is a central dimension of human existence.
Communication is not sending packages of meaning to others through tubes. Meaning is constantly produced between people. This does not mean that there is no agreement, but that the slippages between potential agreements produce many productive errors. Network research suggests that overconvergence is a major problem; if messages are too similar, people distrust them.[3] Further, ongoing message divergence is evidence of error correction, which could itself be evidence of an effective system. People make meanings, those meanings are unstable, and this instability is productive.
4.1.1. Signs
The sign is the basic unit of semiotic theory. It is important to understand that signs are not stable.
For Charles Sanders Peirce, the sign is triadic. The object, the interpretant (the sign that is created in the mind of the receiver), and the sign are all in relationship.[4] Although the object is involved, there is no objective basis for the sign; the sign itself and the sign produced in the mind are equally important. Notice that the idea in the mind that matters in this model is that which exists in the mind of the receiver, not the sender. This model takes the intent of the speaker out of the center of the model. It does not matter what you intended if that intent cannot be produced as an interpretant.
Ferdinand de Saussure’s model has a slightly less complex circulation.[5] In this context, the thing (signified) is imperfectly represented and in circulation with the signifier (the label). The signified can be almost anything, often including another sign. The signifier is constantly being loaded with additional content. What should be clear in both models is that meaning is continuously in circulation. The ways that we manage this constantly shifting meaning are many and likely the reasons why you will be employed after college. A helpful distinction made by Saussure is that between langue and parole: langue refers to the formal code, while parole refers to everyday speech. This is why we teach multiple methods for determining what meanings are in play at any given time. There is no encyclopedia or dictionary for symbol systems, and corporations are constantly searching for the ways that terms and ideas function at any given time.
4.1.2. Typology of Signs
Peirce identifies three kinds of signs:[6]
- Icon: signs that look like things
- Index: representations of action (such as smoke as a sign of fire)
- Symbol: an entirely artificial system (such as the text of this book)
Notice that these categories are not absolutely clear. Smoke may be a probabilistic sign of fire, but it can also be a symbol for something being hot. It is less likely that you thought that smoke was a sign of a smoke monster, a creature made of smoke. It is the probability that the sign means what you think it means that makes it function. You see smoke and reasonably guess fire. This is the core of abductive reasoning: we make a number of probable assumptions and work as if they are confirmed. It should be apparent at this point that there are no easy logical answers or transcendental operators.
These signs can help you make decisions about how particular messages work. An iconic sign that includes an image may function differently than a description of that sign. You can see that in each of these contexts all three dimensions of a sign are always subtly shifting. Consider the iconic representation of a telephone. To some degree, it is likely presented as an older handset on a desk cradle. Younger people may be less familiar with such a phone; however, the icon for phone will likely remain the handset and cradle for some time.
Symbolic signs can be incredibly dense. This book is made up of almost entirely symbolic signs in the form of written text. Unless you are already an adept reader, the symbols in this book would be difficult to guess. Feelings and sensations can be exceptionally difficult to represent with a sign, often hinging on multiple signs that encircle what would be described.
Key Takeaways
- Semiotics offers a theory of how particular words/images/sensations come to mean particular things and how those change.
- A sign is the combination of a signifier and a signified; the signifier and the signified are not the same thing (even if they sound like similar words).
- Different kinds of signs produce meaning differently.
4.1.3. Codes
Saussure contended that meaning existed in opposition. We know what a sign means through its relation to other signs.[7] Codes are organized systems of signs. Some codes are more sophisticated than others. People use codes all the time; they are not particularly special. For Roland Barthes, the role of codes as myths allows the interposition of codes and facts:
In fact, what allows the reader to consume myth innocently is that he does not see it as a semiological system but as an inductive one. Where there is only an equivalence, he sees a kind of causal process: the signifier and the signified have, in his eyes, a natural relationship. This confusion can be expressed otherwise: any semiological system is a system of values; now the myth-consumer takes the signification for a system of facts: myth is read as a factual system, whereas it is but a semiological system.[8]
Symbolic forms have the status of facts, like marriage proposals, names for ships or highways, and statements of financial data, among many other possible codes. This is why semiotic critique is so useful: at each stage of recirculation, the products of a code become the facts that produce reality. It is not required that one fully establish all the grounds on which they might argue, as doing so would be boring and wasteful. You operate using the assumptions inherent in a code as if those assumptions were facts.
It would also make sense that the highly iterated symbolic signs would begin to play an increasing role in society. Once someone has gone to all the necessary trouble to learn a sign and all the content that comes with it, the deployment of that sign again in more sophisticated systems would be efficient. In a highly complex system of signs, how the meanings of certain signs might shift becomes an important topic of negotiation. This takes the form of a kind of meta-talk: you can identify it in many kinds of communication. In relational communication, it can take the form of a “where we stand” conversation with a dyadic partner; in political communication, it can be discussion of the “narrative.”
Key Takeaways
- Signs do not function individually; they exist in a network of relations.
- Design is the manipulation of a system of signs.
- Culture refers to a situated system of meaning in which situatedness is a constellation of signs, including things like rituals, fashions, cuisines, and beyond.
4.1.4. Affordance and Signifier
To avoid confusion, we should start by describing the use of signifier in design practice. For designers, signifier refers to a label, like instructions in an elevator. This is a more specific use of the term than one might find in semiotic theory. As Donald Norman describes in his work on complexity and design, signifiers often appear when a design has failed. The non-signified in this sense would refer to a design that would function with a minimum of symbolic or synthetic signifiers.
James Gibson coined the term affordance to refer to the properties of an environment relative to a creature.[9] Properties can include many things: a door affords you entry, a car affords you transportation. Not all affordances are clearly visible. The idea of affordance is important because it offers a dialectical conception of the purpose of a design of a thing: it is about what becomes possible, not what is intentional or obvious. The provenance of this idea in the psychology of perception should not be forgotten. At stake in the affordance is the idea of the awareness of the user; this is a theory that enhances our awareness of the user by decentering their subjectivity. In other words, you can only truly understand how a person interacts with a system when you take the limits of that person seriously. Affordance research today is concerned with the ways that we might think of affordances in our lived contexts. Imagined affordance theory posits that understanding affordances requires having some sense of what people think the technologies afford them.[10] Social affordances focus on the ways that users might perceive the social linkages of their platforms first.[11]
Given the preference for seemingly effortless communication, many designers would prefer that awareness of the affordance of a system be as liminal as possible. This is important because it tells you about the way of knowing that is present in design—the idea is that the designer can produce a world of almost effortless meaning. The code of the design invention melds into the thing as if it were an inevitable fact. So, what does affordance mean? Alexander Ronzhyn et al. in a systematic review of the literature offer the following key insights: affordances should be understood on the level of the platform and the specific dimensions of its technical capacity/use, and that really understanding affordances requires exploration of platforms and use cases beyond the core products offered by Silicon Valley.[12]
Key Takeaways
- Affordances are capacities of technology to allow action.
- In communication contexts, affordances are often imagined and social.
- Practically, if a special label must be affixed to what should be a commonly known affordance, the design of the thing has failed.
4.1.5. Simple, Complicated, and Complex
The following definitions are derived from Donald Norman.[13]
- Simple: In design, simplicity refers to the occasion when a system fits with the psychological expectations of the user.
- Complicated: The occasion when a system or thing does not fit with the psychological expectations of the user or is emotionally fraught.
- Complex: When a thing has a lot of parts.
Simple and complicated have somewhat oppositional meanings. The optimal case for conventional design is when a complex thing is made simple. The biggest concern with simplicity is that designers, unchecked, have a penchant for inserting their own psychological position as that of the user. This is why a central tenet of usability theory (discussed later in this chapter) is that “you are not the user.” Design is a process of strategically concealing and revealing for a particular effect.

Once a student learns about brutalism, they see it everywhere. Many university buildings of the 1960s and 1970s are wonderful examples; unfortunately, Oregon State is not lucky enough to have great ones. Brutalism is marked by the repeated use of forms that ostensibly reveal the structure of the building and by the raw use of concrete. Brutalism is in a real sense the tangible result of high modernism: an approach to building that could transcend the everyday condition.
Nikil Saval concluded in the New York Times Style Magazine that the return of brutalism may be the harbinger of the end of the brutalist ethic:
But the renewed interest in the movement has yet to produce any meaningful change in the culture of what gets built and how. This resurgence has not—not yet anyway—led to any revival of interest in public-minded development. Politics has been divorced from architecture. In fact, love for Brutalism has often led to gentrification. Many social housing projects, such as Erno Goldfinger’s Trellick Tower in London, have become much sought-after private housing. Architecture bookstores sell postcard packs of the greatest hits of Brutalism; you can buy a Trellick Tower mug to sip expensive coffee in your pricey Trellick Tower flat. The aesthetic of Brutalism may at last triumph over its ethic.[14]
Brutalism offers what should be a simple solution to complex problems, yet the legacy of this approach to construction is terribly complicated.
A dualism of simple and complicated resonates with the design of our social networks and experiences in media research. Not all complicated things are bad: some things need to be complicated, while others should not be simple. Design is at its best when it uses structured prompts to drive a chain of questions that can provoke a rich discussion. Even this starting point will fall away.
Often a focus on simplicity turns to a discussion of “friction” and the idea that a designer is supposed to act in all cases to reduce friction. This begs the question: Why is friction bad? When there are real stakes, friction can help us make better decisions. Do you really want to pour all this manure in the middle of the street? Do you want to turn off the engines midflight? Sometimes friction is the experience, as in a video game. Even games with microtransactions have considerable friction in their loot box mechanics.
Key Takeaways
- Simple is a cultural trope, not a property of the world.
- Simple and complicated refer to the alignment of psychological states with physical designs.
- Friction is a key trope in design today; the cliché “design the friction” is especially important.
4.1.6. Abstraction
Abstraction is a powerful concept. In art, the move toward abstraction allows the artist to be free of the purely iconic or mimetic, to develop works that have qualities that might evoke a feeling without relying on so many established identities. Piet Mondrian and the artists associated with the De Stijl movement attempted to reduce works of art to their most basic elements, in the final form blocks of color set into grids.[15]

In software, abstraction allows the development of many powerful tools instead of taking a nearly pointillist orientation in encoding software to run on the chip itself. At the lowest level, machine code drives the utilization of the gates that make the computer work. At higher levels, programming languages allow users to deploy abstractions. As the programming moves to higher and higher levels, the abstractions become increasingly understood by users. A page written in HTML and CSS may actually be quite readable by a human. Over time, language developers may produce new abstractions for functions that were once accomplished with much more labor at a lower level.
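The layering described above can be made concrete with a minimal Python sketch (the task and values are hypothetical, chosen only for illustration): the same computation is written once with the steps spelled out by hand and once through a built-in abstraction that hides the iteration.

```python
# Two levels of abstraction for the same task: summing a list of numbers.
# The low-level version manages the accumulator and iteration explicitly;
# the high-level version relies on an abstraction (the built-in sum)
# that hides those steps from the reader.

numbers = [3, 1, 4, 1, 5, 9]

# Lower level: every step is visible.
total_low = 0
for n in numbers:
    total_low = total_low + n

# Higher level: the abstraction does the same work in one readable call.
total_high = sum(numbers)

print(total_low, total_high)  # both are 23
```

Both versions produce the same result; the higher-level version is easier to read precisely because it conceals the machinery underneath.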

The grid aesthetic is almost second nature for the web today. Users are not expecting a sophisticated layout that requires them to relearn how to use a computer. These tabular layouts were extremely common, and clever websites could be made long before standards for our current layouts existed. Over time, people began to use more devices (such as phones and tablets), pushing the need for layouts that could adapt. Web developers were then writing div descriptions that would operate in a variety of contingencies. Today, the HTML5 standard includes abstract representations of positions on the page and adaptations. For application programming interfaces (APIs) and software libraries, many of the functions they offer for data analysis or manipulation are not new. In computer science, this is called refactoring. When code is rewritten with base instructions as functions for repetition and ease of reading, that code is improved.
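The refactoring move described above can be sketched in a few lines of Python; the names and the greeting task are hypothetical, chosen only to show repeated low-level steps being rewritten as one reusable, readable function.

```python
# Before refactoring: the same formatting logic is written out twice.
name_a = "ada"
greeting_a = "Hello, " + name_a.strip().title() + "!"

name_b = "  grace "
greeting_b = "Hello, " + name_b.strip().title() + "!"

# After refactoring: the repeated steps become a single named abstraction.
# The behavior is unchanged; only the structure of the code improves.
def greet(name):
    """Normalize a name and wrap it in a greeting."""
    return "Hello, " + name.strip().title() + "!"

# The refactored function reproduces the original results exactly.
assert greet(name_a) == greeting_a
assert greet(name_b) == greeting_b
print(greet("ada"))  # Hello, Ada!
```

Nothing new is computed after the rewrite; the gain is entirely in repetition removed and readability added, which is the point of the passage above.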
Abstraction as a semiotic process allows the formation of powerful symbolic signs that greatly increase the power of communication. Yet every abstraction conceals and excludes. An important dimension of our study and production of the future is the role of the abstract in granting access to systems for new developers and designers.
Key Takeaways
- Money, art, culture—all are abstractions.
- Abstraction is inevitable.
- Refactoring concrete social forms into abstracts is an ongoing core communication process.
4.2. High Theory
There is a substantial base of scholarship concerned with the structure of novels and the stories contained within. Theory developed in this context can be useful for describing reality. Narrative theory offers insights into stories of multiple types, including formalist theories, which are useful for understanding late Hollywood cinema. Studies of Russian formalist literature were highly influential on relational communication. This account is fundamentally rhetorical, which would speak to the particular effective elements of the text. Consider Jacques Derrida’s foundational lecture, “Structure, Sign, and Play in the Discourse of the Human Sciences,” which demonstrated via deconstruction how the literary form of structuralist writing was an explanation in itself.[16] As a reading strategy, this would be aligned with a theory called close reading, which for literary researchers would involve the dissection of sentences. For speech researchers, close reading would involve the analysis of particular sounds and timings. In conversation research, an entire markup language has been developed to show pauses in speech and micro-expressions. For those of you who have taken a literature class in your general education program, this account of literature isn’t particularly literary at all. Much of what one sees in critique could align with semiotics (discussed in section 4.1).
Perhaps the study of literature is the study of stories, which are delivered first and foremost via books. Other media have developed specific theoretical vocabulary to explain their particular textual constellations. The future of literary theory is strictly literary; it is a theory set that describes what happens in books, leaving other media for the specialist practitioners of those forms. The novel in this sense could be understood as an extension of information theory: the novel is a document of a culture, and collectively novels could include huge volumes of cultural information. This is where cultural analytics cuts in and notes that the shift in reference to every novel ever written would fundamentally change how we think of literary theory, begging the question of rebooting literary theory in the first place.
Yet this is still unsatisfying. There are clearly a number of literary positions that explore the symbolism within the text and the broader symbol systems in those stories. For undergraduate audiences, hermeneutic critique is presented as the work of the “masters of suspicion” (Nietzsche, Marx, Freud) as designated by Paul Ricoeur; this presentation flattens a number of uniquely interesting questions presented by each author and tends to position the world in a troubling critical/noncritical binary.[17] In her critique of this binary, Rita Felski argues that it misses the range of possibilities and texture that come from any number of approaches to interpreting the text:
In a related essay, I scrutinize some of the qualities of a suspicious or critical reading practice: distance rather than closeness; guardedness rather than openness; aggression rather than submission; superiority rather than reverence; attentiveness rather than distraction; exposure rather than tact (215–34). Suspicion, in this sense, constitutes a muted affective state—a curiously non-emotional emotion of morally inflected mistrust—that overlaps with, and builds upon, the stance of detachment that characterizes the stance of the professional or expert. That this style of reading proves so alluring has much to do with the gratifications and satisfactions that it offers. Beyond the usual political or philosophical justifications of critique, it also promises the engrossing pleasure of a game-like sparring with the text in which critics deploy inventive skills and innovative strategies to test their wits, best their opponents, and become sharper, shrewder, and more sophisticated players.[18]
In this sense, there is a style of theory that is not about the text or the audience at all, but about the critic showing some other textual sophistication. Through critical irony, this version of theory provides distance for the critic and mutes political potential. The value of literary theory in this sense is not as moral pedagogy, the epistemic value of rhetorical/narrative theory, or the cultural capital ostensibly incumbent in high culture. For John Guillory in his new classic Professing Criticism, the case for literary work is strongest in all of those other senses and weakest in the place that seems to matter most to humanities academics: criticism. His critique is biting for all humanities enterprises, as well as translational for the social sciences and the arts. Literature as moral pedagogy begs the question of morality in the first place. National cultural projects demand literary education, raising the issue of nationalism, however banal.[19] The epistemic value of literary theory has been captured by writing and composition departments, which for any number of bizarre reasons were seen as “less than” their theoretical counterparts. For speech, the division between the practice of rhetorical criticism and public relations/debate coaching/political communication strikes a similar note: if the process of critique is no longer salient or even about practice, what is the value?
Textual exhaustion is real and can be felt in television, film, and other disciplines as well. Why publish one more interpretation of a vampire show from the 1990s? Felski contends that one answer to textual exhaustion is to produce ever more tangential readings, which themselves then stand in for meaningful textual analysis. The second turn, which has been more productive, is away from the text to the conditions of possibility in which it is produced. For Dennis Y. Tien, the next step in literary theory in the age of AI is to situate the means of production for the text.[20] Criticism as an exploration of a policy structure comes full circle, forcing the critic to reckon with their choice not to go to law school. Law school, after all, is the medical school of the humanities. Less cheekily, the future of criticism is in the production of rhetorical scholarship, which could speak to the ways that social movements might change culture, not mere spectatorship of it. Of course, lurking for decades just beyond this section but everywhere else in this book is uses and gratifications, the basic social theory that supposes that we should explain what people actually use texts for in their daily lives. There are attempts to rebuild the value of textual critique, but these tend to take a reactionary line, seeing literary criticism as something that naturally emerges from critics doing their thing.[21]
If we were to remain on the textual level, rather than departing for politics or sociology, there is real potential in Stephen Best and Sharon Marcus’s concept of surface reading.[22] It is entirely reasonable to trust individual perceptions in many cases. Consider the state of affect and aesthetic theory, whereby the feelings that we experience are real things, not stand-ins for power in our lives. Your joy is not a mere effect of some structure of power; it is political in itself without the ascription of a deeper meaning.
This is where you will find the answer to what is for some readers an important question: When will this book account for “the digital?” The answer is everywhere and also nowhere. For a time, around 20 years ago, it was fashionable to debate the metaphysical status of technology, with a particular inflection from the work of Martin Heidegger, who questioned the primacy of the human world view. This is a reasonable point that could take one in any number of useful directions, as there are surely things beyond humanity at the moment. A central problem is that most people do not think like philosophers and are not thinking in discrete categories. Thus the “digital” or debates over the nature of objects seem arcane at best, tend to be tired, and are often downright silly. This is discussed in the position on technological determinism in section 1. The tools of cinema production shaped the output, so no one thinking seriously would entertain the counterfactual: What if we spent half of US gross domestic product in 1920 on making the visual effects from Star Wars Episode IV by hand on a frame-by-frame basis? Conversely, we have never seen a cinema camera set itself up to film a take; that particular object must not have a lot of “want-to,” and none of them do. Taken literally, both the extreme cultural and extreme object positions make little sense. Deconstruction would suggest that subject- and object-oriented positions are not a binary in the first place. Ian Bogost’s argument for carpentry is extremely persuasive: if one really wants to integrate objects into their practice, they should use them in a careful, intentional way.[23] As you may notice, this book is concerned with the techniques of making and could easily be seen as an introduction to new media carpentry.
Key Takeaways
- Literary theory can describe broad rhetorical/textual elements of the historically specific technology of the book. These theories are most productive when applied to the literature of a subject domain.
- Depth metaphors can conceal as much as they reveal. Close reading is an important strategy for understanding the text, but care must be taken to be sure that depth is really depth.
- Textual exhaustion is not just a problem in studies of literature; I have made the argument multiple times in this book that representational critique is no longer a primary mode of research.
4.3. Critical/Cultural Studies
Let’s begin with a trio: descriptive, interpretive, critical. Descriptive research tells us what the world is and what is in it. This powerful research is foundational work. Interpretive research describes what things mean in context. Critical research deals with the conditions of possibility for our descriptions and interpretations. This requires that researchers attend to the power relationships inherent in the scene they research. In his landmark article on critical rhetoric, Raymie McKerrow frames this in the ways that public address and argumentation scholars would argue for a universal human subject and pure rationality as justifications for the communication discipline—as if this would put communication on an equal footing with philosophy.[24] A critical view toward disciplinary conventions then would reveal that the ways that rhetoric had been positioned were an appeal to a traditional sense of power relations in the academy. Such efforts challenge the white, male, and US-centric views of the communication project.[25] Even the underlying mechanism of critique is subject to critique (see the critique of literary theory in section 4.2).[26] Communication as a field is now decades into critical work, with a long way to go.
Critical research is not a semiotic game, but an active reflexive project in considering the conditions of possibility for research. There are a few obvious initial domains for this—we can be critical of the literature that we cite or do not cite. Although AI systems promise to hold every possible bit scrap of writing in their Markov manifold, they will never output that treasure because it would give the intellectual property litigation game away. A literature review can provide credit, context, and conversation—it does not imply completeness. Scholars should not compare themselves to a fictional robot. For examples of how this critique of literature works, look back at the race arguments in the identity section 2.9—there was so much important critical work to be done on criticism. Affect theory is a particularly rich area today, as it offers a direct attempt to explain the interaction of cultural and social forms through lived experience. Critical research is empirical. The criticism of this claim would be that the number of subjects in affect research is simply too small; the retort would be that a small n is better than an “n of none,” if one does not accept that methods involving high levels of dimensionality reduction can address the human experience. Performance offers important dimensions as well. Performance research can address the ways in which identity is performed, the mode of research conduct or output as performance, as well as research on overt performances. Within the context of how performances exist, the fundamental dynamics of bodies and social roles are especially present.[27]
Tools also require a great deal of criticism. Section 3 of this book discussed a critical approach to understanding production tools; an additional critical idea on video editing tools will appear later in this section. Wendy Hui Kyong Chun’s critique of basic social science tools is profound and important: homophily was only one way that basic theory could have evolved, and there were so many more.[28] Computational communication research is concerned with tools but also with critical approaches that happen to use computers. The field of critical interpersonal and family studies has made great strides in challenging assumptions about gender, relational labor, uncertainty, and self-disclosure (reframed as self-making).[29] Critical family studies of Reddit (and the discussion of being child-free) offered a much more complex account than might be seen even in other critical work.[30]
Key Takeaways
- Critical challenges to assumptions in research are just as much a part of a successful discipline as iteration and repetition.
- The full stack of human experience is a challenge for all communication research; critical research pushes back against segmentation.
- Empiricism in critical research comes from the appreciation of the extremely high dimensionality of life.
4.4. Narrative
Stories are everywhere. Stories increase empathic awareness of others’ situations. Stories may even be the basic structure of being itself. Stories aren’t going anywhere, but the content of those stories will change. It is also important to understand that ideas from prior stories are referenced in future stories. This is called intertextuality. The play between the present consideration of a text and all of the meanings loaded into forms and tropes is important.
Joseph Campbell identified a commonly used structure known as the hero’s journey: this formalist template can be identified in many successful stories.[31] This does not mean that those stories are all the same or that there is no variation, but that rhetorical forms evolve over time, and they change with low velocity. The mythic structure is particularly enduring regardless of the choice to include vampires or Mr. Darcy.
Leslie Baxter and Barbara Montgomery provided a different approach to understanding narrative in everyday communication—flowing from Mikhail Bakhtin, they see our personal stories as a product of forces that push people together and pull them apart.[32] This is an important idea, as it can give us a way to read stories that are developing as well as those that are less than logically coherent. Reading the future is not a matter of binary logic, but of complex fuzzy interaction across space and time.
Kenneth Burke approached the use of narrative in public life through the idea of the pentad, which comprises five basic units of a drama: scene, act, agent, purpose, and agency.
| Scene | Setting |
|---|---|
| Act | Action |
| Agent | Character |
| Purpose | Reason why |
| Agency | Method of action |
The pentad provides us the foundation of a theory called dramatism, where a formal structure can be applied to any number of communication phenomena. The elements combine in pairs as “ratios,” where two elements of a story form the core logic of that story. Professor Ragan Fox at California State University, Long Beach has a spectacular full chart that anyone interested in learning rhetoric must review.[33]
Where Campbell’s approach leads us to consider the sort of stock stories that populate our worlds, Baxter and Montgomery provide resources for seeing how those stories are translated into action in our everyday lives. Burke, in contrast, provides a guide to how the ratios might be deployed in everyday life. This returns to the point of the semiotic interposition of signs and facts.
Of particular interest in recent years has been transmedia theory, which supposes that a media property would be designed to function across multiple platforms.[34] Especially notable is the theory of additive comprehension, which argues that across the platforms and over time, a character might show substantial dynamism and that the various affordances of the media where that character exists would operate co-productively. For example, the main product might be a movie, while secondary products, such as television shows, video games, or comic books, could add depth to characters. A major challenge in transmedia narrative design is that the central tent pole, to use franchising terms, of the system should not require “homework” to be understood.[35] Transmedia properties that are not truly designed across platforms, or that treat the character as static, call the theory into question. There was also hope that transmedia, interacting with convergence, would be a boon to participatory culture. The challenge was that the theory base pivoted to “spreadable media,” which, while leaning into social media strategy, lacks some of the open-ended dimensions of earlier eras.[36]
Key Takeaways
- Human experience is often effectively described in narrative terms. Narrative is a way of knowing.
- There are numerous narrative theories; narrative is not merely act structure and the hero’s journey.
- Transmedia theories raise questions about the collection of fragments and their incorporation into a narrative structure across discrete media.
4.5. Argumentation
Arguments are another special form of code. How we argue and how we evaluate arguments are constantly changing. The idea of the syllogism is thousands of years old, but ultimately the form of the syllogism is only useful in cases where a clear formula in the context of a dialectical regime of truth is possible. What does that mean? The syllogism depends on the idea that there is a truth, and that by ascertaining proper premises, an accurate statement can be confirmed.
Consider this syllogism:
Dan is a person
People have opinions
Dan has opinions
Or, to put in an abstract form:
Dan is a member of group A
All members of A have property B
Thus Dan has property B
In terms of a structural logic, this syllogism operates through a righthand/lefthand movement that is also used in some computer programming languages. The problem with the syllogism is that it can only handle one operator at a time. What we find in the analysis of complex issues of policy or value is that there may be multiple contingent identities and relationships in any given argument system. You also may have noticed the implicit use of the idea of “all” in the example—a further problem. How do we make arguments when identities are unclear or are in flux?
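The right-hand/left-hand structure of the abstract form above can be made concrete in code. This is a hypothetical sketch of my own (the names `group_a` and `has_property_b` are illustrative, not from the text); it shows both how mechanically the conclusion follows and how the hidden "all" is baked into the encoding:

```python
# Encoding the abstract syllogism as set membership (illustrative sketch).
group_a = {"Dan", "Maria"}                 # "Dan is a member of group A"

def has_property_b(x):
    """Major premise hard-coded: ALL members of A have property B."""
    return x in group_a

# Minor premise and conclusion follow mechanically:
assert has_property_b("Dan")               # "Thus Dan has property B"

# The limitation: the universal "all" is fixed in the function. A single
# exception in group A, or a fuzzy or shifting membership, breaks the
# inference, which is exactly the problem with complex policy arguments.
```

Note that the sketch can handle only one operator at a time, just as the prose argues; stacking multiple contingent identities would require a different logical machinery.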
Argumentation theorists have developed alternative models that can appreciate the complexity and contingency of real speech and reason. Stephen Toulmin’s model supposes that an argument has the following parts: claim-warrant-backing-data-qualifier-rebuttal.[37] The key to this model is the warrant, the inferential leap that connects claims and data; the data can be other claims. The power of this model is that it opens up a lot of space for nested argumentative functions, where one argument contains many others. Not necessarily the best for analysis of value claims, this model is useful for finding the sites where probabilistic claims about the future become stronger and weaker.
What is spoken:
- Claim: Deflation is worse than inflation.
- Data: Historical financial data for 200 years indicate that depressions were more likely to be correlated with deflation, and that deflation-linked events were longer and more severe.
What is unspoken:
- Warrant: The use of correlation across examples provides a reasonable account of cause and effect.
- Backing: Long-term financial data are the appropriate information for this question. In the discussion of a causal claim, a strong account of correlation can be developed as causation, especially when definitive causation is impossible.
- Qualifier: The claim was structured around the idea of worse than.
- Rebuttal: The argument is designed to respond to the idea that inflation is the paramount fear for economists.
This claim about deflation could then be situated as if it were a fact in a larger system of claims, which we could call a case. The great strength of this approach for understanding practical discourse is the multiplicity of possible warrant and backing formulations. In the introduction to this book, we established the idea of abductive reasoning, the prospect that a claim can be probabilistic and that we might decide between rival probable models of reality or facticity. For many interesting claims of policy, the collection of probabilistic claims is the most important dimension of the argument in the first place. It is not merely that deflation is worse than inflation, but that if the central objection to the development of good social policy is the risk of inflation, avoiding the risk of deflation would be a reasonable consideration. The next line of argument in the debate for this side should be clear at this point: deflation is worse than inflation.
This diagram explains the Toulmin Model by defining four written components of an argument—Claim, Warrant, Backing, and Data—and identifying common problems with each component along with ways to improve them.
A Claim is a statement submitted for judgment and may be based on value, fact, or policy. Claims may be too extreme or unclear. Improvements include adding qualifiers, removing absolute terms, and clarifying essential details.
A Warrant is the reasoning that links the claim and the data and may take the form of analogy, sign, authority, cause, generalization, or principle. Problems include gaps in reasoning, lack of supporting evidence, or using an inappropriate warrant.
Strengthening a warrant may involve adding or revising backing or selecting a more suitable warrant. Backing provides additional support for the warrant and may need elaboration.
Data includes statistics, examples, testimony, factual statements, or accepted principles. Problems include too little data, too much data, or unreliable data, with improvements focused on gathering more relevant information.
The diagram emphasizes that using “even if” statements can strengthen all parts of an argument.
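As a study aid, the Toulmin components described above can be sketched as a simple data structure. This is a hypothetical illustration of my own (the class name and fields are mine, not a standard model), populated with the deflation example from the text; the nesting at the end shows how one argument can serve as data for a larger case:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ToulminArgument:
    claim: str             # statement submitted for judgment
    data: List[str]        # support; each datum may itself be a claim
    warrant: str           # inferential leap linking data to claim
    backing: str = ""      # support for the warrant itself
    qualifier: str = ""    # scope of the claim ("worse than", "usually")
    rebuttal: str = ""     # anticipated counter-argument

deflation = ToulminArgument(
    claim="Deflation is worse than inflation.",
    data=["200 years of data correlate depressions with deflation, and"
          " deflation-linked events were longer and more severe."],
    warrant="Correlation across many examples gives a reasonable"
            " account of cause and effect.",
    backing="Long-term financial data are appropriate for this question.",
    qualifier="worse than",
    rebuttal="Responds to the view that inflation is the paramount fear.",
)

# Nesting: the whole argument becomes a datum in a larger case.
case = ToulminArgument(
    claim="Avoiding deflation risk is a reasonable policy consideration.",
    data=[deflation.claim],
    warrant="If inflation risk blocks good policy, deflation risk"
            " deserves at least equal weight.",
)
```

The design point is that `data` is a list of strings rather than a single value: arguments are almost never singular, and the nested-case structure is what the model calls a case.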
For a full discussion of warrant structures, take an argument theory class in your university’s communication department.
Just as this model is strong for evaluating systems of probabilistic policy claims, it tends to be weaker for evaluating aesthetic and value arguments, but there are ways to bring those in, especially through debates over weighing mechanisms. Another site for critical intervention is not the claim or the data, but the warrant. G. Thomas Goodnight argues that legitimation in argument functions through the inference (warrant) and then critically through the backing, those arguments which might distinguish between warrants.[38]
Luc Boltanski and Laurent Thévenot offer a theory better suited to value claims, based on justifications.[39] Claims of value happen in different worlds of value. In this model, we are not judging policies but values and aesthetics, given their capacity to work with established regimes of value. This theory gives us a way to deal with the empirical dimension of a values discussion, which is more than the stark utilitarian calculus that accompanies policy. It is in the alignment of the value and the test that a legitimate transfer of value is established. Illegitimate tests are those that attempt to use the wrong values and tests for a situation. It is conceivable that one might argue that something belongs in a different world, thus becoming subject to a different economy of value.
This model of value hinges on the agreement of people in a community to the development of a relevant test. For example, in the world of celebrity, the appropriate test is popularity. The actual means by which the tests function—often aesthetics—are not easily contested and are not of particular interest. You cannot declare: “Fallacy, you felt wrong.” Legitimate orders are those that can be justified.
The standards by which we evaluate argument and the ways that ideas are moderated are always changing. When argument moves beyond a single simple claim of fact, argument itself is contested. The power of the code of argument is that we freely move between the levels of dialectical and rhetorical judgment: the ideas of logical validity and desirability continuously play into each other.
In practical terms, a justification must be provided. It must be a claim that declares a world (a setting), with a polity (a form of organization appropriate to that world), which then has a value (economy of worth), and that can be judged with an appropriate test.
Example: Who is the more important pop star today, Taylor Swift or Cardi B?
This is the world of fame. The polity are those who are a part of the larger manifold of the recording industry. Importance is judged by recent chart performance, the test being Billboard Hot 100 performance. In calendar year 2017, Cardi B outperformed Taylor Swift. Swift’s previous album was a commercial disappointment, one of the fastest-dropping ever. Furthermore, “Bodak Yellow (Money Moves)” by Cardi B displaced Swift’s “Look What You Made Me Do” at the top of the chart. We could thus reasonably say that Cardi B was the more important pop star based on the rules of this community in 2017, the economy of worth, and the appropriate test. If we were to shift the temporal criterion, Swift’s aesthetic shift in Folklore and Evermore drove the periodicity of the Eras Tour, allowing Swift to become perhaps the most famous pop star ever.
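The world, value, and test structure in the example above can be sketched minimally in code. This is an illustrative sketch with names of my own invention, not anything from Boltanski and Thévenot; it shows only the core move, that a verdict is returned only when the value and test align with the declared world:

```python
# Illustrative sketch: a justification is legitimate only when the value
# and test match the world's economy of worth. All names are hypothetical.
LEGITIMATE_TESTS = {
    "fame": ("popularity", "Billboard Hot 100 performance"),
}

def judge(world, value, test, verdict):
    """Return the verdict only if the value/test pair fits the world;
    otherwise the test is illegitimate ("Fallacy, you felt wrong")."""
    if LEGITIMATE_TESTS.get(world) != (value, test):
        return "illegitimate test"
    return verdict

result = judge("fame", "popularity", "Billboard Hot 100 performance",
               verdict="Cardi B was the more important pop star in 2017")
```

Applying a test from the wrong world (say, judging fame by piety) falls through to "illegitimate test", which is the sketch's version of arguing that something belongs in a different economy of value.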
Key Takeaways
- Argumentation theory concerns the ways that individual arguments function, cases are constructed (arguments are almost never singular), and burdens for judgment are set.
- Argument is a central construct for the study of public address, which is rhetorical; thus it is not a list of dialectical fallacies.
- Contemporary theories of practical argument are situated, allowing application of argumentation theory to contexts where conventional evidence may be less relevant. There is a deeper question of whether a field-invariant theory of argument exists.
4.6. Ethnography
As a practice of tracing the everyday performance of individuals, ethnographic research, particularly fieldwork, sees the body and experience as the basis for analysis. Researchers in this tradition conduct interviews and spend time in the field. Rigor comes from demonstrating that one was really embedded in the system of meanings used by a group of people or in a particular context. To maintain a structured system of thinking, the researcher continuously produces field notes, which serve as an intermediate document of what things meant to the individual at a particular point in time. These structured notes then allow the researcher to remember key details to later construct the finished account. Finalized research then is the product of a review of these notes and a reconstruction of the symbolic system. This research is not published as a sort of abstract formula, but as a rich account of the meaning itself. Great ethnographic work may read more like a novel than a lab report.
Dwight Conquergood, a major ethnographer of communication, in his development of a critical ethnographic practice, called for the ongoing reevaluation of the sources of authority for those who would claim to know what meaning is in a culture. The key to this performance ethnography is constant self-reflection: the researcher needs to understand how their mode of presenting themselves and the world produces their own academic authority. Descriptions of hardship often seem to provide ethnographers credibility. Why such an intense struggle? Meaning is owned by a group; it is not hard to find many troubling examples of cultural appropriation, where information is scooped up and resold as a product with little remuneration of the original producers. More instrumentally, there could be multiple explanations of the same phenomena. Ethnographers have many interesting strategies for demonstrating rigor and building credibility. When these are well deployed, and when the account is ethical and engages the community, the results are profound. It makes sense why large corporations like Microsoft pay for academic ethnographic research. These are the insights that can make a game-changing product.
It is the power of reflexive ethnography that is often mistaken as the seemingly magical dimension of design. If the designer is truly, reflexively integrated into the use context of a thing or idea, their designs will resonate in a powerful way. At the same time, it becomes comically excessive when people who are not embedded claim the same sort of authority or make the same claims to meaning as those who have engaged. But wait—notice that in the creation of authority for the ethnographer in the last sentence, I have already deployed a rhetoric of effort and time, as if that work somehow gives a person the right to make authoritative claims about meaning simply on the basis of duration. Would it make the claim more powerful if I said they experienced hardship?
Ethnography is a powerful approach, and people who have the patience and the reflexive sense to do this well are rare. At the same time, this is not a special property of some people, but a skill that is refined over many years. It is entirely possible to do meaningful work in this area that does not rise to the level of publishable ethnographic research. Less intense than book-length ethnographic studies are rhetorical field methods, where scholars go to rallies and talk to members of the public to find how they are interpreting political messaging.[40]
Within ethnographic methods is the idea that the researcher in their work produces extensive field notes. These notes become an intermediate document that later helps to structure the researcher’s writing. These notes can include key quotes, images, descriptions, and texts from the field. Typically, researchers produce great volumes of notes that are then organized thematically, often with the use of computer software. In this sense, the researcher is producing a great deal of data, and because the researcher is embedded in the process of data production, the dimensions and depth of the data can develop organically. The goal in the key areas of the study is “thick description.”[41] The insights the ethnographer produces have high dimensionality, which is necessary for communicating one’s findings in a research document. The result of a years-long ethnographic investigation will not be a single paragraph addressing a single hypothesis.
A great study using this method is It’s Complicated, by danah boyd.[42] In this book, boyd interviews many teenagers and their parents to understand what social media meant to teens in the late 2000s, especially as MySpace gave way to Facebook as the primary social media platform. What comes across so clearly in the book are the ways that the technology is used relationally. It isn’t the cell phones that are controlling the people, but people using the phones in their relationships, or blaming their own relational challenges on the devices. Being structured as a book rather than an article allows the reader to have a rich sense of the worlds that boyd encountered, and the depth of the people and their challenges.
Key Takeaways
- Ethnographic field methods offer extremely high-dimensional data.
- Ethnography extends across the research and writing process; it can’t be easily reduced into the format of other research methods.
- Such methods perform authority through intensity of labor.
4.6.1. Conversation Analysis
While closely related to ethnographic methods, conversation analysis supposes that there is an underlying system of conversation structures, which are themselves the core substance of communication research. Conversation analysis draws its strength as a social science from the ways that multiple studies of conversations converge.[43] In this section, I am specifically calling attention to conversation analysis as a distinct contribution to the understanding of speech rather than discourse analysis, which would include conversational features placed in a larger context.[44] Conversation analysis generally offers three kinds of insights: performative roles, the ways that people enact those roles, and finally the pragmatics of how those conversations work (beyond the subject-focused dimension).[45] Consider Alan D. DeSantis’s study of the conversational dynamics of cigar shops, where the roles of the smokers and the proprietor of the establishment were overtly situated in the physical setting of the shop to understand conversation dynamics.[46] Jane Edwards argues that there are three key elements of that documentation: transcription, coding, and markup.[47] Transcription refers to the base recording—who said what and in what sequence. Coding applies discrete qualitative analysis of particular utterances. Markup refers to metadata encoding of the information. There are highly specific transcription methods that include special information and marks for conversation. Transcripts do not look like a movie script; they include all sorts of other marks that refer to interruptions, sounds, and themes. These are precise documents.
Again, per Jacobs, the most common elements included in any transcript system include “words, units of analysis, pauses, prosody, rhythm and coordination, turn-taking, nonverbal aspects and events.”[48] Conversation analysis gets at the critical elements of living with other human beings, something that media researchers, especially future-oriented ones, sometimes lose track of.
Jenny Mandelbaum et al.’s “Micro-Moments of Social Support” is a study of how families offer social support during meals, for example by offering more food even when there is no indication that anyone wants it.[49] Of particular note are the extracts like “another egg,” or “hot sauce,” where the analysis holds closely to the language of the families and the ways that they might engage in support, without directly voicing it.
Key Takeaways
- The units of analysis can be micro-expressions, pauses, or tonal shifts.
- Precise descriptive schemes for conversation exist.
- Communication research is after all about forms of talk.
4.7. Law
Courts rely on multiple sources of authority for producing interpretations that resolve disputes.[50] There are times when the interpretive coherence of the legal system and the resolution of a particular dispute are at odds. These disputes come in many forms, including enforcement of contracts, criminal matters, and defamation of character, among others. Judges have multiple theories that help them interpret the situation. Laws are written by legislatures. When empowered by legislatures, executive agencies consult with experts to write regulations. For decades, the courts deferred to the expert judgments of those agencies, but in recent years, the federal courts have preferred to insert their own preferences in place of agency experts. Constitutions both for countries and subnational units offer key guarantees. Constitutions offer a set of counter-majoritarian protections. Treaties, especially for intellectual property, also may factor into decisions. Finally, common law is the accumulation of centuries of tradition in how courts operate in Anglo-American courts. This is why there are arguments that lawyers make in court that are not included in the text of the laws, constitutions, or contracts in question. An example of this is “laches,” which applies when a party delays action in a manner that unfairly disadvantages another. If you allowed someone to infringe on your trademark for many years, you would not be permitted to then file suit decades later.
Precedent is set by the Supreme Court of the United States (SCOTUS), which typically hears about 90 appeals each year. Cases are selected by the justices through the grant of a writ of certiorari. The court only evaluates questions in the cases before it, and the court has original jurisdiction only in a handful of cases involving particularly thorny matters between state governments.[51] When we think of this in the context of the design of state institutions or speculative civics, the courts in this sense are a trailing institution. They wait to take action. Within the courts, precedent shapes the interpretation of the law. Precedents bind vertically, meaning that a court above makes a determination that should be followed below. This is called stare decisis.
Generally, the Supreme Court is unlikely to overturn an existing precedent, opting instead to distinguish the current situation from that which formed the precedent in the first place. The law as such is the current interpretation of text and precedent. Ryan Black and James Spriggs have found that depreciation occurs within a 20-year window. It is not a question of all meaning over time, but how the courts have ruled recently.[52] If a case is not cited within the window, it is not likely that it will be included in the current understanding of the law. It is important to find a lawyer who is active in a domain of practice to understand laws in any particular area. A tax attorney may not be up to speed about current cable franchise law, and in areas where the court has been inactive in recent years, results may be far less predictable if half the justices on the court do not have expertise in a certain area. Lawyers use many resources, including reference books, like the restatement of law for a particular area, to understand what the law actually is.
While it would be nice if courts ruled with entire consistency on the truth of the matters before them, they are often constrained by circumstance. Often, courts decide via a balance of equities, meaning that each side wins some element of its argument. Why isn’t there a single answer or book with clear directions? There are many situations and different sets of facts. Much like the inability to directly translate the measurements of scientific devices into a policy finding, the analysis of the desirability of a result is difficult to translate into an abstract legal rule. A central tenet of critical legal studies, a subdiscipline appreciated by communication researchers, holds that the law itself is indeterminate, meaning that the text can be interpreted to have a wide variety of meanings. This is not to say that the legal text has no meaning, but that if there is an interpretive question, it is more likely that the question will be resolved in favor of the side with more power. Some versions of critical legal studies see law as an aesthetic form itself, continuously reproducing its own power through various mechanisms of performance and metaphors.[53] Closely related to and derived from critical legal studies is critical race theory, which contends that the ostensible race neutrality of the law sustains racist power relations. Indeterminacy in this sense allows courts to ignore racism rather than actively do racist things—this is why race is understood as a system of power relations rather than a thing any individual might be doing. Advocates should then actively foreground the good of the Black community in their legal work rather than hoping that neutrality yields results.[54] Feminist jurisprudence offers insights into the ostensible gender neutrality of law, which is not neutral at all. This is especially true in the context of debates over pornography and sexual harassment.
Catharine MacKinnon’s work on pornography is especially important: if pornography is a mechanism where sexism is produced in a way that restricts the capacity of women to enact speech, then restricting pornography would be good for freedom of expression.[55] These scholars actively considered each other’s work, noting that identities intersect and that forms of power that include race, class, gender, ability, and other dimensions of identity produce unique forms of oppression.
Law is media. It exists as a special text, a set of performance protocols, a system of published directives that organize time and space, and even a popular form.
4.7.1. Freedom of Expression
The First Amendment to the US Constitution reads:
Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.
It confirms Americans’ right to distribute information, practice religion, and organize movements. If you have been doing your reading, you likely know many reasons why this is good. At the same time, you will notice that this does not apply to nonstate actors. If anything, private organizations have a First Amendment right, in the context of the freedom of the press, to make editorial choices. It is not that we have a clear imperative to speak more, but that communication researchers are interested in the careful balancing of public and private restraint and editorial judgment.
You may notice that this is a limited right, as it does not protect your right to say whatever you want without consequence, and it prohibits both compelled speech and state speech. There are powerful questions here:
- Should we protect lies?
- What do we do about hurtful speech?
- Should it be acceptable to say true things, even if they are hurtful?
- Does the truth actually win in public debate?
- How does speech intersect with intellectual property?
- What is journalism, and should it have special protection?
- How should states respect the expression laws of other states?
- Should well-meaning restraints on communication be struck down if they both stop bad communication and chill good communication?
Freedom of expression is a major topic in communication and for the future. Consider these special questions:
- Who should be responsible for the bad communicative acts of AI systems? What if an AI lies about you, is otherwise hurtful, or offers capacity to a third party to create a bot swarm that ruins your reputation?
- Under Section 230 of the Communications Decency Act, the balance of equities in defamation cases means that platforms that post information from third parties are not liable for defamatory speech by those users. Should AI creators receive the same protection?
4.7.2. Intellectual Property
At Oregon State, intellectual property (IP) law is covered extensively in the “Introduction to New Media” course. Intellectual property law includes:
| Area | Topics | Protection |
|---|---|---|
| Copyright | The protection of texts and authorship | Life of the author plus 70 years; statutory protection for all registered texts ($250,000 per violation) |
| Patent | Inventions that are useful, not obvious, and correctly filed | Injunction, damages |
| Trademark | Derived from the common law; protection for the uniqueness of a brand or mark; nonexclusive. | Injunction, damages |
| Trade Secret | Protection against theft of confidential information; criminal law | Criminal penalties and action to redress theft |
These regimes of law have distinct sources. Copyright and patent law are derived from Article 1, Section 8 of the US Constitution. Trademark is a common-law protection. Trade secret is primarily a matter of uniform state law with new federal law and is primarily criminal law related to theft.
To be actionable under copyright law, one must have made a copy or derivative work that exists for longer than a transitory period. This means that parts of another work could be included in your work, and despite the originality of your new work, you could be sued. The defense in this case is called “fair use,” which supposes that there are legitimate uses of copyrighted material, such as parody. There is no fair use for patented subject matter. Known as the patent bargain, the idea is that the invention is disclosed in exchange for the continuation of the process of science and improvement. Thus new inventions that incorporate a patented invention must pay royalties during the 20-year term. In some cases, the period may be extended to make invention in a critical area more rewarding.
Patents were common in our area (new media) until the decision in Alice v. CLS Bank, where the Supreme Court roundly rejected patents for applications that attempted to computerize existing business processes. It is unlikely that you will see business method patents in our area again. At the same time, you are likely to see patents protecting many technologies that we love. Without artificial scarcity, prices in this sector would drop below viable levels.
Consider the following special cases:
- In Oracle v. Google, the Supreme Court found that Google’s use of elements of Java to make the Android operating system function was fair use. This decision balanced the equities of the parties so that infrastructural uses of computer code would not be copyright infringement. At the same time, this legal finding substantially decreases the value of an API, which could chill investment in those products. What if the balance of equities was wrong, and those who produce lower-level code should always be in control of what eventually gets to users?
- Many AI companies now license training datasets and data, which were once free via APIs and other sources and are now strictly guarded. These issues will be considered in section 4.22. Here the question is about when a common intersection between datasets becomes an abstract idea. In Baker v. Selden, the Supreme Court found that expressions, not ideas, were the subject of copyright law. At what point does the massive collection of all examples of a form reach the status of pure idea and not mere expression? Can an AI model have a training set so large that it is no longer subject to copyright law?
Key Takeaways
- The law is not a mechanical system; it is a complicated political arrangement.
- Dialectical methods for understanding law produce a continuum of conclusions between positions rather than definitive answers.
- There are multiple distinct domains of law that are essential for the future media student. A traditional introduction to mass communication law approach may not be enough.
4.8. Ethics
To quote former Oregon State media ethics professor Bill Loges, “law feeds on the corpse of ethics.” Often university administrators get the idea that they might merge media ethics and law into a single course to save money and time. This misunderstands both law and ethics. Law is a construct that allows us to determine what is permitted and under what conditions. Ethics concerns the duties and principles of practice. As a collection of professional practices, journalism ethics are fairly robust, at least inasmuch as they tend to be complete. New media ethics, while including journalism ethics at some points, are substantially broader and undertheorized. Large firms in the industry have ethics boards and have contributed to ethical research through large grants to ethicists.
Mark Andrejevic offers a challenge to this ethical framework for new media as superficial and a distraction from a consequentialist consideration of the common good.[56] Ethics boards are merely a distraction from the deeply unethical conduct of these companies, and worse, they suppose that the domain of ethics is the decision of corporate actors themselves—as if the choice to be good or bad is held by a CEO and not by people generally.
Over time, these large corporations have dismissed their ethics boards, and the flavor of ethical theory favored by the AI and crypto industry, effective altruism, has fallen out of favor.[57] This is not to say that the base idea, that it could be ethically desirable to act locally on tractable, ignored issues, is bad; surely there are many examples of when this is good. When translated into an ethical program for new media generally, it fails. Long-termism defers accountability into the indefinite future, and its focus on AI alignment, cosmic ray bursts, and space colonization consists of sci-fi fantasies: not rugged empirical considerations but aesthetic ornaments.
Ethical research today is returning to the basic consequentialist framing that Andrejevic suggests. In a landmark article on computer science ethics, Helen Nissenbaum puts accountability forward as the key to any tech ethics.[58] The fundamental challenge of new media ethics is that our systems are like the brooms in the movie Fantasia: they have a mind of their own. A computerized system becomes the perfect mechanism for translating human action into a new form, which is then ostensibly inhuman. Accountability is essential, as we must have some mechanism to assign responsibility for consequences so that measures to prevent, pay for, and redress impacts can be meaningfully calculated. Any concept of a utilitarian consequentialism without an underlying criterion like civics or accountability is doomed to produce horrific outcomes by failing to account for the world and how it exists. In the worst case, a utilitarian conception of new media ethics without a criterion for understanding the world becomes a pure negative utilitarian perspective, which is really a call for immediate human extinction—after all, the only way to eliminate suffering is to eliminate people. Rejecting such incomplete ethical systems is an important step in theorizing ethics in the first place. More importantly, prior to the calculation are the aesthetics of that framing.
Another important emerging rhetorical framing for new media ethics comes in the rejection of “human crumple zones.”[59] For boyd, following M. C. Elish, we have fully inverted our ethical imaginary so that human actors take the blame for the bad actions of the designers of new media systems. Society then accepts the outlandish claims of various tech titans to be “solving” any number of social “problems” produced by neoliberal capitalism, described as if they were simply engineering faults. Only a bad person, on this view, would stop the magical machine from solving global warming or ending world famine. There is a certain sort of idolatry lurking in the ways that the new media industries perceive their machines as being above people and the world. The tech press then celebrates the romantic geniuses who build these world-destroying mechanisms, profiling them as if they were celebrities. But then, in the final, essential moment, the press divorces the titans from any accountability for their actions. The crumple zones are not at the top—they are everywhere else.
Other ethical frameworks have surfaced in recent years. Virtue ethics supposes that one should act like a virtuous person. This tends to beg the ethical question: What if you pick a bad person? This also fails the Carly Simon test: you are so ethically vain that you think this is not about everyone else. Contractualism supposes that one should act in a way that they might reasonably justify to others, so that those others might accept that their justifications are legitimate, not that they agree with them.[60] Contractualism as an ethical position can be difficult to scale up, as it is fundamentally an interpersonal ethical theory. That said, there are ways that one might scale up this ethical theory inasmuch as they imagine the ways that their policy prescriptions affect others. Much like one can consider distant others and legitimize the humanity of their perspectives, contractualism can offer a great deal for new media ethics, as it considers deeply the reasons why people take action, as well as the rhetorical framing of their action, not merely a facile consideration of consequences. As a deliberative ethic, contractualism can scale up not as a determinate maxim for evaluating any particular action, but as a process of becoming ethical, of sustaining communicative engagement with others. Even if those others are not on hand, it is possible to imagine them and what they might not reasonably refuse.
Levinasian ethics would insist on the perception of the other as it exists in our encounter with images of others. This is not a new idea; the potential for images of others to become ethically catalyzing has been well documented.[61] This extends to the ethical consideration of iconic images and the ways that mediated encounters can produce the conditions for democracy.[62] Margaret Schwartz overtly situates this in the ways that media transmit trauma and how our feelings are produced by exposure to others. Mediation is not separate from us; it is us.[63] This allows “theorizing the edge of communicability, its violent irruptions, and unbridgeable chasms.”[64] In this sense, media are ethics. This ethical theory is particularly useful as a challenge against ontologically derived ethics like those in ecological or technological critique. The power of this theory is that it challenges assumptions about where ethical knowledge might arise. (Note that the author did not even present a purely deontological extralinguistic theory of ethics.) Consider differences in communication between people of different abilities; the assumptions of reciprocity and mutuality in many ethical theories do not account for the richness or ethics of autism.[65] Or, in more abstract terms, more communication is often not the answer. The response to traumatic mediation might be to react defensively to wounding, ending up in a more reactionary place than where we started.[66] Perhaps instead of being motivated by the images in any direction, mediation might simply leave the public numb.[67] New media ethics derived from the possibility of a truly unlimited alterity might find that AI is an actor—which is so far wanting, as a large language model is not nearly as alien as it might seem.[68] Chasing what seems to be alien is not the answer either, as rather than existing beyond the realm of the symbolic, we find a political-aesthetic decision about what is alien in the first place.
In short, there is always some other form of immanent judgment hiding in the decision to limit the horizon of the other.
Abeba Birhane, senior advisor in AI at Mozilla; Jelle van Dijk, professor of engineering at Twente; and Frank Pasquale, professor of law at Cornell, wrote “Debunking Robot Rights,” an article concerned with how the rights of robots are presented as a new ethical framework for new media.[69] The argument for robot rights is addressed in detail and then placed in conversation with the central ethical points of the debate through a consequentialist ethic based on accountability. In this sense, the rights of robots, rather than transforming the paradigm of Western governance, are an exemplary mechanism for corporations to avoid ethical responsibility for their actions. Fundamentally, are robot rights a way to vastly expand legal standing, or are they a way of radicalizing corporate personhood to shatter what remains of ethical deliberation? Given a reading of other jurisdictions where these have been considered, the authors come to the conclusion that robot rights are a corporate fiction, not a progressive ethical theory.
Key Takeaways
- There are multiple approaches to ethics and media, offering distinctly different answers to important questions and calling for considerable attention.
- Some ethical positions may be attempts to justify a predetermined set of actions.
- Media ethics are substantially larger than journalism ethics, which are not extensively covered in this volume.
4.9. International Relations
As an academic discipline, international relations (IR) is a diverse collection of approaches and methods, ranging from computational social science to communication and beyond. The focus on international relations here, as opposed to “global” media, is intentional. IR is expansive and often seems much like communication in breadth and methodological eclecticism. The United States had “kicked the Vietnam syndrome” in the first Gulf War, which, combined with the collapse of bipolarity, led to a decade of good feelings. A particularly popular but inaccurate slogan was that no “two countries with McDonald’s” had ever fought a war, as if consumer culture could resolve all human differences. Negotiations for the Rome Statute of the International Criminal Court would have further created a structure for human rights beyond state sovereignty—this was a heady time. Major international relations books, aside from The End of History and the Last Man by Francis Fukuyama, included The Reluctant Sheriff by the diplomat Richard Haass, which supposed that in the new world the United States would lead occasional police actions against rogue states (war itself was over), and Soft Power by Joseph Nye, which argued that access to US products and cultural institutions was as effective as military action.[70]
The September 11 attacks and the War on Terror that followed ended the illusion of peace (the use of economic sanctions to bring starvation and suffering in lieu of hot war is farcical), as the costs of maintaining transnational trade networks mounted and the internal contradictions of neoliberalism sharpened. This period saw the hollowing out of the US economy (via offshoring and decreased social reproduction spending), followed by a financial crisis in 2008. The final act of this era was the idea that “Internet suitcases” would allow soft power to topple dictators around the world and that communication technology itself would be the agent of change, as if liberal democracy were some kind of default condition that would flow from communication technology itself.[71] The world is now mired in rising nationalism, trade wars, wars of territorial aggression, and a rapidly disintegrating global cultural milieu. Thirty years ago, cultural policy was a key instrument for those on the margins to resist the power of the neoliberal empire. Today, that very empire wields cultural policy to drive the production of motion pictures in the United States. There is no next act for Hollywood; the question now seems to be where new centers of cultural influence will appear after the collapse of US cultural hegemony. Technical capacity-building driven by offshoring by Hollywood firms is paradoxical in this respect—the actual capacity to make media is widely distributed.
There are a few major frameworks that we should consider for understanding the future of media. Realism supposes that the international system is anarchic and that states act to maximize power in that system.[72] Realists were skeptical of ideas like soft power and unlimited international free trade; these seemed not to meaningfully maximize the power of states and would thus lead to severe negative consequences, with leading realist John Mearsheimer taking something of a victory lap in the early 2020s as transnational institutions collapsed.[73] Criticisms of realism are tuned to the extremely narrow view of the actions that states take (or do not take). Worse, realism would justify human rights abuses, as there are no human rights, merely the powers of states. Liberalism, in contrast, exemplified by democratic peace theory, supposes that there is an underlying unity that is then expressed through trade relationships.[74] International institutions and especially free trade are the heart of liberalism. More aggressive forms of idealist liberalism may even justify military intervention to promote democracy. If a quick regime change could see that a country joins the international order, it would be in the interests of all parties for that action to be taken. Of course, the relative failure of regime change to produce stable democratic societies speaks to the difficulty in producing order; the same goes for the disturbing trend of democratic backsliding in countries attempting regime change. Given the results, it is difficult to see that there are a priori answers flowing metaphysically to structure the international system.
Constructivism argues that the international system is a layered system of discourses, which could be studied through a variety of means in communication or possibly through other social scientific means.[75] Realism and liberalism then are rhetorical frames that scholars and policy makers use to persuade audiences about the nature of the international system. Constructivism is the most familiar for communication researchers and is likely how we think of the world in the future context. Feminist critiques offer a profound alternative to the masculinist norms of international relations.[76] Gender performs power, shaping access to resources and authority. War is gendered and is co-productive with an international system that exposes people to harm differentially on the basis of their gender, and which relies on the promotion of a particular worldview for the justification of a system of conflict.[77] Representations of women are used in what Lauren Wilcox calls a “protection racket,” where the patriarchal system offers some security but at the cost of the promotion of the world order, which causes violence and destruction in the first place.[78] There are other transcendental approaches, such as the new international, the idea that a socialist link to overcome material deprivation could connect people around the world by contesting the system of binary identities that divide self and other, neighbor and stranger, and beyond.[79]
Communication is in an update cycle. There was a great body of research on international relations during the Cold War; it was in fact a recurring topic, especially when weapons of mass destruction were involved.[80] What does this mean? Communication has a lot of older literature that does not describe the current conditions of the international system and its collapse. There is a lot of work to be done reinventing constructivism for this era, as well as reimagining what peace is without relying on what might be outdated perceptions of which actors are good and bad. This is entirely compatible with statistical international relations (a scientific mode of testing IR theory). No longer are we simply critiquing the illusion of a single, global network—we are learning to live with and address the splinternet.[81]
Key Takeaways
- International relations was once a major domain in communication, but it faded after the end of the Soviet Union.
- This is an important future domain for scholarship and engagement; global thinking did not meaningfully replace the international system.
- There are multiple paradigms for understanding IR that will offer different insights for communication researchers.
4.9.1. War
The Russian invasion of Ukraine is for Kyle Chayka the “world’s first TikTok war.”[82] Ukrainians were in a unique position to cover the attack and successful resistance live on the platform. As the public argument for and against the war unfolded, the Ukrainian side was in a stronger position, as the Russian position focused on a handful of arguments to be made via intermedia agenda setting that were then vulnerable to the direct and real sense of public address produced by the stream of local posts.[83] Psychological operations are very real; one of the best ways to win a fight is to not fight at all. The point here is not merely that media affect decision making in war but to take the next step: the Ukrainian effort has been successful not just because it was well argued, but also because it physically succeeded, deploying drone weapons and tactical anti-aircraft missiles to stop the Russians.[84] Anywhere with Internet access, provided by a satellite constellation, could be a combat theater with no risk of lives, for one side at least. Satellite constellations are a relatively new technology, relaying information from low-orbiting units back to ground stations that then connect to command centers.[85] As time goes on, strategy will evolve. Attacking the terrestrial Internet infrastructure (base stations and undersea cables) would overstress a constellation, and directed-energy technologies could substantially reduce the cost of direct attack on a constellation, ending the existing deterrence stalemate.[86]
Critical to this conflict are the means by which we observe and control war. For Jeremy Packer and Joshua Reeves, this places media theory at the center of modern warfare.[87] Some defense sources suggest that despite technological advances, the goal is to keep humans “in the loop.”[88] Packer and Reeves disagree: once ubiquitous connectivity is possible in the battlefield, an adequate industrial base can then deploy drones almost disposably. Rather than subsisting on a limited number of expensive weapons controlled by equally expensive people, what war unlocks is the prospect of dozens of weapons per warrior. This is like the shift from Taylorism to Fordism, from a single manager for one worker to one manager for dozens. With AI, the systems can govern themselves in selecting targets and executing their mission priorities. AI becomes a force multiplier that allows a smaller number of people to produce a great deal of effect. Right now (when I am writing this), this appears as a successful David (as opposed to Goliath) strategy; it would stand to reason that a country with a much larger industrial base and population could deploy these technologies with even more ferocious intensity. The single most important technological bottleneck for war is the same as we have for imaging and everything else: photolithography and the most strategic technology, ASML’s extreme UV chip production system.[89]
Practically, this is forcing countries toward AI sovereignty, expanded communication access, and military procurement. No country serious about protecting its sovereignty can exist without a strong national AI program and control of advanced capacities on local computer hardware. Hardened networks for control of swarms that can be deployed via higher-energy carrier waves can offer a decided defender’s advantage in a drone war, as the ultimate target is not the drones themselves but their command, control, communications, computers, and intelligence. It also makes sense that great powers are invested in developing hypersonic weapons as alternatives to existing systems that will not survive war against swarms of drones.
Key Takeaways
- Media technologies are the command-and-control systems of war as well as the apparatus by which assent for war is manufactured.
- Ubiquitous access to network bandwidth has transformed hard-power military operations.
- The integration of AI into warfare poses further risks to human survival in the longer term.
4.10. Industries and Political Economy
Ownership of the means of communication is essential for understanding power in a society. Initially, this is easily understood through the work of Bob McChesney and the distinction between commercial and noncommercial media.[90] Elon Musk’s purchase of Twitter dramatically changed the organizational behavior of that firm. Similarly, after years of autonomy, Jeff Bezos has changed the editorial direction of the Washington Post.[91] This is the easy part of the story: flamboyant personalities doing deeds are easy to critique; bland corporate mechanisms are harder. Nick Dyer-Witheford argues that the expanding structure of communicative capitalism pulls all activity into its gravity well, thus extending coordination and control across almost all media activities.[92] As these cycles run faster and hotter, the underlying human space that made them function in the first place erodes: workers are deskilled so that they no longer have any particular craft value or pride, emotional workers like ASMRists are alienated from their own sense of warmth, and finally, via AI, we are fed back our own symbolic processes altogether.[93]
If you are getting a certain flavor of Marxism here, your tastebuds are accurate. Of particular interest is the version of Marxism that comes through cultural studies, which is a Marxism without guarantees.[94] We can know that creators are alienated from their labor, that work is increasingly undemocratic under monopoly control, and that the entire system appears to be increasingly unstable and heading toward some kind of authoritarian capitalist blend, what Pollock termed “state capitalism.”[95] All of that said, this does not mean the capitalist system is going to collapse. If anything, it further rationalizes the crisis and moves forward, profiting by solving the problems it created. The moment of the United States in 2025 speaks to this as well—the systematic decoupling of the United States from the world economy is not an endorsement of capitalism as we knew it. A few key concepts help in understanding the organization of business today. Neoliberalism is the idea that the capitalism of old has been twisted to care only for prudence, when capitalists in the time of Adam Smith were far more humane.[96] Consider the destructive impulses of former GE CEO Jack Welch. In a never-ending quest to maximize shareholder value, one might destroy their own company—shareholder value could just as easily be realized to a greater degree next week—but alas, under neoliberalism we must never defer gratification. Taylorism saw the implementation of scientific management and the measurement of labor time. Fordism optimized the conditions around the deskilled worker, with minimized on-hand inventories via just-in-time supply chain management (thus the need to establish a unified time; see section 1.6). Post-Fordism is the current state of industrial organization. In post-Fordism, workers are asked to manage themselves and ideally to provide all their own tools.
Our creative industries are thus full of just-in-time knowledge workers who are contracted by firms that exist as an ethereal nexus of contracts with a stock price.[97]
Media industries theory is a specific collection of media concepts that are intended to explain the organizational behavior of media firms rather than to provide a comprehensive account of labor as a historical form.[98] Of interest in industries theories are the corporate discourses that regulate the affective position of business leaders. Thus there is great concern for changing risk perceptions under the conditions of vertical integration.[99] What one studies are trade journals, negotiations, and industry lore. Essential theoretical core pieces include the idea that “nobody knows” what will be a hit, meaning that industry professionals are always explaining why they should take considerable risk with what is often borrowed money.[100] Further, they are confronted by the ars longa principle, that they compete with all goods ever made in their market space, not merely what is out today. Media firms are a “motley crew” with a variety of professionals both above and below the line, all of whom bring different management challenges.[101] This extends to the ways that we study labor; close investigations of circumscribed agency paint a different picture of the workplace.[102] Media industries theory will offer a compelling explanation of why Netflix is organized around latent manifolds of vibes.[103] Media industries theory will not offer an argument for a revolution.
Media geography offers highly contextual readings of how particular industry clusters operate in their regional contexts, especially through “media capitals” where regional industries tend to focus.[104] These theories are interested in the ways that media capitals form and how media move through them. We are interested in Hollywood, Silicon Valley, and Shenzhen—not St. Paul, Minnesota, or Madison, Wisconsin (major sites of accounting and health care software development). The center of the influencer universe in 2025 is Miami, Florida. Serra Tinic’s work on the development of the grid in Vancouver is an excellent example, as the question of how Vancouver became a media capital involves much more than the offer of some tax breaks.[105] Michael Curtin’s account of the rise of the Chinese film industry shows the ways that formal and informal economies shape developing industries.[106] Curtin’s conclusion that the media capitals of China would form a challenge to Hollywood hegemony seems especially astute. It is not just area studies of particular media capitals, but also studies of the ways that markets form and function to move media between countries.[107] Postcolonial media theory offers the important insight that the power dynamics of the exchange system were the result of colonial power imbalances. For Sangeet Kumar, this requires an investigation of the protocols and regulatory frameworks for media around the world, which function through the demand of the global scale to map potential markets around the world.[108]
Platform economics refers to specific approaches to understanding online systems that serve as transaction facilitation systems. Platforms typically sell multiple products. Consider Amazon, which sells consumer products, information services, and advertising space. These are three distinct products with different economics. A multisided market is different. Consider a dating application, where some users tend to be more in demand for dates and other users are seeking those dates. These two groups of people need to be connected but have very different prices they are willing to pay for access to the other group. Platformization, as described by Anne Helmond, is the process whereby parts of the Internet ecosystem are adapted for integration into the platforms, as well as the platforms themselves.[109] Rather than waiting to be acquired, the lure of platform access, and thus positive network effects, is enough to encourage firms to adapt themselves to the platforms in advance.
There are two major points of concern. First, platforms often engage in multiple activities to secure users beyond transaction facilitation, actively discourage multi-homing, and employ a variety of strategies to preclude a fluid market.[110] As platforms secure market dominance, a runaway feedback state could be reached where active intervention would be needed to maintain multiple platforms. Why stay on MySpace when all your friends are on Facebook? There may be natural monopolies in some market categories—there is only one temporal node (or none) that can functionally set the agenda across media. The one that sets the agenda is the winner. This is not always the case: TikTok videos, YouTube shorts, and Instagram reels have clearly found space to coexist, yet shorts and reels may not truly be competitive with TikTok, as they are add-in products whose main businesses are a full-service video platform and an affinity-mapping social network. Although Silicon Valley firms are now shedding workers, there was a time when they were actively pursuing acqui-hiring to the degree that it could have been seen as a monopoly (becoming the only firm to work at in a sector). Second, as powerful firms secure control of the market, they lose the incentive to maintain the platform, especially if the price for platform access was zero; this is technically known as platform decay and known widely by Cory Doctorow’s term enshittification.[111] The mechanisms of platform decay are multiple, including a platform floating back to its market support level after venture capital runs out, a monopolist ruthlessly cutting costs after capturing a market, and a company forcing failed market dynamics, among others. This is really not a single concept but a set of explanations for an outcome. Another explanation is that this is not a passive feature of a failed market but an active effort of participants.
Disruption, as a market discourse, celebrates the attempt to replace a good product at a reasonable equilibrium state in the market with a minimally sufficient product.[112] A well-moderated affinity-mapping social network might produce meaningful positive externalities in terms of social support and democratic life; a brain rot content pump, not so much. Research in this area would cut back to basic economic concepts, with special attention to the idea that in many cases, platforms in decay may appear to have near zero or limited consumer cost.
Key Takeaways
- This section connected media industries theory, media geography, and platform economics in a single theoretical framework. These theories are distinct and should be dealt with by your faculty in separate advanced courses, if possible.
- Deterministic theories may not adequately explain the world. Such theories include both crude Marxism and efficient market theory.
- The risks posed by the extreme concentration of power in particular people, places, and platforms should not be underestimated.
4.11. History
A broad theory of media history was offered in the introduction to this book. The course that this book is associated with does not offer media history as a primary content domain. That said, if we consider the media history of the present as part of the media history of the future, we can see certain details that are important for media historians. Especially important are efforts to collect artifacts, documents, and interviews in the present. Although there is some value in collecting public relations campaigns, the history of the future needs artifacts that were not produced as articles of the tech industry.
A major concern in this area is the historic preservation of the media themselves. This is true of film, where the actual strips are chemically unstable and can dissolve into liquid. For digital media, the preservation problem is more insidious. Cloud computing promised eternal storage on perpetually maintained banks of hard drives. But infinite storage does not solve all the other problems that one might find in historical preservation. We might lack software to open a file either due to neglect or legal issues with accessing a proprietary format.
Consider The Gunstringer, a game you may have been assigned to play for this class. It is among the best implementations of the Kinect for Xbox 360. It is not forward compatible. Your author maintains legacy equipment that can play the game, but at some point this user experience may be lost to time.
Key Takeaways
- See the history arguments in section 1 of this book.
- Historical research requires a great deal of evidence collection, far more than one might be familiar with for other methods like political economy or policy analysis.
- In the strongest sense, history calls for the rigorous collection of evidence in great detail. Beware of histories that start from the story, which is then dotted with evidence.
4.12. Production
Making media is a powerful way of knowing. This has been a major topic of this book and does not need to be rehashed here. The primary objection to this argument, that it amounts to technological determinism, is addressed directly in section 1. Methodologically, production studies are situated as an actor-network approach that integrates the physical realities of making media with the everyday experiences and narratives of producers.[113] Production research tends to draw on Bruno Latour to defend claims about the actual material effects of technology.[114] A second key assumption of production is that in the life of cultural industries, there is a great deal of tacit knowledge that can be accessed via participation in the practice itself.[115]
The point of production as method is not that everyone must focus on making things or even make them well, but that being involved in the process of making, and seeing the micro-practices inherent in using the tools, can greatly inform our understanding. A more radical view would be that making is the key to understanding.
Key Takeaways
- Production explores how the act of making things changes how you understand them.
- This is not a deterministic perspective; the value of the study of executing a production comes in the ways that we perceive the act of making.
- Production changes the pedagogical outlook of a program, and production is empirical.
4.13. Games and Play
Game theory has offered an important set of conceptual tools for the analysis of complex iterative systems. What does that mean? Games are important because they have multiple turns, and within those turns, the players consider the actions that others may take in the context of multiple constraints, mechanisms, and story elements, crafting provisional strategies to reach victory or defeat. Games also refer to an entire domain of media business, ranging from casual app developers to dedicated hardware manufacturers like Nintendo. An entire subindustry has evolved around the constant buffs and nerfs within competitive games online. For those of you less in the know, when something about your preferred element of a game is made stronger, it is buffed. When it is made weaker, it is nerfed. Further, one could explore Johan Huizinga’s theory of homo ludens, the idea that the central being of the human is as a playing creature.[116] Recent uptake of this idea takes us toward the primacy of playable systems and the study of play and fun, themselves the central task for communication.[117] An entire branch of social science has emerged around the experience of play: game theory.[118] Game theory supposes that we could use a selection of toy games and small play scenarios to understand real-world games. This is especially useful for social media strategy, through the development of decision trees and the deletion of dominated branches. While these toys may be an oversimplification, they teach strategy in a deep way.
Katherine Isbister provides two key points that distinguish games from nongame media: choice and flow.[119] The two points are interlinked—it is not merely that the game continues to move, but that the choices you make along the stream affect the flow. Flow exists in a tenuous equilibrium between challenge and player skill; great emotional design allows the user to stay in this seemingly ideal zone where they are learning and experiencing change.[120] It can be useful to consider the types of uncertainty generators that are present in games as proposed by Greg Costikyan, including the player, a random generator, and other players.[121] Games vary greatly based on where they find the randomness necessary for fun.
Beyond the solitary game, playing together is important: it is the depth of interaction between people that makes these game systems truly deep.[122] Game theory relies on this assumption to provide critical insight into human behavior akin to the results of a system of equations: the assumption in a game involves active people attempting to arrive at some outcome.[123] Games are everywhere and are deadly serious.
Ian Bogost’s conception of procedural rhetoric is particularly useful for understanding the future of media, the ways that particular software affordances can be mapped to the experiences of the game player.[124] Rhetorically, the video game must be understood both through its total semantic content and the coded means of delivery. The peak of games for Bogost is a complex system where a user is made to disidentify with their own position by manipulating a complex system: SimCity.[125] This perspective is ludological.
Kishonna Gray makes the counterpoint: rather than evacuating games of identity, the identities in the game must be challenged.[126] This is narratological: the game is a story that is a way of understanding the world, not a psychological procedure but a way that people experience stories that make their personal worlds. Games are deeply compelling stories. To see them as less than that or as flat objects that people are not invested in misses their power.
This framing device, a debate between positions, has some truth in it but is also artificial, as are all nomothetic constructs. I present it this way so that we might understand the two lines as they braid into the future: there will be new stories, but also new ways of interfacing with the system that will produce many new affordances. There is some value in debunking such a distinction between play structure and story, but also great utility in reading each. Many academic fields find danger when the nomothetic is replaced entirely by the idiographic. Some academic disciplines tend to prefer cases, business in particular. What stabilizes business is the unified purpose of enhancing professional practices of prudence; in short, they are all on the same page about making more businesspeople. But what does this mean practically? If the horizon of our scholarship is what a game means to some particular person, we will never achieve our scholarly goals. That said, if our theoretical framework omits the people entirely, it also fails.

Marie-Laure Ryan argued in an essay over a quarter of a century ago that the real immersive power of the game is in the intertextual enfolding of the user.[127] Team chat and interaction are truly interactive; merely pushing buttons is not. This is an important idea. What does it mean to have enough contact with an interface to say that you are in communication with a conscious system? The common answer to this question is the idea of the Turing test—the idea that a system that could simulate relatively banal interaction over a textual transmission would be AI. During this period, we can see a remarkable number of systems that are capable of producing far more vivacious simulations of human interaction. It is likely that what constitutes immersion will change. As social media systems have reached maturity, it has become clear that they are a vector for hateful and hurtful communication. The friendly banter that would have made a world compelling has fallen away. It is not the raw interactivity of a game system that interests us, but the intersubjective contact that games allow.
Key Takeaways
- Games are likely a distinct form of media.
- The study of play can be understood, like the study of tool or symbol use, as a fundamental study of the human (if one takes an anthropological view).
- There are rich considerations of how games function; fun may not be the key.
4.14. Quantitative Methods
Perhaps the pinnacle of all methodologies is the double-blind, fully controlled clinical trial. The scenario is rotten with perfection: two groups of patients, in the same place at the same time, with a similar well-understood disease that is to be treated with a particular drug. One group receives the drug; the other does not. The hypothesis is that the application of the drug will improve some reliable indicator of the disease state. The study does not confirm the hypothesis but merely rejects the null hypothesis that nothing happened. At best, the researcher also collects an effect size. In this ideal world, dozens of other labs with similar circumstances, equipment, and interest replicate the study—the chorus of rejected null hypotheses converges into a single, beautiful conclusion with an ever-advancing set of null hypotheses rejected, contouring the exact mechanisms that might be at play.
This is the classic account of null-hypothesis testing. It never happens this way. Before we get into that, we consider both the overall structure of quantitative methods and some basic concepts.
Consider the idea that you have some kind of system you are researching.[128] If you can play with the actual system, you do an experiment. If the system itself is not present, you might use a physical model of that system. If actual systems and physical models are not viable, mathematical models might be used, which could consist of analytical solutions based on observed data or simulation models. Science is empirical inasmuch as these models are about the world; they are not entirely notational, which would be pure mathematics or computer science. There is a tension between methodologists and scientists that needs to be considered. Critiques of data dredging need to be weighed against the difficulty of studying a system. One could easily excuse analyzing all the data from a particularly inaccessible system. For example, a large-scale experiment done in an elementary school cafeteria over several months would be a rich source of data for many studies. Demanding that an expensive, sensitive, labor-intensive scheme be devised for every hypothesis would be absurd.
Here is how this paradigm plays out in communication, derived from the work of Jonathon Jeroen.[129]
- Experiment: Industry A/B testing, with influencers trying multiple posts and deleting later
- Physical Model: Focus group or local laboratory study, production
- Analytical Solution: Calculate possible ratings for new TV show given key factors divided by investment risk
- Computational Model of System: Random walks across a Markov chain, generative AI result from image system
Here are a few key ideas to get us started:
- Mathematical Moments: The first moment, familiar from middle school mathematics, is central tendency, or mean/median/mode. The second moment is variance, or how far the data points spread around the mean. The third is skewness, or the direction in which the tail of the distribution points, and the fourth is kurtosis, or how pointy the distribution of the data is.
- Levels of Measurement: Stanley Smith Stevens proposed this collection of levels of measurement, which is still informative today. Nominal names things. Ordinal gives the sequence of the values with direction, but not magnitude: freshly brewed coffee is hot, tap water is warm, ice is cold. Interval supposes a measurement system that assigns values to the differences: coffee at 176 degrees, tap water at 58 degrees, ice at 32 degrees. Ratio is an interval scale with a true zero value. Critiques of this taxonomy provide important insights; of particular value is the idea that there are subsets within these categories that should not be conflated. Thus not all intervals should be combined during a meta-analysis.[130]
- Discrete vs. Continuous: Nominal values are discrete. They are names, categories, or factors. It would be a mistake to see these as less important than continuous variables, which indicate a specific value within a domain. Temperature would be a continuous variable, Corvallis discrete.
- Precision vs. Recall: Classification models scan large volumes of data for relevant instances. Precision is the share of the detected instances that are true positives, measured against a reference sample. Recall is the share of all true positives in the population that are actually detected. High precision with low recall is just as bad as low precision with high recall.
- Reliability: Is the measurement consistent? Does a repeated sample from the same population produce the same result?
- Accuracy vs. Precision: Accuracy refers to the idea that a measurement is confirmed by a reference value. Precision speaks to reliability—that repeated measures produce similar results.
- Validity: Research questions must be well aligned with the methodology of the study at hand. Internal validity regards the measures and the research question. External validity refers to the ways that the research might match the world more broadly.
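The four moments listed above can be sketched in a few lines of Python. This is a minimal illustration, computing skewness and kurtosis as standardized third and fourth central moments (one common convention; some definitions subtract 3 from kurtosis so the normal distribution scores 0):

```python
def moments(xs):
    """The four moments described above: central tendency, variance,
    skewness, and kurtosis, computed as standardized central moments."""
    n = len(xs)
    mean = sum(xs) / n                                   # first moment
    var = sum((x - mean) ** 2 for x in xs) / n           # second: spread around the mean
    sd = var ** 0.5
    skew = sum(((x - mean) / sd) ** 3 for x in xs) / n   # third: direction of the tail
    kurt = sum(((x - mean) / sd) ** 4 for x in xs) / n   # fourth: pointiness (normal is about 3)
    return mean, var, skew, kurt

m, v, s, k = moments([2, 4, 4, 4, 5, 5, 7, 9])
print(m, v)  # 5.0 4.0
```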
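Precision and recall can likewise be computed directly. The "face detector," image names, and counts below are hypothetical, chosen only to make the arithmetic concrete:

```python
def precision_recall(predicted, actual):
    """Precision: of the items the model flagged, what share were truly
    positive? Recall: of all truly positive items, what share were flagged?"""
    predicted, actual = set(predicted), set(actual)
    true_pos = len(predicted & actual)
    precision = true_pos / len(predicted) if predicted else 0.0
    recall = true_pos / len(actual) if actual else 0.0
    return precision, recall

# A hypothetical face detector flags four images; eight images truly contain faces.
flagged = {"img1", "img2", "img3", "img9"}
faces = {"img1", "img2", "img3", "img4", "img5", "img6", "img7", "img8"}
p, r = precision_recall(flagged, faces)
print(p, r)  # 0.75 0.375
```

Here the detector is fairly precise (three of its four flags are correct) but its recall is poor: it misses most of the faces in the population.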
Hypothesis testing is an intrinsically digital way of knowing—it relies on the creation of an if-then construction that can be tested against a reference value. Confusion often comes in the discussion of what is truly tested, meaning the null hypothesis rather than the hypothesis itself. Hypothesis testing research seeks to confirm that something happened, thus rejecting the null hypothesis. Do we necessarily know what happened? No.
Type 1 and type 2 errors come from a foundational 1928 paper by Jerzy Neyman and Egon Pearson:[131]
| | Accept Null | Reject Null |
|---|---|---|
| Null Accurate | Correct: nothing for nothing | Type 1: false positive |
| Non-Null Accurate | Type 2: false negative | Correct: something for something |
The recurring remark is that there must also be a type 3 error related to answering the wrong question with the right method. In designing a study to use traditional hypothesis methods, we consider the validity and reliability of the tests intended for the hypothesis.
Reliability means: Do we get the same result twice when testing the same sample? Would you trust a glucometer that returned different blood sugar readings on the same vial of blood? The reliability and margin of error for a test is important in designing an experiment. Studies are only as strong as their least reliable method.
Validity refers to the idea that the test answers the research question. These are divided into internal and external validity. Internal validity is the coherence of the design of the original logical statement “If x, then y.” As you remember from the discussion of logical operators and transistors, these processes go in order and only one way. External validity is the ability to see that the result makes sense in the world, that it can be generalized. If a study of 300 undergraduates at a large state university who claim to have never heard of the rapper Drake (one of the most popular figures in popular music at this writing) crosses your desk, you know that either (1) the data are not legitimate or (2) the undergraduates in the study are a highly atypical population, and any extrapolation from them is risky at best.
There is one more substantial hurdle: the study needs to be physically possible. Consider the study of food and nutrition. To be ethical, a study must be beneficent, meaning that no one is harmed. It would be unethical to do research that involved intentionally starving people to see what a vitamin does. Many social research questions may require surveys that exceed the capacity of the researcher or their funds. Research involving social network data is limited to the data that can be extracted from that network, and Facebook is not cooperative. There are many questions that are not practical to answer.
Consider a study intended for 15-year-olds. Question one: How many minutes of drinking have you seen in the last year on television? Question two: How many drinks do you have per day?
What do you imagine the results would be? Would you be comfortable with this use of self-report data? If the correlation was positive and the null hypothesis was rejected, would it be prudent to ban representations of beer drinking on television? Should you believe the recollections of the 15-year-olds over the last year or their current use reports?
Correlation, even if you assume the study is sound, does not imply causation, although only a silly person would reject the correlation between being hit by a bus and grievous physical injury.
Correlations are generally reported on a scale from -1 to 1 (capturing the direction of the slope and the relative noisiness of the relationship), with 0 being no relationship. Different coefficients have different properties. There are many other tests, including t-tests that compare the differences between populations. For example: if two comparable populations were exposed to some treatment and then asked for attitude change, the analysis of the variance within and between groups would be a useful measurement. These methods are not a perfect truth machine but offer important information about central tendency and effect size. The key is understanding the limits of the tests in question and how they align with your research.
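As a sketch of how a correlation coefficient is actually computed, here is Pearson's r in plain Python. The "hours of TV" and "ad recall" figures are invented solely to show the -1 to 1 scale:

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient: the direction and strength
    (from -1 to 1) of the linear relationship between two variables."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented data: hours of TV watched and an ad-recall score.
hours_tv = [1, 2, 3, 4, 5]
ad_recall = [2, 4, 6, 8, 10]
print(round(pearson_r(hours_tv, ad_recall), 3))         # 1.0 (perfectly linear)
print(round(pearson_r(hours_tv, [10, 8, 6, 4, 2]), 3))  # -1.0 (reversed)
```

Real data, of course, land somewhere in the noisy middle of that range, which is why the effect size matters as much as the sign.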
Key Takeaways
- When possible, a well-designed test using a null-hypothesis design is insightful and productive, but these circumstances are in fact rare.
- Analytical solutions are rare in communication research.
- There are multiple distinct considerations that guide how good quant research can be done in media.
4.15. Survey Research
Many of the approaches discussed so far in this section have involved watching people or analyzing structures. A major approach for social research is simply to ask a large number of people: a survey. Survey methods are common in our lives in the context of political polls. We might want to know what people want by asking many of them; it is deceptively simple. The challenge is that these methods are almost too simple. People don’t answer their phones or are annoyed by surveyors with clipboards accosting them on their brisk daily constitutional. The ways that you convince people to take your survey might corrupt your data. That said, surveys continue to be a major form of research.[132] A number of techniques can resolve some of the issues with surveys, including weighting different groups within your sample.[133]
A rigorous survey must pose questions that can be validated and use validated scales. You can’t just make up a survey entirely out of nowhere and expect it to meaningfully measure your question. Scale validation is an important part of reliability. If a game being “fun” is the dependent variable, we need to know that the measure of fun actually captures this value. Further, self-report data themselves, the heart of surveys, can be flawed. If a survey asks about normative behavior (meaning things one should do), there is a tendency for those surveyed to report doing the behavior.[134] Good survey methods can overcome these drawbacks and offer important insights into the world.[135] Among the least persuasive critiques of the survey is the idea that large language models can now simply replace the need to talk to people—if we have a massive thing that can interpret massive banks of text, we could simply ask it. This approach fails to produce results that could replace surveys, as the synthetic data are far less diverse than real human opinions and are not accurate when checked against human baselines.[136] Is there a simple technical fix for this problem? Unlikely, as models are increasingly trained on synthetic data, meaning that they will drift further from an index of the empirical world.
Survey research is especially important for uses and gratifications research.
Key Takeaways
- Survey design methods are key to survey success.
- Survey research is a good way to know what people think.
- Alternatives to survey research that are not directly empirical are suspect.
4.16. Iteration
It is unlikely that a social scientist will dream up the exact single experiment that could make sense of reality. The universe is simply too weird for that. We can expect many studies to be developed that circle in on a possible causal relationship that makes sense of the world. Keep in mind that this process never ends. Methods become more reliable, our analysis of validity finer grained.
As we circle in on the hypothesis confirmation step, it is likely that the visualizations and thought processes will generate even more new hypotheses. As you may have noticed in the discussion of policy research, there is no quantitative translational moment. Good science is autotelic—it produces more of itself.
What is required is a sense of meta-awareness of what the field has done before. This is why the literature review section of a paper is so important, as researchers need to know which hypotheses have already been rejected. This is also why replication is necessary. There need to be important checks on both hypotheses that seem to be true and those that are rejected as false. Most of all, science isn’t magic.
Key Takeaways
- Individual studies are less useful than constellations of studies.
- Keeping up to date with the literature is key; other researchers asking you to engage the literature in a target journal is not a matter of vanity but of relevance.
- The literature in any domain is continuously churning through ideas.
4.17. The Replication Crisis
If Malcolm Gladwell has taught us anything, it is that counterintuitive results sell books. Among the fields that can produce the most fascinating counterintuitive results is social psychology, where seemingly small things are presented as having systemic effects on beliefs over a long period. One of the trickiest representational problems for media research is that many of the effects that we discuss should have some relatively simple experimental evidence.
One of the most commonly cited examples of the lack of reproducibility is priming theory.[137] Priming supposes that exposure to a word or image would unconsciously affect the cognition of a person afterward. For example, one of the most ridiculous examples is the idea that seeing a single image of an American flag can durably increase Republican voting intentions for months to come.[138]
Replication problems are endemic. Artificial intelligence researchers rarely share code or facilitate reproduction of their work.[139] Oncology papers had a 90% failure rate upon replication.[140] In basic biological science, error rates for cell line identification are substantial.[141] What does that mean? When scientists apply a chemical to a sample of cells, they may not know what kind of cells they are.
Does this mean that science is bad or entirely fraudulent? No. It means that science is hard, and the performance of credibility may often imbue unearned ethos for ostensibly scientific results. Researchers need to publish positive results that are interesting to continue their work. Aligning career results with experiment results is a short circuit that will burn down the house of knowledge.
Some methods are not designed to produce identical results—network methods based on random walks vary based on the point at which the walk started. Unless random seeds in the methods are intentionally fixed, the graphic will not render the same way twice. Topic modeling systems will not assign the same topic number on multiple runs. This loops all the way back to the insight from Deborah Mayo on severe testing: if the underlying assumptions for a fully blinded experimental null hypothesis test are present, it is a fantastic way to learn things about the universe. If not, there is no need to cut corners to use that technique or to pretend that all of the other rich modes of knowledge production are somehow invalid.
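The seed point above can be illustrated with a toy random walk, using only Python's standard library. With no seed, every run produces a different path; fixing the seed makes the walk, and any figure built from it, reproducible:

```python
import random

def random_walk(steps, seed=None):
    """A one-dimensional random walk: at each step, move up or down by one.
    Passing a fixed seed makes the entire path reproducible across runs."""
    rng = random.Random(seed)
    position, path = 0, [0]
    for _ in range(steps):
        position += rng.choice([-1, 1])
        path.append(position)
    return path

# Two seeded runs are identical; two unseeded runs almost surely are not.
assert random_walk(100, seed=42) == random_walk(100, seed=42)
```

The same logic applies to network layouts and topic models: reporting the seed is what lets another researcher re-render your figure.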
4.17.1. P-Hacking
The underlying criterion of many of these studies is the threshold for significance. In social science, it is .05, which refers to the likelihood that the null would be rejected in error. The convention is to reject the null only when the p value falls below .05. P-hacking refers to the intentional manipulation of a study to arrive at the .05 threshold, which seems both possible and suspiciously likely, as so many studies tend to cluster at the key level.[142] Christie Aschwanden argues the problem is not necessarily that there is a great deal of cheating, but that doing good research is really, really hard:
People often joke about the herky-jerky nature of science and health headlines in the media—coffee is good for you one day, bad the next—but that back and forth embodies exactly what the scientific process is all about. It’s hard to measure the impact of diet on health, Nosek told me. “That variation [in results] occurs because science is hard.” Isolating how coffee affects health requires lots of studies and lots of evidence, and only over time and in the course of many, many studies does the evidence start to narrow to a conclusion that’s defensible. “The variation in findings should not be seen as a threat,” Nosek said. “It means that scientists are working on a hard problem.”[143]
The popular rhetoric of science does little to help in this case. Science is presented as magical and somehow value-free, offering simple answers to political and ethical problems. In reality, science, like publicity, is a process. Easy answers are not coming.
Hypothesizing after the fact is another version of this problem, where many measures are deployed and after some significant result is found a study is reconstructed around it. We should be careful not to delegitimize inductive qualitative strategies. Someone searching for a foothold might test 30 hypotheses. They would then report them and iterate the positive findings to generate more results. Perfection only matters for those with the least at stake. Exploratory data science is important for future hypothesis generation, and at the same time, exploratory work should not be passed off as something it isn’t.
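The fishing-expedition risk can be made concrete with a toy simulation. The function below is a deliberate oversimplification: it assumes every test is perfectly calibrated, so that under a true null each one rejects with probability exactly .05. Even then, running 30 such tests makes a spurious "finding" more likely than not:

```python
import random

def fishing_expedition(n_tests, alpha=0.05, seed=1):
    """Simulate testing n_tests hypotheses that are all truly null.
    Each test rejects the null with probability alpha, which is exactly
    what a well-calibrated test does when nothing is going on."""
    rng = random.Random(seed)
    return sum(rng.random() < alpha for _ in range(n_tests))

# With 30 tests, the expected number of false positives is 30 * 0.05 = 1.5,
# and the chance of at least one spurious "finding" is 1 - 0.95**30.
print(round(1 - 0.95 ** 30, 3))  # 0.785
print(fishing_expedition(30), "spurious 'significant' results")
```

This is why exploratory batteries of tests are fine for generating hypotheses but misleading when reported as confirmatory results.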
When we consider the rhetoric of statistical design, there is a discourse that supposes that the double-blind controlled experiment is the only way to access the truth. If we reduce what we can know to only be that which is tested in this particular manner with a null rejection, there will be no knowledge left. Authority comes in the debunking of what would be meaningful results, just as the skeptical game of hermeneutics becomes self-defeating when it hyper-signifies, and statistics becomes decadent when the empirical is lost to the purely quantitative.
Key Takeaways
- The structure of scientific publication can lead to bad behavior and poor aggregate research quality.
- Designing a study that rejects a null hypothesis does not necessarily mean you have found any meaningful result.
- Failure to replicate does not mean that the research enterprise is rotten, but that the correct methods should be deployed to answer particular questions.
4.18. Bayesian Method, Effect Sizing, ANOVA, Multiple Hypotheses
For the purposes of this paragraph, you do not need to pretend that you care about football, but you need some awareness of what it is. How do we determine the best college football team in the country? Do we simply count wins and losses? There will be many teams with many wins; are we sure that the farm lads of Iowa play the same level of competition as the engineers of MIT? Perhaps we should simply ask some keen sports fans? There are no easy answers except for this question, where the answer is obvious: the Hawkeyes prevail.
Rankings are hard, especially when there are many teams (around 120) and only 10 or 11 games per season. What do you do as the season progresses, and how quickly should rankings change from week to week?
In the context of chess competition, Arpad Elo proposed a solution: a slow-moving evaluation of player quality that adjusts a rating after each game according to how the result compares with what the players' existing ratings predicted.[144] This is an important idea—we can assume that higher-ranked players have likely beaten other good players, and lower-ranked players have likely not won such difficult matches. Translating this back into the football example, we can assume that a 9-0 team from the University of Alabama (an institution known for superior football performance) would likely defeat a team from Concordia College (a team known for its remarkable corncob mascot). Our assessment of Alabama anterior to the test (the game) would be that they were a good football team, and after the win (Concordia has no chance) it would not appreciably increase (we gained little new information). The ranking after would be the posterior measurement.
Instead of seeing the game as a chance to reject the hypothesis that Concordia is better at football than Alabama, the Bayesian method allows us to think about the level of information encoded in the game in an intuitive way. Bayesian methods are preferable because they are more easily sized to datasets, allow researchers to think about the world as it is (they are more empirical), and are concerned with the analysis of variance and effect size. Playoff games would produce far more meaningful information, as they involve seeded interactions between teams known to be excellent.
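A sketch of one Elo update, under the standard assumptions (the logistic expected-score formula on a 400-point scale, a k-factor of 32; the team ratings are invented for the Alabama/Concordia example):

```python
def elo_update(rating_a, rating_b, score_a, k=32):
    """One Elo update. The expected score encodes the prior belief about
    who should win; ratings move in proportion to the surprise.
    k controls how fast ratings respond to new results."""
    expected_a = 1 / (1 + 10 ** ((rating_b - rating_a) / 400))
    new_a = rating_a + k * (score_a - expected_a)
    new_b = rating_b + k * ((1 - score_a) - (1 - expected_a))
    return new_a, new_b

# Alabama (rated 2000) beats Concordia (rated 1400): an expected result,
# so little information is gained and the ratings barely move.
a, c = elo_update(2000, 1400, score_a=1)
print(round(a - 2000, 2))   # a gain of less than one point
# An upset (Concordia wins) would move the ratings dramatically.
a2, c2 = elo_update(2000, 1400, score_a=0)
print(round(2000 - a2, 2))  # a loss of roughly 31 points
```

Note that the update is zero-sum: whatever rating one side gains, the other loses, which is the sense in which an expected win carries almost no information.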
In his open letter calling for Bayesian methods in the psychological sciences, John Kruschke makes the following excellent points:
Some people may have the mistaken impression that the advantages of Bayesian methods are negated by the need to specify a prior distribution. In fact, the use of a prior is both appropriate for rational inference and advantageous in practical applications.
* It is inappropriate not to use a prior. Consider the well-known example of random disease screening. A person is selected at random to be tested for a rare disease. The test result is positive. What is the probability that the person actually has the disease? It turns out, even if the test is highly accurate, the posterior probability of actually having the disease is surprisingly small. Why? Because the prior probability of the disease was so small. Thus, incorporating the prior is crucial for coming to the right conclusion.
* Priors are explicitly specified and must be agreeable to a skeptical scientific audience. Priors are not capricious and cannot be covertly manipulated to predetermine a conclusion. If skeptics disagree with the specification of the prior, then the robustness of the conclusion can be explicitly examined by considering other reasonable priors. In most applications, with moderately large data sets and reasonably informed priors, the conclusions are quite robust.
* Priors are useful for cumulative scientific knowledge and for leveraging inference from small-sample research. As an empirical domain matures, more and more data accumulate regarding particular procedures and outcomes. The accumulated results can inform the priors of subsequent research, yielding greater precision and firmer conclusions.
* When different groups of scientists have differing priors, stemming from differing theories and empirical emphases, then Bayesian methods provide rational means for comparing the conclusions from the different priors.[145]
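Kruschke's disease-screening example can be worked through numerically with Bayes' rule. The prevalence and accuracy figures below are illustrative, not drawn from the letter itself:

```python
def posterior_disease(prior, sensitivity, specificity):
    """Bayes' rule for the screening example: the probability of disease
    given a positive test result."""
    p_pos_given_disease = sensitivity          # true positive rate
    p_pos_given_healthy = 1 - specificity      # false positive rate
    p_pos = prior * p_pos_given_disease + (1 - prior) * p_pos_given_healthy
    return prior * p_pos_given_disease / p_pos

# A 99%-accurate test for a disease with 0.1% prevalence:
post = posterior_disease(prior=0.001, sensitivity=0.99, specificity=0.99)
print(round(post, 2))  # 0.09
```

Even with a highly accurate test, the posterior probability of disease is under 10 percent, because the false positives from the large healthy population swamp the true positives from the tiny sick one. This is exactly why ignoring the prior leads to the wrong conclusion.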
The advantages of working from a set of priors are clear: when you can debate the nature of the priors, the underlying validity of the study can be determined in great detail. Jeffrey Rouder, Julia Haaf, and Frederik Aust noted that Bayesian models are already becoming common in communication research.[146] Through a comparison of both approaches to a study of a story about refugees, they show that the null hypothesis would have rejected findings that could advance the understanding of political communication. The sticking point would be the lack of a clear moment where the Bayes factor would call for the reporting of significance—the authors rightly critique such an assumption. The context of the discussion, more than some arbitrary number, should drive the evaluation of the significance level. If we have no legitimate reason to believe that a phenomenon is evenly distributed, we should not assume this because it is convenient. Competitors at the National Debate Tournament, the premier college debate tournament in the United States, likely have lower communication apprehension and higher extroversion than the average person. They also speak more quickly, refute arguments more directly, and generally talk louder. It is trivial to establish significance for such a study in a non-Bayesian framework. It is not newsworthy research to visit the zebra enclosure at a zoo and find no horses there. Ethnographic researchers have been known to describe the field as a “duck pond”; if one wishes to do ethnography of high school debaters, they are far more likely to find them at a high school debate tournament than in the weight room preparing for a football game.
Beyond Bayesian developments, methods like analysis of variance (ANOVA), which can deal with variance between multiple groups, structural equation modeling, and multiple hypothesis approaches, are becoming more common. Even the basic techniques of social science are advancing scientifically. This is not a methods course, nor is this a methods book—your takeaway is to know that these alternatives exist and to look carefully at their use as you incorporate future research.
Key Takeaways
- Quantitative methods are evolving—they are not a single, stable answer to all epistemic questions. Breaking the hegemony of null-hypothesis testing is good for academic research.
- Bayesian methods and the rigorous contextualization of priors are key.
- There are many innovations in research design that allow for rich, contingent, quantitative research to flourish. Practically every P value you see needs an effect size.
4.19. Big Data
M. C. Elish and danah boyd argue that the discourse of big data depends on “epistemological duct tape.”[147] The underlying methods of big data are routine and simply bigger. To hold things together, they identify the role of the rhetoric of magic—it becomes something of a strength, as the model exists in a special place where normal rules don’t apply. An especially pressing idea is that of a “face detector,” which is not a true detector of faces but a system that detects things that it was told fall into the category of faces.[148] Although this may seem like a trivial distinction, it is important because big data presents a difference in degree, not kind. In less abstract terms, if you had the wrong model on the small scale, getting bigger won’t make it right. This is not a new problem—meta-analysis theory has been concerned with the viability of connecting various datasets for many years.
The promise of big data for media research is direct detection of phenomena. Self-report data are notoriously unreliable. As one of my colleagues said to me on the street, “Anytime you ask an eighth grader if they do weird stuff, they are going to say yes—like lizard people.” This is an important problem. We can’t trust people to tell us real things about themselves. Ethnographic fieldwork can provide robust data, but only if you have time to work through the layers of fakeness and facework that produce such distorted self-reports.
Social network scraping methods provide one sort of direct data. We can use platform APIs to access Twitter and directly extract swarms of Tweets. (I use the Twitter name here because the products described were curtailed or changed prior to the platform’s name change.) Analysis of the Tweets themselves is possible. Does social media make time feel like it is going faster? No, it doesn’t; we have app logs that can inform our perspectives on this question.[149] Do people tend to use different apps in specific places? Yes; again, we can turn to the app logs.[150]
Other methods might involve securing sensor input data, such as information about the location of cell phones and their activity. Selfies could be reverse engineered to look for changes in the medical status of users. This is not the passive big data of magic, but an active form of big data that calls for the active ingestion of massive datasets.
Self-reports and surveys are messy and inaccurate. Direct detection of the data offers a real transformation in social research. The challenge is getting access to directly detected data. Companies with such data are not often willing to share it, and the public increasingly wishes to protect its data.
Topic modeling and sentiment analysis offer researchers the potential for working with directly detected data. Topic modeling produces a probabilistic model of the topics that should be assigned to particular documents within a corpus. This is limited by the relatively narrow confines of the interpretative frame detected. Sentiment analysis may be more limited, functioning primarily on the basis of join and count methods against existing dictionaries. At the same time, direct detection is promising, as it offers new possibilities for the analysis of information.
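The “join and count” logic of dictionary-based sentiment analysis can be sketched in a few lines. The word lists below are tiny, made-up stand-ins for real sentiment lexicons, which contain thousands of scored entries.

```python
# A minimal dictionary-based ("join and count") sentiment sketch.
# The lexicons here are illustrative stand-ins for real sentiment
# dictionaries, which score thousands of words.

POSITIVE = {"love", "great", "happy", "win"}
NEGATIVE = {"hate", "awful", "sad", "lose"}

def sentiment_score(text):
    """Count lexicon hits: each positive term adds 1, each negative subtracts 1."""
    words = text.lower().split()
    return sum((w in POSITIVE) - (w in NEGATIVE) for w in words)

print(sentiment_score("I love this great show"))    # 2
print(sentiment_score("what an awful sad ending"))  # -2
```

The limits of the method are visible immediately: “not great” would still score as positive, which is one reason the paragraph above calls sentiment analysis the more limited of the two approaches.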
Mobile devices allow ubiquitous access to information about people, their location, motion, proximity to other phones, access to signal—there is much to know about human communication through patterns of access to devices writ large. Especially when paired with wearable computers, the direct detection paradigm allows for concrete models of time and space that were previously impossible.
Key Takeaways
- Big data can refer both to datasets that have been merged as well as new large datasets gathered by ubiquitous sensors.
- Direct detection offers better-quality data in important cases.
- Protections once inherent in big data’s sheer scale have been eroded by AI methods; some deferred ethical and political questions are now overripe.
4.20. Network Analysis
Network datasets can encode information in a number of ways, ranging from simple nondirectional links to directed signed relationships. An unsigned, nondirected relationship would be the simplest expression, a binary concept that two people are linked. Not all relationships are reciprocal. It is entirely possible that in a realistic human network, people do not have reciprocal or symmetrical feelings. Easy examples include love triangles, frenemies, and all those people who never follow back on Instagram. Negative ties are prismatic, meaning they contain multiple underlying types of attachment with distinct feelings.[151] There is so much to be represented in a network, and the beauty of the datasets we load to produce networks is that they are inherently relational. Applications are multiple, ranging from PageRank, a centrality measure and the basis of the original Google search engine, to studies of the diffusion of information through networks of people, as in the diffusion of innovation framework.[152]
Key measurements of networks include centrality and communities. Centrality can be calculated in a number of ways, including degree (how many links does a node have?), betweenness (across all shortest paths spanning the network, how often is this node used?), closeness (on average, how close is this node to every other node?), and various measures of prestige that would ostensibly measure the power or influence of a node.[153] Unfortunately, many measures of centrality tend to converge with degree at scale, meaning that someone who connects to lots of people will appear central, regardless of whether those links are meaningful. Thus the increased interest in directed networks, signed networks, and new centrality measures, such as those that consider the cohesion of a node’s links.[154] Consider this real social network:

The above figure shows the social network of romantic relationships on Grey’s Anatomy, seasons 1–16. This is a common exercise in our cultural analytics course. The nodes are sized by a prestige measurement and colored by a community detection algorithm (the basic Louvain method as implemented in Gephi).[155] The nodes are placed via a force-directed method.[156] There are many things we can infer from this diagram; one of my favorites is to look at the structure of the later seasons and what the community detection method can tell us about how to structure a long-running narrative.
Real social networks are necessarily dynamic, and they change over time. You surely are not friends with your preschool pals today in the same way that you were then. These processes of revision are an important dimension of social network analysis as well.[157]
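Since PageRank comes up repeatedly in this section, here is a minimal power-iteration sketch on a made-up four-node directed graph. Real work would use a library such as networkx or Gephi; this only exposes the underlying logic of the centrality measure.

```python
# PageRank by power iteration on a tiny, hypothetical directed graph.
# The graph, damping factor, and iteration count are illustrative choices.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each node to the list of nodes it points to."""
    nodes = list(links)
    n = len(nodes)
    rank = {node: 1.0 / n for node in nodes}
    for _ in range(iterations):
        # Everyone starts each round with the "teleport" share of rank.
        new = {node: (1 - damping) / n for node in nodes}
        for node, targets in links.items():
            if targets:  # distribute this node's rank across its out-links
                share = damping * rank[node] / len(targets)
                for t in targets:
                    new[t] += share
            else:        # dangling node: spread its rank over everyone
                for t in nodes:
                    new[t] += damping * rank[node] / n
        rank = new
    return rank

graph = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
for node, score in sorted(pagerank(graph).items(), key=lambda kv: -kv[1]):
    print(node, round(score, 3))
```

In this toy graph, C ranks highest because three nodes point to it, while D, which nothing points to, keeps only its teleport share. This is the convergence-with-degree problem the paragraph above describes, in miniature.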
Key Takeaways
- Social network analysis is an important tool for understanding groups of people.
- There is no generally accepted null hypothesis for network problems.
- Network models are used to describe many things; of particular importance was the evolution of PageRank at Google.
4.21. Artificial Intelligence
Artificial intelligence can refer to a number of distinct things. Culturally, it refers to a humanlike synthetic lifeform, often with a humanoid body, ranging from C-3PO to Lt. Commander Data to Vicki on the show Small Wonder. These are examples of Strong AI, or AI that thinks like a human. Strong AI is distinct from Weak AI, which merely synthesizes human outputs. This distinction matters, as there are ethical concerns we might have for Strong AI that we would not have with various Weak AI tools. When these Weak AI tools are described as “stochastic parrots,” the implication is that these devices are simply repeating sequences of information that have been previously ingested by the model—the degree to which humans might hallucinate that the model is cleverly communicating with them is fundamentally an issue with those people.[158] Humanizing weak models can be useful for the industry because it might convince potentially skeptical parties that these bots could replace humans in care positions, and perhaps more importantly, it might convince some to believe in robots’ rights (a powerful strategy to change the balance of responsibility), a topic discussed in the legal analysis and ethics sections (section 4.7 and section 4.8). Singularity and prompt interfacing are likewise discussed in section 1.10. Artificial super intelligence is an interesting question for speculation, but it is not particularly useful for understanding the world, as it assumes both that Strong AI is a tractable problem and that it would be solved by a technology with emergent properties. The purpose of this section is to consider the use of artificial intelligence as a method set in new media.
One particularly promising use of AI models is mass inductive methods. At one point, a central construct in the digital humanities was the distinction between distant and close reading.[159] Distant reading offered a new theoretical moment to reconsider which texts were relevant in the first place, to use computational methods to read everything rather than a narrow band of things. This new approach offered different clustering, new methods to direct the effort of close reading, and meaningful rigorous testing for disciplinary axioms. Similar to the broader paradigms of direct detection and exploratory data analysis, large collections of data can offer new inductive windows on the world. That said, there are major challenges, especially ecological ones, with major implications for both the physical world and for communication. The generative outputs of large models have been deployed with great speed, replacing entire domains of human output, which was then quickly fed back into those same models.
“Dead Internet theory” was first presented over a decade ago as something like a conspiracy theory, then became a lingering truth, and now seems to describe the state of the Internet.[160] As AI systems are pushed into the ecosystem to replace human inputs, the perception of an Internet devoid of humans becomes more important and undeniable, as corporations push artificial comments, personas, and technologies at the same time that they engage in abusive and problematic communication.[161] Similarly, business discourse includes “Google Zero,” where the use of AI summary systems in search engines diverts traffic from the Internet itself, destroying both the economy of knowledge production and the communities that produce engagement itself.[162] Won’t central knowledge infrastructures like Wikimedia protect us? Not really—the unauthorized and unbridled scraping operations are damaging the infrastructure of the open Internet and sites like Wikimedia.[163] Perhaps finally, social media are not coming to save us with a flood of text—even Mark Zuckerberg has conceded that the actions of the companies have brought the age of online participatory sharing to an end.[164]
Perhaps this is for the best. After all, some of the problems we have noted here, like abuses of AI, might be curtailed by stricter alignment. The challenge for alignment theory is that alignment itself is among the most likely actions to produce misalignment.[165] There are even intentional attempts at misalignment to match the various preferences of regimes around the world.[166] While sci-fi variants of alignment might be fun to debate, they miss the point of alignment: this is something we do to our models, not something they do on their own. Alignment is neither an indexical sign nor a pure window into reality. Just 20 years ago, social media and search engine technology were seriously presented as a Copernican revolution, calling for the very overthrow of the scientific method itself.[167] But what does extreme hype about Google really mean for alignment? There is a repeated rhetorical frame where large computational models somehow overcome supervision and design to be truly independent—the hype cycle depends on the magical possibility of investment, as noted with search engines, then social media, and now AI.[168] Certainly, some attempts at alignment are better than others, especially attempts at building inclusive tools with community participation. Generically aligning a single, overarching large language model to a singular concept of good via some sort of easy adjustment system like a knob is silly.
The single most important element is to avoid magic tricks. Peter Nagy and Gina Neff argue that much of the discourse of AI at the time of writing this book is structured as magic—it is a performance to solicit investment and defer accountability.[169] This is especially true when so much of the task of basic tagging and support for these light shows relies on human labor around the world, which is what leads Antonio Casilli to say that we are still “waiting for robots.”[170] We cannot wait for humans working in poor conditions to produce the information that we need for research. The use of human labor in the AI industry is something of a dirty little secret that must be kept in mind when evaluating magical claims.
Key Takeaways
- It is essential to distinguish between sales pitches, hype for technologies, and their meaningful use cases. Beware of magical claims.
- Infrastructural analysis of AI systems reveals deep contradictions that could collapse those socio-technical arrangements.
- Distinguishing between mass inference (big data), ubiquitous classification, and generative methods is essential for future media students.
4.21.1. Classifiers
Classifiers are tools for the rapid analysis of large datasets, especially unstructured ones. For the most part, we work in images and text; sounds will surely come along soon. These could come in simple forms, like a topic model, or in more complex forms, like a model trained intentionally on human-annotated tensors. So, what do you do with a classifier? The easy answer is to classify, of course. There is so much information in this world encoded in so many ways that a major challenge is how to create abstract representations of that data. The experience of data themselves can be overwhelming, as they are functionally infinite in dimension. Holding the story of a thousand novels in your mind at one time is impossible; even making a graph of that would be beyond cognitive limits. Classifiers can assimilate information. With abstract representations of those novels, we might be able to assess their differences, create baselines among them, and then examine those representations for commonalities or clusters. Consider the underlying mathematical power of a tensor, which we might translate.
This is an excerpt of the preprocessed tensor input for a shirt, expressed as pixel intensities on a highly dimensionality-reduced 28 × 28 pixel image, followed by the tensor input for pants:
|   | V1 | V2 | V3 | V4 |
|---|---|---|---|---|
| 1 | 0.8274510 | 0.9019608 | 0.8784314 | 0.9176471 |
| 2 | 0.8078431 | 0.8745098 | 1.0000000 | 1.0000000 |
| 3 | 0.8980392 | 0.8666667 | 0.7372549 | 0.6039216 |
| 4 | 0.4156863 | 0.4588235 | 0.6588235 | 0.8588235 |
|   | V1 | V2 | V3 | V4 |
|---|---|---|---|---|
| 1 | 0.8235294 | 0.8156863 | 0.7764706 | 0.8117647 |
| 2 | 0.8235294 | 0.8156863 | 0.7843137 | 0.7921569 |
| 3 | 0.8274510 | 0.8078431 | 0.8039216 | 0.7764706 |
| 4 | 0.8235294 | 0.8235294 | 0.8196078 | 0.7647059 |
This is the tensor for the song “Otherside” by the Red Hot Chili Peppers:
[1] 72 151 72 151 117 1 1097 5 13090 118 1 1523 2 [14] 1381 2625 191 94 55 298 7 137 133 28 6 71 9 [27] 12 2 2787 71 9 12 2 2787 71 9 12 71 9 [40] 12 72 151 72 151 117 1 1097 2763 7 245 1 20 [53] 1 20 278 32 158 3508 7 1043 32 21 1 111 2763 [66] 7 245 2274 7 104 202 5 573 912 2 13091 417 4 [79] 11 4903 7 2509 39 2219 6 27 161 1 115 5 2807 [92] 133 28 6 71 9 12 2 2787 5 3447 13092 4 246 [105] 10 7 706 5 13093 25 7 395 5136 2873 928 2 1637 [118] 4 355 2 6264 133 28 6 71 9 12 2 2787 71 [131] 9 12 2 2787 71 9 12 1 20 1 20 278 32 [144] 158 71 9 12 72 151 72 151 117 1 1097 2763 7 [157] 245 1 20 1 20 278 32 158 3508 7 1043 32 21 [170] 1 111 157 8 12 71 8 25 5 183 320 499 8 [183] 38 180 8 12 2 2787 1 2453 4 92 9 13 32 [196] 61 7 383 1 819 9 45 1 819 9 45 4 86 [209] 32 590 172 72 151 72 151 117 1 1097 3508 7 1043 [222] 32 21 1 111 2763 7 245 1 20 1 20 278 32 [235] 158 3508 7 1043 32 21 1 111 119 72 151 1 20 [248] 1 20 278 32 884 3508 7 1043 32 21 1 111 1 [261] 283 14 1299 129 5 13094 1 265 9 18 9 833 18 [274] 2 556 513 3 27 3 52 55 50 57 133 28 6 [287] 71 9 12 2 2787 4905 80 34 9 801 6 8
As a point of interest, the word encoded as 13090 is “cemetery,” apparently not often used in pop music.
Here is Taylor Swift’s “Teardrops on My Guitar”:
[1] 1774 606 53 8 1 730 93 32 37 31 731 26 1 64 159 49 [17] 1465 43 220 22 8 63 742 220 31 10 46 220 702 28 9 66 [33] 1 532 40 63 444 220 21 1 121 108 53 130 220 2 597 25 [49] 2 2085 12 7 1215 2 123 221 13 1056 8 1346 12 5 1346 686 [65] 220 2 323 10 2 269 1 99 1026 1 526 5 349 31 63 184 [81] 49 20 27 128 1 47 1774 1812 131 8 52 63 92 13 1 64 [97] 474 4 110 63 458 31 1588 2 465 16 2373 1 324 1 94 30 [113] 39 116 222 134 807 113 134 21 58 46 88 10 500 401 182 4 [129] 27 246 958 93 220 2 597 25 2 2085 12 7 1215 34 1 70 [145] 4 1 91 2 123 221 13 1056 8 1346 12 5 1346 686 220 2 [161] 323 10 2 269 1 99 1026 20 27 128 1 47 31 1 343 152 [177] 288 98 1 157 38 2 256 90 125 147 654 45 4 437 29 100 [193] 362 387 93 220 2 597 25 2 2085 12 7 1215 2 123 56 787 [209] 28 327 16 8 6 193 7 200 220 2 323 10 2 269 1 99 [225] 1026 4 136 13 17 258 30 20 27 128 1 47 220 2 62 2059 [241] 18 26 234 55 327 4 220 21 13 1 91 6 266 202 1774 606 [257] 53 8 1 526 5 349 31 63 184 49 90 428 246 401 13 95 [273] 63 1405 108 4 246 28 136 13 1 83 6 188 314 1774 1405 6 [289] 8
What does this offer communication? Among the fundamental problems in this field is its sheer scale. There is so much communication, and as cultural analytics theory would suggest, it has increased, but so far there is no perfect accessibility to that information. Classifiers make it possible to bring in far more information to produce documents where before there were only experiences.
Essential to the function of these systems are Markov chains. This may seem like an esoteric term, but it is straightforward. A Markov chain is a set of discrete states with varying probabilities of sequential transition. Let’s say you were thinking about a hypothetical Instagram feed with a few distinct types of posts: fashion, fitness, and food. When you see a food post, you are 50% likely to see another food post, and 25% likely to see each of the others. When you see a fashion post, you are 50% likely to see a post about fitness, 25% likely to see more fashion, and 25% likely to see food. When you see fitness, you are 75% likely to see more fitness and 25% likely to see fashion, but never food.
| Source | Target | Weight |
|---|---|---|
| Food | Food | 50 |
| Food | Fashion | 25 |
| Food | Fitness | 25 |
| Fashion | Fashion | 25 |
| Fashion | Food | 25 |
| Fashion | Fitness | 50 |
| Fitness | Fitness | 75 |
| Fitness | Fashion | 25 |
The reason why we care about Markov chains is that this core mathematical logic is what drives all flow media as well as AI. Getting a sense of how the weights are produced, and especially of how the results of those random walks are preprocessed and postprocessed, is essential work for communication criticism today. Why, you ask? Because that is the essential work of production.
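The transition table above can be run directly as a simulation. This sketch performs a seeded random walk over those weights; the seed and walk length are arbitrary choices.

```python
# A random walk over the Instagram-feed Markov chain tabulated above.
# Weights come directly from the table; the seed and length are arbitrary.

import random

TRANSITIONS = {
    "Food":    (["Food", "Fashion", "Fitness"], [50, 25, 25]),
    "Fashion": (["Fashion", "Food", "Fitness"], [25, 25, 50]),
    "Fitness": (["Fitness", "Fashion"],         [75, 25]),  # never Food
}

def simulate_feed(start, steps, seed=0):
    rng = random.Random(seed)
    state, feed = start, [start]
    for _ in range(steps):
        options, weights = TRANSITIONS[state]
        state = rng.choices(options, weights=weights)[0]
        feed.append(state)
    return feed

print(simulate_feed("Food", 20))
```

One structural property of this particular chain is visible in any run: because fitness never transitions to food, a food post can only ever appear after a food or fashion post.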
Key Takeaways
- Classifiers are powerful tools for understanding wide fields of text.
- Multiple underlying models can drive classification outcomes.
- Classification and regression are the key analytic tasks done by AI/ML systems.
4.21.2. Generators
Of particular interest at this time are generative models, especially large language models. At the center of these models are Markov chains. What one does with a Markov chain is a random walk: starting from a particular node, the states of the network are traversed probabilistically, producing an output. In the case of a Markov chain of social media states, this can produce a relatively satisfying experience of flow. Given the attention paradigm for generators, words themselves need not be encoded at all; the relationships between the words can drive learning processes. Claude Shannon was on to this in 1948. Large language models can generate strings of salient words using the underlying probabilities of the words. Given the effectiveness of topic modeling, it would stand to reason that an interface designed to meaningfully parse prompts could use a topic model to then access the right model, which was already learned.
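A Shannon-style sketch of the idea: learn word-to-word transition weights from a corpus, then take a random walk to generate text. The toy corpus and seed here are made up, and real large language models learn far richer relationships via attention, but the walk-over-probabilities logic is the ancestor of both.

```python
# A word-bigram Markov generator in the spirit of Shannon (1948):
# learn transition counts from a toy corpus, then random-walk the chain.

import random
from collections import defaultdict

def train(text):
    chain = defaultdict(list)
    words = text.lower().split()
    for current, following in zip(words, words[1:]):
        chain[current].append(following)  # duplicates encode the weights
    return chain

def generate(chain, start, length, seed=0):
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:          # dead end: no observed continuation
            break
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = ("the media are the message and the message is the medium "
          "and the medium shapes the message")
chain = train(corpus)
print(generate(chain, "the", 10))
```

Every generated word pair was observed somewhere in the corpus; novelty comes only from recombination, which is exactly the intuition behind the “stochastic parrot” framing discussed earlier.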
Image generation tends to rely on diffusion models. These models start with pure Gaussian noise (impure noise could be produced via a Markov process) and then work backward to bring the noise to the image encoding.[171] Voice reproduction is less complex, as the generative stack in this space relies on an established prompt. Encoding the vocalics of the speaker is straightforward compared to denoising. These models rely on a text to be spoken and a sample of the vocal data, which are then connected to a massively trained model of discrete audio tokens.[172]
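The forward half of the diffusion idea, destroying an image with Gaussian noise one step at a time, can be sketched briefly. The learned half of a real diffusion model (working backward from noise to an image) is the hard part and is omitted here; the pixel values and noise schedule are illustrative assumptions.

```python
# A sketch of the *forward* diffusion process: a clean signal is gradually
# replaced by Gaussian noise. The learned reverse (denoising) model that
# makes diffusion generation work is omitted; values are illustrative.

import math
import random

def noise_step(x, beta, rng):
    """One forward step: x_t = sqrt(1 - beta) * x_{t-1} + sqrt(beta) * noise."""
    return [math.sqrt(1 - beta) * v + math.sqrt(beta) * rng.gauss(0, 1)
            for v in x]

rng = random.Random(0)
image = [0.82, 0.90, 0.88, 0.92]   # a tiny patch of pixel intensities
for t in range(200):               # after many steps, the signal is gone
    image = noise_step(image, beta=0.05, rng=rng)
print([round(v, 2) for v in image])
```

After 200 steps the original pixels contribute a factor of roughly 0.95^100 (about 0.006) to each value, so what prints is essentially pure Gaussian noise, which is the starting point from which a trained model would then generate.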
The implications of generators should be clear. Why would we need any of the sections of this book about how to make things if the future simply copies what we have already made? More than a decade ago, we were told by university administrators that no additional production equipment would be purchased because cell phones could be used instead. But more equipment is purchased regularly. The semantic gap is real—at what point is it simply much easier to make something than to engage in tens of thousands of prompt-engineering activities to potentially get a computer to produce the things that you need?
Key Takeaways
- Markov chains are a key technology that you need to understand.
- The weights in the model mostly drive the outputs in a somewhat predictable way.
- Information theory foundations are key to understanding generators; see the classic research of Claude Shannon.
4.22. Datasets
All AI systems require data on which to train. Many APIs have already been shut down due to abuse by AI firms, and with them entire avenues of inquiry have been exhausted. The challenge (and why human labor is needed) is that much of the easily accessible data have already been collected. This poses unique challenges because datasets will be increasingly expensive. To use negotiation slang, “I know what I got” will be what everyone with data says from here on out. The days of cheap data are over. Some argue that a major goal of the Department of Government Efficiency (DOGE) in early 2025 was the acquisition of high-quality US government data.[173] The internal documents of government agencies are far better structured and chock-full of expert, often confidential information. Another idea, if finding more data isn’t an option, is using synthetic data to train AI models on information we have produced for that purpose. This new synthetic data avoids the ethical problems of privacy violations, abuse of humans, and data exhaustion. What it risks, though, is “model collapse,” where AI systems trained on their own outputs suffer “irreversible defects.”[174]
Even if we could find more data, there are serious questions about the viability of the “more data” paradigm. Meaning is essential for viable artificial general intelligence (AGI)—the attention-only structure can’t get there. Emily Bender and Alexander Koller call this discipline mere “BERTology.”[175] In this sense, the next steps are of a different kind, not a different degree. Cognitive psychologists have noted that the pool of good training data is already exhausted, and overly aggressive efforts to connect cognitive science to AI are leading to a pernicious state where cognitive researchers presume that AI outputs reflect human cognitive structures, and then AI researchers presume cognitive research is about actual humans.[176] Perhaps most interestingly, DeepSeek showed that a substantially smaller, open-source model could achieve similar results.[177]
This leads us to another question: Can we look at the training data to understand the outputs of a model? With smaller models, definitely. Consider my Taylor Swift example from above. It is not that difficult to look at the encodings of the lyrics of several artists and find clusters (typically I use Kanye West, Taylor Swift, Usher, Lil Wayne, Foo Fighters, Red Hot Chili Peppers, and Vulfpeck). As models expand, this method becomes impractical. It is difficult to know definitively why one image, string, or sound leads to a particular output. The lyric model used in my teaching is also a simple classifier, not a true generative stack. On another level, one might not want to examine the content of their training data—it might not be safe, it might include images of violence or abuse, or one might unintentionally download illegal material.[178] As users want more control of their models and outputs, the material on which they were trained might become something that people want to know, especially for production models. For closed-source systems, training set criticism may well be beyond our range of options.
Key Takeaways
- Training set criticism is a key idea, but it is fundamentally limited by the sheer size of the datasets.
- Large AI/ML methods can become a self-fulfilling prophecy—they become the object of the research rather than the empirical world.
- It is unclear whether expanded datasets are the future of AI. Smaller, more focused models are already succeeding.
4.23. Agent-Based Models
Artificial agents offer access to new methodologies beyond classification. What if we were no longer bound by what we could observe in the world? Synthetic data are risky, as they can collapse your model. What if, with adequate dimensionality, one could produce an artificial world with artificial agents that could then play out the phenomena of interest? This is the promise of agent-based models and the paradigm of generativism. The landmark work in this area is Growing Artificial Societies.[179] The approach here proceeds from toy games, especially the sugarscape, a simulation of agents moving across a grid of sugar-bearing cells. Not all human simulations can be reduced to toy games, which is where AI agents come in. If the agents can be situated in high enough dimensionality, they could tell us a great deal about human communication. This is reasonable given how much of our online world is already populated by artificial agents, and studies of those agents are emerging as a key domain of media research.[180]
Central to the contributions of agent-based models are differing nomothetic explanations. There are four main explanations: deliberation, equilibrium, evolution, and conflict. Deliberation and conflict have dominated our research agenda for decades. These are focused on individual agents and their social dramas. Conflict models are especially attractive, as they offer robust and seemingly complete models of the world, even if those models are themselves incomplete. Agent-based simulations can bring evolution and equilibrium back into social research and the humanities, which could change the underlying explanations for the better. For students in economics, this isn’t controversial; after all, equilibrium models are common in that field. For Joshua Epstein and Robert Axtell, the fundamental question is: “Can You Grow It?”[181] Generativism shifts from the question of explanation to production and thus offers a real alternative paradigm, not simply a computationally enhanced version of what we have now. The fundamental challenge comes in translating toy games into full-world simulations, which is the promise of artificial agents today and why, 30 years into this paradigm, progress could finally accelerate.
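A sugarscape-flavored toy in a few lines: agents on a one-dimensional strip of sugar-bearing cells repeatedly move to the richest cell within their vision and harvest it. Every parameter here (grid size, vision, regrowth rule) is an illustrative assumption, not the Epstein and Axtell model itself.

```python
# A toy, sugarscape-flavored agent-based model. Agents greedily move to
# the richest visible cell and harvest its sugar; sugar regrows slowly.
# All parameters are illustrative assumptions, not the published model.

import random

def run(steps=20, cells=10, agents=3, vision=2, seed=0):
    rng = random.Random(seed)
    sugar = [rng.randint(0, 4) for _ in range(cells)]   # initial endowment
    positions = rng.sample(range(cells), agents)        # distinct starts
    wealth = [0] * agents
    for _ in range(steps):
        for i, pos in enumerate(positions):
            # Look `vision` cells in each direction; move to the richest.
            nearby = range(max(0, pos - vision), min(cells, pos + vision + 1))
            positions[i] = max(nearby, key=lambda c: sugar[c])
            wealth[i] += sugar[positions[i]]            # harvest it all
            sugar[positions[i]] = 0
        sugar = [min(4, s + 1) for s in sugar]          # slow regrowth
    return wealth

print(run())  # accumulated sugar ("wealth") per agent
```

Even this toy shows the generativist move: whatever macro pattern appears in agent wealth is not assumed in advance but grown from simple local rules.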
Key Takeaways
- Agent-based models can change our modes of social description.
- Agents are a key element of generativism.
- Agent simulations are often playable.
4.24. Works Cited
Anderson, C. “The End of Theory.” Wired, June 23, 2008. https://www.wired.com/2008/06/pb-theory/.
Andrejevic, Mark. “Data Civics: A Response to the ‘Ethical Turn.’” Television & New Media 21, no. 6 (2020). https://journals.sagepub.com/doi/abs/10.1177/1527476420919693.
Anker, Elizabeth. “A Reactionary Turn in Literary Studies: On Jonathan Kramnick’s ‘Criticism and Truth.’” Los Angeles Review of Books, June 28, 2024. https://lareviewofbooks.org/article/the-reactionary-turn-in-literary-studies-on-jonathan-kramnicks-criticism-and-truth.
Anthony, Kathryn E., Timothy L. Sellnow, and Alyssa G. Millner. “Message Convergence as a Message-Centered Approach to Analyzing and Improving Risk Communication.” Journal of Applied Communication Research 41, no. 4 (2013): 346–64.
Aschwanden, Christie. “Science Isn’t Broken.” FiveThirtyEight (blog), August 19, 2015. https://fivethirtyeight.com/features/science-isnt-broken/.
“As Internet Enshittification Marches on, Here Are Some of the Worst Offenders.” Ars Technica, February 5, 2025. https://arstechnica.com/gadgets/2025/02/as-internet-enshittification-marches-on-here-are-some-of-the-worst-offenders/.
Barthes, Roland. Mythologies. Translated by Jonathan Cape. Noonday Press, 1991.
Baxter, Leslie, and Brian Asbury. “Critical Approaches to Interpersonal Communication.” In Engaging Theories in Interpersonal Communication: Multiple Perspectives, edited by Dawn O. Braithwaite and Paul Schrodt. SAGE, 2014.
Becker, Christine. “Fin-Syn Begin Again? The Rhetoric of Deregulation.” Paper presented at Media in Transition 3, Cambridge, MA. 2003. https://cmsw.mit.edu/mit3/papers/becker.pdf.
Begley, C. Glenn. “Reproducibility: Six Red Flags for Suspect Work.” Nature 497 (2013): 433–34. https://doi.org/10.1038/497433a.
Bender, Emily M., and Alexander Koller. “Climbing towards NLU: On Meaning, Form, and Understanding in the Age of Data.” In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 5185–98. Association for Computational Linguistics, 2020. https://doi.org/10.18653/v1/2020.acl-main.463.
Bendett, Samuel. “Battlefield Drones and the Accelerating Autonomous Arms Race in Ukraine.” West Point Modern War Institute, January 10, 2025. https://mwi.westpoint.edu/battlefield-drones-and-the-accelerating-autonomous-arms-race-in-ukraine/.
Best, Stephen, and Sharon Marcus. “Surface Reading: An Introduction.” Representations 108, no. 1 (2009): 1–21. https://doi.org/10.1525/rep.2009.108.1.1.
Billig, Michael. Banal Nationalism. SAGE, 1995.
Binmore, Ken. Game Theory: A Very Short Introduction. Oxford University Press, 2007.
Birhane, Abeba, J. van Dijk, and F. Pasquale. “Debunking Robot Rights Metaphysically, Ethically, and Legally.” First Monday 29, no. 4 (2024). https://doi.org/10.5210/fm.v29i4.13628.
Bisbee, James, Joshua D. Clinton, Cassy Dorff, Brenton Kenkel, and Jennifer M. Larson. “Synthetic Replacements for Human Survey Data? The Perils of Large Language Models.” Political Analysis 32, no. 4 (2024): 401–16. https://doi.org/10.1017/pan.2024.5.
Black, Ryan C., and James F. Spriggs II. “The Citation and Depreciation of U.S. Supreme Court Precedent.” Journal of Empirical Legal Studies 10, no. 2 (2013): 325–58.
Blondel, Vincent D., Jean-Loup Guillaume, Renaud Lambiotte, and Etienne Lefebvre. “Fast Unfolding of Communities in Large Networks.” Journal of Statistical Mechanics: Theory and Experiment 2008, no. 10 (2008): P10008. https://doi.org/10.1088/1742-5468/2008/10/P10008.
Boateng, Godfred O., Torsten B. Neilands, Edward A. Frongillo, Hugo R. Melgar-Quiñonez, and Sera L. Young. “Best Practices for Developing and Validating Scales for Health, Social, and Behavioral Research: A Primer.” Frontiers in Public Health 6 (June 2018): 149. https://doi.org/10.3389/fpubh.2018.00149.
Bogost, Ian. “The Rhetoric of Video Games.” In The Ecology of Games, edited by Katie Salen. MIT Press, 2008.
Bogost, Ian. Alien Phenomenology, or What It’s Like to Be a Thing. University of Minnesota Press, 2012.
Bogost, Ian. “Video Games Are Better without Characters.” The Atlantic, March 13, 2015. http://www.theatlantic.com/technology/archive/2015/03/video-games-are-better-without-characters/387556/.
Bogost, Ian. Play Anything: The Pleasure of Limits, the Uses of Boredom, and the Secret of Games. Basic Books, 2016.
Boltanski, Luc, and Laurent Thévenot. On Justification: Economies of Worth. Translated by Catherine Porter. Princeton University Press, 2006.
boyd, danah. It’s Complicated: The Social Lives of Networked Teens. Yale University Press, 2014.
Brenner, Philip S., and John DeLamater. “Lies, Damned Lies, and Survey Self-Reports? Identity as a Cause of Measurement Bias.” Social Psychology Quarterly 79, no. 4 (2016): 333–54. https://doi.org/10.1177/0190272516628298.
Caldwell, John. Production Culture. Duke University Press, 2008.
Carter, Travis J., Melissa J. Ferguson, and Ran R. Hassin. “A Single Exposure to the American Flag Shifts Support toward Republicanism up to 8 Months Later.” Psychological Science 22, no. 8 (August 2011): 1011–18. https://doi.org/10.1177/0956797611414726.
Casilli, Antonio. “Waiting for Robots: The Ever-Elusive Myth of Automation and the Global Exploitation of Digital Labor.” Sociologica (2021): 112–33.
Caves, Richard E. Creative Industries: Contracts between Art and Commerce. Harvard University Press, 2002.
Chandler, Daniel. “Semiotics for Beginners: Signs.” Last modified April 11, 2006. https://www.cs.princeton.edu/~chazelle/courses/BIB/semio2.htm.
Chang, Hua-Ling Linda. “Semiotics.” University of Chicago Theories of Media, winter 2007. http://csmt.uchicago.edu/glossary2004/semiotics.htm.
Chayka, Kyle. “Ukraine Becomes the World’s ‘First TikTok War.’” New Yorker, March 3, 2022. https://www.newyorker.com/culture/infinite-scroll/watching-the-worlds-first-tiktok-war.
Chayka, Kyle. “Mark Zuckerberg Says Social Media Is Over.” New Yorker, April 23, 2025. https://www.newyorker.com/culture/infinite-scroll/mark-zuckerberg-says-social-media-is-over.
Chun, Wendy Hui Kyong. Discriminating Data. MIT Press, 2021. https://mitpress.mit.edu/9780262548526/discriminating-data/.
Coase, Ronald. “The Nature of the Firm.” Economica 4, no. 16 (1937): 368–405.
Cohn, Carol. “Women and Wars: A Conceptual Framework.” In Women and Wars. Polity, 2013.
“Computational Science.” Wikipedia, March 19, 2025. https://en.wikipedia.org/w/index.php?title=Computational_science&oldid=1281353015#cite_note-5.
Costikyan, Greg. Uncertainty in Games. MIT Press, 2013.
Crenshaw, Kimberlé. “Race, Reform, and Retrenchment: Transformation and Legitimation in Antidiscrimination Law.” Harvard Law Review 101, no. 7 (1988).
Curtin, Michael. “Media Capitals: Cultural Geographies of Global TV.” In Television after TV, edited by Lynn Spigel and Jan Olsson. Duke University Press, 2004.
Curtin, Michael. Playing to the World’s Biggest Audience: The Globalization of Chinese Film and TV. University of California Press, 2007.
De Certeau, Michel. The Practice of Everyday Life. 2nd ed. University of California Press, 2002.
Derrida, Jacques. Specters of Marx: The State of the Debt, the Work of Mourning and the New International. Routledge, Chapman & Hall, 1994.
Derrida, Jacques. “Structure, Sign, and Play in the Discourse of the Human Sciences.” In Twentieth-Century Literary Theory, edited by K. M. Newton. Macmillan Education UK, 1997. https://doi.org/10.1007/978-1-349-25934-2_24.
DeSantis, Alan D. “Smoke Screen: An Ethnographic Study of a Cigar Shop’s Collective Rationalization.” Health Communication 14, no. 2 (2002): 167–98. https://doi.org/10.1207/S15327027HC1402_2.
Deuze, Mark. Media Work. Polity, 2007.
Doyle, Michael. “Kant, Liberal Legacies, and Foreign Affairs.” Philosophy and Public Affairs 12, no. 3 (1983): 205–35.
Dunham, J. H., and P. Guthmiller. “Doing Good Science: Authenticating Cell Line Identity.” Promega, accessed October 17, 2025. https://www.promega.com/resources/pubhub/cell-line-authentication-with-strs-2012-update/.
Dyer-Witheford, Nick. Cyber-Marx: Cycles and Circuits of Struggle in High-Technology Capitalism. University of Illinois Press, 1999.
Eaton, Kit. “Unpacking the Secret $2 Million Internet in a Suitcase.” Fast Company, June 13, 2011. https://www.fastcompany.com/1759428/unpacking-secret-2-million-internet-suitcase.
Edwards, Benj. “AI Bots Strain Wikimedia as Bandwidth Surges 50%.” Ars Technica, April 2, 2025. https://arstechnica.com/information-technology/2025/04/ai-bots-strain-wikimedia-as-bandwidth-surges-50/.
Edwards, Jane A. “The Transcription of Discourse.” In The Handbook of Discourse Analysis. John Wiley & Sons, 2005. https://doi.org/10.1002/9780470753460.ch18.
Elish, M. C. “Moral Crumple Zones: Cautionary Tales in Human-Robot Interaction.” Engaging Science, Technology, and Society 5 (March 2019): 40–60. https://doi.org/10.17351/ests2019.260.
Elish, M. C., and danah boyd. “Situating Methods in the Magic of Big Data and AI.” Communication Monographs 85, no. 1 (2018): 57–80. https://doi.org/10.1080/03637751.2017.1375130.
Epstein, Joshua, and Robert Axtell. Growing Artificial Societies. MIT Press, 1996. https://mitpress.mit.edu/9780262550253/growing-artificial-societies/.
Everett, Martin G., and Thomas W. Valente. “Bridging, Brokerage and Betweenness.” Social Networks 44 (January 2016): 202–8. https://doi.org/10.1016/j.socnet.2015.09.001.
Faltesek, Daniel. Selling Social Media. Bloomsbury Academic, 2018.
Faltesek, Daniel. “Strategically Matching Messaging to the Platform: The Case of ‘Biolabs’ on Instagram, Facebook, and Reddit.” Communication and the Public 8, no. 2 (2023). https://doi.org/10.1177/20570473231173784.
Felski, Rita. “Critique and the Hermeneutics of Suspicion.” M/C Journal 15, no. 1 (2011). http://journal.media-culture.org.au/index.php/mcjournal/article/view/431.
Folkenflik, David. “Jeff Bezos Revamps Washington Post Opinion Section, Leading Editor to Quit.” NPR, February 26, 2025. https://www.npr.org/2025/02/26/nx-s1-5309725/jeff-bezos-washington-post-opinion-section.
Fukuyama, Francis. The End of History and the Last Man. Free Press, 1992.
Geertz, Clifford. Interpretation of Cultures. Basic Books, 1973.
Gibson, James Jerome. The Ecological Approach to Visual Perception. Psychology Press, 1986.
Goldfarb, Avi, and Jon R. Lindsay. “Prediction and Judgment: Why Artificial Intelligence Increases the Importance of Humans in War.” International Security 46, no. 3 (2022): 7–50. https://doi.org/10.1162/isec_a_00425.
Gómez, Sergio. “Centrality in Networks: Finding the Most Important Nodes.” In Business and Consumer Analytics: New Ideas, edited by Pablo Moscato and Natalie Jane De Vries. Springer International, 2019. https://doi.org/10.1007/978-3-030-06222-4_8.
Gooding, Matthew. “ASML: The Most Successful Tech Company You’ve Never Heard Of.” Tech Monitor, August 6, 2021. https://www.techmonitor.ai/technology/future-of-asml-photolithography-semiconductor-chip-euv.
Goodnight, G. Thomas. “Legitimation Inferences: An Additional Component for the Toulmin Model.” Informal Logic 15, no. 1 (1993): 1. https://doi.org/10.22329/il.v15i1.2468.
Gregg, Melissa. Counterproductive. Duke University Press, 2018. https://www.dukeupress.edu/counterproductive.
Guo, Daya, Qihao Zhu, Dejian Yang, et al. “DeepSeek-Coder: When the Large Language Model Meets Programming—The Rise of Code Intelligence.” January 26, 2024. arXiv:2401.14196. https://doi.org/10.48550/arXiv.2401.14196.
Haass, Richard N. The Reluctant Sheriff: The United States after the Cold War. Council on Foreign Relations Press, 1998.
Halgin, Daniel S., Stephen P. Borgatti, and Zhi Huang. “Prismatic Effects of Negative Ties.” Social Networks 60 (January 2020): 26–33. https://doi.org/10.1016/j.socnet.2019.07.004.
Hall, Sean. This Means This, This Means That: A User’s Guide to Semiotics. Laurence King, 2012.
Hall, Stuart. “The Problem of Ideology—Marxism without Guarantees.” Journal of Communication Inquiry 10, no. 2 (1986): 28–44.
Hariman, Robert, and Francis Beer. “Realism and Rhetoric in International Relations.” In Post-Realism: The Rhetorical Turn in International Relations. Michigan State University Press, 1996.
Hariman, Robert, and John Louis Lucaites. No Caption Needed: Iconic Photographs, Public Culture, and Liberal Democracy. University of Chicago Press, 2007.
Harris, Christine R., Noriko Coburn, Doug Rohrer, and Harold Pashler. “Two Failures to Replicate High-Performance-Goal Priming Effects.” PLOS One 8, no. 8 (2013): e72467. https://doi.org/10.1371/journal.pone.0072467.
Havens, Timothy. Global Television Marketplace. British Film Institute, 2008.
Havens, Timothy. Black Television Travels: African American Media around the Globe. New York University Press, 2013.
Havens, Timothy, and Amanda Lotz. Understanding Media Industries. Oxford University Press, 2012.
Hayes, Dade. “Meta Chatbots Using Voices of John Cena and Kristen Bell Got Sexually Explicit, Even with Minors—Report.” Deadline Hollywood Daily, April 28, 2025. https://www.msn.com/en-us/technology/artificial-intelligence/meta-chatbots-using-voices-of-john-cena-kristen-bell-got-sexually-explicit-even-with-minors-report/ar-AA1DMpPS.
Helmond, Anne. “The Platformization of the Web: Making Web Data Platform Ready.” Social Media Society (2015). https://doi.org/10.1177/2056305115603080.
Herman, Edward S., and Robert Waterman McChesney. The Global Media: The Missionaries of Global Capitalism. Cassell, 1998.
Hintz, Elizabeth A., and Kristina M. Scharp. “‘I Hate All the Children, Especially Mine’: Applying Relational Dialectics Theory to Examine the Experiences of Formerly Childfree Regretful Parents.” Journal of Social and Personal Relationships 40, no. 11 (2023): 3781–99. https://doi.org/10.1177/02654075231194363.
Hollihan, Thomas A., and Kevin T. Baaske. Arguments and Arguing: The Products and Process of Human Decision Making. 3rd ed. Waveland Press, 2015.
Honrada, Gabriel. “China Plans to Blow Starlink out of the Sky in a Taiwan War.” Asia Times, January 14, 2025. http://asiatimes.com/2025/01/china-plans-to-blow-starlink-out-of-the-sky-in-a-taiwan-war/.
Huizinga, Johan. Homo Ludens: A Study of the Play Element in Culture. Roy Publishers, 1950. http://archive.org/details/homoludensstudyo50huiz.
Hutson, Matthew. “Artificial Intelligence Faces Reproducibility Crisis.” Science 359, no. 6377 (2018): 725–26. https://doi.org/10.1126/science.359.6377.725.
Isbister, Katherine. How Games Move Us: Emotion by Design. MIT Press, 2016.
Jacobs, Scott. “On the Especially Nice Fit between Qualitative Analysis and the Known Properties of Conversation.” Communications Monographs 57, no. 3 (1990): 243–49. https://doi.org/10.1080/03637759009376200.
Jacomy, Mathieu, Tommaso Venturini, Sebastien Heymann, and Mathieu Bastian. “ForceAtlas2, a Continuous Graph Layout Algorithm for Handy Network Visualization Designed for the Gephi Software.” PLOS One 9, no. 6 (2014): e98679. https://doi.org/10.1371/journal.pone.0098679.
Jenkins, Henry. Convergence Culture: Where Old and New Media Collide. New York University Press, 2006.
Jenkins, Henry, and Sam Ford. Spreadable Media: Creating Value and Meaning in a Networked Culture. New York University Press, 2013.
Jenkins, Henry, and Denise Mann. “Transmedia Workshop.” Presented at the Society for Cinema and Media Studies Conference. Los Angeles, CA, March 22, 2010.
Joseph, John E. “Ferdinand de Saussure.” Oxford Research Encyclopedia of Linguistics, June 28, 2017. https://doi.org/10.1093/acrefore/9780199384655.013.385.
Jullien, Bruno, and Wilfried Sand-Zantman. “The Economics of Platforms: A Theory Guide for Competition Policy.” Information Economics and Policy 54 (March 2021): 100880. https://doi.org/10.1016/j.infoecopol.2020.100880.
“Jurisdiction: Original, Supreme Court.” Federal Judicial Center, accessed November 20, 2018. https://www.fjc.gov/history/courts/jurisdiction-original-supreme-court.
Kant, Immanuel. Perpetual Peace. 1795. http://fs2.american.edu/dfagel/www/Class%20Readings/Kant/Immanuel%20Kant,_Perpetual%20Peace_.pdf.
Kant, Immanuel. Groundwork for the Metaphysics of Morals. Translated by Mary Gregor and Jens Timmerman. Cambridge University Press, 1998.
Kruschke, John. “An Open Letter to Editors of Journals, Chairs of Departments, Directors of Funding Programs, Directors of Graduate Training, Reviewers of Grants and Manuscripts, Researchers, Teachers, and Students.” 2010. https://web.archive.org/web/20101210132444/http://www.indiana.edu/~kruschke/AnOpenLetter.htm.
Kumar, Sangeet. The Digital Frontier: Infrastructures of Control on the Global Web. Indiana University Press, 2021.
Kyeremeh, Emmanuel, and Markus H. Schafer. “Keep Around, Drop, or Revise? Exploring What Becomes of Difficult Ties in Personal Networks.” Social Networks 76 (January 2024): 22–33. https://doi.org/10.1016/j.socnet.2023.06.001.
“LAION-5B: A New Era of Open Large-Scale Multi-Modal Datasets.” LAION website, accessed May 25, 2025. https://laion.ai/blog/laion-5b.
Latour, Bruno. Reassembling the Social: An Introduction to Actor-Network-Theory. Oxford University Press, 2005.
Lemley, Mark. “The Splinternet.” Belo Horizonte 15, no. 1 (2021): 245–91.
Lindia, Matthew S. “Phenomenology of the Turing Test: A Levinasian Perspective.” Journal of Communication, August 15, 2023, jqad026. https://doi.org/10.1093/joc/jqad026.
Linfield, Suzie. The Cruel Radiance. University of Chicago Press, 2010.
Lotz, Amanda. Netflix and Streaming Video: The Business of Subscriber-Funded Video on Demand. Polity, 2022. https://www.politybooks.com/bookdetail?book_slug=netflix-and-streaming-video-the-business-of-subscriber-funded-video-on-demand--9781509552948.
MacKinnon, Catharine A. “Pornography, Civil Rights, and Speech Commentaries.” Harvard Civil Rights-Civil Liberties Law Review 20, no. 1 (1985): 1–70.
Maddox, Jessica. “What Do Creators and Viewers Owe Each Other? Microcelebrity, Reciprocity, and Transactional Tingles in the ASMR YouTube Community.” First Monday 26, no. 1 (2021). https://doi.org/10.5210/fm.v26i1.10804.
Makhortykh, Mykola, Aleksandra Urman, Felix Victor Münch, Amélie Heldt, Stephan Dreyer, and Matthias C. Kettemann. “Not All Who Are Bots Are Evil: A Cross-Platform Analysis of Automated Agent Governance.” New Media & Society 24, no. 4 (2022): 964–81. https://doi.org/10.1177/14614448221079035.
Malik, Alisha. “Meta Spotted Testing AI-Generated Comments on Instagram.” TechCrunch, March 21, 2025. https://techcrunch.com/2025/03/21/meta-spotted-testing-ai-generated-comments-on-instagram/.
Mandelbaum, Jenny, Darcey deSouza, Wan Wei, and Kaicheng Zhan. “Micro-Moments of Social Support: Self-Service-Occasioned Offers at the Family Dinner Table.” Communication Monographs 89, no. 3 (2022). https://nca.tandfonline.com/doi/abs/10.1080/03637751.2021.1985152.
Matheson, Calum Lister. “Desired Ground Zeroes: Nuclear Imagination and the Death Drive.” PhD thesis, University of North Carolina at Chapel Hill, 2015. https://search.proquest.com/openview/52c5bf8a45854a24707a829a282fa1c0/1?pq-origsite=gscholar&cbl=18750.
Mayer, Vicki, Miranda J. Banks, and John T. Caldwell, eds. Production Studies: Cultural Studies of Media Industries. Routledge, 2009.
McCloskey, Deirdre. The Bourgeois Virtues. University of Chicago Press, 2006.
McDonald, Peter. “Homo Ludens: A Renewed Reading.” American Journal of Play 11, no. 2 (2019). https://files.eric.ed.gov/fulltext/EJ1211610.pdf.
McKerrow, Raymie E. “Critical Rhetoric: Theory and Praxis.” Communication Monographs 56, no. 2 (1989): 91–111. https://doi.org/10.1080/03637758909390253.
Mearsheimer, John J. The Tragedy of Great Power Politics. Reprint ed. W. W. Norton, 2003.
Mearsheimer, John J. “Great Power Rivalries: The Case for Realism.” Le Monde Diplomatique, August 1, 2023. https://mondediplo.com/2023/08/02great-powers.
Mercer, Andrew, Arnold Lau, and Courtney Kennedy. “For Weighting Online Opt-In Samples, What Matters Most?” Pew Research Center, January 26, 2018. https://www.pewresearch.org/methods/2018/01/26/for-weighting-online-opt-in-samples-what-matters-most/.
Moeller, Susan D. Compassion Fatigue: How the Media Sell Disease, Famine, War and Death. Psychology Press, 1999.
Montgomery, Barbara M., and Leslie A. Baxter. Dialectical Approaches to Studying Personal Relationships. Psychology Press, 2013.
Moretti, Franco. Distant Reading. Verso, 2013.
Mowitt, John. “Trauma Envy.” Cultural Critique, no. 46 (2000): 272–97. https://doi.org/10.2307/1354416.
Muzumdar, Prathamesh, Sumanth Cheemalapati, Srikanth Reddy RamiReddy, Kuldeep Singh, George Kurian, and Apoorva Muley. “The Dead Internet Theory: A Survey on Artificial Interactions and the Future of Social Media.” Asian Journal of Research in Computer Science 18, no. 1 (2025): 67–73. https://doi.org/10.9734/ajrcos/2025/v18i1549.
Nagy, Peter, and Gina Neff. “Imagined Affordance: Reconstructing a Keyword for Communication Theory.” Social Media Society 1, no. 2 (2015): 2056305115603385. https://doi.org/10.1177/2056305115603385.
Nagy, Peter, and Gina Neff. “Conjuring Algorithms: Understanding the Tech Industry as Stage Magicians.” New Media & Society 26, no. 9 (2024): 4938–54. https://doi.org/10.1177/14614448241251789.
Nakayama, Thomas K., and Robert L. Krizek. “Whiteness: A Strategic Rhetoric.” Quarterly Journal of Speech 81, no. 3 (1995): 291–309. https://doi.org/10.1080/00335639509384117.
Neyman, J., and E. S. Pearson. “On the Use and Interpretation of Certain Test Criteria for Purposes of Statistical Inference: Part I.” Biometrika 20A, no. 1/2 (1928): 175–240. https://doi.org/10.2307/2331945.
Patel, Nilay. “How Independent Websites Are Dealing with the End of Google Traffic.” The Verge, May 30, 2024. https://www.theverge.com/24167865/google-zero-search-crash-housefresh-ai-overviews-traffic-data-audience.
Nissenbaum, Helen. “Accountability in a Computerized Society.” Science and Engineering Ethics 2, no. 1 (1996): 25–42. https://doi.org/10.1007/BF02639315.
Norman, Donald. Living with Complexity. MIT Press, 2010.
Nye, Joseph. Soft Power. Public Affairs, 2005.
O’Brien, Matt. “Tech Industry Tried Reducing AI’s Pervasive Bias. Now Trump Wants to End Its ‘Woke AI’ Efforts.” ABC News, April 27, 2025. https://abcnews.go.com/Technology/wireStory/tech-industry-reducing-ais-pervasive-bias-now-trump-121209908.
Packer, Jeremy, and Joshua Reeves. Killer Apps: War, Media, Machine. Duke University Press, 2020.
Peace, Nathaniel. “Space Denial: A Deterrence Strategy.” Joint Force Quarterly 111 (2023). https://ndupress.ndu.edu/Media/News/News-Article-View/Article/3569640/space-denial-a-deterrence-strategy/.
Pham, Minh. “Internet Micro-Satellites: Filling the Sky and Connecting the World.” University of Southern California Viterbi School of Engineering, May 4, 2025. https://illumin.usc.edu/internet-micro-satellites-filling-the-sky-and-connecting-the-world/.
Pinchevski, Amit, and John Durham Peters. “Autism and New Media: Disability between Technology and Society.” New Media & Society 18, no. 11 (2016): 2507–23. https://doi.org/10.1177/1461444815594441.
Pollock, Friedrich. “State Capitalism: Its Possibilities and Limitations.” In The Essential Frankfurt School Reader, edited by Arato and Gebhardt. Continuum, 1985.
Pomerantz, Anita. “Conversation Analytic Claims.” Communications Monographs 57, no. 3 (1990): 231–35. https://doi.org/10.1080/03637759009376198.
Posner, Richard A. The Problems of Jurisprudence. Harvard University Press, 1993.
“Ragan Fox Breaks Down Kenneth Burke’s Pentadic Ratios.” Musings in Pop Culture & Pedagogy, October 21, 2013. https://ragan.blog/2013/10/21/ragan-fox-breaks-down-kenneth-burkes-pentadic-ratios/.
Ramesh, Aditya, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. “Hierarchical Text-Conditional Image Generation with CLIP Latents.” April 13, 2022. arXiv:2204.06125. https://doi.org/10.48550/arXiv.2204.06125.
Rhee, Lisa, Morgan Quinn Ross, Huyen TK Le, Yung-Ju Chang, and Joseph Bayer. “Social Media vs. Messaging: Using GPS Data to Explore Social App Ecologies in Context.” SocArXiv Papers, last edited October 24, 2024. https://osf.io/6pe3t_v1/.
Robinson, G. D. “Paul Ricoeur and the Hermeneutics of Suspicion: A Brief Overview and Critique.” Premise 2, no. 8 (1995). http://individual.utoronto.ca/bmclean/hermeneutics/ricoeur_suppl/Ricoeur_Herm_of_Suspicion.htm.
Ronzhyn, Alexander, Ana Sofia Cardenal, and Albert Batlle Rubio. “Defining Affordances in Social Media Research: A Literature Review.” New Media & Society 25, no. 11 (2023): 3165–88. https://doi.org/10.1177/14614448221135187.
Ropek, Lucas. “Elon’s DOGE Is Reportedly Using Grok AI with Government Data.” Gizmodo, May 24, 2025. https://gizmodo.com/elons-doge-is-reportedly-using-grok-ai-with-government-data-2000606753.
Ross, Morgan Quinn, Joseph Bayer, Lisa Rhee, Ivory Potti, and Yung-Ju Chang. “Tracking the Temporal Flows of Mobile Communication in Daily Life.” New Media & Society 25, no. 4 (2023): 732–55. https://doi.org/10.1177/14614448231158646.
Rouder, Jeffrey N., Julia M. Haaf, and Frederik Aust. “From Theories to Models to Predictions: A Bayesian Model Comparison Approach.” Communication Monographs 85 (December 2017): 41–56. https://doi.org/10.1080/03637751.2017.1394581.
Runyan, Anne Sisson, and V. Spike Peterson. Global Gender Issues in the New Millennium. 4th ed. Routledge, 2018. https://doi.org/10.4324/9780429493782.
Ryan, Marie-Laure. “Immersion vs. Interactivity: Virtual Reality and Literary Theory.” Post-Modern Culture 5, no. 1 (1994). http://www.humanities.uci.edu/mposter/syllabi/readings/ryan.html.
Sandu, Bogdan. “Piet Mondrian: Paintings, Geometry & Colors.” Russell Collection, April 17, 2025. https://russell-collection.com/piet-mondrian/.
Saval, Nikil. “Brutalism Is Back.” New York Times Style Magazine, October 6, 2016. https://www.nytimes.com/2016/10/06/t-magazine/design/brutalist-architecture-revival.html.
Scanlon, T. M. What We Owe to Each Other. Harvard University Press, 2000. https://www.hup.harvard.edu/books/9780674004238.
Schlag, Pierre. “The Aesthetics of American Law.” Harvard Law Review 115, no. 4 (2002): 7–8.
Schwartz, Margaret. “Review of Transmitted Wounds: Media and the Mediation of Trauma.” New Media & Society 22, no. 7 (2020): 1324–26. https://doi.org/10.1177/1461444820929997.
Shannon, Claude. “A Mathematical Theory of Communication.” Bell System Technical Journal 27 (1948): 379–423, 623–56.
Shumailov, Ilia, Zakhar Shumaylov, Yiren Zhao, Nicolas Papernot, Ross Anderson, and Yarin Gal. “AI Models Collapse When Trained on Recursively Generated Data.” Nature 631, no. 8022 (2024): 755–59. https://doi.org/10.1038/s41586-024-07566-y.
Silver, Nate, and Reuben Fischer-Baum. “How We Calculate NBA Elo Ratings.” FiveThirtyEight (blog), May 21, 2015. https://fivethirtyeight.com/features/how-we-calculate-nba-elo-ratings/.
Silvestri, Lisa. “A Rhetorical Forecast.” Review of Communication 13 (April 2013): 127–42. https://doi.org/10.1080/15358593.2013.789121.
Stewart, John R. Language as Articulate Contact: Toward a Post-Semiotic Philosophy of Communication. State University of New York Press, 1995.
Sturgis, Patrick, and Rebekah Luff. “The Demise of the Survey? A Research Note on Trends in the Use of Survey Data in the Social Sciences, 1939 to 2015.” International Journal of Social Research Methodology 24, no. 6 (2021): 691–96. https://doi.org/10.1080/13645579.2020.1844896.
Tenen, Dennis Yi. Literary Theory for Robots: How Computers Learned to Write. W. W. Norton, 2024.
“The Futures of Game Studies.” Velvet Light Trap 81, no. 1 (2018): 57.
Tinic, Serra. On Location: Canada’s Television Industry in a Global Market. University of Toronto Press, 2005.
Valente, Thomas W. “Social Network Thresholds in the Diffusion of Innovations.” Social Networks 18, no. 1 (1996): 69–89. https://doi.org/10.1016/0378-8733(95)00256-1.
van Dijk, Teun A. “Critical Discourse Analysis.” In The Handbook of Discourse Analysis. John Wiley & Sons, 2005. https://doi.org/10.1002/9780470753460.ch19.
van Rooij, Iris, Olivia Guest, Federico Adolfi, Ronald de Haan, Antonina Kolokolova, and Patricia Rich. “Reclaiming AI as a Theoretical Tool for Cognitive Science.” Computational Brain & Behavior (2024): 616–36.
Velleman, Paul F., and Leland Wilkinson. “Nominal, Ordinal, Interval, and Ratio Typologies Are Misleading.” American Statistician 47, no. 1 (1993): 65–72. https://doi.org/10.2307/2684788.
Vogler, Christopher. The Writer’s Journey: Mythic Structure for Writers. Michael Wiese Productions, 2007.
Wang, Chengyi, Sanyuan Chen, Yu Wu, et al. “Neural Codec Language Models Are Zero-Shot Text to Speech Synthesizers.” January 5, 2023. arXiv:2301.02111. https://doi.org/10.48550/arXiv.2301.02111.
Weil, Elizabeth. “ChatGPT Is Nothing Like a Human, Says Linguist Emily Bender.” New York Magazine, March 1, 2023. https://nymag.com/intelligencer/article/ai-artificial-intelligence-chatbots-emily-m-bender.html.
Wellman, Barry, Anabel Quan-Haase, Jeffrey Boase, Wenhong Chen, Keith Hampton, Isabel Díaz, and Kakuko Miyata. “The Social Affordances of the Internet for Networked Individualism.” Journal of Computer-Mediated Communication 8, no. 3 (2003): JCMC834. https://doi.org/10.1111/j.1083-6101.2003.tb00216.x.
Wenar, Leif. “The Deaths of Effective Altruism.” Wired, March 27, 2024. https://www.wired.com/story/deaths-of-effective-altruism/.
West, Robert, and Roland Aydin. “The AI Alignment Paradox.” Communications of the ACM 68, no. 3 (2025). https://doi.org/10.1145/3705294.
Wilcox, Lauren. “Gendering the Cult of the Offensive.” In Gender and International Security, edited by Laura Sjoberg. Routledge, 2009.
Wood, Julia T., and Robert Cox. “Rethinking Critical Voice: Materiality and Situated Knowledges.” Western Journal of Communication 57, no. 2 (1993): 278–87. https://doi.org/10.1080/10570319309374452.
Yoshino, Kenji. Covering: The Hidden Assault on Our Civil Rights. Random House, 2007.
Young, Vershawn Ashanti. Your Average Nigga: Performing Race, Literacy, and Masculinity. Wayne State University Press, 2007.
Media Attributions
- pierces-triad by Daniel Faltesek is licensed under CC BY-NC
- sassure by Daniel Faltesek is licensed under CC BY-NC
- postal-1000 by Daniel Faltesek is licensed under CC BY-NC
- Vassily_Kandinsky, Composition 8 by Kandinsky is in the Public Domain
- blocky-HTML-Design by Daniel Faltesek is licensed under CC BY-NC
- toulmin-model-graphic by Emily K. K. Faltesek is licensed under CC BY-NC
- vintage-games by Daniel Faltesek is licensed under CC BY-NC
- greys-network by Daniel Faltesek is licensed under CC BY-NC
- markov-chain by Daniel Faltesek is licensed under CC BY-NC
- Shannon, “Mathematical Theory of Communication.” ↵
- Stewart, Language as Articulate Contact. ↵
- Anthony et al., “Message Convergence.” ↵
- This strong resource was developed for a course at the University of Chicago in 2007. Linda Chang, “Semiotics.” ↵
- Joseph, “Ferdinand de Saussure.” ↵
- Hall, This Means This, This Means That. ↵
- Chandler, “Semiotics for Beginners.” ↵
- Barthes, Mythologies. ↵
- Gibson, Ecological Approach to Visual Perception. Chapter 8 is the classic on affordance theory. Your instructor should provide you a copy of this chapter for additional discussion. ↵
- Nagy and Neff, “Imagined Affordance.” ↵
- Wellman et al., “Social Affordances of the Internet for Networked Individualism.” ↵
- Ronzhyn et al., “Defining Affordances in Social Media Research.” ↵
- Norman, Living with Complexity. ↵
- Saval, “Brutalism Is Back.” ↵
- Sandu, “Piet Mondrian.” ↵
- Derrida, “Structure, Sign, and Play in the Discourse of the Human Sciences.” ↵
- Robinson, “Paul Ricoeur and the Hermeneutics of Suspicion.” ↵
- Felski, “Critique and the Hermeneutics of Suspicion.” ↵
- Billig, Banal Nationalism. ↵
- Tenen, Literary Theory for Robots. ↵
- Anker, “Reactionary Turn in Literary Studies.’” ↵
- Best and Marcus, “Surface Reading.” ↵
- Bogost, Alien Phenomenology. ↵
- McKerrow, “Critical Rhetoric.” ↵
- Nakayama and Krizek, “Whiteness.” ↵
- Wood and Cox, “Rethinking Critical Voice.” ↵
- Young, Your Average Nigga; Yoshino, Covering. ↵
- Chun, Discriminating Data. ↵
- Baxter and Asbury, “Critical Approaches to Interpersonal Communication.” ↵
- Hintz and Scharp, “‘I Hate All the Children, Especially Mine.’” ↵
- Vogler, The Writer’s Journey. ↵
- Montgomery and Baxter, Dialectical Approaches to Studying Personal Relationships. ↵
- “Ragan Fox Breaks Down Kenneth Burke’s Pentadic Ratios.” ↵
- Jenkins, Convergence Culture. ↵
- Jenkins and Mann, “Transmedia Workshop.” ↵
- Jenkins and Ford, Spreadable Media. ↵
- Hollihan and Baaske, Arguments and Arguing. ↵
- Goodnight, “Legitimation Inferences.” ↵
- Boltanski and Thévenot, On Justification. ↵
- Silvestri, “Rhetorical Forecast.” ↵
- Geertz, Interpretation of Cultures. ↵
- boyd, It’s Complicated. ↵
- Jacobs, “On the Especially Nice Fit between Qualitative Analysis and the Known Properties of Conversation.” ↵
- van Dijk, “Critical Discourse Analysis.” ↵
- Pomerantz, “Conversation Analytic Claims.” ↵
- DeSantis, “Smoke Screen.” ↵
- Edwards, “Transcription of Discourse.” ↵
- Edwards, “Transcription of Discourse,” 330. ↵
- Mandelbaum et al., “Micro-Moments of Social Support.” ↵
- Posner, Problems of Jurisprudence. ↵
- “Jurisdiction: Original, Supreme Court.” ↵
- Black and Spriggs, “Citation and Depreciation of U.S. Supreme Court Precedent.” ↵
- Schlag, “Aesthetics of American Law.” ↵
- Crenshaw, “Race, Reform, and Retrenchment.” ↵
- MacKinnon, “Pornography, Civil Rights, and Speech Commentaries.” ↵
- Andrejevic, “Data Civics.” ↵
- Wenar, “Deaths of Effective Altruism.” ↵
- Nissenbaum, “Accountability in a Computerized Society.” ↵
- Elish, “Moral Crumple Zones.” ↵
- Scanlon, What We Owe to Each Other. ↵
- Linfield, Cruel Radiance. ↵
- Hariman and Lucaites, No Caption Needed. ↵
- Schwartz, “Review of Transmitted Wounds.” ↵
- Schwartz, “Review of Transmitted Wounds.” ↵
- Pinchevski and Peters, “Autism and New Media.” ↵
- Mowitt, “Trauma Envy.” ↵
- Moeller, Compassion Fatigue. ↵
- Lindia, “Phenomenology of the Turing Test.” ↵
- Birhane et al., “Debunking Robot Rights.” ↵
- Fukuyama, End of History; Nye, Soft Power; Haass, The Reluctant Sheriff. ↵
- Eaton, “Unpacking the Secret $2 Million Internet in a Suitcase.” ↵
- Mearsheimer, Tragedy of Great Power Politics. ↵
- Mearsheimer, “Great Power Rivalries.” ↵
- Doyle, “Kant, Liberal Legacies, and Foreign Affairs”; Kant, Groundwork for the Metaphysics of Morals; Kant, Perpetual Peace. ↵
- Hariman and Beer, “Realism and Rhetoric in International Relations.” ↵
- Runyan and Peterson, Global Gender Issues in the New Millennium. ↵
- Cohn, “Women and Wars.” ↵
- Wilcox, “Gendering the Cult of the Offensive.” ↵
- Derrida, Specters of Marx. ↵
- Matheson, “Desired Ground Zeroes.” ↵
- Lemley, “The Splinternet.” ↵
- Chayka, “Ukraine Becomes the World’s ‘First TikTok War.’” ↵
- Faltesek, “Strategically Matching Messaging to the Platform.” ↵
- Bendett, “Battlefield Drones and the Accelerating Autonomous Arms Race in Ukraine.” ↵
- Pham, “Internet Micro-Satellites.” ↵
- Honrada, “China Plans to Blow Starlink Out of the Sky in a Taiwan War”; Peace, “Space Denial.” ↵
- Packer and Reeves, Killer Apps. ↵
- Goldfarb and Lindsay, “Prediction and Judgment.” ↵
- Gooding, “ASML: The Most Successful Tech Company You’ve Never Heard Of.” ↵
- Herman et al., Global Media. ↵
- Folkenflik, “Jeff Bezos Revamps Washington Post Opinion Section.” ↵
- Dyer-Witheford, Cyber-Marx. ↵
- Maddox, “What Do Creators and Viewers Owe Each Other?” ↵
- Hall, “The Problem of Ideology.” ↵
- Pollock, “State Capitalism.” ↵
- McCloskey, Bourgeois Virtues. ↵
- Gregg, Counterproductive; Coase, “ Nature of the Firm.” ↵
- Havens and Lotz, Understanding Media Industries. ↵
- Becker, “Fin-Syn Begin Again?” ↵
- Caves, Creative Industries. ↵
- Mayer et al., Production Studies. ↵
- Deuze, Media Work. ↵
- Lotz, Netflix and Streaming Video. ↵
- Curtin, “Media Capitals.” ↵
- Tinic, On Location. ↵
- Curtin, Playing to the World’s Biggest Audience. ↵
- Havens, Global Television Marketplace; Havens, Black Television Travels. ↵
- Kumar, Digital Frontier. ↵
- Helmond, “Platformization of the Web.” ↵
- Jullien and Sand-Zantman, “Economics of Platforms.” ↵
- “As Internet Enshittification Marches on, Here Are Some of the Worst Offenders.” ↵
- Faltesek, Selling Social Media. ↵
- Caldwell, Production Culture. ↵
- Latour, Reassembling the Social. ↵
- De Certeau, Practice of Everyday Life. ↵
- Huizinga, Homo. ↵
- McDonald, “Homo Ludens: A Renewed Reading”; Bogost, Play Anything. ↵
- Binmore, Game Theory. ↵
- Isbister, How Games Move Us. ↵
- This is an important claim in a number of different works on games. The “flow state” is theorized as a form of positive interested attention, as opposed to obsession or addiction. Isbister, How Games Move Us, 6. ↵
- Costikyan, Uncertainty in Games. ↵
- Ryan, “Immersion vs. Interactivity.” ↵
- Binmore, Game Theory. ↵
- Bogost, “Rhetoric of Video Games.” ↵
- Bogost, “Video Games Are Better without Characters.” ↵
- “The Futures of Game Studies.” ↵
- Ryan, “Immersion vs. Interactivity.” ↵
- “Computational Science,” Wikipedia. ↵
- “Computational Science.” See specifically the graphic identified as Jeroen’s own work at https://en.wikipedia.org/wiki/Computational_science#/media/File:Ways_to_study_a_system.png. ↵
- Velleman and Wilkinson, “Nominal, Ordinal, Interval, and Ratio Typologies Are Misleading.” ↵
- Neyman and Pearson, “On the Use and Interpretation of Certain Test Criteria for Purposes of Statistical Inference.” ↵
- Sturgis and Luff, “The Demise of the Survey?” ↵
- Mercer et al., “For Weighting Online Opt-In Samples, What Matters Most?” ↵
- Brenner and DeLamater, “Lies, Damned Lies, and Survey Self-Reports?” ↵
- Boateng et al., “Best Practices for Developing and Validating Scales for Health, Social, and Behavioral Research.” ↵
- Bisbee et al., “Synthetic Replacements for Human Survey Data?” ↵
- Harris et al., “Two Failures to Replicate High-Performance-Goal Priming Effects.” ↵
- Carter et al., “A Single Exposure to the American Flag Shifts Support toward Republicanism up to 8 Months Later.” ↵
- Hutson, “Artificial Intelligence Faces Reproducibility Crisis.” ↵
- Begley, “Reproducibility.” ↵
- Dunham and Guthmiller, “Doing Good Science.” ↵
- Aschwanden, “Science Isn’t Broken.” ↵
- Aschwanden, “Science Isn’t Broken.” ↵
- His name is literally the name of the function. This is one of many implementations of the process. Silver and Fischer-Baum, “How We Calculate NBA Elo Ratings.” ↵
- Krushke, “Open Letter.” ↵
- Rouder et al., “From Theories to Models to Predictions.” ↵
- Elish and boyd, “Situating Methods in the Magic of Big Data and AI.” ↵
- Elish and boyd, “Situating Methods in the Magic of Big Data and AI,” 71. ↵
- Ross et al., “Tracking the Temporal Flows of Mobile Communication in Daily Life.” ↵
- Rhee et al., “Social Media vs. Messaging.” ↵
- Halgin et al., “Prismatic Effects of Negative Ties.” ↵
- Valente, “Social Network Thresholds in the Diffusion of Innovations.” ↵
- Gómez, “Centrality in Networks.” ↵
- Everett and Valente, “Bridging, Brokerage and Betweenness.” ↵
- Blondel et al., “Fast Unfolding of Communities in Large Networks.” ↵
- Jacomy et al., “ForceAtlas2.” ↵
- Kyeremeh and Schafer, “Keep Around, Drop, or Revise?” ↵
- Weil, “ChatGPT Is Nothing Like a Human.” ↵
- Moretti, Distant Reading. ↵
- Muzumdar et al., “Dead Internet Theory.” ↵
- Malik, “Meta Spotted Testing AI-Generated Comments on Instagram”; Hayes, “Meta Chatbots.” ↵
- Nilay, “How Independent Websites Are Dealing with the End of Google Traffic.” ↵
- Edwards, “AI Bots Strain Wikimedia as Bandwidth Surges 50%.” ↵
- Chayka, “Mark Zuckerberg Says Social Media Is Over.” ↵
- West and Aydin, “The AI Alignment Paradox.” ↵
- O’Brien, “Tech Industry Tried Reducing AI’s Pervasive Bias.” ↵
- Anderson, “The End of Theory.” ↵
- Faltesek, Selling Social Media. ↵
- Nagy and Neff, “Conjuring Algorithms.” ↵
- Casilli, “Waiting for Robots.” ↵
- Ramesh et al., “Hierarchical Text-Conditional Image Generation with CLIP Latents.” ↵
- Wang et al., “Neural Codec Language Models Are Zero-Shot Text to Speech Synthesizers.” ↵
- Ropek, “Elon’s DOGE Is Reportedly Using Grok AI with Government Data.” ↵
- Shumailov et al., “AI Models Collapse When Trained on Recursively Generated Data.” ↵
- Bender and Koller, “Climbing towards NLU.” ↵
- van Rooji et al., “Reclaiming AI as a Theoretical Tool for Cognitive Science.” ↵
- Guo et al., “DeepSeek-Coder.” ↵
- “LAION-5B.” ↵
- Epstein and Axtell, Growing Artificial Societies. ↵
- Makhortykh et al., “Not All Who Are Bots Are Evil.” ↵
- Epstein and Axtell, Growing Artificial Societies, 19. ↵