CULTURAL ANTHROPOLOGY, Vol. 36, Issue 3, pp. 509-537, ISSN 0886-7356. DOI: 10.14506/ca36.3.11

CARE AND SCALE: Decorrelative Ethics in Algorithmic Recommendation

NICK SEAVER

Tufts University

ORCID: https://orcid.org/0000-0002-3913-1134


My phone buzzed against my thigh. Hey Nick, thanks for completing the survey. Anything in particular you’re hoping for with this playlist?

It was late 2015, and I was texting with The Yams, a music recommendation company based in Brooklyn, New York. As the “About” page on their website put it, The Yams was “a service that gives you music recommendations and personalized playlists made by an actual, knowledgeable human who understands your taste and knows what they’re talking about.” I had filled out an online form with some of my favorite musical artists, providing my cell phone number, and The Yams had texted me back. Or rather, Alex had texted me back.1

The humanity of Alex and his colleagues was The Yams’ main selling point, and every interaction with the service seemed designed to remind me of this personal scale. The company offered no software, working instead through informal text messages, which arrived in my pocket among conversations with friends and family. Their sparsely designed website, set in a plain, monospaced font, signaled a “low-tech” ethos. The Yams was conspicuously not a “tech” company.

Yet, like many tech companies, The Yams had launched by offering their product for free, to attract new users. Alex texted to let me know that this strategy had worked better than expected and that they were overwhelmed by demand—my playlist would be ready soon. A week later, my phone vibrated with a link. The playlist had been assembled on Spotify and was titled “To Nick, from David.” David and I had never met, but he had compiled twenty songs and a 250-word summary for me, describing the artists and record labels responsible for the music. “These songs aren’t tied by a defining tempo or rhythm,” he wrote, “but rather a feeling—one I hope you’ll enjoy.” The playlist’s title invoked a gift relationship between us; its description set an ineffable “feeling” against quantifiable traits like tempo and rhythm. David signed off cheerfully: “Anyways, hope you dig it, I had a great time piecing things together for this one.”

Although The Yams created their playlists on large streaming media platforms like Spotify and YouTube, they positioned themselves as a distinctively human alternative to the algorithmic recommendations those platforms offered to users. “Deciding what to listen to is too important to leave solely in the hands of technology companies, algorithms, and whatever appears in our social feeds,” their website read. The Yams had joined a growing reaction against algorithms, favoring a smaller-scale approach often called “curation.” Algorithms, these critics contended, could never really understand music because they could never really care for music the way a human could. Curation, its advocates would often note, shared etymological roots with care (Jansson and Hracs 2018), and care was a distinctively human capacity. Software lacked qualities like passion and taste; machine learning might identify patterns in data, but it could never truly appreciate them. From notoriously hip Brooklyn, The Yams offered a small, careful alternative to algorithmic recommendation: personalization from actual persons.

I first learned about The Yams from one of their nominal opponents, Ben Fields, a computer scientist with a PhD in automatic playlist generation (Fields 2011). At the time, I had been conducting fieldwork for several years with scientists and engineers like Fields, who designed and built music recommender systems. I attended their international professional conferences, where researchers presented new techniques for analyzing musical sound or user behavior; I had embedded myself in one of the more popular music recommendation companies operating in the United States; I interviewed academic researchers, corporate CEOs, and people working at every level of these organizations, down to their summer interns. The vast majority of them professed a deep care for music, describing their programming work as driven by an interest in supporting musicians and listeners. The position ascribed to them in popular discourse—representatives of careless, inhuman algorithms—was thus a source of tension: if the curators of The Yams felt that they truly cared for music, my interlocutors did too.

But as much as people like Fields wanted to act carefully, they were certain that a curatorial service like The Yams could never really work. There were too many songs and too many listeners in the world to connect them all by hand. A Spotify user today, for instance, is one of more than 356 million worldwide, navigating a catalog of more than seventy million tracks (Spotify n.d.). Large numbers like these have been cited as the reason to pursue algorithmic recommendation since the emergence of the field in the mid-1990s. For the makers of recommender systems, a system only works if it works at large scale, mediating between many users and many items.

After finally receiving my playlist, I joked with Fields on social media about how it had been delayed. “That’s totally shocking,” he replied sarcastically, “Who could guess that human intervention doesn’t scale.” A few weeks later, evidently searching for mentions of their name online, The Yams responded to our conversation from their corporate account: “Hey guys we’re not trying to ‘scale,’ just wanna find people that appreciate what we do. Our 1st week was nuts though!” A few years after that, as I sat down to draft this essay, The Yams had shut down their playlisting service and revised their “About” page into the past tense. When I texted Alex to check in, an automated chatbot replied to my message: Hey! We’re not around.

* * *

The brief, careful life of The Yams exemplified a matter of widely shared common sense: care and scale are naturally opposed to each other. To demonstrate its commitment to care, a company distances itself from efforts to scale; in an industry where financial success depends on scaling up, such companies fail. Computer scientists balk at the idea that algorithms might be replaced by careful, human intervention because human care can’t scale up. Care happens in small-scale moments of human interaction; scale is achieved by tireless and impersonal machinery (Mol, Moser, and Pols 2010, 14).2

Among anthropologists, too, we find this commonsense opposition in play. Calls for attention to care often figure it as an alternative to the large scale, which “helps shift … overwhelming largeness … toward more intimate and personal relationships” (Boke 2016). Conversely, anthropologists tend to “discern something dehumanizing—even violent—about scale” (Carr and Lempert 2016, 19), seeing it as the enemy of the small-scale human encounters (and societies) that have long defined the discipline’s objects and methods. Thus, in the dispute between “algorithms” and “curators,” anthropologists would seem predisposed toward the careful, small-scale, human side.

The apparent opposition between care and scale also shapes current debates about the ethics of algorithmic systems, both inside and outside of the academy. As algorithms have become matters of popular concern, many critics have described how these technologies replace careful human judgment with automated rules, to damning effect. Newly computerized government agencies may process more claims faster, but they disproportionately exclude marginalized people (Eubanks 2018); a search engine that seeks to index all the World Wide Web automatically perpetuates the racism it finds there (Noble 2018). As the introduction to Life by Algorithms, a collection of anthropological work on algorithmic systems, puts it: “The common sense and situational logic of humans is displaced by and subordinated to the logic of automation and bureaucracy” (Gusterson 2019, 2). The problem with these systems, many critics have suggested, is scale itself: the pursuit of scale magnifies the harms produced by algorithmic systems, turning them “from local nuisances to tsunami forces” (O’Neil 2016, 30), and the drive to scale makes caring for those who might be affected by such systems effectively impossible (Gillespie 2020; Hanna and Park 2020; Bender et al. 2021).

For people who want to make ethical recommender systems, the fact that care and scale seem intrinsically opposed creates a problem. This article describes how they try to solve it. They do so not by giving up on care or abandoning their desire to scale, but by reimagining the terms of their relationship—redefining what care and scale mean in the process. While we might take issue with this reimagining, the makers of algorithmic recommendation do not lack ethics, as some critics might argue. Rather, they are always making ethics, trying to understand, evaluate, and reconfigure the field of possible choices. Instead of thinking about ethics as a good kind of decision-making mostly absent from the tech sector, I am interested here in the ethical reasoning that already takes place there.3 How do the people who work within these systems understand their own ethical positions? These local ethical vernaculars shape what people do as they respond to critiques or try to do the “right” thing, and, as I will describe for the world of software startups, they appear to be in transformation.

Ethics, care, and scale are all concepts of substantial anthropological interest in their own right, which have accrued meanings that diverge from their common usage in my field sites. In what follows, I bring together anthropological and vernacular theorizing, following calls by scholars in all these subfields to look for how ethics, care, and scale are mobilized in everyday life (e.g., Das 2012; Carr and Lempert 2016; Puig de la Bellacasa 2017). Thus my use of these terms often accords first with usage in the field, only later bringing in the anthropological twists that allow us to think of care as potentially unpleasant or scale as a matter of comparison rather than largeness.

Thinking through the ethical entanglement of care and scale with my interlocutors provides an opportunity for examining anthropology’s own tacit assumptions about how these concepts relate. The case of music recommendation also offers a window into the broader world of the software industry—a sector with profound global influence, as its values and techniques are picked up by governments, universities, and other institutions around the world (Caplan and boyd 2018). As popular critiques of the tech sector gain purchase, and companies and investors shift their strategies in response, understanding these emerging forms of ethical reasoning is a crucial task.

LOOSENING THE DOUBLE BIND

In classic anthropological terms, we can say that people who want to pursue care and scale simultaneously find themselves in a double bind (Bateson 1972, 206). As Kim Fortun and Mike Fortun (2005, 47) argue,

Double-binds cannot be resolved; one must work within their constraints and contours, continually making decisions about how best to uphold competing values. It is in these judgments, in our view, that ethics “happen.”

Following Fortun and Fortun (2005), I start from the position that ethics can be found in the way people negotiate among apparently competing values. But where they claim that double binds cannot be resolved, I suggest that this does not always hold true: one way to loosen a double bind is to redefine its terms, working not just within its constraints, but on those constraints, rearranging the space of ethical decision-making. In Gregory Bateson’s (1972, 207) original theorization of double binds, the impossibility of an “escape from the field” is what makes them so binding. But in many apparent double binds, the constitution of the field itself is in play.

This is how Brian Whitman, cofounder of a music recommendation company called The Echo Nest, sought to escape the double bind—to pursue care and scale at once. Whitman founded The Echo Nest in 2005 with Tristan Jehan, a PhD classmate from the MIT Media Lab. Based in the Boston area and built around algorithmic techniques they had developed at MIT, the company very soon became the archetype for the “algorithm” side of the curator-algorithm dichotomy (cf. Morris 2015). The Echo Nest’s “music intelligence” service eventually provided data and recommender infrastructure for most of the music streaming companies operating in the United States—until it was acquired by Spotify in 2014.

When I interviewed Whitman in 2018, he reflected on his early work at The Echo Nest, describing how press coverage routinely designated companies as essentially either “human” or “machine”:

Every interview I ever got, or every panel I was on, they would just start with that man-versus-machine garbage—as if these were the only two options you had, basically. And that pissed us off.

For Whitman, as for many of my interlocutors, that “man-versus-machine garbage” was frustrating: it set caring humans against scalable machines, locating humans like Whitman on the machine side of the dichotomy. Not only did this categorically malign their work, but it also seemed inadequate to capturing the many ways that humans and machines might come together in systems labeled “algorithmic” or “curatorial.” As he posted on social media shortly before The Echo Nest was acquired, “Real fleshy caring humans power all the popular music discovery platforms, including ours. No successful service uses only ‘robots.’” Offering a vernacular version of long-standing arguments from science and technology studies (e.g., Hughes 1987; Haraway 1991), practitioners like Whitman argued that these systems were best understood not as either human or machine, but as sociotechnical arrangements of people and technology.

Around 2012, Whitman began to formalize an alternative to the standard framing in his public talks, replacing essentialized humans and machines with the values they had come to represent: care and scale. Against the idea that care and scale were opposed (Figure 1), he presented a two-dimensional diagram that figured care and scale as separate axes (Figure 2). In this coordinate space, care and scale were not opposites, but rather independent of each other.

This simple representational move made it possible to distinguish among approaches to recommendation that had previously been grouped together. Into the space, Whitman placed some common techniques (Figure 3). In the careful, small top-left were “editorial” curators like The Yams. In the large, careless bottom-right was pure audio analysis—a technique that automatically analyzed patterns in musical sound, but which often produced strange and unsatisfactory results when used in isolation. In the abject bottom-left—both careless and small-scale—Whitman polemically placed collaborative filtering, the oldest and most popular technique used for algorithmic recommendation. Because collaborative filters rely on users’ listening histories to make recommendations, they cannot recommend tracks that have never been listened to, nor can they make recommendations to brand-new users. This, Whitman contended, was a failure to work at adequate scale, which manifested as neglect for new or obscure music.

Figure 1. Care and scale, as opposites. Figure by Nick Seaver.

Figure 2. Care and scale, as two dimensions. Figure redrawn by author from Whitman (2012).

Figure 3. The care and scale space, populated by techniques. Figure redrawn by author from Whitman (2012).
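Whitman’s complaint about collaborative filtering rests on a structural feature of the technique. The sketch below, in Python with invented users and listening histories, is my own illustration of that feature rather than any company’s actual system: a filter that scores tracks by co-listening has nothing to correlate for a track nobody has played, or for a listener with no history.

    # A made-up listening record: three users, three tracks with plays,
    # and one brand-new track that nobody has listened to yet.
    listening_histories = {
        "user_1": {"song_a", "song_b"},
        "user_2": {"song_a", "song_c"},
        "user_3": {"song_b", "song_c"},
    }

    def co_listens(track, other):
        """Count how many users have listened to both tracks."""
        return sum(1 for history in listening_histories.values()
                   if track in history and other in history)

    catalog = {"song_a", "song_b", "song_c", "brand_new_song"}

    # Recommend whatever co-occurs most often with song_a.
    scores = {other: co_listens("song_a", other)
              for other in catalog if other != "song_a"}
    print(scores)
    # brand_new_song scores 0: with no listening history it can never be
    # recommended, and a brand-new user (no history at all) gets no scores.

This built-in neglect of new items and new users is what Whitman’s diagram rendered as a failure of both scale and care.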

Whitman’s diagram resembled the conventional “competitive matrix” used by startup founders to position their companies among rivals while seeking funding (Walk 2020). And like those diagrams, his left the superlative top-right corner conspicuously empty, cleared for his own company. The Echo Nest distinguished itself, according to Whitman, by its simultaneous concern for care and scale. In an article titled, like this one, “Care and Scale,” he wrote: “We prioritize care for the data and scale over ease and speed and the results work” (Whitman 2013, 3). Where other companies might push a product too quickly to market in the interest of rapid growth, Whitman told me that he thought The Echo Nest was different:

Music was all-important, to everyone in the company, in a way that I don’t think other companies had… . If anyone’s going to do it, it better be us, because we actually care about doing it.

From Whitman’s perspective, his work had been unfairly lumped in with a set of naive and careless algorithmic competitors. “We’ve always felt that we were the good guys in a team of automatons,” he told me.

Whitman’s diagram was a rhetorical accomplishment, a reframing that made it easier to distinguish among large-scale systems that might be more or less careful and to refuse the idea that smallness and care necessarily went together. He offered a vision for how one might be a “good guy” in an increasingly vilified field. And while this refiguration was promotional and self-serving, it was also clarifying. By pulling care and scale apart, Whitman opened up a space for contesting assumptions about their interrelation and for imagining new configurations of these values.

THE DIRTY WORLD OF HUMAN VALUES

I first encountered this style of reasoning at the place I first met Whitman: Miami Beach, during the 2011 meeting of ISMIR, the International Society for Music Information Retrieval. It was there that I learned firsthand how people working in machine learning, and in computer science more generally, blurred the lines between their technical expertise and the way they made sense of the broader social world.

Since 2000, ISMIR has served as a meeting place for academic and industry researchers who use computers to analyze musical data. Many researchers working at the major music streaming companies, like Whitman, began their careers as graduate students at ISMIR. The conference features topics ranging from digitizing medieval scores to automatically identifying melodies. But by 2011, much of the research presented there was oriented in one way or another toward the design of music recommender systems.

That year, ISMIR’s keynote speaker was David Huron, a cognitive musicologist well known for his research on the psychology of musical enjoyment. But his keynote, titled “Designing the Future,” departed from that work in a philosophical direction. Instead of focusing on music, Huron presented a general theory about the relationship between technology and what he called “the dirty world of human values.”

The basic human problem, as Huron posed it, is that people have many potentially conflicting values. Ordinarily, we resolve these values by compromising: we might choose to eat our cake, rather than to keep on having it; or, we might decide to eat only half our cake, so we can have (some of) it, too. However, Huron claimed, according to moral and ethical philosophers (though he did not say which ones), such compromises should not be necessary.4 This was a problem that technology could solve. Despite its amoral reputation, Huron argued, engineering was best understood as a form of “applied moral and ethical philosophy.” The design of technology was not motivated by the pursuit of efficiency or the application of scientific knowledge, but rather “by the goal of reducing or eliminating value conflicts.”

This was an unusually abstract argument for the ISMIR stage, and Huron’s audience seemed somewhat bewildered. To explain what he meant to a crowd of computer scientists, Huron turned to a technical and metaphorical language familiar to them: the language of vector mathematics.

Vectors, I would soon learn, were a ubiquitous feature of music recommender systems and conversations about them. While I vaguely recalled from a distant math class that vectors were essentially arrows—lines that pointed in some direction with a particular length—they seemed to mean something much more generic for people at the conference. In talks and along hallways, I heard researchers refer to all sorts of vectors: user vectors, song vectors, artist vectors, album vectors. When I finally asked a graduate student to explain, he told me that it was simple: a vector was “just a list of numbers.” A song vector was a list of numbers that represented a song; a user vector, a list that represented a user. Those lists of numbers could be derived in many ways: a song’s vector might represent its acoustic qualities, some pattern of users who listen to it, or a combination of those and more. To talk about vectors was to talk about data in the abstract, being intentionally vague about that data’s sources.

Most machine learning systems parse the entities of the world by first rendering them as vectors. While a computer scientist might casually describe vectors as merely lists of numbers, to treat those lists as vectors is to imagine them as representing points in a space full of other vectors. Draw a line from the origin to your point, and you have the arrow I recalled from high school math class. Represented as vectors, objects are defined by their orientation—if two song vectors contain similar numbers, then their arrows will point in roughly the same direction. Machine learning systems analyze the angle between vectors to produce a measure of similarity, or correlation. If vectors point in the same or in opposite directions, they are said to be correlated with each other (Figure 4a). If they are at right angles, they are orthogonal to, or independent of, each other (Figure 4b). Algorithmic recommender systems use these correlations to produce recommendations: users whose vectors point in the same direction are likely to like the same things.5
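To make the geometric language concrete, here is a minimal sketch in Python, with invented feature numbers, of the cosine measure of similarity described above: vectors that point in roughly the same direction score near 1 (correlated), while vectors at right angles score near 0 (orthogonal).

    import numpy as np

    def cosine_similarity(u, v):
        """Cosine of the angle between two vectors."""
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

    # Hypothetical "song vectors"; in practice the numbers might come from
    # audio analysis, listening patterns, or some combination of sources.
    song_x = np.array([0.9, 0.1, 0.8])
    song_y = np.array([0.8, 0.2, 0.7])
    song_z = np.array([-0.1, 0.9, 0.0])

    print(cosine_similarity(song_x, song_y))  # ~0.99: nearly parallel, "correlated"
    print(cosine_similarity(song_x, song_z))  # 0.0: at right angles, "orthogonal"

A recommender built on such measures surfaces the items whose vectors lie at the smallest angle from a user’s own.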

Figure 4. (a) Correlated and (b) orthogonal pairs of vectors. Figure by Nick Seaver.

So, instead of talking about conflicts among values, Huron spoke of correlations among value vectors. He illustrated his theory of technical ethics with a striking pair of values: sex and procreation. For most of human history, Huron argued, people have valued sex and procreation but have been unable to pursue one without the other. In vector terms, sex and procreation are correlated: they point in the same direction (Figure 5). The advent of contraception, Huron claimed, rearranged these vectors, pushing them to (almost) 90° apart by making it possible to move in the direction of the sex vector without simultaneously moving toward procreation (Figure 6). As Huron put it, contraception decorrelated these values, making them orthogonal to each other. Such was the power of engineering, which could alter the bearing of value vectors “through the manipulation of physical or social reality,” resolving value conflicts by decorrelating them out of existence.

Huron’s vectors reflected no underlying lists of numbers, but this was not unusual. Even in technical use, vectorization is a tool for abstraction, for transforming ordinary tabular data into malleable orientations in multidimensional space (cf. Mackenzie 2017). Vector spaces are the symbolic terrain on which much of the labor of machine learning works, and they provide a widespread metaphorical language across the software industry. Startup founders describe their employees as vectors; venture capitalists describe the companies they fund as vectors; in ordinary conversation, engineers will describe unrelated things as “orthogonal” to each other.

Figure 5. Sex and procreation represented as correlated vectors. Figure redrawn by author from presentation by David Huron (2011).

Figure 6. Sex and procreation represented as decorrelated vectors. Figure redrawn by author from presentation by David Huron (2011).

We can now see Whitman’s diagram of care and scale as a claim to decorrelation, achieved not through the invention of any particular technology, but through the organization of his company. When I asked, Whitman told me he did not recall Huron’s keynote, but he shared its vectoral imagination. He described the origins of his diagram to me in explicitly decorrelative terms, as though he had performed a statistical operation in his head: “In my mind, that’s how you do the 2D PCA [principal component analysis] of all the other companies.” Principal component analysis is a classic statistical technique for deriving orthogonal vectors from sets of data, meant to capture the major dimensions along which a data set varies. But again, these vectors reflected no underlying data.
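For readers unfamiliar with the technique Whitman invoked, the following toy example, with invented data, shows what principal component analysis does in its ordinary statistical use: it takes correlated measurements and returns orthogonal axes, ordered by how much of the data’s variation each captures.

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    # Two strongly correlated, made-up "company traits."
    trait_1 = rng.normal(size=200)
    trait_2 = 0.8 * trait_1 + 0.2 * rng.normal(size=200)
    data = np.column_stack([trait_1, trait_2])

    pca = PCA(n_components=2).fit(data)
    print(pca.components_)                # two orthogonal direction vectors
    print(pca.explained_variance_ratio_)  # most variance lies along the first axis
    print(pca.components_[0] @ pca.components_[1])  # ~0: the axes are orthogonal

Whitman’s mental “2D PCA,” by contrast, ran on no such data at all.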

Although vector metaphors may appear formal and data-driven—perhaps even like scientistic attempts to claim authority—they are in practice ad hoc and impressionistic, obscuring the heterogeneous mess of the empirical world under smooth, continuous surfaces. Huron’s example, for instance, depended on heteronormative assumptions about what constitutes sex. It begged the question of what might count as the invention of contraception. His diagram implied that contraception facilitated procreation without sex, as well as sex without procreation—perhaps taking for granted the eventual development of new reproductive technologies like in vitro fertilization.

Yet Huron had identified a dynamic that anthropologists of new reproductive technologies had noted in the 1990s: technology had transformed the field of possible human choices, and “procreation can now be thought about as subject to personal preference and choice in a way that has never before been conceivable” (Strathern 1992, 34). Where anthropologists might be studiously neutral about the value of such changes, Huron endorsed decorrelation itself as desirable. In his account, engineers seeking to “design the future” were working toward a decorrelated world in which people could choose freely among their values, with no choices impinging on any others.

This was a culturally specific value regarding how values should relate—a typical fantasy of engineering’s value neutrality and an instance of what Louise Amoore (2020, 10) has identified as the common algorithmic “promise to render all agonistic political difficulty as tractable and resolvable.” It is easy to see, in resistance to contraception and new reproductive technologies, for instance, how people might value correlation itself and resist technical efforts to decorrelate. Nevertheless, Huron concluded: “A utopian world is one where all value vectors are orthogonal.”6

DECORRELATION AS AN ETHICAL TECHNIQUE

I call Huron’s philosophy a decorrelative ethics, although to many readers it may seem to be essentially the opposite of what we mean by ethics. As Andrea Ballestero (2015) has noted, mathematical operations are commonly considered too rigid to be moral. While decorrelation offers a way to approach value conflicts, it is not concerned with making hard decisions, but rather with making decisions less hard, disaggregating values into more manageable form. Easy choices and formulaic operations do not sound very ethical (cf. Zigon 2019).

However, as Ballestero (2015) argues in her article about the ethics of formulas, for people who become familiar with a particular “calculation grammar,” technical practices are much less formulaic and much more morally potent than they seem. Ballestero (2015, 266) describes how, for Costa Rican technocrats, a pricing algorithm links social and technical concerns, providing a “distinctively technical place for ethics” that is nonetheless “charged with potential for transformation.” The notion of a “grammar” captures the open-ended and transposable quality of technical devices like vector spaces, which can readily be adapted to new situations. In the metaphorical spread of vectors, we see such a grammar in action: for people like Whitman and Huron, decorrelation is a conceptual tool that provides a structure for thinking about social concerns. Ethically charged techniques are not necessarily tools for abdicating moral responsibility; they can also serve as “instruments that sharpen people’s ethical awareness of their own decisions” (Ballestero 2015, 266). Calculation grammars provide equipment for ethical reasoning.

Decorrelation appears explicitly at moments of what Jarrett Zigon (2009, 260) calls “moral breakdown,” when conflicting moral demands collide and cause people “to stop and consider how to act or be morally appropriate.” Zigon’s moral breakdown is analogous to what Fortun and Fortun (2005) identified as a double bind, and we can interpret Whitman’s diagram as an effort to work through the collision of curatorial and algorithmic demands, which raised the question of what the makers of music recommendation should do if they wanted to do “good,” meeting a rising popular critique of their work.

For those familiar with its grammar, decorrelation provides a resource for sense-making in everyday life, part of what Zigon (2013, 202) elsewhere calls a “moral assemblage.” Such assemblages provide people with a set of context-specific “ethical practices” to draw on in moments of conflict (Zigon 2013, 202). Joining Ballestero’s and Zigon’s thinking together, we can understand decorrelation as an ethical technique—like Zigon’s ethical practices, but with a technical twist. When competing moral demands (like the injunction to care and the injunction to scale) collide, decorrelation offers a conceptual tool that people working in the world of machine learning have ready to hand for thinking through their problems.

But decorrelation is not only a way of thinking. As Huron suggested, technologies may produce decorrelation materially, by altering the conditions in which ethical decisions are made. We do not have to endorse Huron’s vernacular technological determinism or the content of his contraceptive example to recognize that technologies can transform the conditions of ethical reasoning. This is a canonical argument made by critical scholars of ethics and technology (e.g., Winner 1980; Akrich 1992; Verbeek 2006). Ethical techniques are at once discursive and material, altering both the conditions of human decision making and how people understand those conditions.

Michael M. J. Fischer (2001) has described such technomoral situations as “ethical plateaus”—conceptual and technical terrains on which ethical reasoning and debate play out. The metaphor of the plateau foregrounds the tectonics of the ethical sphere—everything that supports and constrains the range of ethical possibility, without making a strong distinction between the “hard” constraints typically associated with technology and the “soft” ones associated with society. As Fortun and Fortun (2005, 50) elaborate, ethical plateaus are constantly in slow transformation, producing a landscape both shifting and durable, “always altering perspectives on what is real, natural, inevitable, possible, and obligatory.”

Where Fortun and Fortun (2005, 50) focus on the ethical plateau as an environment external to actors that can “forcefully mold” their ethical perceptions, decorrelation takes the ethical plateau itself as an object of intentional engineering. We might say, quite geometrically, that by transforming one-dimensional moral dichotomies into two-dimensional spaces for reflection and action, decorrelation literally produces ethical plateaus. Engineers are not only passive recipients of large-scale forces but also understand themselves as potentially transformative agents.

This does not mean that engineers always achieve their goals or that they work beyond any determining context. Despite the universalizing reputation of mathematics (and the universalizing aspirations of computer scientists), we can still understand decorrelation and its vectoral kin as particular techniques, embodying values and working toward ends that are not universally shared. I’ve already suggested that Huron’s orthogonal utopia, a vision of unlimited and unimpinged choice, reflects a particular orientation toward ethics, placing a value on disaggregation itself. As critics have begun to offer normative recommendations for ethics within algorithmic systems, by contrast, they have pushed for an explicitly relational ethics (Amrute 2019; Zigon 2019; Birhane 2021)—one that rejects the premises of decorrelative projects and instead aims to foreground and cultivate meaningful relations, grounded in an ethics of care.

The fact that “care” has already been embraced by figures like Whitman and others in the tech sector raises an important question: How do existing care practices compare with the goals envisioned by these critics?

MATTERS OF CARE

Ellie called herself a “data gardener.” At the music recommendation company I call Whisper, she managed the Quality Assurance (QA) team—a group largely populated by a rotating cast of interns. People on Ellie’s team maintained Whisper’s data: they inspected the company’s constantly updating database of artist popularity scores, looking for sudden dramatic changes that might be evidence of an error; they tested new recommendation algorithms, running various artists through and evaluating whether the outputs made sense. Especially puzzling cases ended up at Ellie’s desk for her to resolve. As a visitor, I spent a lot of my time doing QA work, sitting next to her.
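One of the routine checks described here, scanning for sudden swings in popularity scores, can be pictured with a minimal sketch; the artist names and the threshold below are invented for illustration, not drawn from Whisper’s actual tooling.

    # Compare two snapshots of (made-up) artist popularity scores and flag
    # any sudden swing large enough to warrant a human look.
    yesterday = {"Artist A": 0.62, "Artist B": 0.15, "Artist C": 0.40}
    today     = {"Artist A": 0.63, "Artist B": 0.78, "Artist C": 0.39}

    THRESHOLD = 0.25  # an arbitrary cutoff for what counts as "suspicious"

    flagged = [
        (artist, yesterday[artist], score)
        for artist, score in today.items()
        if artist in yesterday and abs(score - yesterday[artist]) > THRESHOLD
    ]
    print(flagged)  # [('Artist B', 0.15, 0.78)] -- queued for manual review

What a script like this cannot do is what came next: deciding whether a flagged jump was an error at all, the kind of puzzle that ended up at Ellie’s desk.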

“I love music data gardening,” Ellie messaged me one day, while cleaning up the metadata for an artist named Boobie $oprano. Gardening captured the way she tended to unruly algorithmic outputs, weeding out the inevitable errors that grew from Whisper’s lively software system. She found it meditative, she told me: “Zen like. Must maintain balance in the garden.” Like her colleagues, Ellie considered herself someone who really cared about music, and she manifested this care through her close attention to musical data.

Work like Ellie’s is often called “data cleaning,” and it has been the object of much ethnographic attention that emphasizes how important it is to making the “raw” data that informs scientific inquiry or powers algorithmic systems (e.g., Walford 2017; Biruk 2018; Plantin 2019). Scholars working within the tradition of feminist care ethics have argued for understanding this labor as a form of care and for making it more visible (e.g. Mattern 2018; Pinel, Prainsack, and McKevitt 2020). Data cleaning is essential to the ongoing functioning of these systems, it is often neglected or hidden, and it is conventionally feminized and racialized. (It is no accident that Ellie called her work “gardening,” while her male colleagues working on data infrastructure would call themselves “plumbers.”)

Whitman, too, regarded QA work as the prototypical form of care within a recommendation company. As he wrote on his blog, in a self-consciously tautological definition:

Care is a layer of quality assurance, editing and sanity checks, real-world usage and analysis and, well, care, on top of any systematic results. (Whitman 2012)

Quality Assurance workers took care of algorithmic inputs and tended to algorithmic outputs. To do this well required a familiarity with music that made it possible to recognize subtle errors in metadata across many genres and to understand what made an algorithmic playlist good in the absence of any objective metric for correctness. Ellie’s meditative engagement with musical data resembles the kind of affective ethical attention that Sareeta Amrute (2019) has called “attunement,” relying not only on the application of strict rules but also on an intuitive engagement with the objects of labor. This was not a matter of mobilizing “good” taste, but rather of possessing what one of Ellie’s colleagues called “a Wikipedia kind of knowledge” about music—a wide-ranging, sometimes cursory, understanding of the broader musical world. When Whitman insisted that his company uniquely cared about music, this was why it mattered: workers had to rely on their own cultural intuitions and care about music to know whether their system worked well. He recalled how his insistence on care had emerged in response to a particularly bad recommender released by one of The Echo Nest’s competitors: it was “this terrible, hilarious thing—why did they release it? How could they have possibly done that?” Care was sometimes conflated with quality, which was its evidence. “Back then,” he told me, “care meant ‘never show anything bad—don’t just do something because the computer said to.’” Human checking was essential.

We may be skeptical of invocations of care like these, seeing them as status jockeying among corporations. But as María Puig de la Bellacasa (2017, 10) warns in the book from which I borrow this section’s title, “There are many reasons to treat the reductive appropriation of care in the contexts of the ethical ideologies of the Global North with attention rather than scorn.” Rather than endorse a “hegemonic ethics” (Puig de la Bellacasa 2017, 132) that presumes to already know what care must be, Puig de la Bellacasa calls on us to attend to how care is enacted in the world, even in settings that appear inimical to it. Indeed, this has become a common theme in anthropological work on care, where the signature analytic move is to divest care of its common associations with positive affect and desirability. Care, in this school of thought, is not defined by nursing, mothering, or any of the other stereotypical caring professions, nor is it understood as essentially good—for its givers, for its recipients, or in the outcomes it produces. Instead, care refers to the work of sustaining relationships that keep the world going (Puig de la Bellacasa 2017; Hobart and Kneese 2020). As Lisa Stevenson (2014, 3) writes in her ethnography of “bureaucratic care” in the Arctic,

Shifting our understanding of care away from its frequent associations with either good intentions, positive outcomes, or sentimental responses to suffering allows us to nuance the discourse on care so that both the ambivalence of our desires and the messiness of our attempts to care can come into view.

Care work is necessary to produce any sort of world, not only the ones we might deem desirable.

We can think of this analytic move as a kind of decorrelation, separating care from positive affect. Like Whitman’s decorrelation of care from scale, this transformation opens up a space of analytic possibility: it makes it possible to see care in sites of violence or harm, like rape crisis centers (Mulla 2014), and to recognize that locations full of good feeling may be careless, such as international circuits of care work that exclude and oppress (Murphy 2015).7 It has also facilitated studies of care that leave the prototypical dyadic scenes of caring and follow care “across many scales and dimensions” (Mattern 2018): caring for ecosystems, populations, or infrastructures (Stevenson 2014; Bocci 2017; Ishii 2017). Feminist care ethics, with its persistent concern for relationality, clearly works in a different political register than Huron’s roughly libertarian moral philosophy. But it demonstrates that decorrelation may be useful for more than disaggregating values from each other.

The question to answer, following this work, is not whether care is present, but rather how care is enacted (Mol 2008). In music recommendation, as I’ve described, care manifests primarily as “care for data” (Whitman 2013, 3). However suffused it may be with positive affect, technologists’ focus on caring for data can have negative consequences for other parts of the overall sociotechnical system that are kept out of frame. Ethnographies of medical data work, for instance (e.g., Pine and Mazmanian 2016), describe how patients’ data can displace patients themselves as objects of attention and care. While many critics of music streaming (e.g., Pelly 2017) argue that these systems harm musicians’ livelihoods and creative work, such concerns are not addressed by caring for data. Whitman argued that this form of care would ultimately produce a system “useful to musicians,” but direct engagement with artists happened elsewhere in the corporate structure, beyond the recommender team’s control.

When we see “careful” alternatives to algorithmic systems, like The Yams, or critiques that suggest care as a solution to algorithmic ills, we need to remember that forms of care exist even within apparently careless settings, and their presence does not guarantee any particular outcome. Because of the assumed correlation between care and scale, it can be hard to see practices like “bureaucratic care” or “care for the data” as care at all, but we can understand them as forms of care that have taken root in particular organizational structures, grown in relation with other values. To understand how they fit into local ethical vernaculars—and to mount more thorough critiques of them—requires an appreciation of how care is enacted through these relations. As feminist studies of care have indicated, caring is complicated.

THE PROMISE OF SCALABILITY

Unlike care, Whitman (2012) wrote, “scale is easy to explain.” In the world of software startups, the value and meaning of scale is usually taken to be obvious: companies aspire to capture more users, to grow their market valuations, and to increase the amount of data they collect and analyze. Large companies like Facebook or Google are models of success, and countless publications targeted at would-be founders mine those companies’ histories for lessons in how to scale up. Paul Graham (2012), a venture capitalist and influential writer of such advice, argues that a focus on growth defines startups: they are “designed to grow fast,” different from conventional businesses, “in the same way a redwood seedling has a different destiny from a bean sprout.”

Rapid growth is essential to the economics of venture capital, which relies on rare, dramatic successes to underwrite portfolios of risky investments (Cutler 2018). Successfully scaled companies are desirable not only because they make money proportional to their size but also because scaling up is often understood to make them work better. In recommender systems, scale is closely tied to functionality: “You have to know about as much music as possible to make good recommendations,” Whitman (2012) wrote, “We track over two million artists and over 30 million songs and there is no way a manually curated database can reach that level of knowledge.” Developers understand recommender systems as a solution to the problem of overload posed by large catalogs; if they do not work at scale, then they do not work at all.

While scale may seem simple and desirable in the tech industry, anthropologists have usually considered scale to be a “problem” (Carr and Lempert 2016, 4)—an object of suspicion, implying calculation, capitalism, and hierarchy. Anna Lowenhaupt Tsing (2012, 506), for instance, has written extensively about the “mounting piles of ruins” that the pursuit of scalability leaves in its wake, offering one of anthropology’s most influential accounts of scale, premised on the idea that it is (inversely) correlated with care. Tracing the contemporary pursuit of scale back to the development of Caribbean sugarcane plantations, Tsing (2012, 513) argues that scalability was achieved through care’s abandonment:

Replacing relations of care between farmers and crops, plantation designs led to alienation between workers and cane; cane was the enemy. At least in theory, such labor avoided transformative relationships and thus could not disturb system design.

To achieve scale required the cruel elimination of the kinds of potentially transformative relations that feminist theorists call care. Though capitalists like Graham may adopt organic metaphors, Tsing argues that such scaling is alien to much of the living world.8

In addition to seeing scale as a problem, anthropologists—including Tsing (2005) and a host of linguistic anthropologists—have seen it as something to be problematized, arguing that “the scales that social actors rely upon to organize, interpret, orient, and act in their worlds are not given but made—and rather laboriously so” (Carr and Lempert 2016, 3). We should be wary, then, of making claims about scale as though it were simply an objective phenomenon. In the anthropological literature, scale is not a matter of objective size but the consequence of comparison, and if we want to understand scale, we need to look closely at the comparisons on which it depends.

Examining tech industry usage, we can see that scale is indeed used comparatively: concerns about working “at scale” rarely refer to any particular size, but remain intentionally abstract and open-ended. They make a loose comparison between the present and an anticipated future, or between one organization and another (Avle et al. 2020). For software startups, scale is not a matter of size—it is a kind of promise.

We have already seen how such comparison works for claims to smallness in the case of The Yams, which figured human curators as small-scale and careful by defining them against algorithms. Conversely, tech companies’ claims to largeness are often figured against manual labor. Spotify advertises both algorithmically generated playlists and “handcrafted” ones, compiled by human curators. Whitman contrasted the scalable data apparatus of The Echo Nest with a “manually curated database.” Hands and curation offer a symbolic alternative to algorithmic machinery—a vision of craft labor, full of careful attention at small scale (cf. Paxson 2013). If you want to make recommendations to millions of users, they argue, you can’t do it “by hand.”

This is why companies making claims to scalability conventionally hide care work. If we assume that care is synonymous with small-scale work, then its persistence threatens to reveal scalability’s big secret: as Tsing (2012, 510) writes, “scalability never fulfills its own promises”; it “uses articulations with nonscalable forms even as it denies or erases them” (Tsing 2012, 506). Plantations depended on the uncompensated and unappreciated work of enslaved people, as well as local microbial ecologies that supported cane growth. Today, companies still depend on feminized and racialized “articulation work” to cover the gap between the local mess of the world and the formal aspirations of corporate action (Star and Strauss 1999). As Lilly Irani (2015) has argued, startups commonly draw their corporate boundaries to intentionally exclude “janitorial” work: a company employing a dozen engineers in a California office can present itself as “lean” and scalable, while hundreds of globally distributed ad hoc “ghost workers” (Gray and Suri 2019) do the care work of labeling training data for machine learning, cleaning up databases, or moderating posts on social media, removing offensive materials so that companies can continue to grow uninterrupted (Roberts 2019; Ruckenstein and Turunen 2019). Scalability is not an intrinsic quality of particular techniques, but rather a consequence of where project boundaries are drawn, in the service of particular visions of corporate futures.

This insight about corporate boundaries and aspirations suggests an answer to the question lurking in the background of this essay: Why would the work of playlist curators like The Yams seem intrinsically opposed to scale, while the work of data curators at The Echo Nest did not? For The Yams, operating under a correlated understanding of care and scale, individuals making playlists constituted the obvious small-scale counterparts to the scalable apparatus of algorithmic recommendation. But with care and scale decorrelated, care work no longer functions as an essentially small point of comparison against which scalar claims can be made. This decorrelation was not achieved simply by Whitman’s declaration that he would pursue care and scale at once; rather, it was embodied in organizational structure. What distinguished data curators from playlist curators was their location in this structure and where their care was directed. Because QA work improved the quality of centralized data stores, it was seen as compatible with scale: the fruits of Ellie’s data gardening, for instance, could be algorithmically propagated throughout Whisper’s services. A playlist curator at The Yams, by contrast, worked at the “edge” of the system, oriented toward individual users. Claims to scalability are still made through the strategic drawing of corporate boundaries, but in decorrelation we find a shifting strategy that makes it plausible for startup founders to embrace kinds of work that once would have seemed incompatible with scale.

MASTERS OF SCALE

Listening to the first episode of Masters of Scale, a podcast for aspiring entrepreneurs, one might be surprised to discover that it does not offer advice about how to eradicate nonscalable work from a business plan. Instead, the host—the venture capitalist Reid Hoffman—argues for the importance of the nonscalable, seeming to echo Tsing (2012): “If you want your company to truly scale, you have to do things that don’t scale” (Hoffman 2018). The episode is titled “Handcrafted,” and it features an interview with one of the founders of the short-term rental site Airbnb, who describes how, in the early days of his company, the founders took on all sorts of nonscalable work: they visited individual users to help them with the site, they offered free photography to fill out their listings, they worked to “win them over, one by one.”

This counterintuitive advice came from Graham (2013), whose blog post titled “Do Things That Don’t Scale” argues that startups are “so hard to get rolling that you should expect to take heroic measures at first.” These measures range from personally contacting new users to manually performing the kinds of data work that most companies would eventually seek to automate or outsource. While founders besotted with scalability might think of such work as anathema to good business plans, Graham insists that these nonscalable efforts are not a “necessary evil” to be avoided, but rather a crucial part of a company’s “DNA.” Where Tsing (2012, 515) writes that “scalable projects are articulations between scalable and nonscalable elements, in which nonscalable effects can be hidden from project investors,” here we see investors embracing the nonscalable, working to bring it inside of scalability projects not as a dirty secret, but as a conscious strategy.

Where Graham associates nonscalable work with the early days of a startup’s existence, Hoffman argues that founders in particular need to retain a tendency toward the handcrafted, even as the rest of their organization scales up, because new problems arise that cannot be solved by scaling up old solutions. “The sharpest founders never fully abandon the [handcrafted] mindset, no matter how big their company gets,” Hoffman says, and their ongoing nonscalable endeavors provide material for the constant renovation that capitalism requires. In his coauthored book, Blitzscaling, Hoffman argues against the ideology of scaling as radical repeatability described by Tsing, claiming that “a global giant isn’t simply a start-up that’s been multiplied by a thousand… . Each major increment of growth represents a qualitative as well as a quantitative change” (Hoffman and Yeh 2018, 37). This is a theory of scalability that explicitly requires transformative engagement with the world.9

To reconcile nonscalable work with scalar aspirations, Graham (2013) makes use of the calculation grammar of vector mathematics, concluding his post with a section titled “Vector.” He writes: “The need to do something unscalably laborious to get started is so nearly universal that it might be a good idea to stop thinking of startup ideas as scalars.” “Scalars,” here, refer not to scaling, but to one-dimensional values, contrasted with multidimensional vectors. Instead of reckoning the value of startup ideas as simply their capacity to scale, Graham suggests, “we should try thinking of them as pairs of what you’re going to build, plus the unscalable thing(s) you’re going to do initially to get the company going.” By imagining startups as two-dimensional vectors, with one dimension aimed at scale and the other at ineradicable nonscalable work, Graham offers a decorrelative way to reconcile values that had seemed like opposites.

We may rightly wonder whether this embrace of the nonscalable is just another instance of what Tsing (2012) calls “piracy”—the raiding of nonscalable worlds by scalability projects. Indeed, in Blitzscaling—named after the Nazi military strategy of blitzkrieg—the only mention of ethics comes in a discussion of “ethical piracy,” where the authors suggest that founders should behave like “lovable rogues” who break the rules but follow a “personal code of ethics” (Hoffman and Yeh 2018, 180). This is clearly not an endorsement of feminist care ethics or organizational critique, but is rather a reimagination of how kinds of work once assumed inimical to scale might be made compatible with it—an effort to reshape the ethical plateau.

This reclamation of work previously denigrated or obscured does not necessarily answer critics. The foregrounding of some care work does not entail the foregrounding of all care work, and Hoffman and Graham still speak in the founder-glorifying language of venture capital: their nonscalable work is “heroic,” undertaken by “masters of scale” who are expected to perform unsustainable amounts of work for a chance at enriching themselves, in a financing scheme designed to reliably enrich their funders. Such companies may still have plans to automate or outsource this work, and even those that keep care workers like data curators in-house are still likely to closely manage the corporate boundary, keeping contract workers like custodial staff or content moderators on the outside.

Internal to companies, decorrelation can facilitate a division of ethical labor. Understood as independent axes, care and scale can be pursued by distinct groups of workers. Whitman spoke of “care people” and “scale people” in his company—while he insisted that it was important that everyone cared about music, the work of care was not evenly distributed. In many tech companies, specific teams are tasked with work on QA, ethics, or “trust and safety,” while other teams avidly pursue scale. These care teams are commonly underresourced and disproportionately staffed by minoritized workers; when their goals come into conflict with scale, the care groups usually lose (Metcalf, Moss, and boyd 2019). While I was revising this essay, for instance, Google fired two of its prominent AI ethicists for their role in producing a research paper that explored problems with the large-scale AI models the company generates (Bender et al. 2021). Only a decorrelated ethics, one that did not draw into question the desirability of scale, could be entertained. While decorrelation may provide terms by which care (or a proactive interest in ethics more broadly) can be brought into technical organizations, it can also constrain such efforts, requiring that questions of care be kept orthogonal to questions of scale as a matter of organizational principle.

* * *

Care and scale are definitionally flexible and essentially comparative: there is no absolute measure of either carefulness or scalability on which we might ground ourselves. What I have described in this article is a shift in how the relationship between these values is imagined in the tech industry, producing new grounds for ethical thought and action. This shift is evident in aspirational statements from founders, advice from investors, and in the organization of work. If anthropologists presume that care and scale are necessarily correlated and opposed, we will find it hard to map this emerging ethical plateau, where visions of blitzkrieg and handcrafting intertwine.

As we refine our disciplinary critiques, we may also find something useful in decorrelation for ourselves: it reminds us to not take our terms for granted and to entertain the possibilities of a “speculative ethics” (Puig de la Bellacasa 2017), where the relations among values are not set in stone, but are rather shifting and malleable. Thinking decorrelatively about the relation between care and scale can help us avoid idealizing the small for its own sake; it may also help us discern variety among large-scale projects and recognize emergent forms of care within them. At a moment when we find ourselves faced with many large-scale problems—from climate change to structural racism to infectious disease—we may want to remain open to the possibility of reconciling care with scale. Reimagining the relations among our values is a task too important to be left to technologists alone.

ABSTRACT

The people who make algorithmic recommender systems want apparently incompatible things: they pride themselves on the scale at which their software works, but they also want to treat their materials and users with care. Care and scale are commonly understood as contradictory goals: to be careful is to work at small scale, while working at large scale requires abandoning the small concerns of care. Drawing together anthropological work on care and scale, this article analyzes how people who make music recommender systems try to reconcile these values, reimagining what care and scale mean and how they relate to each other in the process. It describes decorrelation, an ethical technique that metaphorically borrows from the mathematics of machine learning, which practitioners use to reimagine how values might relate to one another. This “decorrelative ethics” facilitates new arrangements of care and scale, which challenge conventional anthropological theorizing. [ethics; care; scale; algorithms; music]

NOTES

Acknowledgments Thanks to the many people who have carefully engaged with this argument as it has evolved, including the Tufts Department of Anthropology, the Labor Tech Reading Group, Sareeta Amrute, Dominic Boyer, Li Cornfeld, Michael Ekstrand, Stefan Helmreich, Anna Lauren Hoffmann, Amy Johnson, Deirdre Loughridge, Beth Semel, Maria Sidorkina, Jarret Zigon, and colloquium audiences at Northwestern, Amherst, and ITU Copenhagen. The anonymous reviewers and editor Heather Paxson of Cultural Anthropology deserve my particular gratitude, as do Jessica Lockrem and the rest of the production staff. This research was supported by grants from the Wenner-Gren Foundation (Dissertation Fieldwork Grant 8797), the National Science Foundation (DDRIG 1323834), and the Intel Science and Technology Center for Social Computing at UC Irvine.

1. Although it never became clear whether Alex was a human, a shared corporate persona, a chatbot, or some agentic mixture of these, I have elected to give him a pseudonym here. Throughout this article, any person introduced by first name alone is pseudonymous.

2. For an analogous case in a different industry and national context, see Shuang L. Frost’s (2020) account of a “small and beautiful” Chinese ridesharing platform.

3. Malte Ziewitz (2019) describes such work as “ethigraphy.” See Daniel Neyland (2016) for a similar effort mounted from an ethnomethodological orientation. This understanding of ethics is thus a narrower companion to what Louise Amoore (2020) theorizes as “cloud ethics”—a form of ethics (and politics) distributed through the apparatus of algorithmic systems themselves.

4. When I emailed Huron later to ask about his philosophical sources, he could not recall any in particular; the idea was one he had considered for many years, and it had become more of a normative commitment than an empirical claim.

5. Such vector representations of consumer desire have been used in market research since the 1960s (e.g., Green and Carmone 1969).

6. While revising this article, I encountered the “open ethics vector” initiative (https://openethics.ai/vector/), which aims to produce a standardized vector representation of “ethical preferences,” so that people with varying ethical values might find AI software that somehow “matches” them.

7. See Sarah Pinto’s (2014, 251) Daughters of Parvati for a diagram of this move rendered explicitly as a pair of orthogonal axes.

8. Like my interlocutors, Tsing (2012) uses the term scaling here to refer to scaling up, not down.

9. Véra Ehrenstein and Daniel Neyland (2018) find a similar dynamic in global health initiatives, calling into question the idea that scaling up means homogeneous expansion.

REFERENCES

Akrich, Madeleine 1992 “The De-Scription of Technical Objects.” In Shaping Technology / Building Society: Studies in Sociotechnical Change, edited by Wiebe E. Bijker and John Law, 205–24. Cambridge, Mass.: MIT Press.

Amoore, Louise 2020 Cloud Ethics: Algorithms and the Attributes of Ourselves and Others. Durham, N.C.: Duke University Press.

Amrute, Sareeta 2019 “Of Techno-Ethics and Techno-Affects.” Feminist Review 123, no. 1: 56–73. https://doi.org/10.1177/0141778919879744.

Avle, Seyram, Cindy Lin, Jean Hardy, and Silvia Lindtner 2020 “Scaling Techno-Optimistic Visions.” Engaging Science, Technology, and Society 6: 237–54. https://doi.org/10.17351/ests2020.283.

Ballestero, Andrea 2015 “The Ethics of a Formula: Calculating a Financial-Humanitarian Price for Water.” American Ethnologist 42, no. 2: 262–78. https://doi.org/10.1111/amet.12129.

Bateson, Gregory 1972 Steps to an Ecology of Mind. Chicago: University of Chicago Press.

Bender, Emily M., Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell 2021 “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” In FAccT ’21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–23. New York: Association for Computing Machinery. https://doi.org/10.1145/3442188.3445922.

Birhane, Abeba 2021 “Algorithmic Injustice: A Relational Ethics Approach.” Patterns 2, no. 2: 100205. https://doi.org/10.1016/j.patter.2021.100205.

Biruk, Cal 2018 Cooking Data: Culture and Politics in an African Research World. Durham, N.C.: Duke University Press.

Bocci, Paolo 2017 “Tangles of Care: Killing Goats to Save Tortoises on the Galápagos Islands.” Cultural Anthropology 32, no. 3: 424–49. https://doi.org/10.14506/ca32.3.08.

Boke, Charis 2016 “Care.” Theorizing the Contemporary, Fieldsights, July 12. https://culanth.org/fieldsights/care.

Caplan, Robyn, and danah boyd 2018 “Isomorphism through Algorithms: Institutional Dependencies in the Case of Facebook.” Big Data and Society 5, no. 1. https://doi.org/10.1177/2053951718757253.

Carr, E. Summerson, and Michael Lempert, eds. 2016 Scale: Discourse and Dimensions of Social Life. Oakland: University of California Press.

Cutler, Kim-Mai 2018 “The Unicorn Hunters.” Logic Magazine 4. https://logicmag.io/scale/the-unicorn-hunters/.

Das, Veena 2012 “Ordinary Ethics.” In A Companion to Moral Anthropology, edited by Didier Fassin, 133–49. Malden, Mass.: John Wiley and Sons. https://doi.org/10.1002/9781118290620.ch8.

Ehrenstein, Véra, and Daniel Neyland 2018 “On Scale Work: Evidential Practices and Global Health Interventions.” Economy and Society 47, no. 1: 59–82. https://doi.org/10.1080/03085147.2018.1432154.

Eubanks, Virginia 2018 Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. New York: St. Martin’s.

Fields, Benjamin 2011 “Contextualize Your Listening: The Playlist as Recommendation Engine.” PhD diss., Goldsmiths.

Fischer, Michael M. J. 2001 “Ethnographic Critique and Technoscientific Narratives: The Old Mole, Ethical Plateaux, and the Governance of Emergent Biosocial Polities.” Culture, Medicine and Psychiatry 25: 355–93. https://doi.org/10.1023/A:1013078230464.

Fortun, Kim, and Mike Fortun 2005 “Scientific Imaginaries and Ethical Plateaus in Contemporary U.S. Toxicology.” American Anthropologist 107, no. 1: 43–54. https://doi.org/10.1525/aa.2005.107.1.043.

Frost, Shuang L. 2020 “Platforms As If People Mattered.” Economic Anthropology 7, no. 1: 134–46. https://doi.org/10.1002/sea2.12162.

Gillespie, Tarleton 2020 “Content Moderation, AI, and the Question of Scale.” Big Data and Society 7, no. 2. https://doi.org/10.1177/2053951720943234.

Graham, Paul 2012 “Startup = Growth.” Paul Graham (blog), September. http://www.paulgraham.com/growth.html.

2013 “Do Things That Don’t Scale.” Paul Graham (blog), July. http://paulgraham.com/ds.html.

Gray, Mary L., and Siddharth Suri 2019 Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass. Boston: Houghton Mifflin Harcourt.

Green, Paul E., and Frank J. Carmone 1969 “Multidimensional Scaling: An Introduction and Comparison of Nonmetric Unfolding Techniques.” Journal of Marketing Research 6, no. 3: 330–41. https://doi.org/10.2307/3150139.

Gusterson, Hugh 2019 “Introduction: Robohumans.” In Life by Algorithms: How Roboprocesses Are Remaking Our World, edited by Catherine Besteman and Hugh Gusterson, 1–30. Chicago: University of Chicago Press.

Hanna, Alex, and Tina M. Park 2020 “Against Scale: Provocations and Resistances to Scale Thinking.” In Proceedings of the CSCW ’20 Workshop Reconsidering Scale and Scaling in CSCW Research. New York: Association for Computing Machinery.

Haraway, Donna J. 1991 “A Cyborg Manifesto: Science, Technology, and Socialist-Feminism in the Late Twentieth Century.” In Simians, Cyborgs, and Women: The Reinvention of Nature, 149–81. New York: Routledge.

Hobart, Hi‘ilei Julia Kawehipuaakahaopulani, and Tamara Kneese 2020 “Radical Care: Survival Strategies for Uncertain Times.” Social Text 38, no. 1: 1–16. https://doi.org/10.1215/01642472-7971067.

Hoffman, Reid 2018 “Handcrafted.” Masters of Scale. Podcast. https://mastersofscale.com/brian-chesky-handcrafted/.

Hoffman, Reid, and Chris Yeh 2018 Blitzscaling: The Lightning-Fast Path to Building Massively Valuable Companies. New York: Currency.

Hughes, Thomas P. 1987 “The Evolution of Large Technological Systems.” In The Social Construction of Technological Systems: New Directions in the Sociology and History of Technology, edited by Wiebe E. Bijker, Thomas P. Hughes, and Trevor Pinch, 51–82. Cambridge, Mass.: MIT Press.

Huron, David 2011 “Designing the Future: An Affective Neuroscience Approach.” Paper presented at the 12th International Society for Music Information Retrieval Conference, Miami Beach, Fla., October 24–28.

Irani, Lilly 2015 “The Cultural Work of Microwork.” New Media and Society 17, no. 5: 720–39. https://doi.org/10.1177/1461444813511926.

Ishii, Miho 2017 “Caring for Divine Infrastructures: Nature and Spirits in a Special Economic Zone in India.” Ethnos 82, no. 4: 690–710. https://doi.org/10.1080/00141844.2015.1107609.

Jansson, Johan, and Brian J. Hracs 2018 “Conceptualizing Curation in the Age of Abundance: The Case of Recorded Music.” Environment and Planning A: Economy and Space 50, no. 8: 1602–25. https://doi.org/10.1177/0308518X18777497.

Mackenzie, Adrian 2017 Machine Learners: Archaeology of a Data Practice. Cambridge, Mass.: MIT Press.

Mattern, Shannon 2018 “Maintenance and Care.” Places, November. https://placesjournal.org/article/maintenance-and-care/.

Metcalf, Jacob, Emanuel Moss, and danah boyd 2019 “Owning Ethics: Corporate Logics, Silicon Valley, and the Institutionalization of Ethics.” Social Research: An International Quarterly 86, no. 2: 449–76. https://muse.jhu.edu/article/732185.

Mol, Annemarie 2008 The Logic of Care: Health and the Problem of Patient Choice. London: Routledge.

Mol, Annemarie, Ingunn Moser, and Jeannette Pols, eds. 2010 Care in Practice: On Tinkering in Clinics, Homes and Farms. Bielefeld: Transcript.

Morris, Jeremy Wade 2015 “Curation by Code: Infomediaries and the Data Mining of Taste.” European Journal of Cultural Studies 18, nos. 4–5: 446–63. https://doi.org/10.1177/1367549415577387.

Mulla, Sameena 2014 The Violence of Care: Rape Victims, Forensic Nurses, and Sexual Assault Intervention. New York: NYU Press.

Murphy, Michelle 2015 “Unsettling Care: Troubling Transnational Itineraries of Care in Feminist Health Practices.” Social Studies of Science 45, no. 5: 717–37. https://doi.org/10.1177/0306312715589136.

Neyland, Daniel 2016 “Bearing Account-able Witness to the Ethical Algorithmic System.” Science, Technology and Human Values 41, no. 1: 50–76. https://doi.org/10.1177/0162243915598056.

Noble, Safiya Umoja 2018 Algorithms of Oppression: How Search Engines Reinforce Racism. New York: NYU Press.

O’Neil, Cathy 2016 Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. New York: Crown.

Paxson, Heather 2013 The Life of Cheese: Crafting Food and Value in America. Berkeley: University of California Press.

Pelly, Liz 2017 “The Problem with Muzak.” Baffler, no. 37. https://thebaffler.com/salvos/the-problem-with-muzak-pelly.

Pine, Kathleen H., and Melissa Mazmanian 2016 “Artful and Contorted Coordinating: The Ramifications of Imposing Formal Logics of Task Jurisdiction on Situated Practice.” Academy of Management Journal 60, no. 2: 720–42. https://doi.org/10.5465/amj.2014.0315.

Pinel, Clémence, Barbara Prainsack, and Christopher McKevitt 2020 “Caring for Data: Value Creation in a Data-Intensive Research Laboratory.” Social Studies of Science 50, no. 2: 175–97. https://doi.org/10.1177/0306312720906567.

Pinto, Sarah 2014 Daughters of Parvati: Women and Madness in Contemporary India. Philadelphia: University of Pennsylvania Press.

Plantin, Jean-Christophe 2019 “Data Cleaners for Pristine Datasets: Visibility and Invisibility of Data Processors in Social Science.” Science, Technology, and Human Values 44, no. 1: 52–73. https://doi.org/10.1177/0162243918781268.

Puig de la Bellacasa, María 2017 Matters of Care: Speculative Ethics in More Than Human Worlds. Minneapolis: University of Minnesota Press.

Roberts, Sarah T. 2019 Behind the Screen: Content Moderation in the Shadows of Social Media. New Haven, Conn.: Yale University Press.

Ruckenstein, Minna, and Linda Lisa Maria Turunen 2019 “Re-humanizing the Platform: Content Moderators and the Logic of Care.” New Media and Society 22, no. 6: 1026–42. https://doi.org/10.1177/1461444819875990.

Spotify n.d. “Company Info.” Spotify website. Accessed June 19, 2021. https://newsroom.spotify.com/company-info/.

Star, Susan Leigh, and Anselm Strauss 1999 “Layers of Silence, Arenas of Voice: The Ecology of Visible and Invisible Work.” Computer Supported Cooperative Work (CSCW) 8: 9–30. https://doi.org/10.1023/A:1008651105359.

Stevenson, Lisa 2014 Life Beside Itself: Imagining Care in the Canadian Arctic. Oakland: University of California Press.

Strathern, Marilyn 1992 After Nature: English Kinship in the Late Twentieth Century. Cambridge: Cambridge University Press.

Tsing, Anna Lowenhaupt 2005 Friction: An Ethnography of Global Connection. Princeton, N.J.: Princeton University Press.

2012 “On Nonscalability: The Living World Is Not Amenable to Precision-Nested Scales.” Common Knowledge 18, no. 3: 505–24. https://doi.org/10.1215/0961754X-1630424.

Verbeek, Peter-Paul 2006 “Materializing Morality: Design Ethics and Technological Mediation.” Science, Technology, and Human Values 31, no. 3: 361–80. https://doi.org/10.1177/0162243905285847.

Walford, Antonia 2017 “Raw Data: Making Relations Matter.” Social Analysis 61, no. 2: 65–80. https://doi.org/10.3167/sa.2017.610205.

Walk, Hunter 2020 “If Your Pitch Deck Has a Competitive 2×2, I’m Going to Ask You This Question.” Hunter Walk (blog), May 25. https://hunterwalk.com/2020/05/25/if-your-pitch-deck-has-a-competitive-2x2-im-going-to-ask-you-this-question/.

Whitman, Brian 2012 “How Music Recommendation Works—and Doesn’t Work.” Variogram, December 11. https://notes.variogr.am/2012/12/11/how-music-recommendation-works-and-doesnt-work/.

2013 “Care and Scale: Fifteen Years of Music Retrieval.” ACM Transactions on Multimedia Computing, Communications, and Applications 46: 1–4. https://doi.org/10.1145/2492703.

Winner, Langdon 1980 “Do Artifacts Have Politics?” Daedalus 109, no. 1: 121–36. http://www.jstor.org/stable/20024652.

Ziewitz, Malte 2019 “Rethinking Gaming: The Ethical Work of Optimization in Web Search Engines.” Social Studies of Science 49, no. 5: 707–31. https://doi.org/10.1177/0306312719865607.

Zigon, Jarrett 2009 “Within a Range of Possibilities: Morality and Ethics in Social Life.” Ethnos 74, no. 2: 251–76. https://doi.org/10.1080/00141840902940492.

2013 “On Love: Remaking Moral Subjectivity in Postrehabilitation Russia.” American Ethnologist 40, no. 1: 201–15. https://doi.org/10.1111/amet.12014.

2019 “Can Machines Be Ethical? On the Necessity of Relational Ethics and Empathic Attunement for Data-Centric Technologies.” Social Research 86, no. 4: 1001–22. https://muse.jhu.edu/article/748872.
