From Form to Content

HAMID R. EKBIA

Indiana University Bloomington

ORCID: http://orcid.org/0000-0002-8437-520X

BONNIE A. NARDI

University of California, Irvine


Automation and algorithms, as ideas and techniques, have a long history. The official history of automation traces its origins back to the late eighteenth and early nineteenth centuries and the emergence of such mechanical systems as James Watt’s steam engine, the lathe, and the Jacquard loom. However, a broader view of the notion, understood as any attempt to replace or reduce human labor through the introduction of artifacts, would begin the history much earlier. Such attempts, as we discuss below, have resulted in mixed outcomes—most recently, in the growing phenomenon of heteromation, understood as a new method of capital accumulation reliant on masses of free or low-cost human labor (Ekbia and Nardi 2017).

The notion of an algorithm, similarly, has a long history, going back centuries to the work of the Persian mathematician al-Khwarizmi (c. 780–850), the Latinized version of whose name lies behind the word algorithm itself. The revival of the term in the wake of digital electronics brought it from the realm of algebra to that of computing and operation, giving it the broad meaning of any effective method that can be expressed and executed within a finite amount of time and space. The modifier effective is paramount here, as we shall presently see.

As old as these ideas might be, however, their marriage is not. In fact, their integration can be considered one of the major technical marvels of our era, because it has enabled a transition from the automation of form to the automation of content. The marriage has given birth in our times to a rather tenacious offspring, which can be understood as a process consisting of four stages: monitoring, mining, marking, and manipulation. In the monitoring stage, data are collected about entities (objects, events, individuals, behaviors, and so on) in an environment increasingly equipped with sensing devices and information-gathering techniques—from health trackers and mobile technologies to closed-circuit cameras and online services. The vast amount of data collected through these mechanisms is aggregated and mined to discern patterns that are otherwise hidden from direct human perception. These patterns are then used as the basis for sorting those same entities that served as original sources of data, pigeonholing and marking them as belonging to certain relevant categories. Finally, these markings are in turn used to target and manipulate the behavior of the entities in ways imagined or desired by the developers and proprietors of algorithms. The cycle is repeated, feeding back on itself in the incessant four-stage process outlined here.
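To make the four stages concrete, the following minimal sketch traces one pass through the cycle. It is a schematic illustration written in Python, with invented entities, behaviors, and rules of our own devising; it does not describe any actual system.

```python
# A schematic sketch of the four-stage cycle: monitoring, mining,
# marking, and manipulation. All names, behaviors, and rules are invented.
from collections import defaultdict

def monitor(entities):
    """Collect observations about each entity (e.g., from sensors or logs)."""
    return {e["id"]: e["observed_behavior"] for e in entities}

def mine(observations):
    """Aggregate observations to find patterns hidden from direct perception."""
    patterns = defaultdict(list)
    for entity_id, behavior in observations.items():
        patterns[behavior].append(entity_id)  # group entities by shared behavior
    return patterns

def mark(patterns):
    """Sort entities into categories based on the mined patterns."""
    return {eid: category for category, ids in patterns.items() for eid in ids}

def manipulate(markings):
    """Target entities according to their markings (e.g., tailored prompts or prices)."""
    return {eid: f"intervention for '{category}' profile"
            for eid, category in markings.items()}

# One pass of the cycle; in practice the interventions generate new data,
# which feeds back into the next round of monitoring.
entities = [{"id": 1, "observed_behavior": "late-night browsing"},
            {"id": 2, "observed_behavior": "frequent travel search"}]
print(manipulate(mark(mine(monitor(entities)))))
```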

To understand this transition, we have to remember that earlier types of automation traded solely in form, in the sense of “the conversion of a work process, a procedure, or equipment to automatic rather than human operation and control” (Gerovitch 2003, 122). This could be said as much about the mechanical Jacquard loom of the nineteenth century as about the electronic feedback control of Ford production plants or the numerical-control machines of aircraft manufacturing in the middle of the twentieth century—they all automated form.

The situation changes, however, when we wed automation with algorithms, enabling the automation of content. The reason is that the divide between form and content is not as sharp and unbridgeable as it appears—“form blurs into content as processing depth increases,” as Douglas Hofstadter (1985, 22) noted many years ago. Or, as he went on to muse, “content is just fancy form,” by which he meant that “‘content’ is just a shorthand way of saying ‘form as perceived by a very fancy apparatus capable of making complex and subtle distinctions and abstractions and connections to prior concepts’” (Hofstadter 1985, 22). This happens, for instance, in music, where complex forms consisting of basic notations layered into delicate melodic and harmonic structures can arouse meaningful emotional responses in the listener. The same also happens in computer databases, where relations among entities are captured and structured so as to maintain a meaningful relationship with the outside world, endowing them with an almost magical efficacy. The magic plays out whenever you order an item online only to see its physical embodiment at your doorstep a few days later; whenever you transfer funds through your online banking site and those funds do in fact get relocated across accounts, vendors, lenders, institutions; or whenever you make a flight reservation online with all the attendant details (origin, destination, time, seat) to find yourself seated on the right plane at the right time later on. One can only imagine the surprise of someone from the precomputer era transported to the present moment, a surprise that brings home the spellbinding character of these feats.

The philosopher Jerry Fodor (1975, 68) gave voice to this magic when he asserted that “computations just are processes in which representations have their causal consequences in virtue of their form”—a claim that another philosopher, John Haugeland (1985, 106), framed in formalist terms as follows: “If you take care of the syntax, the semantics will take care of itself.” In other words: if you take care of the form, the content will take care of itself. Although these assertions predate our examples above, they provide the gist of what was once called the computational theory of mind—a theory that has met numerous challenges in the intervening years but is going through a revival in another guise.

The new guise goes by the name big data, along with associated terms such as machine learning, algorithmics, and analytics. Here, automated algorithms, churning through gargantuan amounts of data, perform tasks that, until recently, were either unfathomable or belonged squarely to the realm of the human mind. A few examples clarify the scope and scale of these developments.

The first comes from the area of drug discovery—for instance, the new technique called drug repositioning, which has to do with taking drugs developed for one disorder and “repositioning” them to tackle another (see Nosengo 2016). The National Institutes of Health in the United States, for example, has a library of roughly 450 drugs that have never reached the market despite having passed safety tests in humans—an untapped resource largely ignored until recently. One case involved a compound named ebselen, originally developed to treat stroke survivors, which has received new attention as a treatment for people with bipolar disorder. Other common cases include the repositioning of generic drugs, failed drugs (drugs that passed phase-one clinical trials but not phase two, because they did not show the same effects in humans as in animals), or even drugs that have been approved but not developed or manufactured for various research or business reasons. Tapping into these resources can save both time and money, cutting the development-to-deployment cycle from the current average of thirteen to fifteen years and $2–3 billion to roughly half the time and one-tenth the cost of a new drug. Although classic cases of repositioning occurred in the past by serendipity, automated algorithms are now being developed for a systematic search, including “big-data analytics that can now uncover molecular similarities between diseases; computational models that can predict which compounds might take advantage of those similarities; and high-throughput screening systems that can quickly test many drugs against different cell lines” (Nosengo 2016, 315).
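As a rough illustration of the first of these steps, the following toy sketch in Python ranks shelved compounds by the overlap between their known targets and the gene signature of a disease. The compounds, gene names, and scoring rule are all invented for the sake of illustration and do not reproduce the analytics that Nosengo describes.

```python
# A toy sketch of similarity-based drug repositioning: score compounds by
# the overlap between their known molecular targets and the genes implicated
# in a disease of interest. All data below are invented.

def jaccard(a, b):
    """Jaccard similarity between two sets of gene/target identifiers."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

drug_targets = {
    "compound_A": {"GENE1", "GENE2", "GENE3"},
    "compound_B": {"GENE3", "GENE4"},
    "compound_C": {"GENE5"},
}
disease_signature = {"GENE2", "GENE3", "GENE4"}

# Rank shelved or generic compounds by similarity to the disease signature;
# top-ranked candidates would then move on to computational modeling and
# high-throughput screening in a pipeline of the kind Nosengo describes.
ranking = sorted(drug_targets.items(),
                 key=lambda item: jaccard(item[1], disease_signature),
                 reverse=True)
for name, targets in ranking:
    print(name, round(jaccard(targets, disease_signature), 2))
```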

A second example belongs to the more subjective domain of romance and matchmaking. Online dating sites match people seeking romantic relationships by capturing so-called personality attributes and tastes and comparing them against those of thousands of others who are on the same quest, in a sense automating the age-old process of matchmaking through sophisticated algorithms. These algorithms essentially follow the same steps as the drug-discovery process outlined above—namely, uncovering similarities between people, predicting scenarios in which people can benefit from those similarities, and screening individuals who can potentially match a given profile. A new set of issues arises, however, when automated algorithms are applied to human affairs such as romance and dating. The sociologist Eva Illouz (2012, 177), referring to these algorithms as “technologies of choice,” considers the forms provided by dating sites to be aimed at making data standard, measurable, and comparable, rather than allowing people to express their unique qualities. Providing a mechanism of interchangeability through standardized profiles, these sites exemplify the bigger dilemma of extending the reach of automated algorithms beyond cases such as drug discovery, where they can be meaningfully effective, to those situations where they are not (Ekbia et al. 2015).
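A bare-bones sketch of such profile matching might look like the following, where standardized questionnaire answers are reduced to vectors and compared. The names, answers, and similarity measure are invented; no dating site’s actual algorithm is implied.

```python
# A schematic sketch of standardized profile matching: answers to a fixed
# questionnaire become vectors, and "compatibility" is reduced to a
# similarity score. All names and answers are invented.
import math

def cosine_similarity(u, v):
    """Cosine similarity between two equal-length answer vectors."""
    dot = sum(x * y for x, y in zip(u, v))
    norm = math.sqrt(sum(x * x for x in u)) * math.sqrt(sum(y * y for y in v))
    return dot / norm if norm else 0.0

profiles = {
    "alex":  [5, 1, 3, 4],  # scaled answers: outdoorsiness, nightlife, reading, travel
    "sam":   [4, 2, 3, 5],
    "robin": [1, 5, 2, 1],
}

seeker = profiles["alex"]
matches = sorted(((name, cosine_similarity(seeker, vec))
                  for name, vec in profiles.items() if name != "alex"),
                 key=lambda pair: pair[1], reverse=True)
print(matches)  # candidates ranked by similarity to the seeker's profile
```

The reduction of a person to a short vector of scaled answers is precisely the kind of standardization, measurability, and comparability that Illouz criticizes.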

The final example has to do with commodity pricing in the marketplace. This process, until recently, was straightforward enough, with the merchant or seller assigning a relatively stable and visible price to any given commodity. Today online retailers, and increasingly brick-and-mortar ones, engage in what is called “dynamic pricing,” through which the price of the same item can change based on the time or location of purchase, the availability of competing items, or the desirability of the product. The airline industry pioneered this practice, which has now spread to all kinds of industries, including retailers such as Amazon and travel websites such as Orbitz, as well as banks, insurance companies, transportation platforms such as Uber, and even local supermarkets (Ezrachi and Stucke 2016, 90–91). Amazon reportedly makes price changes 2.5 million times a day based on criteria such as those mentioned above.
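The logic of such pricing can be suggested by a toy rule like the one below, in which a base price is scaled by demand relative to supply and adjusted against the cheapest observed competitor. The multipliers, caps, and numbers are arbitrary illustrations rather than any retailer’s or platform’s actual policy.

```python
# A toy dynamic-pricing rule: scale a base price by demand relative to
# supply, within floor and ceiling multipliers, and undercut the cheapest
# observed competitor. All numbers are arbitrary illustrations.

def dynamic_price(base_price, demand, supply, competitor_price=None,
                  floor=0.5, cap=3.0):
    multiplier = min(cap, max(floor, demand / max(supply, 1)))
    price = base_price * multiplier
    if competitor_price is not None:
        price = min(price, competitor_price * 0.99)  # slightly undercut the cheapest rival
    return round(price, 2)

print(dynamic_price(10.00, demand=80, supply=100))    # slack demand lowers the price: 8.0
print(dynamic_price(10.00, demand=250, supply=100))   # demand outstrips supply: 25.0
print(dynamic_price(10.00, demand=250, supply=100,
                    competitor_price=22.00))          # a rival priced at 22.00 pulls it to 21.78
```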

These three examples from very different domains—science, romance, and commerce—speak to a much broader development that is largely enabled by the conjoining of automation and algorithms. Despite their differences, each of these examples provides a glaring embodiment of the four-stage process described above. This relatively generic process, diversely manifested and implemented in all domains of contemporary life, explains the unfathomable power of automated algorithms—a power that, until recently, could only be attributed to the mythical figures of Olympian gods. As with the gods of ancient civilizations, however, power here is fraught with pitfalls and paradoxes, some of which are already evident in our examples. Here, they manifest themselves most clearly in the indispensability of humans in heteromated systems, and in how such systems actually work against the majority of human beings.

To see how, we need to take a closer look at our examples. We already noted the pitfalls of automated algorithms in the case of online dating sites. Even in the case of drug discovery, where the process works relatively well, potentially disrupting big-pharma business models, the ultimate benefit might end up accruing to business entities rather than to the consumer. The pitfalls are most evident, however, in commodity markets. Dynamic pricing does seem to work to consumer advantage in certain situations—for instance, when your neighborhood baker or grocer sells bread at a lower price at the end of the day. This appears to have been the original justification offered by companies such as Uber, which provide not only competitive fares to people seeking car services but also advantages such as rapid availability and price transparency. On the flip side, however, these same practices can lead to price discrimination or so-called surge pricing in situations in which demand exceeds supply—for example, during a snowstorm in New York, when some Uber rides cost 8.25 times the normal price, leading some commentators to refer to Uber’s practice as an “algorithmic monopoly” (MacMillan and Demos 2015).

More importantly, the relationship of these companies to their workers and to those affected by their practices—Amazon’s warehouse employees, Uber’s drivers, taxi drivers, and so on—and the companies’ lack of responsibility for their safety, well-being, and welfare together constitute a major source of concern among those who observe these relationships closely. The recent suicide of a New York taxi driver in response to Uber’s business model provides dramatic and tragic testimony to the pitfalls of these arrangements and the “dark side of the gig economy” (Bellafante 2018). These are just a few examples of a growing phenomenon that we call heteromation, referring to a new economic arrangement in which humans are put on the margins of machines and algorithms, providing labor in unrewarded or minimally rewarded ways (Ekbia and Nardi 2017). In Uber’s case, the company gains the use of an automobile owned and operated by a minimally compensated driver who labors without benefits (unemployment compensation, health insurance, or vacation). Uber provides the algorithms to link customer and driver, but drivers are on their own otherwise, assuming all risk. Even with the minimal compensation Uber has arranged, the company is planning to dispense with drivers altogether by replacing them with automated vehicles, putting faith in algorithms to run the business. In the process, thousands, if not millions, of taxi drivers, truck drivers, and others might lose the sources of their subsistence and survival.

Yet it remains to be seen whether and for how long these arrangements can be sustained. As the philosopher George Caffentzis (2013, 72) has remarked: “The capitalist class faces a permanent contradiction it must finesse: (a) the desire to eliminate recalcitrant, demanding workers from production, (b) the desire to exploit the largest mass of workers possible.” Heteromation offers one way to finesse this contradiction, extracting value from masses of people who are never hired in the first place. Social media, gaming, video production, search, health tracking, the writing of product/film/book/travel reviews, and so on, generate immense wealth for companies such as Google, Amazon, and Facebook. What are the benefits of this economy, we might ask, for the average person?

With respect to labor, we find ourselves in a period of historically low unemployment, even with so much automation. Yet increasingly, jobs are ill-paid, unstable, and without significant (or any) benefits. Starbucks, for example, does offer some employee benefits, but the average nonmanager wage is $9.43 an hour. Fifty weeks of work a year at this rate, at forty hours per week, amount to $18,860. Tips may add another $1,300 per year (Kline 2016). The availability of many jobs shifts geographically, demanding that workers chase them. Such jobs (in fast food, construction, nursing homes, coffee bars, Walmart, and so on) are not heteromated, but they go hand in hand with heteromation as sources of cheap, mobile labor. Heteromation allows people to work anywhere as long as they have a computing device. Through this labor, we produce content to be turned into data that is packaged and sold, we identify ourselves as targets for advertising, and we create communities that drive product sales, such as gamers and fan-fiction audiences. The emergence of heteromation seems to verify Karl Marx’s labor theory of value once again.

In summary, the marriage of algorithms and automation has indeed proved a powerful development, bringing content into the fold of automatic systems. In the world we live in, and in the capitalist economy that has come to govern this world, however, that marriage has given birth to many unwanted and untoward children: an increased potential to take advantage of humans in vulnerable situations, the growing marginalization of labor, a lack of protections for workers or of thought for their futures, and so on. Algorithms are meaningfully effective in cases such as drug discovery, but they can complicate human relations—whether in the inequities of surge pricing or by underwriting increasingly precarious conditions for workers. The question to address as we go forward is not whether automation and algorithms are inherently harmful—clearly, they are not—but why we see so many specific implementations that undermine the social fabric.

REFERENCES

Bellafante, Ginia 2018 “A Driver’s Suicide Reveals the Dark Side of the Gig Economy.” New York Times, February 6. https://www.nytimes.com/2018/02/06/nyregion/livery-driver-taxi-uber.html.

Caffentzis, George 2013 In Letters of Blood and Fire: Work, Machines, and the Crisis of Capitalism. Oakland, Calif.: PM Press.

Ekbia, Hamid R., Michael Mattioli, Inna Kouper, Gary Arave, Ali Ghazinejad, Timothy Bowman, Venkata Ratandeep Suri, Andrew Tsou, Scott Weingart, and Cassidy R. Sugimoto 2015 “Big Data, Bigger Dilemmas: A Critical Review.” Journal of the Association for Information Science and Technology 66, no. 8: 1523–46. https://doi.org/10.1002/asi.23294.

Ekbia, Hamid R., and Bonnie A. Nardi 2017 Heteromation, and Other Stories of Computing and Capitalism. Cambridge, Mass.: MIT Press.

Ezrachi, Ariel, and Maurice E. Stucke 2016 Virtual Competition: The Promise and Perils of the Algorithm-Driven Economy. Cambridge, Mass.: Harvard University Press.

Fodor, Jerry A. 1975 The Language of Thought. New York: Thomas Y. Crowell.

Gerovitch, Slava 2003 “Automation.” Encyclopedia of Computer Science, 4th edition, edited by Anthony Ralston, Edwin D. Reilly, and David Hemmendinger, 122–26. New York: Grove’s Dictionaries.

Haugeland, John 1985 Artificial Intelligence: The Very Idea. Cambridge, Mass.: MIT Press.

Hofstadter, Douglas R. 1985 Metamagical Themas: Questing for the Essence of Mind and Pattern. New York: Basic Books.

Illouz, Eva 2012 Why Love Hurts: A Sociological Explanation. Malden, Mass.: Polity.

Kline, Daniel B. 2016 “How Much Does Starbucks Pay Its Workers?” Motley Fool, July 19. https://www.fool.com/investing/2016/07/19/how-much-does-starbucks-pay-its-workers.aspx.

MacMillan, Douglas, and Telis Demos 2015 “Uber Valued at More Than $50 Billion.” Wall Street Journal, July 31. https://www.wsj.com/articles/uber-valued-at-more-than-50-billion-1438367457.

Nosengo, Nicola 2016 “Can You Teach Old Drugs New Tricks?” Nature 534: 314–16. https://doi.org/10.1038/534314a.