Posted by: Em | SEP-29-2021
What is the problem of automation, especially in relation to so-called “AI,” labor, and culture? It struck me while reading Wageless Life by Ian G.R. Shaw and Marv Waterstone that perhaps a major issue is that both critiques and celebrations of automation lack a clear idea of what it is. That lack of clarity is deliberately fostered by those who own, develop, and capitalize upon automated systems, platforms that automatically curate their content, and unaccountable automated decision-making processes such as the AI systems used in hiring, credit scoring, and immigration. Being unclear, in either direction, plays into their hands.
I'm sympathetic to the argument put forward in Wageless Life: that we all need to unlearn, and fight for alternatives to, the default status we are born into, namely dispossession and the need to “earn our way” in the world, usually through exploitative wage labor. Capitalism is a historical anomaly and inherently unsustainable, and destroying it is key not only to halting the social disposal of surplussed human populations and even broader environmental devastation, but also to easing the alienation suffered by those who haven't (yet) been totally cast off by capitalism and to opening up space for alternative systems of value. The analysis they provide along these lines often rings true, and presenting alternatives at this point in time is clearly necessary.
One scenario the authors propose to emphasize the urgency of the problem is faulty, however, and weakens the overall argument of the book. They tell the reader to imagine (or hell, it's practically already happening) a world of “robot feudalism,” where giant tech corporations with the capacity to make and own the robotic and AI-powered means of production can dispose of workers almost completely. They draw up a sort of obverse Astro Boy scenario in which, instead of functioning as a metaphor for exploited groups, anthropomorphized robots “take” even the high-paying jobs, a check-cashing labor force in and of themselves, with notable parallels to the anxieties about hyper-technological, efficient foreign workforces that informed, for example, popular cyberpunk settings and aesthetics.
Basically, this is not how automation has ever worked, and, in my opinion, the chances of it ever working in a way remotely resembling this look slimmer with every news story about delivery or ridesharing apps (which primarily automate the distribution of bits of labor), Amazon warehouses, or Tesla's “self-driving” cars. In these stories, the human worker is always essential: working harder, adapting faster, developing new forms of knowledge, monitoring and correcting the “cutting-edge” AI, and so on, despite being devalued and abused into extreme precarity.
Why, then, does this fiction of automation fully independent of human labor, coming soon if not already here, have such a hold on the ideas of ostensible materialists? It shows up either as a celebration of how automation will definitively end the need for work in any form, or as intense paranoia that the need for the human will be eliminated altogether, or that we have set off a “dark age” outside of human control. Unfortunately, I think writers who take on this narrative, in a positive or negative sense, are primarily drawing from how the owners of the tech corporations most involved in reshaping labor talk about themselves, rather than observing how automation interacts with labor right now.
Automation in the spectacular technological sense has always served to disperse or conceal labor, rather than do away with the need for it altogether. Automation studies has gotten a huge amount of mileage out of the example of the Mechanical Turk, a “chess-playing robot” that was not even a specific “man in a suit,” but an interchangeable collection of chess players game enough to play the part, allowing the marvel to tour around Europe to oohs and aahs. Even more illustrative is the example Edward Jones-Imhotep cites of the dumbwaiters at Monticello, which made dishes appear and disappear like magic, conveniently ensuring that slaves would not even be silently present at meals. You can make the argument that these were instances of false “automation” in the far past that were pointing towards an imminent horizon of the elimination of the worker. But that horizon has receded as much as technology has advanced towards it, while owners and bosses keep repeating the same concealing fictions, and the potential rewards of automation, in terms of time saved and production increased, never actually reach anyone but tech CEOs.
Amazon took the name of the Mechanical Turk for its distributed piecework factory, which illustrates capitalism's strategy for human labor going forward. The decision has an obvious winking irony to it, but it also conveniently displaces the Wikipedia article about the fake automaton as the top search engine result for the phrase, strategically reshaping cultural associations and collective memory. Automation and AI as self-starting, uncontrollable, and autonomous: that is how these corporations would like to rewrite both past and present.
Much of the work done on Amazon's MTurk is cleaning or generating data sets, which are then paired with programmed algorithms to make what is called “AI.” Many tasks also involve monitoring, curating, and correcting the outputs of these systems, which have no deductive or critical capabilities of their own. While the centralization of production that the owners of these automation systems opt for can make many jobs scarcer, more precarious, and more alienating, the systems still rely heavily on human labor, creating, in many cases, newly immiserating and unstable work for the inconsistently employed, primarily to maintain a flattering illusion of technical wizardry for the capitalist class. Labor doesn't disappear, but it is broken into pieces so small that if you blink, you may miss it.
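To make that pairing concrete, here is a rough sketch of what turning crowdworker judgments into an “AI” usually amounts to. It's entirely my own illustration, not anything specific to Amazon's systems; the file and column names are made up, and the libraries are just standard off-the-shelf ones. The point is that the human labels are the substance, the “intelligence” is a stock classifier fit to them, and uncertain outputs get routed back to yet more human review.

```python
# Sketch only: hypothetical filenames and columns, standard pandas + scikit-learn.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Every label in this file was produced by a person paid a few cents per judgment.
labeled = pd.read_csv("mturk_batch_results.csv")           # hypothetical export
texts, labels = labeled["text"], labeled["human_label"]    # hypothetical columns

# The "AI": a generic text classifier fit to those human judgments.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(texts, labels)

# Low-confidence predictions get routed back to people to monitor and correct,
# so the human work never disappears; it just gets smaller and harder to see.
confidence = model.predict_proba(["some new piece of text"]).max()
needs_human_review = confidence < 0.8
```

None of this is magic: every step that looks “automatic” is scaffolded by human judgment before, during, and after.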
Tracking what bosses and owners say about the potential of AI and automation, rather than looking at its history, is like being duped into a shell game. If your eyes only follow the flashiest movements of the shells, you will never correctly locate the ball. In fact, to even seriously listen to these claims is perhaps to lose from the start. Lose sight of the location of labor, and the wage/r will be easily stolen as well. The “more work for mother” problem becomes the smart-home “Big Mother,” and everything from self-checkouts to social media apps has us doing a bit of extra work that corporations can cut corners on, or manufacturing a bit more data for platforms to capitalize upon, despite none of it adding up to an actual job for any of us.
Corporations benefit from this labor becoming so diffuse and piecemeal that it is both impossible to regulate and nearly impossible to live off of, all while they sustain a fiction of having created something magically autonomous and simply too complex for mere mortals to understand. This manifests clearly in how slowly government regulation emerges to make exploitative, precarious gig work even remotely humane and livable, but the broader conclusion is that we are all being funnelled into tolerating these working conditions, or something like them, and often also into performing a level of this labor for free.
This form of labor is diffused into billions of tiny droplets, like a vapor, so it's hard to see where it begins or ends. It makes no difference, though, to corporations that have the resources and centralized infrastructure to suck all of the moisture out of the room regardless. In fact, they benefit from this diffusion, which muddies understanding of how the technology works (leading to superstitious, fantastical conclusions) and also muddies the demand-making process and the worker's own understanding of their work. It is harder to break free from the moral and self-worth baggage of “earning a living” not because work is harder to come by to the point of elimination, but because the specific types of work required to be a good capitalist subject are in the air we all breathe, even when we're legally un/der-employed or long-term “unemployable.”
This is a much more interesting and historically grounded problem to wrestle with than robotic paradises or apocalypses. I'm especially interested in these problems because I've realized they've emerged as a consistent theme in my own fiction writing. In the short visual novel FLESH/CIRCUIT, the main character is functionally surplussed on Earth but seeks a precarious and highly surveilled role as a professional party guest on a low-orbit celebrity penthouse complex. Providing “interesting-ness” or “sexiness” is an implicit background task of being any sort of cultural worker that institutions like academia and the art world rely on extracting value from, and it has already been more explicitly formalized in work like Andrea Fraser's Untitled (2003) and Hito Steyerl's Lovely Andrea (2007), as well as in the ubiquitous impetus to brand oneself as an influencer or content creator and act optimally in line with the affordances corporate media platforms make for this activity. As billionaires find it increasingly difficult to hang on to their artpop girlfriends and the resulting clout and connections, I can foresee this becoming more explicitly a “(bull)shit job” as well.
The protagonist also meets the prototype of a forthcoming android servant, one based on the imaging, modification, and replication of an actual human consciousness. He was “acquired” through a combination of exploiting an accidental death and the incursion of corporate funding sources into academic research, and was therefore kept secret and presented as an authentic “AI.” (I began writing this story before reading the short story Lena, but it heavily informed how I ended up handling the themes here!) The protagonist helps him escape, and the two are pursued by an ex-cop who got sick of her alienating and repetitive job monitoring and classifying minuscule bits of surveillance data to enforce the law “remotely.”
In a different story, the main character finds herself on a long-haul flight to Jupiter to perform in-person control and maintenance under NDA, work that will allow a remote mining colony to appear fully automatic and autonomous (again, I swear I started writing this one before Tesla's fake robot that was literally an actor in a morph suit). Amidst crushing boredom, she realizes that the contract she signed doesn't give a clear answer on what part of this multi-year mission, if any, is not technically “work,” and she gets her crewmates to develop tentative forms of collective behaviour and time theft as a small act of resistance against work fully consuming their lives. This creates a tiny, almost-utopian community that can only exist temporarily and under those peculiar conditions, and she writes the story in hindsight, coping with its dissolution.
Alongside the ridiculously overreaching contract, which is not too far-fetched if you actually read what you sign when you start a new job or agree to a new lease or whatever, another element that gives the protagonist's job its desired-but-resented character is that it is one of the few ways for her to access healthcare that (kind of) affirms her bodily autonomy. These two stories take place in the same speculative future and are ways, for me, of thinking through and expressing relationships to work, technology, desire, freedom, other people, and so on. I think these things are better entangled and historically contextualized than considered in isolation. Both stories depict points where narratives about technology bump into and conflict with human experience: the labor, the emotional investment, the exploitation, and the alternatives opened up behind the scenes.
Characterizing particular types of technology as uniquely bad has multiple things in common with the FALC-y (fully automated luxury communism) future where our needs meet themselves through magical infrastructures that let us just contemplate our jewelled tortoise of a state-subsidized Disney+ account all day: both are idealizations based on the narratives tech industry owners use to talk themselves up, both fail to engage with the histories of these technologies, and both are pretty bland visions of a future, imo anyways. Historically contextualizing these issues allows us to identify systems of exploitation and ideology being built into emerging technologies, without throwing out the forms of connection, knowledge, expression, and new capability people have often wrested from tech, usually against the expectations and permission of capital owners.
By focusing on individual technological advances as a unique threat and not as a further development of existing problems, problems we already know how to think about, the text risks leaving readers with takeaways tainted by paranoia and reaction. A tradcellerationist scumbag in my inbox trying to get an article out of me phrased his request in terms of “non-invasive technology” and “harmony between human and machine and nature”; they have already learned to speak the language of tech critique. Which is not to say all these ideas are bad; any position can be cynically co-opted. However, we have to examine the foundations of these arguments, and how they impact all of us, to avoid making fundamentally harmful compromises or alliances.
An idealist critique of tech is also tied to a nostalgia for tradition and locality that can become exclusionary and harmful. The surprising generativity, wilfulness, and liveliness of consciousness itself, fully realized in how we have to relate to others, is capable of defying all the constructed categories that enforce normativity and shore up capitalist extraction and management of populations, just as a hammer is a tool for both putting in and pulling out nails.
This is why I focus a lot of my practice and writing on neglected or stigmatized pockets of cultural space. Human culture is already one of the most stratified and systematically dispossessed activities within capitalism, and The Art World especially is like a terrarium for trialling newer and more aggressive notions of commodification and financialization. But, despite this, in neglected, grey-legal, and ephemeral corners, the people most alienated by the values and priorities of capitalism go on producing work that reaffirms the value of life, work that touches me. And for a lot of people, I think, this observation is a convincing argument to build on, one that doesn't rely on regression or a dusty idea of tradition. The foundation worth moving forward on is not the idea that there is a specific natural order to conform to, or that some point in the past was better, but that we can make a future where people can more equally and broadly exercise and enjoy their unique capabilities. To act as if machines and computer algorithms have already taken care of that is a low estimation of it.