New AI technologies are continuously being developed to enhance labour productivity and reduce labour costs at work. By automating mental and creative labour, companies hope to become less dependent on human resources for financial success. While workers often criticize the influence of AI on their work, Big Tech and the companies that implement these technologies often present this outcome as the inevitable price of progress. However, as shown by Daron Acemoglu and Simon Johnson, recent winners of the Nobel Prize in Economics, the effects of new workplace technologies are not set in stone but are the outcome of power differences in labour markets. Depending on the bargaining power of workers, the influence of democratic governance on technology investments, or consumer boycotts, the benefits of technological progress can be distributed in ways that benefit more stakeholders than only Big Tech and employers. If the latter meet resistance from other stakeholders, they must share the spoils of progress more equitably. Furthermore, Matteo Pasquinelli’s The Eye of the Master highlights how AI technology relies on the exploitative absorption of human intelligence. AI technology requires enormous databases composed of information about human behaviour in order to simulate and automate these behaviours. Without data inputs from the workers whose bargaining power is eroded by automation, these AI systems lack the information required to generate intelligible output.
In my paper, I wish to delve deeper into Pasquinelli’s usage of Karl Marx’s notion of the general intellect to defend a case for the democratization and nationalization of workplace AI. In his Grundrisse, Marx described industrial workplace technologies as automated systems that absorbed the artisanal knowledge and skills of craftsmen in order to deskill the workforce. The industrial general intellect parasitized human intelligence to grant more coordinative powers to the managers in charge of the labour process. A similar threat is currently at play in the implementation of workplace AI: new technologies extract human knowledge from workers and embed it in software unilaterally controlled by managers and data technicians. In this context, democratizing AI implies granting workers a voice over how their data is used to transform the future of work. Nationalization offers an expedient strategy for institutionalizing democratic control over workplace AI development and implementation. Most political initiatives propose merely to regulate the implementation of AI, while leaving private property and decisions over financial investments and technological development to corporate agents; I argue that this regulatory approach is insufficient to fully grant workers democratic control over workplace AI. Nonetheless, the nationalization of AI requires a strategic rethinking of AI governance in order to avoid the specter of rigid communist planning. While nationalization offers opportunities for democratic control, it also carries the risk of smothering technological innovation through bureaucratic planning.
All play and no work? AI and existential unemployment
Gary David O'Brien
Lingnan University, Hong Kong S.A.R. (China)
Recent developments in generative AI, such as large language models and image generation software, raise the possibility that AI systems might be able to replace humans in some of the intrinsically valuable work through which humans find meaning in their lives – work like scientific and philosophical research and the creation of art. If AIs can do this work more efficiently than humans, this might make human performance of these activities pointless. This represents a threat to human wellbeing which is distinct from, and harder to solve than, the automation of merely instrumentally valuable activities. In this paper I outline the problem, assess its seriousness, and propose possible solutions.
In section 1 I lay out my assumptions about the development of AI, and specify the kind of work I am interested in. I assume that AIs will continue to develop to the point at which they can outperform humans at most or all tasks. I also assume that the economic problems of automation will be solved. That is, if most work, in the sense of paid employment, is automated, human beings will still be able to support themselves, for example with a Universal Basic Income. My concern is with meaningful work, by which I mean the exertion of effort to attain some non-trivial goal, and I contrast this with play. I argue further that research and the creation of art are particularly important forms of meaningful work.
In section 2 I argue that AI could reduce our incentives to perform meaningful work, and that this might result in a great deskilling of humanity. This would be bad, first, if we accept a perfectionist element in our theory of wellbeing; second, the deskilling of humanity might result in our civilization becoming locked into a suboptimal future.
In section 3 I argue that, even if humans continue to do meaningful work in the automated world, the mere existence of AI systems would undermine its meaning and value. I critique Danaher’s (2019a) and Suits’ (1978) arguments that we should embrace the total automation of work and retreat to a ‘utopia of games’. Instead, I argue that the threat to meaning and value posed by AI gives us a prima facie reason to slow down its development.