AI, Labor, and Incentivizing Automation
When we think about the role of artificial intelligence (AI) and automation in the current workplace and in the workplace of the future, it is important to situate our analysis within the wider context of historical technological development. AI, while disruptive, is not unprecedented in this context, and to prepare for its future effects, we can look to the past for guidance in how to conceptualize its impact. In this paper, I first examine technology through a historical lens, including previous fears of automation and technological unemployment, as well as their actual effects. I also provide a framework for thinking about the economic effects of technology by exploring the four mechanisms through which it impacts employment: substituting for labor, expanding the sectors responsible for innovation, complementing certain labor, and lowering the cost of production. I then show that the decision to develop automated technology has often been guided by cost-saving practices in response to concentrations of wealth in particular domains, or high wages relative to capital prices. Ultimately, I argue that AI will develop most quickly in sectors with sustained investment in research and development, which is heavily influenced by labor wages relative to capital costs. While a full treatment is outside the scope of this paper, I note that if this trend continues, a policy mechanism to mitigate the economic and political implications of widespread automation will likely be necessary.
Technology has always been closely linked with employment, and with economic activity more generally. Particularly since the first Industrial Revolution, technological innovation has had a disruptive effect on how economies are organized and how labor operates, creating widespread fears about the fate of working people (Akst 2013). The classic example of the printing press goes back even further than the Industrial Revolution—this new device, at the time, meant the end of employment for many people whose jobs required writing and copying by hand. But we now know that in the long run, the printing press was instrumental to subsequent economic growth, and today very few would argue for a return to hand-lettering. Eventually hand-lettering died out as a field, but many other fields arose in its place, spurred by opportunities the printing press created. Despite the anxiety and disruption this caused in the short term, the economy and the well-being of its individual members benefited greatly from the invention. Recent research estimates that cities that adopted the printing press experienced economic growth up to 60 percentage points higher than those that did not (Dittmar 2011, 1133).
We can also find more recent examples of this dynamic. The Industrial Revolution, beginning around the middle of the 18th century in Europe, fundamentally changed the kinds of jobs available to workers. In England, the epicenter of the transformation, wages were relatively high, giving owners of capital an incentive to develop technology that could substitute for costly labor. According to Robert Allen, the adoption of technologies like the spinning jenny, which partially automated the process of spinning thread, was due to England having high wages relative to capital costs (Allen 2007, 12). The return on investment in research and development of automating technologies was therefore high—an early demonstration of how considerations of labor and capital drive the process of technological innovation.
According to Allen, technologies like the spinning jenny “increased capital requirements while reducing labour [sic] requirements. This meant that the incentive to adopt the new techniques was greatest where wages were highest relative to capital costs” (Allen 2007, 2-3). At the time of its invention, in the mid-1700s, Britain had high relative wages—a spinner in France could earn 9 sous per day, whereas a spinner in England could earn 12.5. More importantly, in Britain, wages relative to capital costs were up to two and a half times greater than they were in France—meaning labor was so cheap in France that it did not make sense to develop the spinning jenny, or even to adopt it after it had been invented.
Even though the spinning jenny increased productivity, the cost of investing in the materials to produce it was still higher than the wages it would save in France (Allen 2007, 6). Similarly, the impetus for modern-day automation technologies is a concentration of wealth in labor—whether relatively high wages or a large number of low-wage workers—that justifies investing in technology to disrupt it. This can occur in low-wage jobs with many employees, such as the transportation and service industries, or in middle-to-high-wage fields like software programming and law. What is relevant is the relationship between labor costs and capital costs.
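Allen's adoption logic can be made concrete with a small illustration. The sketch below compares the annual payoff of adopting a labor-saving machine in two economies; only the daily wage figures (12.5 vs. 9 sous) come from the discussion above, while the capital cost, number of workers replaced, and workdays per year are hypothetical numbers chosen for clarity.

```python
# Toy illustration of Allen's adoption logic: a labor-saving machine pays off
# only where wages are high relative to the cost of capital. The annual
# capital cost, workers replaced, and workdays are hypothetical; only the
# wage gap (12.5 vs. 9 sous per day) comes from the text.

def adoption_payoff(daily_wage, workers_replaced, annual_capital_cost, workdays=300):
    """Annual labor savings minus the annual cost of owning the machine."""
    labor_savings = daily_wage * workers_replaced * workdays
    return labor_savings - annual_capital_cost

# Same machine in two economies; capital is assumed somewhat cheaper in
# England, mirroring the wage/capital-cost gap described above.
england = adoption_payoff(daily_wage=12.5, workers_replaced=2, annual_capital_cost=6000)
france = adoption_payoff(daily_wage=9.0, workers_replaced=2, annual_capital_cost=7500)

print(england)  # positive: adoption pays in England
print(france)   # negative: cheap labor makes the machine a bad investment
```

The exact figures do not matter; the point is that the same invention can clear the investment hurdle in one economy and fail it in another, purely because of relative factor prices.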
During any economic transformation, there will be those who are hurt in the short term. Knowledge about certain industries and types of labor is passed down through generations, creating a kind of stability that becomes upended when new forms of labor emerge. As Joel Mokyr points out, technological change can have many effects on labor, including the destruction of traditional labor hierarchies, changes in physical work environments, relocation of jobs and the subsequent breaking up of families and communities, and, of course, unemployment (Mokyr 1992, 325). One natural response to these changes is resistance, which can take the form of tariffs and regulations, political activism, and violent actions like the physical destruction of technology, exemplified most famously by the Luddites. But over time, as cheaper capital displaced labor, goods and services dropped in price. This had the effect of raising everyone’s real income: people were able to buy more things, increasing the demand for yet more goods—which created more jobs to produce those goods. In this way, there is no “lump of labor,” or fixed amount of work to be done within an economy, but merely an ever-shifting labor market that changes to meet new demands as they arise (Haldane 2015).
In order to better understand modern conversations around automation, it is helpful to look at the dialogue regarding technological and economic changes throughout history. According to Daniel Akst, in the mid-20th century, high unemployment rates fueled popular fears about the consequences of automation, as well as of imported foreign goods. A common consolation was the idea that manufacturing workers who lost their jobs to automation (or to trade) would quickly rebound and be re-situated in a new field—but this did not usually happen, as these workers typically just dropped out of the workforce (Akst 2013). As Wassily Leontief famously quipped, the idea that humans will always be indispensable to production is as misguided as someone of an earlier era thinking that horses were indispensable to agricultural labor. That humans have heretofore always been part of labor does not mean that they always will be; if computers become able to perform non-routine tasks, humans may go the way of horses. Leontief was also skeptical of re-training programs, believing that labor placement programs must take into account the skills and tasks that will be valuable in the long term, not merely what is currently valuable (since those tasks could also soon be automated) (Leontief 1983, 3-4).
In general, it is helpful to think of four ways that technology impacts employment. First, technology directly substitutes for labor, raising productivity and lowering prices. Second, it expands the sectors responsible for technological innovation, increasing their demand for labor and creating new jobs. Third, and indirectly, technology complements certain labor, leading to improvements in those sectors, which then expand and increase labor demand. Finally, technology lowers the costs of production, and in turn prices, allowing consumers to shift some of their spending to other discretionary goods and services, which also increases demand for (new) labor (Stewart et al. 2015, 1). In thinking about how automation has already impacted and will continue to impact employment, this framework, grounded in historical experience, provides a useful guide.
For all the panic about the advent of “artificial intelligence”, there is surprisingly little agreement on what the term actually encompasses. Does it refer to any machine that completes a task that would have been done by a human? Does it have to display human characteristics beyond intelligence? One framework for conceptualizing AI deals with four notions: acting humanly, thinking humanly, thinking rationally, and acting rationally. AI can be modeled on any of these paradigms. The Turing Test accurately captures the acting humanly paradigm—it is concerned with whether a human, in blindly interacting with an AI, is able to tell it is interacting with an AI rather than another human. In order to pass the Turing Test, the AI must be able to master natural language processing, which allows it to successfully and seamlessly speak the given language; possess knowledge representation, so that it can record what it takes in; demonstrate automated reasoning, using its stored information to develop new thoughts; and include machine learning, so that it can learn from new information, data, patterns, and circumstances (Norvig and Russell 2009, 2).
So where does this leave us? For the sake of convenience, I adopt Nils J. Nilsson’s definition of AI, which is “…that activity devoted to making machines intelligent, and intelligence is that quality that enables an entity to function appropriately and with foresight in its environment” (Nilsson 2010, 13). While relatively broad and vague, this definition allows us to account for a diversity of AI manifestations, including both those that are programmed to perform a specific task as well as those that possess “general” intelligence. It also allows space for AI that is not necessarily programmed to merely imitate human intelligence in its different forms, but to develop and express its own.
Since the beginning of its development, AI has been based on a few different systems of thinking: symbol systems, expert systems, and machine learning. The first efforts at creating AI relied on symbol systems, in which the AI would string together logical calculations (based on numbers, letters, and other “symbols”) to reach a conclusion. As problems grew more complex, however, symbol systems showed their limitations—the space of possible calculations became too large to make this a viable method. The next efforts used expert systems, which built on symbol systems by consulting experts in particular domains, who would narrow down the possible calculations and make the computing process much easier. This method was limited, however, because each machine required expensive, specialized programming for its particular field, offering little advantage over using a real person. Finally, the approach that has come to dominate AI research is machine learning, in which the programmer feeds a large number of relevant examples to the machine, which in turn learns to build models based on those examples. Thus, the programmer uses data to “teach” the AI. With innovations in processing power, as well as the recent accumulation of vast amounts of data, machine learning is currently the most promising paradigm for programming AI (Kaplan 2016).
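The contrast between hand-coded rules and learning from examples can be sketched in a few lines. The toy program below is a minimal illustration of the machine-learning paradigm—here a simple nearest-neighbor model—in which the task, features, and labels are all invented for the example; no rule is written down, yet the program classifies a new case by generalizing from the examples it was given.

```python
# Minimal sketch of "learning from examples": instead of hand-coding rules
# (symbol systems) or encoding expert knowledge (expert systems), we supply
# labeled examples and let the program build a model from them. The "model"
# here is 1-nearest-neighbor; the data are invented for illustration.

def train(examples):
    """For nearest-neighbor, 'training' is simply memorizing the examples."""
    return list(examples)

def predict(model, point):
    """Label a new point with the label of the closest remembered example."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(model, key=lambda ex: dist(ex[0], point))
    return nearest[1]

# Hypothetical task: classify documents by (word count, digit ratio) as
# prose ("news") or tabular data ("spreadsheet").
examples = [((900, 0.01), "news"), ((1200, 0.02), "news"),
            ((40, 0.60), "spreadsheet"), ((55, 0.75), "spreadsheet")]
model = train(examples)
print(predict(model, (1000, 0.05)))  # -> news
```

Real machine learning systems use far richer models and vastly more data, but the division of labor is the same: the programmer supplies examples, and the generalization is induced rather than programmed.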
AI systems fall into two types: narrow (sometimes called weak) AI and human-level (sometimes called strong) AI. The former is programmed only to complete a specific task—examples include a self-driving car, a computer program that plays chess, or a Roomba. The latter, which is much more difficult to develop and thus rarer, is meant to resemble the holistic capabilities of humans: it is supposed to be able to “think,” and to learn and develop by analyzing new information (Norvig and Russell 2009, 1020-1026). Thus far, automation has been preoccupied with narrow AI—machines developed to complete a particular task at a lower cost relative to human labor. Regarding workplace productivity and employment, both types of AI are relevant.
The narrow AI already in use has had a demonstrable impact on the workplace. In the last 30 years, wages have stagnated, inequality has risen, and employment growth has been concentrated primarily among low-skill (and thus low-wage) jobs. This is, in part, a function of how the kinds of technologies adopted in workplaces have changed the tasks workers perform, and in turn the demand for human labor. Thus far, AI has been very good at performing “routine” tasks—those that follow explicit commands—but not as good at “non-routine” tasks, which are not yet well enough understood to program. This has also created a labor market polarization, in which computers complement low- and high-skill jobs but substitute for middle-skill jobs. Many of the low-skill jobs that have been created are non-routine, as routine jobs are increasingly automated. An estimated 60% of the relative demand shift toward college-educated labor between 1970 and 1998 can be attributed to the automation of routine tasks, both manual and cognitive (Autor et al. 2003, 1279).
But what happens when middle- and higher-skill tasks, including non-routine ones, can be automated? This began happening in the 2000s, as AI started taking on jobs traditionally held by middle- and upper-level professionals. The result was a process of deskilling, in which higher-skill workers, unable to find work appropriate to their skill level, moved down the occupational ladder, exerting downward pressure on middle-skill workers, and so on. Additionally, it is possible that the “maturity” of the IT sector has led to a decrease in the demand for skills. Once the capital of the IT sector is in place, as it is now, the main new demand for high-skill cognitive tasks lies in maintaining that capital: while the sector was being built up, there was high demand for cognitive skills, but once it was established, demand for high-skilled workers fell relative to the initial investment stage (which drove the increased skills demand of the 1990s). This may also explain why there are fewer high-skill cognitive jobs, beyond the fact that they are becoming automated (Beaudry et al. 2013).
Another way to think about this is to compare it to the emergence of new technologies like the spinning jenny and weaving machines during the Industrial Revolution, as discussed earlier. Those technologies were developed and introduced to reduce costs by replacing relatively expensive labor with machines. A similar dynamic held at the end of the 20th and beginning of the 21st centuries—as the supply of skilled labor increased starting in the 1970s, wages increased as well, incentivizing the development of automated machines that could complete the same tasks at a lower cost (Acemoglu 2003). The labor market polarization and unemployment we have seen thus far may be just the beginning.
The most commonly cited statistic regarding jobs and automation comes from Carl Frey and Michael Osborne’s analysis of which tasks are most at risk of automation. Drawing on the judgments of machine learning experts, the two estimated that about 47 percent of US employment is at risk. The analysis has been criticized for assuming that task structures are the same across all jobs within a field, but the number is helpful in highlighting the potential scale of automation-driven job displacement, even if it is less insightful about the distribution of job loss (Frey and Osborne 2013, 1). What kinds of tasks will be automated? According to a Bank of America Merrill Lynch report, automation will most heavily impact aerospace and defense, automobiles and transportation, finance, healthcare, industrials, domestic services, and agriculture and mining. The report estimates that 45% of manufacturing tasks will be automated by 2025, and that the development of these technologies will bring large initial job-creating investments (as well as subsequent labor cost savings). These technologies bring not only labor-saving capabilities but also improved productivity growth, as the machines are able to complete tasks more efficiently and/or effectively than humans. The report also estimates a 50% chance of full human-level AI (high-level machine intelligence) by 2040-2050 (Bank of America Merrill Lynch 2015).
As discussed earlier, automation is increasingly being applied to non-routine, highly cognitive skills and tasks. This includes programs that can craft news stories and other pieces of writing, automated stock market trading algorithms, machines that can analyze the events of a game and summarize them, and other machine-learning-based applications. Automation may therefore have an increasingly large impact on white-collar jobs, as well as middle-class clerical jobs. The use of cloud computing and software as a service (SaaS) means that many organizations are eliminating their IT departments, instead outsourcing those functions to centralized companies like Amazon, Google, and Microsoft. Some of these tasks are even performed by machine learning software, decreasing the overall need for IT employees. Additionally, a number of high-skill jobs, such as lawyers, radiologists, computer programmers, and tax preparers, are beginning to be offshored to places with cheaper labor costs (Ford 2015). According to Alan Blinder, about 30-40 million jobs in the US—roughly a quarter of the workforce—could be offshored (Blinder 2007). And as Brynjolfsson and McAfee demonstrate, offshoring has often been the first step toward automation, meaning there is a good chance these jobs are ripe for automation in the near future. They note, “[i]f you can give precise instructions to someone else on exactly what needs to be done, you can often write a precise computer program to do the same task” (Brynjolfsson and McAfee 2014, 80).
This trend supports the notion that there is an incentive to automate jobs that are high-skill and thus high-wage. After the growth in high-skilled workers since 1970, organizations became interested in developing technologies to replace the jobs that followed this boom, and so have begun focusing on automating them (Acemoglu 2003). What, then, happens to these white-collar workers? If they fall down the occupational ladder, they either push out lower-skilled workers or create a concentration of workers in low-skill occupations—creating yet another opportunity for disruption through automation of those tasks. Of course, we will not be able to automate everything, but the amount of investment in R&D, and the pace at which it is conducted, strongly influences the capabilities that come out of it. If AI generates greater wealth, the incentive to automate the labor that produced that wealth grows as well. This sustains further R&D (at least in proportion, so that investment does not outstrip what is eventually saved), which in turn increases the probability of developing AI that will replace more workers—a feedback loop in which human labor continually loses out to the joint forces of large capital and automated technology.
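This feedback loop can be illustrated with a toy model, sketched below, in which R&D investment is assumed proportional to the remaining human wage bill and the share of tasks automated each period is assumed proportional to R&D. All parameters are invented for illustration; the point is only the qualitative dynamic, in which labor-cost savings fund further automation.

```python
# Toy model of the feedback loop described above. Assumptions (all invented
# for illustration): each period, a fixed fraction of the human wage bill is
# invested in automation R&D, and the share of remaining tasks automated is
# proportional to that investment.

def simulate(periods, wage_bill=100.0, rd_rate=0.2, yield_per_rd=0.01):
    """Return the remaining human wage bill after each period."""
    history = []
    for _ in range(periods):
        rd_investment = rd_rate * wage_bill          # bigger payroll -> more R&D
        automated_share = min(1.0, yield_per_rd * rd_investment)
        wage_bill *= (1.0 - automated_share)         # automated labor drops out
        history.append(round(wage_bill, 2))
    return history

print(simulate(5))  # the wage bill shrinks every period
```

Note that in this particular parameterization the process decelerates as the wage bill shrinks—automation erodes the very labor costs that motivated it—which echoes the paper's central claim that the incentive to automate is strongest where the labor costs at stake are largest.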
As has been established, this trend is not new. It drove the innovation of weaving machines during the Industrial Revolution and has shaped the demand for skills in the wake of the IT revolution. When this happened in the past, overall economic productivity grew, labor re-oriented, and living standards increased (Haldane 2015). But will this time be different? It is impossible to know for certain. So far, automation has begun polarizing the labor market but has not yet brought mass technological unemployment. Yet this may be because the number of “robots” in use is still relatively low, which is likely to change in the near future (Acemoglu and Restrepo 2017). That said, we have already seen negative effects of automation in terms of income and wealth inequality. The benefits that AI has brought so far have accrued primarily to elite, wealthy owners of capital, and have contributed to a polarization of the labor force in which middle-income jobs are disappearing. Historically, the impetus for developing automation has been high wages relative to capital costs—the incentive to develop and adopt these technologies comes from the possibility of saving on labor costs. Such sectors are the most ripe for automation, the ones in which investments in research and development are most likely to pay off.
Should the automation revolution continue to unfold, it may prove quantitatively and qualitatively different from previous technological transformations. With machine learning technology that can continuously build on its own skill set, it is possible that, at some point, humans will become irrelevant to the process of labor—just as horses became irrelevant to agriculture. If so, serious consideration should be given to mechanisms through which the wealth generated by automation-driven productivity growth can be used to mitigate the effects of widespread unemployment.