Artificial Intelligence in the Not-So-Distant Future


The motto of the Tyrell Corporation – the corporation responsible for android production in Ridley Scott's 1982 classic Blade Runner – is "more human than human". As autonomous systems become ubiquitous, permanent fixtures of modern societies, there is both concern and curiosity that our not-so-distant future may resemble something out of a science fiction film. Whether these autonomous systems qualify as Artificial Intelligence (AI) comparable to Scott's androids is nonetheless a matter of debate.

Many technology users claim that true AI still exists only in the imagination of Hollywood filmmakers, while others suggest it is present among us today. At the Risks and Benefits of Artificial Intelligence and Robotics event, the Cambridge Centre for Risk Studies, in collaboration with the United Nations, explored the deeper societal and economic impacts of AI and whether we are on our way to seeing technology that fulfils the Tyrell Corporation's fateful words.

The rapid pace of innovation in cyber and computing systems has beguiled some into believing that the world is on the cusp of an AI reality. Among these advances was the triumph of the computer programme AlphaGo over internationally renowned Go player Lee Sedol in a five-game match in March 2016.[1] AlphaGo's victory symbolises the capacity of autonomous systems to out-think humans – even world champions.

In some cases, autonomous systems and robots already excel over human workers in efficiency and productivity, which poses a growing threat to the current workforce. Corporations may be willing to abandon workers in the pursuit of profit and use automation to bypass issues raised by labour strikes and protests. This was demonstrated by a former fast-food chain CEO who, following 2016 strikes by employees demanding a $15 minimum wage, stated that "it's cheaper to buy a $35,000 robotic arm than it is to hire an employee who's inefficient making $15 an hour bagging French fries".[2] From the service sector to the military, it is probable that a large proportion of human workers will eventually be replaced by machines and computerisation.

The fallout of workers being replaced by automation would likely extend beyond individuals to whole communities. Much as areas of Northern England and South Wales were devastated by the collapse of the mining industry in the 1980s, the mass replacement of workers by automation in certain industries would likely have catastrophic impacts on the regions and communities tied to those sectors.

Solutions to mitigate the effect automation may have on the human workforce have been offered – ranging from a universal basic income, to the taxation of robots, to a radical overhaul of education systems. Nonetheless, the current lack of policy and public focus makes it difficult to address the challenges that AI poses to the workforce. At a minimum, a stronger commitment from political figures and better media coverage will be essential to tackling these issues.

Despite any advantages that autonomous systems may possess over humans, they may ironically also be undone by human imperfections. The data algorithms via which smart devices and AI technology operate are generated by humans and are therefore inextricable from human biases. Google recently corrected a search for "muslims report terrorism" to "muslims support terrorism".[3] A Carnegie Mellon study also found that women 'are much less likely to be shown adverts on Google for highly paid jobs than men': as a result of machine bias in automated advertising systems, career coaching services for high-paying executive jobs were disproportionately advertised to male job seekers over female ones.[4, 5] Even as technology evolves and expands, social biases around race and gender may harden and become more indelible.
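
To make the mechanism concrete, the following is a minimal, purely hypothetical sketch in Python (using scikit-learn): a model fitted to historically skewed ad-serving decisions simply learns to reproduce that skew for new, equally qualified users. None of the data or names refer to any real advertising system.

```python
# Hypothetical sketch: a model trained on historically biased ad-serving data
# reproduces that bias for new users. All data below is invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Invented historical log: one feature encodes gender (1 = male, 0 = female);
# the label records whether a high-paying executive-coaching ad was served.
n = 10_000
gender = rng.integers(0, 2, size=n)
experience_years = rng.integers(0, 30, size=n)
# Past (biased) serving decisions: men were far more likely to be shown the ad.
served = (rng.random(n) < np.where(gender == 1, 0.6, 0.2)).astype(int)

X = np.column_stack([gender, experience_years])
model = LogisticRegression().fit(X, served)

# The learned model simply replays the historical skew for equally qualified users.
male_applicant = [[1, 10]]
female_applicant = [[0, 10]]
print("P(serve ad | male):  ", model.predict_proba(male_applicant)[0, 1])
print("P(serve ad | female):", model.predict_proba(female_applicant)[0, 1])
```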

Machine biases affect diverse contexts, from risk assessments in the criminal justice system to interactive bots on social media.[6, 7] It will take genuine and collective effort to purge software of its current prejudices and to ensure that the data algorithms of the future are underpinned by the values of an equitable society. Smart technology has the potential to correct human bias rather than reinforce it; achieving this, however, will require consensus on a common governance framework for AI.

Could automated systems also become purveyors of moral judgments and ethics? The question is already live in practices such as 'predictive policing'. In 2016, the Chicago Police Department (CPD) began to use 'network analysis to generate a highly controversial Strategic Subject List of people deemed at risk of becoming either victims or perpetrators of violent crimes'. Police officers use this list to notify individuals believed to be 'high-risk'.[8] Because the system attempts to predict actions that have not yet been committed, and relies on police data that disproportionately represents crimes committed by individuals of colour, it is debatable whether the CPD is making ethical use of an automated system.
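
As an illustration only, the toy sketch below scores individuals purely by how many of their direct contacts already appear in historical crime records. It is not the CPD's actual model; the names and the network are invented. It does, however, show why skewed input data flows straight through to the resulting list.

```python
# Toy, hypothetical sketch of network-based risk scoring (not the CPD's real model).
# A person's score rises with the number of direct contacts who already appear in
# historical records - so any skew in those records propagates into the list.
contacts = {
    "A": ["B", "C"],
    "B": ["A"],
    "C": ["A", "D"],
    "D": ["C"],
}
in_historical_records = {"B", "D"}  # drawn from past (possibly biased) police data

def risk_score(person: str) -> int:
    """Count how many of a person's direct contacts appear in historical records."""
    return sum(1 for c in contacts.get(person, []) if c in in_historical_records)

strategic_subject_list = sorted(contacts, key=risk_score, reverse=True)
print([(person, risk_score(person)) for person in strategic_subject_list])
```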

This moral predicament extends to other contexts and industries, including the private world of business. Credit card companies and banks currently track customer transactions, including purchases at "unhealthy" establishments. This information, coupled with alerts from an automated system regarding a client's health condition, may lead to the client being categorised as a 'high-risk' debtor. The financial institution in question then faces an ethical crossroads: should it exercise its fiduciary duties by cutting credit to a client with a growing risk profile, or assume a public-utility role and attempt to counsel and assist that client? The decisions that companies, both public and private, make about the moral use of AI will be crucial in shaping our future in an increasingly automated world.
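
A minimal sketch of how such an automated categorisation might work is given below; the merchant categories, thresholds and field names are all invented for illustration and do not describe any real institution's model.

```python
# Hypothetical sketch of an automated debtor-risk categorisation. Categories,
# thresholds and field names are invented for illustration only.
from dataclasses import dataclass

UNHEALTHY_CATEGORIES = {"fast_food", "tobacco", "gambling"}

@dataclass
class Transaction:
    merchant_category: str
    amount: float

def risk_flag(transactions: list[Transaction], health_alert: bool) -> str:
    """Label a client 'high-risk' if spending at 'unhealthy' merchants is heavy
    and an automated health alert has been raised; otherwise 'standard'."""
    total = sum(t.amount for t in transactions) or 1.0
    unhealthy_share = sum(
        t.amount for t in transactions if t.merchant_category in UNHEALTHY_CATEGORIES
    ) / total
    return "high-risk" if health_alert and unhealthy_share > 0.3 else "standard"

client_history = [
    Transaction("fast_food", 18.50),
    Transaction("groceries", 62.00),
    Transaction("tobacco", 12.00),
]
print(risk_flag(client_history, health_alert=True))  # -> "high-risk"
```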

Data is not merely fed into smart devices by humans – it is also something that devices record and track as humans use them. In the "Internet of Things", smart technology in our homes and public spaces is seamlessly connected in an all-encompassing cloud. This cloud relies on the generation of significant amounts of data, ranging from the contents of an individual's fridge to their daily calendar to the length of their commute to and from work. That data is then transmitted between smart software in phones, cars and household appliances.
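
As a rough illustration of how much personal detail a single such record concentrates, here is a hypothetical sketch of the kind of household payload an IoT cloud might aggregate and pass between devices; the schema and field names are invented.

```python
# Hypothetical sketch of a household record an IoT cloud might aggregate; the
# schema and field names are invented for illustration only.
import json
from datetime import date

daily_record = {
    "date": date.today().isoformat(),
    "fridge_contents": ["milk", "eggs", "leftover curry"],
    "calendar": [{"time": "09:00", "event": "team meeting"},
                 {"time": "18:30", "event": "gym"}],
    "commute_minutes": {"to_work": 42, "from_work": 55},
}

# Serialised once, this single payload can be shared between the phone, the car
# and the household appliances subscribed to the same cloud account.
payload = json.dumps(daily_record)
print(payload)
```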

The prospect of devices capable of tracking an individual's every move, from waking to sleeping, signals an Orwellian future in which surveillance – by the state, by corporations, or both – can creep into every aspect of modern life. Trusting users currently share their data freely and uncritically, without much consideration of its value or of their right to privacy.

In the future, good business practice might allow consumers to regulate the level of access to their data, giving them the power of choice in selecting which devices to use. In parallel, companies might move towards greater transparency about their uses of consumer data. Consumer trust in smart devices, and in the firms that produce them, would increase accordingly.
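
One hypothetical sketch of what consumer-regulated access could look like in code: the device transmits only the data categories its owner has opted in to. The category names and consent flags are invented for illustration.

```python
# Hypothetical sketch of consumer-controlled access levels: a device transmits only
# the data categories its owner has opted in to. Names and categories are invented.
CONSENT = {
    "fridge_contents": False,   # owner has opted out
    "calendar": False,
    "commute_minutes": True,    # owner allows commute data, e.g. for traffic features
}

def filter_by_consent(record: dict) -> dict:
    """Drop every field the consumer has not explicitly consented to share."""
    return {key: value for key, value in record.items() if CONSENT.get(key, False)}

record = {
    "fridge_contents": ["milk", "eggs"],
    "calendar": [{"time": "09:00", "event": "team meeting"}],
    "commute_minutes": {"to_work": 42, "from_work": 55},
}
print(filter_by_consent(record))  # only commute_minutes is transmitted
```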

The Centre for Risk Studies looks forward to an ongoing discussion of these issues, and of others related to AI, at the 2017 Risk Summit, which will take place on 22–23 June (registration is now open). This year's theme, Managing Risk in a Smarter World, considers the effective management and assessment of risk in an ever-changing and technologically advancing world.

Much gratitude to Olivia Majumdar, Editorial Assistant at the Cambridge Centre for Risk Studies, for her contributions to this article.

Michelle Tuveson

Michelle Tuveson is a Founder and Executive Director of the Cambridge Centre for Risk Studies at the University of Cambridge Judge Business School and a Member of the IEEE Standards Committee for the Ethical Considerations in the Design of AI and Autonomous Systems.
