Robots and Artificial Intelligence

Contents: introduction, impact on organizations, impact on employees, influence on society, recommendations.

The world is approaching an era with a new technological structure, in which robots and devices powered by artificial intelligence will be used extensively both in production and in personal life. Manufacturers of such devices and machinery often label their products intelligent, but at the current stage of development this is merely marketing. Substantial research is still needed to make contemporary machines genuinely intelligent. Although the technology does not yet exist in its final form, many are already pondering the possible positive and negative impacts of robots and artificial intelligence.

On the one hand, with artificial intelligence and fully autonomous robots, organizations will be able to optimize their spending and increase the speed of developing and producing their commodities. On the other hand, employees are concerned that they will be laid off because their responsibilities might be taken over by machinery. Outside of the organizational context, artificial intelligence and robots are likely to provide additional comfort and convenience to people in their personal lives. This paper explores the benefits and disadvantages of robots and AI in the context of business, the job market, and society.

Artificial intelligence and robots can bring many benefits to organizations, mainly due to their capacity for extensive automation. However, automation is a vague term, and it is necessary to outline clearly which aspects of organizational processes can be automated. At the same time, there are concerns about security and ethics. Furthermore, AI development, due to its novelty, remains one of the most expensive areas of research.

Positive Effects

Customer relations is one of the most critical areas for every organization. Currently, replying to emails, answering chat messages and phone calls, and resolving client issues require trained personnel. At the same time, companies collect enormous amounts of customer data that is of no use unless applied to solve problems. Artificial intelligence and robots may solve this issue by analyzing the vast array of data and learning to respond to customer inquiries (Ransbotham, Kiron, Gerbert, & Reeves, 2017). Not only will this reduce the number of customer service agents needed, but it may also lead to a more pleasant client experience, because while one human specialist can handle only one person at a time, a software program can handle thousands of requests simultaneously.
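To illustrate the concurrency point, here is a minimal, hypothetical Python sketch: the answer_inquiry coroutine merely stands in for a trained response model, and the 0.1-second delay simulates model or network latency.

```python
import asyncio

# Hypothetical stand-in for a trained response model; a real system would
# call an NLP service fitted on historical customer data.
async def answer_inquiry(customer_id: int) -> str:
    await asyncio.sleep(0.1)  # simulated model/network latency
    return f"Reply for customer {customer_id}"

async def main() -> None:
    # A single process serves a thousand inquiries concurrently, whereas a
    # human agent handles one conversation at a time.
    replies = await asyncio.gather(*(answer_inquiry(i) for i in range(1000)))
    print(len(replies), "inquiries answered")

asyncio.run(main())
```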

To extract any meaning from terabytes of semi-structured and unstructured information, company data specialists need to work tirelessly and for considerable amounts of time. Artificial intelligence can automate these data mining tasks: new data is analyzed immediately after it is added to databases, and the autonomous program automatically scans for patterns and anomalies (von Krogh, 2018). The technology may be used to discover insights and gain a competitive advantage in the market.
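As a toy illustration of this kind of automated scanning, the sketch below flags statistical outliers in a freshly ingested batch of values; the numbers are invented, and production systems would use far more robust statistics or learned models.

```python
import statistics

def flag_anomalies(values, threshold=2.0):
    """Return points more than `threshold` standard deviations from the mean."""
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    return [x for x in values if abs(x - mean) > threshold * stdev]

# Example: scan a new batch of transaction amounts as it lands in the database.
batch = [100, 102, 98, 101, 99, 97, 103, 100, 980]
print(flag_anomalies(batch))  # -> [980]
```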

AI-powered robots may replace humans in some areas of a company's operations. For instance, some hotels are using such robots to automate check-ins and check-outs and to provide a more convenient customer experience through 24/7 support (Wirtz, 2019). Operational automation is also possible in manufacturing facilities where strict temperature levels must be maintained (Wirtz, 2019). Stock refilling is a potential use case for stores and restaurants. Although not everything can be automated, a substantial portion of companies' activities can be run through intelligent robot systems.

Administrative tasks can also be eased with the help of artificial intelligence. Current use cases include aiding the recruitment department (Hughes, Robert, Frady, & Arroyos, 2019). An intelligent software system can automatically analyze thousands of resumes and filter out those that are not suitable (Hughes et al., 2019), as in the sketch below. An automated recruitment process has several benefits: a substantial amount of financial resources is saved because there is no need to hire a recruitment agency, and all applications can be considered by the same criteria, without bias or discrimination.
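A hedged sketch of that filtering idea follows; the skill list, the matching rule, and the resumes are invented for illustration, and real systems rely on trained language models rather than simple keyword matching.

```python
# Required skills for a hypothetical opening; purely illustrative.
REQUIRED_SKILLS = {"python", "sql", "communication"}

def keep_resume(resume_text: str, min_matches: int = 2) -> bool:
    """Keep a resume that mentions at least `min_matches` required skills."""
    words = set(resume_text.lower().split())
    return len(REQUIRED_SKILLS & words) >= min_matches

resumes = [
    "Data analyst with python sql and strong communication",
    "Barista with latte art experience",
]
print([r for r in resumes if keep_resume(r)])  # only the analyst remains
```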

The recruitment process is not the only human resources function an intelligent software system may help with. Organizations are often challenged by the need to schedule workers according to workload (Hughes et al., 2019). HR managers also need to consider which employees work well together and which employee suits which task. Artificial intelligence may automate much of this: it can assign more workers to a particular shift when more customers are expected and choose employees who work together more effectively than others (Hughes et al., 2019). Both organizations and employees benefit from such functions, because companies get optimized scheduling and workers are more satisfied thanks to more productive relationships.
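A toy version of the workload-based scheduling step, under stated assumptions: the demand forecast and the twenty-customers-per-worker ratio below are invented, not figures from Hughes et al.

```python
import math

# Assumed demand forecast (expected customers per day) and staffing ratio.
forecast = {"Mon": 120, "Tue": 80, "Wed": 260, "Thu": 100, "Fri": 310}
CUSTOMERS_PER_WORKER = 20

# Staff each day in proportion to expected demand, rounding up.
schedule = {day: math.ceil(n / CUSTOMERS_PER_WORKER) for day, n in forecast.items()}
print(schedule)  # {'Mon': 6, 'Tue': 4, 'Wed': 13, 'Thu': 5, 'Fri': 16}
```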

Adverse Impacts

Despite the many benefits, artificial intelligence and robotics also have limitations. The technology relies on the availability of data, and such information is often unstructured, of poor quality, and inconsistent (Webber, Detjen, MacLean, & Thomas, 2019). It is therefore challenging for a company with no access to a large pool of data to develop an intelligent system. Currently, only companies like Google, Facebook, Uber, and Apple, which gather terabytes of data each minute, have the capacity to build sophisticated and useful AI-powered systems.

Any company planning to adopt AI and robotics to achieve new business objectives should be ready for high expenditures. Because of a shortage of skilled professionals who are able to develop and operate reliable AI solutions, the cost of producing the required software is high. This situation makes AI the prerogative of wealthy companies and puts it virtually out of reach for those that only want to try the technology to see whether it suits them.

Turning to employees: for the majority of workers, managers and supervisors are the main sources of mentorship and advice. A recent study suggests that robots can also serve as a source of guidance, because the majority of employees trust robots more than their managers (Brougham & Haar, 2018). The primary advantage of robot managers over their human counterparts is that they provide unbiased and objective advice. Besides, robots are able to work around the clock, which allows employees to get answers to their questions much sooner than they do now.

As stated earlier in the paper, artificial intelligence and robots can contribute significantly to the recruitment process through unbiased assistance. This is beneficial not only to enterprises but also to employees, because they will have an equal opportunity to get the job (Hughes et al., 2019). Also, recommendation systems may allow people with little or no experience to be recognized by companies (Hughes et al., 2019). Traditional barriers will cease to exist if hiring managers start to depend heavily on intelligent systems.

One significant advantage of robots over humans is that they never get physically tired. This attribute can prove especially beneficial if robots are used to aid people with tedious and repetitive tasks (Cesta, Cortellessa, Orlandini, & Umbrico, 2018). However, for this approach to work, companies need to treat robots not as an eventual replacement for human employees but as their colleagues. In such a scenario, human workers deal with unpredictable and non-trivial tasks, while robots relieve them of repetitive duties and of tasks that might cause physical harm.

Robots powered by artificial intelligence also have the potential to become effective team builders. There are efforts to build a system that accepts responses and commentaries from team members and gives targeted feedback, which may be used to enhance relationships within the team (Webber, Detjen, MacLean, & Thomas, 2019). The system can also be used at a different stage: when forming new teams, it may carefully inspect the available data and recommend which employees will be most effective in a team given their skill sets (Webber et al., 2019), as sketched below. While AI cannot replace human involvement in team-building activities, it can positively influence groups through systematic interventions.
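One simple way to make such a recommendation step concrete is a greedy skill-coverage heuristic; the names, skills, and the heuristic itself are illustrative assumptions, not the actual system described by Webber et al.

```python
# Greedily assemble a team until every required skill is covered.
required = {"design", "backend", "testing"}
employees = {
    "Avery": {"design", "frontend"},
    "Blake": {"backend", "testing"},
    "Casey": {"testing"},
}

team, uncovered = [], set(required)
while uncovered:
    # Pick whoever covers the most still-missing skills.
    name = max(employees, key=lambda n: len(employees[n] & uncovered))
    if not employees[name] & uncovered:
        break  # nobody can cover what remains
    team.append(name)
    uncovered -= employees.pop(name)

print(team)  # ['Blake', 'Avery']
```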

Despite these many positive effects, artificial intelligence and robots may be the most detrimental agents to human employment. Because so much work can be automated, robots and AI may replace humans in many areas of activity. For instance, with the emergence of autonomous vehicles, drivers may lose their jobs. The list of jobs at risk of being eliminated by robots is long: it includes support specialists, proofreaders, receptionists, machinery operators, factory workers, taxi and bus drivers, soldiers, and farmers (Brougham & Haar, 2018).

Some claim that, while taking away many opportunities from people, artificial intelligence and robots will create other jobs that humans will need to occupy (Brougham & Haar, 2018). However, skeptics state that artificial intelligence will harm the middle class and increase the gap between highly skilled employees and regular workers (Brougham & Haar, 2018). AI is only an emerging technology, but employees and companies will need to be ready for its adverse influences.

Society has been significantly influenced by technology, and this trend will continue as artificial intelligence and robots get more sophisticated. As progress is made in the field of AI and robotics, the technology will blend into people’s lives, and it will become challenging to distinguish between what is a technology and what is not (Helbing, 2019). This uniform integration has many benefits, such as convenience and comfort. However, because technology is power, some critics claim that people will need to view these advancements from the standpoint of citizens, not consumers (Helbing, 2019).

Artificial intelligence relies heavily on the data people generate in order to train and provide better results (Helbing, 2019). As the sole owners of their personal data, people will need to be able to control how this data is used and for what purposes. In the wrong hands or under a corrupt system, this information may be used to influence citizens (Helbing, 2019). Therefore, it is reasonable to claim that, as artificial intelligence and robots get more advanced, society will demand more transparency about how personal data is used.

There are three recommendations worth making, each relating to one potential effect of artificial intelligence and robots. There is a widespread belief that intelligent systems will eventually replace human beings in many industries and jobs (Brougham & Haar, 2018). Not only will this have a detrimental effect on those who lose their jobs, but it will also harm society's current structure. One way of mitigating these consequences is to design robots and AI not to replace human employees but to assist them in their jobs and increase their productivity.

In the contemporary world, people produce enormous amounts of data, which is collected by both governments and private companies. Current laws require enterprises to use customers' personal data in such a way that private information is not exposed to third parties (Helbing, 2019).

As artificial intelligence develops further, current laws may become obsolete. The government should require companies to be much more transparent about how data is used. Furthermore, the government should require companies to undertake security measures so that personal information cannot be used by an intelligent system to harm people. The relatively recent case of Cambridge Analytica shows how the public can be manipulated if personal data ends up in the wrong hands. Public awareness of the implications of AI and robots should also be increased.

It is already clear that artificial intelligence and robotics are the next chapters in the history of digital technology. Present versions of artificial intelligence have had partial success in identifying and helping treat cancer, predicting the weather, analyzing images from cameras and other sensors to drive cars autonomously, and much more. Organizations and businesses are the first to use the technology to maximize their profits and minimize their expenditure while keeping the quality of products and services at the highest levels. The technology has many benefits, including significant automation in many areas of organizational activity and assistance to employees.

People, however, should also remember the downsides: many people are likely to lose their jobs, and companies need to make substantial investments before artificial intelligence and robots are fully usable. To mitigate some of the adverse consequences, companies will need to use AI and robots to assist employees rather than replace them. The government should also be involved: it must ensure that customers' personal data is safe. Efforts should also be made to increase public awareness of the implications of artificial intelligence and robots.

Brougham, D., & Haar, J. (2018). Smart technology, artificial intelligence, robotics, and algorithms (STARA): Employees' perceptions of our future workplace. Journal of Management & Organization, 24(2), 239-257.

Cesta, A., Cortellessa, G., Orlandini, A., & Umbrico, A. (2018). Towards flexible assistive robots using artificial intelligence. Web.

Helbing, D. (2019). Towards digital enlightenment. Cham, Switzerland: Springer International Publishing.

Hughes, C., Robert, L., Frady, K., & Arroyos, A. (2019). Managing technology and middle- and low-skilled employees. Bingley, UK: Emerald Group Publishing.

Ransbotham, S., Kiron, D., Gerbert, P., & Reeves, M. (2017). Reshaping business with artificial intelligence: Closing the gap between ambition and action. MIT Sloan Management Review, 59(1).

von Krogh, G. (2018). Artificial intelligence in organizations: New opportunities for phenomenon-based theorizing. Academy of Management Discoveries, 4(4), 404-409.

Webber, S. S., Detjen, J., MacLean, T. L., & Thomas, D. (2019). Team challenges: Is artificial intelligence the solution? Business Horizons, 62(6), 741-750.

Wirtz, J. (2019). Organizational ambidexterity: Cost-effective service excellence, service robots, and artificial intelligence. Web.


These 5 robots could soon become part of our everyday lives

[Image: A robot and a human shaking hands. Credit: Quartz]

Pieter Abbeel


  • Recent advances in artificial intelligence (AI) are leading to the emergence of a new class of robot.
  • In the next five years, our households and workplaces will become dependent upon the role of robots, says Pieter Abbeel, the founder of UC Berkeley Robot Learning Lab.
  • Here he outlines a few standout examples.

People often ask me about the real-life potential for inhumane, merciless systems like HAL 9000 or the Terminator to destroy our society.

Growing up in Belgium and away from Hollywood, my initial impressions of robots were not so violent. In retrospect, my early positive affiliations with robots likely fueled my drive to build machines to make our everyday lives more enjoyable. Robots working alongside humans to manage day-to-day mundane tasks was a world I wanted to help create.

Now, many years later, after emigrating to the United States, finishing my PhD under Andrew Ng, starting the Berkeley Robot Learning Lab, and co-founding Covariant, I'm convinced that robots are becoming sophisticated enough to be the allies and helpful teammates that I hoped for as a child.

Recent advances in artificial intelligence (AI) are leading to the emergence of a new class of robot. These are machines that go beyond the traditional bots running preprogrammed motions; these are robots that can see, learn, think, and react to their surroundings.

While we may not personally witness or interact with robots directly in our daily lives, there will be a day within the next five years when our households and workplaces depend on robots to run smoothly. Here are a few standout examples, drawn from some of my guests on The Robot Brains Podcast.

Robots that deliver medical supplies to extremely remote places

After spending months in Africa and South America talking to medical and disaster relief providers, Keenan Wyrobek foresaw how AI-powered drone technology could make a positive impact. He started Zipline, which provides drones to handle important and dangerous deliveries. Now shipping one ton of products a day, the company is helping communities in need by using robots to accomplish critical deliveries (they're even delivering in parts of the US).

Special delivery.

Robots that automate recycling

Recycling is one of the most important activities we can do for a healthier planet. However, it's a massive undertaking. Consider that each human being produces almost 5 lbs of waste a day and there are 7.8 billion of us. The real challenge comes with secondary sorting: the separation process applied once the easy-to-sort materials have been filtered out. Matanya Horowitz sat down with me to explain how AMP Robotics helps facilities across the globe save and reuse valuable materials that are worth billions of dollars but were traditionally lost to landfills.

Sorting it out.

Robots that handle dangerous, repetitive warehouse tasks

Marc Segura of ABB, a robotics firm started in 1988, shared real stories from warehouses across the globe in which robots are managing jobs that have high accident rates or long-term health consequences for humans. With robots strong enough to lift one-ton cars with just one arm, and other robots that can build delicate computer chips (a task that can cause long-term vision impairment in a person), there is a whole range of machines handling tasks not fit for humans.

Can you do what I do?

Robots to help nurses on the frontlines

Long before COVID-19 started calling our attention to how overworked healthcare workers are, Andrea Thomaz of Diligent Robotics noticed the issue. She spoke with me about the inspiration for designing Moxi, a nurse helper. Now being used in Dallas hospitals, the robots help clinical staff with tasks that don't involve interacting with patients. Nurses have reported lower stress levels as mundane errands like supply stocking are handled automatically. Moxi is even adding a bit of cheer to patients' days as well.

At your service.

Robots that run indoor farms

Picking and sorting the harvest is the most time-sensitive and time-consuming task on a farm. Getting it right can make a massive difference to the crop's return. I got the chance to speak with AppHarvest's Josh Lessing, who built the world's first "cross-crop" AI, Virgo, which learned how to pick many different types of produce. Virgo can switch between vastly different shapes, densities, and growth scenarios, meaning one day it can pick tomatoes, the next cucumbers, and after that, strawberries. Virgo currently operates at the AppHarvest greenhouses in Kentucky to grow non-GMO, chemical-free produce.

The robot future has already begun

Collaborating with software-driven co-workers is no longer the future; it’s now. Perhaps you’ve already seen some examples. You’ll be seeing a lot more in the decade to come.

Pieter Abbeel is the director of the Berkeley Robot Learning Lab and a co-founder of Covariant, an AI robotics firm. Subscribe to his podcast wherever you like to listen.


The present and future of AI

Finale Doshi-Velez on how AI is shaping our lives and how we can shape AI


Finale Doshi-Velez, the John L. Loeb Professor of Engineering and Applied Sciences. (Photo courtesy of Eliza Grinnell/Harvard SEAS)

How has artificial intelligence changed and shaped our world over the last five years? How will AI continue to impact our lives in the coming years? Those were the questions addressed in the most recent report from the One Hundred Year Study on Artificial Intelligence (AI100), an ongoing project hosted at Stanford University that will study the status of AI technology and its impacts on the world over the next 100 years.

The 2021 report is the second in a series that will be released every five years until 2116. Titled "Gathering Strength, Gathering Storms," the report explores the various ways AI is increasingly touching people's lives in settings that range from movie recommendations and voice assistants to autonomous driving and automated medical diagnoses.

Barbara Grosz, the Higgins Research Professor of Natural Sciences at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), is a member of the standing committee overseeing the AI100 project, and Finale Doshi-Velez, Gordon McKay Professor of Computer Science, is part of the panel of interdisciplinary researchers who wrote this year's report.

We spoke with Doshi-Velez about the report, what it says about the role AI is currently playing in our lives, and how it will change in the future.  

Q: Let's start with a snapshot: What is the current state of AI and its potential?

Doshi-Velez: Some of the biggest changes in the last five years have been how well AIs now perform in large data regimes on specific types of tasks.  We've seen [DeepMind’s] AlphaZero become the best Go player entirely through self-play, and everyday uses of AI such as grammar checks and autocomplete, automatic personal photo organization and search, and speech recognition become commonplace for large numbers of people.  

In terms of potential, I'm most excited about AIs that might augment and assist people.  They can be used to drive insights in drug discovery, help with decision making such as identifying a menu of likely treatment options for patients, and provide basic assistance, such as lane keeping while driving or text-to-speech based on images from a phone for the visually impaired.  In many situations, people and AIs have complementary strengths. I think we're getting closer to unlocking the potential of people and AI teams.


Q: Over the course of 100 years, these reports will tell the story of AI and its evolving role in society. Even though there have only been two reports, what's the story so far?

There's actually a lot of change even in five years.  The first report is fairly rosy.  For example, it mentions how algorithmic risk assessments may mitigate the human biases of judges.  The second has a much more mixed view.  I think this comes from the fact that as AI tools have come into the mainstream — both in higher stakes and everyday settings — we are appropriately much less willing to tolerate flaws, especially discriminatory ones. There's also been questions of information and disinformation control as people get their news, social media, and entertainment via searches and rankings personalized to them. So, there's a much greater recognition that we should not be waiting for AI tools to become mainstream before making sure they are ethical.

Q: What is the responsibility of institutes of higher education in preparing students and the next generation of computer scientists for the future of AI and its impact on society?

First, I'll say that the need to understand the basics of AI and data science starts much earlier than higher education!  Children are being exposed to AIs as soon as they click on videos on YouTube or browse photo albums. They need to understand aspects of AI such as how their actions affect future recommendations.

But for computer science students in college, I think a key thing that future engineers need to realize is when to demand input and how to talk across disciplinary boundaries to get at often difficult-to-quantify notions of safety, equity, fairness, etc.  I'm really excited that Harvard has the Embedded EthiCS program to provide some of this education.  Of course, this is an addition to standard good engineering practices like building robust models, validating them, and so forth, which is all a bit harder with AI.


Q: Your work focuses on machine learning with applications to healthcare, which is also an area of focus of this report. What is the state of AI in healthcare? 

A lot of AI in healthcare has been on the business end, used for optimizing billing, scheduling surgeries, that sort of thing.  When it comes to AI for better patient care, which is what we usually think about, there are few legal, regulatory, and financial incentives to do so, and many disincentives. Still, there's been slow but steady integration of AI-based tools, often in the form of risk scoring and alert systems.

In the near future, two applications that I'm really excited about are triage in low-resource settings — having AIs do initial reads of pathology slides, for example, if there are not enough pathologists, or get an initial check of whether a mole looks suspicious — and ways in which AIs can help identify promising treatment options for discussion with a clinician team and patient.

Q: Any predictions for the next report?

I'll be keen to see where currently nascent AI regulation initiatives have gotten to. Accountability is such a difficult question in AI that it's tricky to nurture both innovation and basic protections. Perhaps the most important innovation will be in approaches for AI accountability.



Robotics and artificial intelligence

Intelligent machines could shape the future of science and society.

Updated 27 March 2024


At the end of the twentieth century, computing was transformed from the preserve of laboratories and industry to a ubiquitous part of everyday life. We are now living through the early stages of a similarly rapid revolution in robotics and artificial intelligence — and the effect on society could be just as enormous.

This collection will be updated throughout 2024, with stories from journalists and research from across the Nature Portfolio journals. Check back throughout the year for the latest additions, or sign up to Nature Briefing: AI and Robotics to receive weekly email updates on this collection and other goings-on in AI and robotics.

Original journalism from Nature.


Robot, repair thyself: laying the foundations for self-healing machines

Advances in materials science and sensing could deliver robots that can mend themselves and feel pain. By Simon Makin

29 February 2024


This cyborg cockroach could be the future of earthquake search and rescue

From drivable bionic animals to machines made from muscle, biohybrid robots are on their way to a variety of uses. By Liam Drew

7 December 2023

How robots can learn to follow a moral code

Ethical artificial intelligence aims to impart human values on machine-learning systems. By Neil Savage

26 October 2023


A test of artificial intelligence

With debate raging over the abilities of modern AI systems, scientists are struggling to effectively assess machine intelligence. By Michael Eisenstein

14 September 2023


Robots need better batteries

As mobile machines travel further from the grid, they'll need lightweight and efficient power sources. By Jeff Hecht

29 June 2023

Synthetic data could be better than real data

Machine-generated data sets have the potential to improve privacy and representation in artificial intelligence, if researchers can find the right balance between accuracy and fakery. By Neil Savage

27 April 2023

Why artificial intelligence needs to understand consequences

A machine with a grasp of cause and effect could learn more like a human, through imagination and regret. By Neil Savage

24 February 2023


Abandoned: The human cost of neurotechnology failure

When the makers of electronic implants abandon their projects, people who rely on the devices have everything to lose. By Liam Drew

6 December 2022


Bioinspired robots walk, swim, slither and fly

Engineers look to nature for ideas on how to make robots move through the world. By Neil Savage

29 September 2022

Learning over a lifetime

Artificial-intelligence researchers turn to lifelong learning in the hopes of making machine intelligence more adaptable. By Neil Savage

20 July 2022


Teaching robots to touch

Robots have become increasingly adept at interacting with the world around them. But to fulfil their potential, they also need a sense of touch. By Marcus Woo

26 May 2022


Miniature medical robots step out from sci-fi

Tiny machines that deliver therapeutic payloads to precise locations in the body are the stuff of science fiction. But some researchers are trying to turn them into a clinical reality. By Anthony King

29 March 2022


Breaking into the black box of artificial intelligence

Scientists are finding ways to explain the inner workings of complex machine-learning models. By Neil Savage


Eager for more?

Good news: more stories on robotics and artificial intelligence will be published here throughout the year. Click below to sign up for weekly email updates from Nature Briefing: AI and Robotics.

Research and reviews

Curated from the Nature Portfolio journals.




Ethics of Artificial Intelligence and Robotics

Artificial intelligence (AI) and robotics are digital technologies that will have significant impact on the development of humanity in the near future. They have raised fundamental questions about what we should do with these systems, what the systems themselves should do, what risks they involve, and how we can control these.

After the Introduction to the field (§1), the main themes (§2) of this article are: Ethical issues that arise with AI systems as objects, i.e., tools made and used by humans. This includes issues of privacy (§2.1) and manipulation (§2.2), opacity (§2.3) and bias (§2.4), human-robot interaction (§2.5), employment (§2.6), and the effects of autonomy (§2.7). Then AI systems as subjects, i.e., ethics for the AI systems themselves in machine ethics (§2.8) and artificial moral agency (§2.9). Finally, the problem of a possible future AI superintelligence leading to a "singularity" (§2.10). We close with a remark on the vision of AI (§3).

For each section within these themes, we provide a general explanation of the ethical issues , outline existing positions and arguments , then analyse how these play out with current technologies and finally, what policy consequences may be drawn.

1. Introduction
1.1 Background of the Field
1.2 AI & Robotics
1.3 A Note on Policy
2. Main Debates
2.1 Privacy & Surveillance
2.2 Manipulation of Behaviour
2.3 Opacity of AI Systems
2.4 Bias in Decision Systems
2.5 Human-Robot Interaction
2.6 Automation and Employment
2.7 Autonomous Systems
2.8 Machine Ethics
2.9 Artificial Moral Agents
2.10 Singularity
Other Internet Resources (Research Organizations, Conferences, Policy Documents, Other Relevant Pages)
Related Entries

1. Introduction

The ethics of AI and robotics is often focused on “concerns” of various sorts, which is a typical response to new technologies. Many such concerns turn out to be rather quaint (trains are too fast for souls); some are predictably wrong when they suggest that the technology will fundamentally change humans (telephones will destroy personal communication, writing will destroy memory, video cassettes will make going out redundant); some are broadly correct but moderately relevant (digital technology will destroy industries that make photographic film, cassette tapes, or vinyl records); but some are broadly correct and deeply relevant (cars will kill children and fundamentally change the landscape). The task of an article such as this is to analyse the issues and to deflate the non-issues.

Some technologies, like nuclear power, cars, or plastics, have caused ethical and political discussion and significant policy efforts to control the trajectory of these technologies, usually only once some damage is done. In addition to such "ethical concerns", new technologies challenge current norms and conceptual systems, which is of particular interest to philosophy. Finally, once we have understood a technology in its context, we need to shape our societal response, including regulation and law. All these features also exist in the case of new AI and robotics technologies, plus the more fundamental fear that they may end the era of human control on Earth.

The ethics of AI and robotics has seen significant press coverage in recent years, which supports related research, but also may end up undermining it: the press often talks as if the issues under discussion were just predictions of what future technology will bring, and as though we already know what would be most ethical and how to achieve that. Press coverage thus focuses on risk, security (Brundage et al. 2018, in the Other Internet Resources section below, hereafter [OIR]), and prediction of impact (e.g., on the job market). The result is a discussion of essentially technical problems that focus on how to achieve a desired outcome. Current discussions in policy and industry are also motivated by image and public relations, where the label “ethical” is really not much more than the new “green”, perhaps used for “ethics washing”. For a problem to qualify as a problem for AI ethics would require that we do not readily know what the right thing to do is. In this sense, job loss, theft, or killing with AI is not a problem in ethics, but whether these are permissible under certain circumstances is a problem. This article focuses on the genuine problems of ethics where we do not readily know what the answers are.

A last caveat: The ethics of AI and robotics is a very young field within applied ethics, with significant dynamics, but few well-established issues and no authoritative overviews—though there is a promising outline (European Group on Ethics in Science and New Technologies 2018) and there are beginnings on societal impact (Floridi et al. 2018; Taddeo and Floridi 2018; S. Taylor et al. 2018; Walsh 2018; Bryson 2019; Gibert 2019; Whittlestone et al. 2019), and policy recommendations (AI HLEG 2019 [OIR]; IEEE 2019). So this article cannot merely reproduce what the community has achieved thus far, but must propose an ordering where little order exists.

The notion of "artificial intelligence" (AI) is understood broadly as any kind of artificial computational system that shows intelligent behaviour, i.e., complex behaviour that is conducive to reaching goals. In particular, we do not wish to restrict "intelligence" to what would require intelligence if done by humans, as Minsky had suggested (1985). This means we incorporate a range of machines, including those in "technical AI", that show only limited abilities in learning or reasoning but excel at the automation of particular tasks, as well as machines in "general AI" that aim to create a generally intelligent agent.

AI somehow gets closer to our skin than other technologies—thus the field of “philosophy of AI”. Perhaps this is because the project of AI is to create machines that have a feature central to how we humans see ourselves, namely as feeling, thinking, intelligent beings. The main purposes of an artificially intelligent agent probably involve sensing, modelling, planning and action, but current AI applications also include perception, text analysis, natural language processing (NLP), logical reasoning, game-playing, decision support systems, data analytics, predictive analytics, as well as autonomous vehicles and other forms of robotics (P. Stone et al. 2016). AI may involve any number of computational techniques to achieve these aims, be that classical symbol-manipulating AI, inspired by natural cognition, or machine learning via neural networks (Goodfellow, Bengio, and Courville 2016; Silver et al. 2018).

Historically, it is worth noting that the term “AI” was used as above ca. 1950–1975, then came into disrepute during the “AI winter”, ca. 1975–1995, and narrowed. As a result, areas such as “machine learning”, “natural language processing” and “data science” were often not labelled as “AI”. Since ca. 2010, the use has broadened again, and at times almost all of computer science and even high-tech is lumped under “AI”. Now it is a name to be proud of, a booming industry with massive capital investment (Shoham et al. 2018), and on the edge of hype again. As Erik Brynjolfsson noted, it may allow us to

virtually eliminate global poverty, massively reduce disease and provide better education to almost everyone on the planet. (quoted in Anderson, Rainie, and Luchsinger 2018)

While AI can be entirely software, robots are physical machines that move. Robots are subject to physical impact, typically through “sensors”, and they exert physical force onto the world, typically through “actuators”, like a gripper or a turning wheel. Accordingly, autonomous cars or planes are robots, and only a minuscule portion of robots is “humanoid” (human-shaped), like in the movies. Some robots use AI, and some do not: Typical industrial robots blindly follow completely defined scripts with minimal sensory input and no learning or reasoning (around 500,000 such new industrial robots are installed each year (IFR 2019 [OIR])). It is probably fair to say that while robotics systems cause more concerns in the general public, AI systems are more likely to have a greater impact on humanity. Also, AI or robotics systems for a narrow set of tasks are less likely to cause new issues than systems that are more flexible and autonomous.

Robotics and AI can thus be seen as covering two overlapping sets of systems: systems that are only AI, systems that are only robotics, and systems that are both. We are interested in all three; the scope of this article is thus not only the intersection, but the union, of both sets.

Policy is only one of the concerns of this article. There is significant public discussion about AI ethics, and there are frequent pronouncements from politicians that the matter requires new policy, which is easier said than done: Actual technology policy is difficult to plan and enforce. It can take many forms, from incentives and funding, infrastructure, taxation, or good-will statements, to regulation by various actors, and the law. Policy for AI will possibly come into conflict with other aims of technology policy or general policy. Governments, parliaments, associations, and industry circles in industrialised countries have produced reports and white papers in recent years, and some have generated good-will slogans ("trusted/responsible/humane/human-centred/good/beneficial AI"), but is that what is needed? For a survey, see Jobin, Ienca, and Vayena (2019) and V. Müller's list of PT-AI Policy Documents and Institutions.

For people who work in ethics and policy, there might be a tendency to overestimate the impact and threats from a new technology, and to underestimate how far current regulation can reach (e.g., for product liability). On the other hand, there is a tendency for businesses, the military, and some public administrations to “just talk” and do some “ethics washing” in order to preserve a good public image and continue as before. Actually implementing legally binding regulation would challenge existing business models and practices. Actual policy is not just an implementation of ethical theory, but subject to societal power structures—and the agents that do have the power will push against anything that restricts them. There is thus a significant risk that regulation will remain toothless in the face of economical and political power.

Though very little actual policy has been produced, there are some notable beginnings: The latest EU policy document suggests “trustworthy AI” should be lawful, ethical, and technically robust, and then spells this out as seven requirements: human oversight, technical robustness, privacy and data governance, transparency, fairness, well-being, and accountability (AI HLEG 2019 [OIR]). Much European research now runs under the slogan of “responsible research and innovation” (RRI), and “technology assessment” has been a standard field since the advent of nuclear power. Professional ethics is also a standard field in information technology, and this includes issues that are relevant in this article. Perhaps a “code of ethics” for AI engineers, analogous to the codes of ethics for medical doctors, is an option here (Véliz 2019). What data science itself should do is addressed in (L. Taylor and Purtova 2019). We also expect that much policy will eventually cover specific uses or technologies of AI and robotics, rather than the field as a whole. A useful summary of an ethical framework for AI is given in (European Group on Ethics in Science and New Technologies 2018: 13ff). On general AI policy, see Calo (2018) as well as Crawford and Calo (2016); Stahl, Timmermans, and Mittelstadt (2016); Johnson and Verdicchio (2017); and Giubilini and Savulescu (2018). A more political angle of technology is often discussed in the field of “Science and Technology Studies” (STS). As books like The Ethics of Invention (Jasanoff 2016) show, concerns in STS are often quite similar to those in ethics (Jacobs et al. 2019 [OIR]). In this article, we discuss the policy for each type of issue separately rather than for AI or robotics in general.

2. Main Debates

In this section we outline the ethical issues of human use of AI and robotics systems that can be more or less autonomous—which means we look at issues that arise with certain uses of the technologies which would not arise with others. It must be kept in mind, however, that technologies will always cause some uses to be easier, and thus more frequent, and hinder other uses. The design of technical artefacts thus has ethical relevance for their use (Houkes and Vermaas 2010; Verbeek 2011), so beyond “responsible use”, we also need “responsible design” in this field. The focus on use does not presuppose which ethical approaches are best suited for tackling these issues; they might well be virtue ethics (Vallor 2017) rather than consequentialist or value-based (Floridi et al. 2018). This section is also neutral with respect to the question whether AI systems truly have “intelligence” or other mental properties: It would apply equally well if AI and robotics are merely seen as the current face of automation (cf. Müller forthcoming-b).

There is a general discussion about privacy and surveillance in information technology (e.g., Macnish 2017; Roessler 2017), which mainly concerns the access to private data and data that is personally identifiable. Privacy has several well recognised aspects, e.g., “the right to be let alone”, information privacy, privacy as an aspect of personhood, control over information about oneself, and the right to secrecy (Bennett and Raab 2006). Privacy studies have historically focused on state surveillance by secret services but now include surveillance by other state agents, businesses, and even individuals. The technology has changed significantly in the last decades while regulation has been slow to respond (though there is the Regulation (EU) 2016/679)—the result is a certain anarchy that is exploited by the most powerful players, sometimes in plain sight, sometimes in hiding.

The digital sphere has widened greatly: All data collection and storage is now digital, our lives are increasingly digital, most digital data is connected to a single Internet, and there is more and more sensor technology in use that generates data about non-digital aspects of our lives. AI increases both the possibilities of intelligent data collection and the possibilities for data analysis. This applies to blanket surveillance of whole populations as well as to classic targeted surveillance. In addition, much of the data is traded between agents, usually for a fee.

At the same time, controlling who collects which data, and who has access, is much harder in the digital world than it was in the analogue world of paper and telephone calls. Many new AI technologies amplify the known issues. For example, face recognition in photos and videos allows identification and thus profiling and searching for individuals (Whittaker et al. 2018: 15ff). This continues using other techniques for identification, e.g., “device fingerprinting”, which are commonplace on the Internet (sometimes revealed in the “privacy policy”). The result is that “In this vast ocean of data, there is a frighteningly complete picture of us” (Smolan 2016: 1:01). The result is arguably a scandal that still has not received due public attention.

The data trail we leave behind is how our “free” services are paid for—but we are not told about that data collection and the value of this new raw material, and we are manipulated into leaving ever more such data. For the “big 5” companies (Amazon, Google/Alphabet, Microsoft, Apple, Facebook), the main data-collection part of their business appears to be based on deception, exploiting human weaknesses, furthering procrastination, generating addiction, and manipulation (Harris 2016 [OIR]). The primary focus of social media, gaming, and most of the Internet in this “surveillance economy” is to gain, maintain, and direct attention—and thus data supply. “Surveillance is the business model of the Internet” (Schneier 2015). This surveillance and attention economy is sometimes called “surveillance capitalism” (Zuboff 2019). It has caused many attempts to escape from the grasp of these corporations, e.g., in exercises of “minimalism” (Newport 2019), sometimes through the open source movement, but it appears that present-day citizens have lost the degree of autonomy needed to escape while fully continuing with their life and work. We have lost ownership of our data, if “ownership” is the right relation here. Arguably, we have lost control of our data.

These systems will often reveal facts about us that we ourselves wish to suppress or are not aware of: they know more about us than we know ourselves. Even just observing online behaviour allows insights into our mental states (Burr and Cristianini 2019) and enables manipulation (see below, section 2.2). This has led to calls for the protection of “derived data” (Wachter and Mittelstadt 2019). With the last sentence of his bestselling book, Homo Deus, Harari asks about the long-term consequences of AI:

What will happen to society, politics and daily life when non-conscious but highly intelligent algorithms know us better than we know ourselves? (2016: 462)

Robotic devices have not yet played a major role in this area, except for security patrolling, but this will change once they are more common outside of industry environments. Together with the “Internet of things”, the so-called “smart” systems (phone, TV, oven, lamp, virtual assistant, home,…), “smart city” (Sennett 2018), and “smart governance”, they are set to become part of the data-gathering machinery that offers more detailed data, of different types, in real time, with ever more information.

Privacy-preserving techniques that can largely conceal the identity of persons or groups are now standard tools in data science; they include (relative) anonymisation, access control (plus encryption), and other models where computation is carried out with fully or partially encrypted input data (Stahl and Wright 2018); in the case of “differential privacy”, calibrated noise is added to the output of queries so that little can be inferred about any single individual (Dwork et al. 2006; Abowd 2017). While requiring more effort and cost, such techniques can avoid many of the privacy issues. Some companies have also seen better privacy as a competitive advantage that can be leveraged and sold at a price.
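
As a concrete illustration, here is a minimal sketch of the Laplace mechanism that underlies differential privacy, assuming a simple counting query; the function name, example data, and parameter values are invented for illustration:

```python
import numpy as np

def dp_count(records, predicate, epsilon=0.5):
    """Differentially private count via the Laplace mechanism.

    A counting query has sensitivity 1: adding or removing one
    person's record changes the true count by at most 1, so noise
    drawn from Laplace(0, 1/epsilon) yields epsilon-differential
    privacy for the released answer.
    """
    true_count = sum(1 for r in records if predicate(r))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Example query: how many people in the dataset are over 60?
ages = [23, 45, 67, 71, 34, 58, 62]
print(dp_count(ages, lambda age: age > 60))
```

The parameter epsilon trades privacy against accuracy: smaller values add more noise and so hide individual contributions better, which is exactly the extra effort and cost mentioned above.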

One of the major practical difficulties is to actually enforce regulation, both on the level of the state and on the level of the individual who has a claim. They must identify the responsible legal entity, prove the action, perhaps prove intent, find a court that declares itself competent … and eventually get the court to actually enforce its decision. Well-established legal protection of rights such as consumer rights, product liability, and other civil liability or protection of intellectual property rights is often missing in digital products, or hard to enforce. This means that companies with a “digital” background are used to testing their products on the consumers without fear of liability while heavily defending their intellectual property rights. This “Internet Libertarianism” is sometimes taken to assume that technical solutions will take care of societal problems by themselves (Morozov 2013).

The ethical issues of AI in surveillance go beyond the mere accumulation of data and direction of attention: They include the use of information to manipulate behaviour, online and offline, in a way that undermines autonomous rational choice. Of course, efforts to manipulate behaviour are ancient, but they may gain a new quality when they use AI systems. Given users’ intense interaction with data systems and the deep knowledge about individuals this provides, they are vulnerable to “nudges”, manipulation, and deception. With sufficient prior data, algorithms can be used to target individuals or small groups with just the kind of input that is likely to influence these particular individuals. A “nudge” changes the environment such that it influences behaviour in a predictable way that is positive for the individual, but easy and cheap to avoid (Thaler and Sunstein 2008). There is a slippery slope from here to paternalism and manipulation.

Many advertisers, marketers, and online sellers will use any legal means at their disposal to maximise profit, including exploitation of behavioural biases, deception, and addiction generation (Costa and Halpern 2019 [OIR]). Such manipulation is the business model in much of the gambling and gaming industries, but it is spreading, e.g., to low-cost airlines. In interface design on web pages or in games, this manipulation uses what is called “dark patterns” (Mathur et al. 2019). At this moment, gambling and the sale of addictive substances are highly regulated, but online manipulation and addiction are not—even though manipulation of online behaviour is becoming a core business model of the Internet.

Furthermore, social media is now the prime location for political propaganda. This influence can be used to steer voting behaviour, as in the Facebook-Cambridge Analytica “scandal” (Woolley and Howard 2017; Bradshaw, Neudert, and Howard 2019) and—if successful—it may harm the autonomy of individuals (Susser, Roessler, and Nissenbaum 2019).

Improved AI “faking” technologies turn what once was reliable evidence into unreliable evidence—this has already happened to digital photos, sound recordings, and video. It will soon be quite easy to create (rather than alter) “deep fake” text, photos, and video material with any desired content. Soon, sophisticated real-time interaction with persons over text, phone, or video will be faked as well. So we cannot trust digital interactions, while we are at the same time increasingly dependent on them.

One more specific issue is that machine learning techniques in AI rely on training with vast amounts of data. This means there will often be a trade-off between privacy and rights to data on the one hand and the technical quality of the product on the other. This trade-off influences the consequentialist evaluation of privacy-violating practices.

The policy in this field has its ups and downs: Civil liberties and the protection of individual rights are under intense pressure from businesses’ lobbying, secret services, and other state agencies that depend on surveillance. Privacy protection has diminished massively compared to the pre-digital age when communication was based on letters, analogue telephone communications, and personal conversation and when surveillance operated under significant legal constraints.

While the EU General Data Protection Regulation (Regulation (EU) 2016/679) has strengthened privacy protection, the US and China prefer growth with less regulation (Thompson and Bremmer 2018), likely in the hope that this provides a competitive advantage. It is clear that state and business actors have increased their ability to invade privacy and manipulate people with the help of AI technology and will continue to do so to further their particular interests—unless reined in by policy in the interest of general society.

Opacity and bias are central issues in what is now sometimes called “data ethics” or “big data ethics” (Floridi and Taddeo 2016; Mittelstadt and Floridi 2016). AI systems for automated decision support and “predictive analytics” raise “significant concerns about lack of due process, accountability, community engagement, and auditing” (Whittaker et al. 2018: 18ff). They are part of a power structure in which “we are creating decision-making processes that constrain and limit opportunities for human participation” (Danaher 2016b: 245). At the same time, it will often be impossible for the affected person to know how the system came to this output, i.e., the system is “opaque” to that person. If the system involves machine learning, it will typically be opaque even to the expert, who will not know how a particular pattern was identified, or even what the pattern is. Bias in decision systems and data sets is exacerbated by this opacity. So, at least in cases where there is a desire to remove bias, the analyses of opacity and bias go hand in hand, and the political response has to tackle both issues together.

Many AI systems rely on machine learning techniques in (simulated) neural networks that will extract patterns from a given dataset, with or without “correct” solutions provided; i.e., supervised, semi-supervised or unsupervised. With these techniques, the “learning” captures patterns in the data and these are labelled in a way that appears useful to the decision the system makes, while the programmer does not really know which patterns in the data the system has used. In fact, the programs are evolving, so when new data comes in, or new feedback is given (“this was correct”, “this was incorrect”), the patterns used by the learning system change. What this means is that the outcome is not transparent to the user or programmers: it is opaque. Furthermore, the quality of the program depends heavily on the quality of the data provided, following the old slogan “garbage in, garbage out”. So, if the data already involved a bias (e.g., police data about the skin colour of suspects), then the program will reproduce that bias. There are proposals for a standard description of datasets in a “datasheet” that would make the identification of such bias more feasible (Gebru et al. 2018 [OIR]). There is also significant recent literature about the limitations of machine learning systems that are essentially sophisticated data filters (Marcus 2018 [OIR]). Some have argued that the ethical problems of today are the result of technical “shortcuts” AI has taken (Cristianini forthcoming).
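
To make the point concrete, here is a minimal, entirely hypothetical sketch (all feature names and numbers are invented): even when the sensitive attribute is withheld from training, a correlated proxy feature lets a model reproduce the historical bias encoded in its labels.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Hypothetical biased hiring history: a protected attribute (0/1)
# influenced past decisions, and a "proxy" feature correlates with it.
protected = rng.integers(0, 2, n)
skill = rng.normal(0.0, 1.0, n)
proxy = protected + rng.normal(0.0, 0.3, n)                     # leaks the attribute
past_hire = (skill + 1.5 * (protected == 0) > 1.0).astype(int)  # biased labels

# Train WITHOUT the protected attribute: the bias survives via the proxy.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, past_hire)

for g in (0, 1):
    rate = model.predict(X[protected == g]).mean()
    print(f"group {g}: predicted hiring rate {rate:.2f}")
```

A “datasheet” in the sense of Gebru et al. would document how such labels were produced, making this kind of defect easier to spot before training.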

There are several technical activities that aim at “explainable AI”, starting with Van Lent, Fisher, and Mancuso (1999) and Lomas et al. (2012) and, more recently, a DARPA programme (Gunning 2017 [OIR]). More broadly, the demand for

a mechanism for elucidating and articulating the power structures, biases, and influences that computational artefacts exercise in society (Diakopoulos 2015: 398)

is sometimes called “algorithmic accountability reporting”. This does not mean that we expect an AI to “explain its reasoning”—doing so would require far more serious moral autonomy than we currently attribute to AI systems (see below, §2.10).

The politician Henry Kissinger pointed out that there is a fundamental problem for democratic decision-making if we rely on a system that is supposedly superior to humans, but cannot explain its decisions. He says we may have “generated a potentially dominating technology in search of a guiding philosophy” (Kissinger 2018). Danaher (2016b) calls this problem “the threat of algocracy” (adopting the previous use of ‘algocracy’ from Aneesh 2002 [OIR], 2006). In a similar vein, Cave (2019) stresses that we need a broader societal move towards more “democratic” decision-making to avoid AI being a force that leads to a Kafka-style impenetrable suppression system in public administration and elsewhere. The political angle of this discussion has been stressed by O’Neil in her influential book Weapons of Math Destruction (2016), and by Yeung and Lodge (2019).

In the EU, some of these issues have been taken into account with the (Regulation (EU) 2016/679), which foresees that consumers, when faced with a decision based on data processing, will have a legal “right to explanation”—how far this goes and to what extent it can be enforced is disputed (Goodman and Flaxman 2017; Wachter, Mittelstadt, and Floridi 2016; Wachter, Mittelstadt, and Russell 2017). Zerilli et al. (2019) argue that there may be a double standard here, where we demand a high level of explanation for machine-based decisions despite humans sometimes not reaching that standard themselves.

Automated AI decision support systems and “predictive analytics” operate on data and produce a decision as “output”. This output may range from the relatively trivial to the highly significant: “this restaurant matches your preferences”, “the patient in this X-ray has completed bone growth”, “application to credit card declined”, “donor organ will be given to another patient”, “bail is denied”, or “target identified and engaged”. Data analysis is often used in “predictive analytics” in business, healthcare, and other fields to foresee future developments—and since prediction is getting easier, it will also become a cheaper commodity. One use of prediction is in “predictive policing” (NIJ 2014 [OIR]), which many fear might lead to an erosion of public liberties (Ferguson 2017) because it can take away power from the people whose behaviour is predicted. It appears, however, that many of the worries about policing depend on futuristic scenarios where law enforcement foresees and punishes planned actions, rather than waiting until a crime has been committed (as in the 2002 film “Minority Report”). One concern is that these systems might perpetuate bias that was already in the data used to set up the system, e.g., by increasing police patrols in an area and thus discovering more crime in that area. Actual “predictive policing” or “intelligence-led policing” techniques mainly concern the question of where and when police forces will be needed most. Also, police officers can be provided with more data, offering them more control and facilitating better decisions, in workflow support software (e.g., “ArcGIS”). Whether this is problematic depends on the appropriate level of trust in the technical quality of these systems, and on the evaluation of the aims of the police work itself. Perhaps a recent paper title points in the right direction here: “AI ethics in predictive policing: From models of threat to an ethics of care” (Asaro 2019).

Bias typically surfaces when unfair judgments are made because the individual making the judgment is influenced by a characteristic that is actually irrelevant to the matter at hand, typically a discriminatory preconception about members of a group. So, one form of bias is a learned cognitive feature of a person, often not made explicit. The person concerned may not be aware of having that bias—they may even be honestly and explicitly opposed to a bias they are found to have (e.g., through priming, cf. Graham and Lowery 2004). On fairness vs. bias in machine learning, see Binns (2018).

Apart from the social phenomenon of learned bias, the human cognitive system is generally prone to various kinds of “cognitive biases”, e.g., the “confirmation bias”: humans tend to interpret information as confirming what they already believe. This second form of bias is often said to impede performance in rational judgment (Kahneman 2011)—though at least some cognitive biases generate an evolutionary advantage, e.g., economical use of resources for intuitive judgment. There is a question whether AI systems could or should have such cognitive bias.

A third form of bias is present in data when it exhibits systematic error, e.g., “statistical bias”. Strictly, any given dataset will only be unbiased for a single kind of issue, so the mere creation of a dataset involves the danger that it may be used for a different kind of issue, and then turn out to be biased for that kind. Machine learning on the basis of such data would then not only fail to recognise the bias, but codify and automate the “historical bias”. Such historical bias was discovered in an automated recruitment screening system at Amazon (discontinued early 2017) that discriminated against women—presumably because the company had a history of discriminating against women in the hiring process. The “Correctional Offender Management Profiling for Alternative Sanctions” (COMPAS), a system to predict whether a defendant would re-offend, was found to be as successful (65.2% accuracy) as a group of random humans (Dressel and Farid 2018) and to produce more false positives and fewer false negatives for black defendants. The problem with such systems is thus bias plus humans placing excessive trust in the systems. The political dimensions of such automated systems in the USA are investigated in Eubanks (2018).
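
The disparity reported by Dressel and Farid is a matter of group-wise error rates, which are easy to compute once predictions and outcomes are tabulated. The sketch below uses invented toy numbers, not the COMPAS data:

```python
import numpy as np

def error_rates(y_true, y_pred):
    """False positive rate and false negative rate for binary labels."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    fpr = np.mean(y_pred[y_true == 0] == 1)   # flagged, but did not re-offend
    fnr = np.mean(y_pred[y_true == 1] == 0)   # missed an actual re-offender
    return fpr, fnr

# Hypothetical predictions for two groups with identical outcomes:
outcomes = [0, 0, 0, 1, 1, 1, 0, 1]
predictions = {
    "group A": [1, 1, 0, 1, 1, 0, 0, 1],
    "group B": [0, 0, 0, 1, 0, 1, 0, 1],
}
for name, y_pred in predictions.items():
    fpr, fnr = error_rates(outcomes, y_pred)
    print(f"{name}: FPR = {fpr:.2f}, FNR = {fnr:.2f}")
```

Note that similar overall accuracy across groups does not equalise these rates, which is one reason a single mathematical notion of fairness is hard to come by.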

There are significant technical efforts to detect and remove bias from AI systems, but it is fair to say that these are at an early stage: see the UK Institute for Ethical AI & Machine Learning (Brownsword, Scotford, and Yeung 2017; Yeung and Lodge 2019). It appears that technological fixes have their limits in that they need a mathematical notion of fairness, which is hard to come by (Whittaker et al. 2018: 24ff; Selbst et al. 2019), as is a formal notion of “race” (see Benthall and Haynes 2019). An institutional proposal can be found in Veale and Binns (2017).

Human-robot interaction (HRI) is an academic field in its own right, which now pays significant attention to ethical matters, the dynamics of perception from both sides, and both the different interests present in and the intricacy of the social context, including co-working (e.g., Arnold and Scheutz 2017). Useful surveys for the ethics of robotics include Calo, Froomkin, and Kerr (2016); Royakkers and van Est (2016); Tzafestas (2016); a standard collection of papers is Lin, Abney, and Jenkins (2017).

While AI can be used to manipulate humans into believing and doing things (see section 2.2), it can also be used to drive robots that are problematic if their processes or appearance involve deception, threaten human dignity, or violate the Kantian requirement of “respect for humanity”. Humans very easily attribute mental properties to objects, and empathise with them, especially when the outer appearance of these objects is similar to that of living beings. This can be used to deceive humans (or animals) into attributing more intellectual or even emotional significance to robots or AI systems than they deserve. Some parts of humanoid robotics are problematic in this regard (e.g., Hiroshi Ishiguro’s remote-controlled Geminoids), and there are cases that have been clearly deceptive for public-relations purposes (e.g., on the abilities of Hanson Robotics’ “Sophia”). Of course, some fairly basic constraints of business ethics and law apply to robots, too: product safety and liability, or non-deception in advertisement. It appears that these existing constraints take care of many concerns that are raised. There are cases, however, where human-human interaction has aspects that appear specifically human in ways that can perhaps not be replaced by robots: care, love, and sex.

2.5.1 Example (a) Care Robots

The use of robots in health care for humans is currently at the level of concept studies in real environments, but it may become a usable technology in a few years, and it has raised a number of concerns about a dystopian future of de-humanised care (A. Sharkey and N. Sharkey 2011; Robert Sparrow 2016). Current systems include robots that support human carers/caregivers (e.g., in lifting patients, or transporting material), robots that enable patients to do certain things by themselves (e.g., eat with a robotic arm), but also robots that are given to patients as company and comfort (e.g., the “Paro” robot seal). For an overview, see van Wynsberghe (2016); Nørskov (2017); Fosch-Villaronga and Albo-Canals (2019); for a survey of users, see Draper et al. (2014).

One reason why the issue of care has come to the fore is that people have argued that we will need robots in ageing societies. This argument makes problematic assumptions, namely that with longer lifespans people will need more care, and that it will not be possible to attract more humans to caring professions. It may also show a bias about age (Jecker forthcoming). Most importantly, it ignores the nature of automation, which is not simply about replacing humans, but about allowing humans to work more efficiently. It is not very clear that there really is an issue here, since the discussion mostly focuses on the fear of robots de-humanising care, whereas the actual and foreseeable robots in care are assistive robots for classic automation of technical tasks. They are thus “care robots” only in a behavioural sense of performing tasks in care environments, not in the sense that a human “cares” for the patients. It appears that the success of “being cared for” relies on this intentional sense of “care”, which foreseeable robots cannot provide. If anything, the risk of robots in care is the absence of such intentional care—because fewer human carers may be needed. Interestingly, caring for something, even a virtual agent, can be good for the carer themselves (Lee et al. 2019). A system that pretends to care would be deceptive and thus problematic—unless the deception is countered by a sufficiently large utility gain (Coeckelbergh 2016). Some robots that pretend to “care” on a basic level are available (Paro seal) and others are in the making. Perhaps feeling cared for by a machine, to some extent, is progress for some patients.

2.5.2 Example (b) Sex Robots

It has been argued by several tech optimists that humans will likely be interested in sex and companionship with robots and be comfortable with the idea (Levy 2007). Given the variation of human sexual preferences, including sex toys and sex dolls, this seems very likely: The question is whether such devices should be manufactured and promoted, and whether there should be limits in this touchy area. The topic seems to have moved into the mainstream of “robot philosophy” in recent times (Sullins 2012; Danaher and McArthur 2017; N. Sharkey et al. 2017 [OIR]; Bendel 2018; Devlin 2018).

Humans have long had deep emotional attachments to objects, so perhaps companionship or even love with a predictable android is attractive, especially to people who struggle with actual humans, and already prefer dogs, cats, birds, a computer, or a Tamagotchi. Danaher (2019b) argues, against Nyholm and Frank (2017), that these can be true friendships and are thus a valuable goal. It certainly looks like such friendship might increase overall utility, even if lacking in depth. In these discussions there is an issue of deception, since a robot cannot (at present) mean what it says, or have feelings for a human. It is well known that humans are prone to attribute feelings and thoughts to entities that behave as if they had sentience, even to clearly inanimate objects that show no behaviour at all. Also, paying for deception seems to be an elementary part of the traditional sex industry.

Finally, there are concerns that have often accompanied matters of sex, namely consent (Frank and Nyholm 2017), aesthetic concerns, and the worry that humans may be “corrupted” by certain experiences. Old-fashioned though this may seem, human behaviour is influenced by experience, and it is likely that pornography or sex robots support the perception of other humans as mere objects of desire, or even recipients of abuse, and thus ruin a deeper sexual and erotic experience. In this vein, the “Campaign Against Sex Robots” argues that these devices are a continuation of slavery and prostitution (Richardson 2016).

It seems clear that AI and robotics will lead to significant gains in productivity and thus overall wealth. The attempt to increase productivity has often been a feature of the economy, though the emphasis on “growth” is a modern phenomenon (Harari 2016: 240). However, productivity gains through automation typically mean that fewer humans are required for the same output. This does not necessarily imply a loss of overall employment, however, because available wealth increases and that can increase demand sufficiently to counteract the productivity gain. In the long run, higher productivity in industrial societies has led to more wealth overall. Major labour market disruptions have occurred in the past, e.g., farming employed over 60% of the workforce in Europe and North America in 1800, while by 2010 it employed ca. 5% in the EU, and even less in the wealthiest countries (European Commission 2013). In the 20 years between 1950 and 1970, the number of hired agricultural workers in the UK was reduced by 50% (Zayed and Loft 2019). Some of these disruptions led to more labour-intensive industries moving to places with lower labour costs. This is an ongoing process.

Classic automation replaced human muscle, whereas digital automation replaces human thought or information-processing—and unlike physical machines, digital automation is very cheap to duplicate (Bostrom and Yudkowsky 2014). It may thus mean a more radical change on the labour market. So, the main question is: will the effects be different this time? Will the creation of new jobs and wealth keep up with the destruction of jobs? And even if it is not different, what are the transition costs, and who bears them? Do we need to make societal adjustments for a fair distribution of costs and benefits of digital automation?

Responses to the issue of unemployment from AI have ranged from the alarmed (Frey and Osborne 2013; Westlake 2014) to the neutral (Metcalf, Keller, and Boyd 2016 [OIR]; Calo 2018; Frey 2019) to the optimistic (Brynjolfsson and McAfee 2016; Harari 2016; Danaher 2019a). In principle, the labour market effect of automation seems to be fairly well understood as involving two channels:

(i) the nature of interactions between differently skilled workers and new technologies affecting labour demand and (ii) the equilibrium effects of technological progress through consequent changes in labour supply and product markets. (Goos 2018: 362)

What currently seems to happen in the labour market as a result of AI and robotics automation is “job polarisation” or the “dumbbell” shape (Goos, Manning, and Salomons 2009): The highly skilled technical jobs are in demand and highly paid, the low skilled service jobs are in demand and badly paid, but the mid-qualification jobs in factories and offices, i.e., the majority of jobs, are under pressure and reduced because they are relatively predictable, and most likely to be automated (Baldwin 2019).

Perhaps enormous productivity gains will allow the “age of leisure” to be realised, something Keynes (1930) had predicted to occur around 2030, assuming a growth rate of 1% per annum. Actually, we have already reached the level he anticipated for 2030, but we are still working—consuming more and inventing ever more levels of organisation. Harari explains how this economic development allowed humanity to overcome hunger, disease, and war—and now we aim for immortality and eternal bliss through AI, thus his title Homo Deus (Harari 2016: 75).

In general terms, the issue of unemployment is an issue of how goods in a society should be justly distributed. A standard view is that distributive justice should be rationally decided from behind a “veil of ignorance” (Rawls 1971), i.e., as if one does not know what position in a society one would actually be taking (labourer or industrialist, etc.). Rawls thought the chosen principles would then support basic liberties and a distribution that is of greatest benefit to the least-advantaged members of society. It would appear that the AI economy has three features that make such justice unlikely: First, it operates in a largely unregulated environment where responsibility is often hard to allocate. Second, it operates in markets that have a “winner takes all” feature where monopolies develop quickly. Third, the “new economy” of the digital service industries is based on intangible assets, also called “capitalism without capital” (Haskel and Westlake 2017). This means that it is difficult to control multinational digital corporations that do not rely on a physical plant in a particular location. These three features suggest that if we leave the distribution of wealth to free market forces, the result will be a heavily unjust distribution, and this is indeed a development that we can already see.

One interesting question that has not received too much attention is whether the development of AI is environmentally sustainable: Like all computing systems, AI systems produce waste that is very hard to recycle and they consume vast amounts of energy, especially for the training of machine learning systems (and even for the “mining” of cryptocurrency). Again, it appears that some actors in this space offload such costs to the general society.

There are several notions of autonomy in the discussion of autonomous systems. A stronger notion is involved in philosophical debates where autonomy is the basis for responsibility and personhood (Christman 2003 [2018]). In this context, responsibility implies autonomy, but not inversely, so there can be systems that have degrees of technical autonomy without raising issues of responsibility. The weaker, more technical, notion of autonomy in robotics is relative and gradual: A system is said to be autonomous with respect to human control to a certain degree (Müller 2012). There is a parallel here to the issues of bias and opacity in AI since autonomy also concerns a power-relation: who is in control, and who is responsible?

Generally speaking, one question is whether autonomous robots raise issues to which our present conceptual schemes must adapt, or whether they just require technical adjustments. In most jurisdictions, there is a sophisticated system of civil and criminal liability to resolve such issues. Technical standards, e.g., for the safe use of machinery in medical environments, will likely need to be adjusted. There is already a field of “verifiable AI” for such safety-critical systems and for “security applications”. Bodies like the IEEE (The Institute of Electrical and Electronics Engineers) and the BSI (British Standards Institution) have produced “standards”, particularly on more technical sub-problems, such as data security and transparency. Among the many autonomous systems on land, on water, under water, in the air, or in space, we discuss two samples: autonomous vehicles and autonomous weapons.

2.7.1 Example (a) Autonomous Vehicles

Autonomous vehicles hold the promise of reducing the very significant damage that human driving currently causes—approximately 1 million humans killed per year, many more injured, the environment polluted, earth sealed with concrete and tarmac, cities full of parked cars, etc. However, there are open questions about how autonomous vehicles should behave and how responsibility and risk should be distributed in the complicated system the vehicles operate in. (There is also significant disagreement over how long the development of fully autonomous, or “level 5”, cars (SAE International 2018) will actually take.)

There is some discussion of “trolley problems” in this context. In the classic “trolley problems” (Thomson 1976; Woollard and Howard-Snyder 2016: section 2) various dilemmas are presented. The simplest version is that of a trolley train on a track that is heading towards five people and will kill them, unless the train is diverted onto a side track, but on that track there is one person, who will be killed if the train takes that side track. The example goes back to a remark in Foot (1967: 6), who discusses a number of dilemma cases where tolerated and intended consequences of an action differ. “Trolley problems” are not supposed to describe actual ethical problems or to be solved with a “right” choice. Rather, they are thought-experiments where choice is artificially constrained to a small finite number of distinct one-off options and where the agent has perfect knowledge. These problems are used as a theoretical tool to investigate ethical intuitions and theories—especially the difference between actively doing vs. allowing something to happen, intended vs. tolerated consequences, and consequentialist vs. other normative approaches (Kamm 2016). This type of problem has reminded many of the problems encountered in actual driving and in autonomous driving (Lin 2016). It is doubtful, however, that an actual driver or autonomous car will ever have to solve trolley problems (but see Keeling 2020). While autonomous car trolley problems have received a lot of media attention (Awad et al. 2018), they do not seem to offer anything new to either ethical theory or the programming of autonomous vehicles.

The more common ethical problems in driving, such as speeding, risky overtaking, or not keeping a safe distance, are classic problems of pursuing personal interest vs. the common good. The vast majority of these are covered by legal regulations on driving. Programming the car to drive “by the rules” rather than “by the interest of the passengers” or “to achieve maximum utility” is thus deflated to a standard problem of programming ethical machines (see section 2.9), as the sketch below illustrates. There are probably additional discretionary rules of politeness and interesting questions on when to break the rules (Lin 2016), but again this seems to be more a case of applying standard considerations (rules vs. utility) to the case of autonomous vehicles.
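
A minimal sketch of the “by the rules” vs. “maximum utility” contrast, with invented actions and utility values: the rule-based policy filters out illegal options before optimising, while the pure utility policy does not.

```python
# Each (hypothetical) action maps to (is_legal, expected_utility).
ACTIONS = {
    "keep_safe_distance": (True, 0.6),
    "risky_overtake": (False, 0.8),
    "speed_up": (False, 0.7),
}

def rule_based_choice(actions):
    """Drive 'by the rules': consider only legal actions."""
    legal = {a: u for a, (ok, u) in actions.items() if ok}
    return max(legal, key=legal.get)

def utility_choice(actions):
    """Drive 'for maximum utility': ignore legality."""
    return max(actions, key=lambda a: actions[a][1])

print(rule_based_choice(ACTIONS))  # keep_safe_distance
print(utility_choice(ACTIONS))     # risky_overtake
```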

Notable policy efforts in this field include the report (German Federal Ministry of Transport and Digital Infrastructure 2017), which stresses that safety is the primary objective. Rule 10 states

In the case of automated and connected driving systems, the accountability that was previously the sole preserve of the individual shifts from the motorist to the manufacturers and operators of the technological systems and to the bodies responsible for taking infrastructure, policy and legal decisions.

(See section 2.10.1 below). The resulting German and EU laws on licensing automated driving are much more restrictive than their US counterparts where “testing on consumers” is a strategy used by some companies—without informed consent of the consumers or their possible victims.

2.7.2 Example (b) Autonomous Weapons

The notion of automated weapons is fairly old:

For example, instead of fielding simple guided missiles or remotely piloted vehicles, we might launch completely autonomous land, sea, and air vehicles capable of complex, far-ranging reconnaissance and attack missions. (DARPA 1983: 1)

This proposal was ridiculed as “fantasy” at the time (Dreyfus, Dreyfus, and Athanasiou 1986: ix), but it is now a reality, at least for more easily identifiable targets (missiles, planes, ships, tanks, etc.), though not for human combatants. The main arguments against (lethal) autonomous weapon systems (AWS or LAWS) are that they support extrajudicial killings, take responsibility away from humans, and make wars or killings more likely—for a detailed list of issues, see Lin, Bekey, and Abney (2008: 73–86).

It appears that lowering the hurdle to use such systems (autonomous vehicles, “fire-and-forget” missiles, or drones loaded with explosives) and reducing the probability of being held accountable would increase the probability of their use. The crucial asymmetry where one side can kill with impunity, and thus has few reasons not to do so, already exists in conventional drone wars with remote controlled weapons (e.g., US in Pakistan). It is easy to imagine a small drone that searches, identifies, and kills an individual human—or perhaps a type of human. These are the kinds of cases brought forward by the Campaign to Stop Killer Robots and other activist groups. Some seem to be equivalent to saying that autonomous weapons are indeed weapons …, and weapons kill, but we still make them in gigantic numbers. On the matter of accountability, autonomous weapons might make identification and prosecution of the responsible agents more difficult—but this is not clear, given the digital records that one can keep, at least in a conventional war. The difficulty of allocating punishment is sometimes called the “retribution gap” (Danaher 2016a).

Another question is whether using autonomous weapons in war would make wars worse, or make wars less bad. If robots reduce war crimes and crimes in war, the answer may well be positive and has been used as an argument in favour of these weapons (Arkin 2009; Müller 2016a) but also as an argument against them (Amoroso and Tamburrini 2018). Arguably the main threat is not the use of such weapons in conventional warfare, but in asymmetric conflicts or by non-state agents, including criminals.

It has also been said that autonomous weapons cannot conform to International Humanitarian Law, which requires observance of the principles of distinction (between combatants and civilians), proportionality (of force), and military necessity (of force) in military conflict (A. Sharkey 2019). It is true that the distinction between combatants and non-combatants is hard, but the distinction between civilian and military ships is easy—so all this says is that we should not construct and use such weapons if they do violate Humanitarian Law. Additional concerns have been raised that being killed by an autonomous weapon threatens human dignity, but even the defenders of a ban on these weapons seem to say that these are not good arguments:

There are other weapons, and other technologies, that also compromise human dignity. Given this, and the ambiguities inherent in the concept, it is wiser to draw on several types of objections in arguments against AWS, and not to rely exclusively on human dignity. (A. Sharkey 2019)

A lot has been made of keeping humans “in the loop” or “on the loop” in the military guidance on weapons—these ways of spelling out “meaningful control” are discussed in Santoni de Sio and van den Hoven (2018). There have been discussions about the difficulties of allocating responsibility for the killings of an autonomous weapon, and a “responsibility gap” has been suggested (esp. Rob Sparrow 2007), meaning that neither the human nor the machine may be responsible. On the other hand, we do not assume that for every event there is someone responsible for that event, and the real issue may well be the distribution of risk (Simpson and Müller 2016). Risk analysis (Hansson 2013) indicates that it is crucial to identify who is exposed to risk, who is a potential beneficiary, and who makes the decisions (Hansson 2018: 1822–1824).

Machine ethics is ethics for machines, for “ethical machines”, for machines as subjects, rather than for the human use of machines as objects. It is often not very clear whether this is supposed to cover all of AI ethics or to be a part of it (Floridi and Sanders 2004; Moor 2006; Anderson and Anderson 2011; Wallach and Asaro 2017). Sometimes it looks as though there is the (dubious) inference at play here that if machines act in ethically relevant ways, then we need a machine ethics. Accordingly, some use a broader notion:

machine ethics is concerned with ensuring that the behavior of machines toward human users, and perhaps other machines as well, is ethically acceptable. (Anderson and Anderson 2007: 15)

This might include mere matters of product safety, for example. Other authors sound rather ambitious but use a narrower notion:

AI reasoning should be able to take into account societal values, moral and ethical considerations; weigh the respective priorities of values held by different stakeholders in various multicultural contexts; explain its reasoning; and guarantee transparency. (Dignum 2018: 1, 2)

Some of the discussion in machine ethics makes the very substantial assumption that machines can, in some sense, be ethical agents responsible for their actions, or “autonomous moral agents” (see van Wynsberghe and Robbins 2019). The basic idea of machine ethics is now finding its way into actual robotics where the assumption that these machines are artificial moral agents in any substantial sense is usually not made (Winfield et al. 2019). It is sometimes observed that a robot that is programmed to follow ethical rules can very easily be modified to follow unethical rules (Vanderelst and Winfield 2018).

The idea that machine ethics might take the form of “laws” has famously been investigated by Isaac Asimov, who proposed “three laws of robotics” (Asimov 1942):

First Law—A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Second Law—A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
Third Law—A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Asimov then showed in a number of stories how conflicts between these three laws will make it problematic to use them despite their hierarchical organisation.
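
One toy way to render the hierarchical organisation is lexicographic comparison: rank candidate actions on the First Law first, then the Second, then the Third. The sketch below uses invented boolean flags; Asimov's conflicts arise exactly where such crisp flags are unavailable or tied.

```python
def asimov_key(action):
    """Lexicographic ordering: First Law, then Second, then Third.
    In Python, True sorts above False within a tuple comparison."""
    return (not action["harms_human"],
            action["obeys_order"],
            action["preserves_self"])

options = [
    {"name": "obey_and_risk_self", "harms_human": False,
     "obeys_order": True, "preserves_self": False},
    {"name": "refuse_and_stay_safe", "harms_human": False,
     "obeys_order": False, "preserves_self": True},
]
best = max(options, key=asimov_key)
print(best["name"])  # obey_and_risk_self: the Second Law outranks the Third
```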

It is not clear that there is a consistent notion of “machine ethics” since weaker versions are in danger of reducing “having an ethics” to notions that would not normally be considered sufficient (e.g., without “reflection” or even without “action”); stronger notions that move towards artificial moral agents may describe a—currently—empty set.

If one takes machine ethics to concern moral agents, in some substantial sense, then these agents can be called “artificial moral agents”, having rights and responsibilities. However, the discussion about artificial entities challenges a number of common notions in ethics and it can be very useful to understand these in abstraction from the human case (cf. Misselhorn 2020; Powers and Ganascia forthcoming).

Several authors use “artificial moral agent” in a less demanding sense, borrowing from the use of “agent” in software engineering in which case matters of responsibility and rights will not arise (Allen, Varner, and Zinser 2000). James Moor (2006) distinguishes four types of machine agents: ethical impact agents (e.g., robot jockeys), implicit ethical agents (e.g., safe autopilot), explicit ethical agents (e.g., using formal methods to estimate utility), and full ethical agents (who “can make explicit ethical judgments and generally is competent to reasonably justify them. An average adult human is a full ethical agent”.) Several ways to achieve “explicit” or “full” ethical agents have been proposed, via programming it in (operational morality), via “developing” the ethics itself (functional morality), and finally full-blown morality with full intelligence and sentience (Allen, Smit, and Wallach 2005; Moor 2006). Programmed agents are sometimes not considered “full” agents because they are “competent without comprehension”, just like the neurons in a brain (Dennett 2017; Hakli and Mäkelä 2019).

In some discussions, the notion of “moral patient” plays a role: Ethical agents have responsibilities while ethical patients have rights because harm to them matters. It seems clear that some entities are patients without being agents, e.g., simple animals that can feel pain but cannot make justified choices. On the other hand, it is normally understood that all agents will also be patients (e.g., in a Kantian framework). Usually, being a person is supposed to be what makes an entity a responsible agent, someone who can have duties and be the object of ethical concerns. Such personhood is typically a deep notion associated with phenomenal consciousness, intention, and free will (Frankfurt 1971; Strawson 1998). Torrance (2011) suggests “artificial (or machine) ethics could be defined as designing machines that do things that, when done by humans, are indicative of the possession of ‘ethical status’ in those humans” (2011: 116)—which he takes to be “ethical productivity and ethical receptivity” (2011: 117)—his expressions for moral agents and patients.

2.9.1 Responsibility for Robots

There is broad consensus that accountability, liability, and the rule of law are basic requirements that must be upheld in the face of new technologies (European Group on Ethics in Science and New Technologies 2018: 18), but the issue in the case of robots is how this can be done and how responsibility can be allocated. If the robots act, will they themselves be responsible, liable, or accountable for their actions? Or should the distribution of risk perhaps take precedence over discussions of responsibility?

Traditional distribution of responsibility already occurs: A car maker is responsible for the technical safety of the car, a driver is responsible for driving, a mechanic is responsible for proper maintenance, the public authorities are responsible for the technical conditions of the roads, etc. In general

The effects of decisions or actions based on AI are often the result of countless interactions among many actors, including designers, developers, users, software, and hardware.… With distributed agency comes distributed responsibility. (Taddeo and Floridi 2018: 751).

How this distribution might occur is not a problem that is specific to AI, but it gains particular urgency in this context (Nyholm 2018a, 2018b). In classical control engineering, distributed control is often achieved through a control hierarchy plus control loops across these hierarchies.
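
As a minimal illustration of such a hierarchy (all names and numbers are hypothetical): a high-level supervisor answers for choosing the goal, while a low-level proportional loop answers only for tracking it.

```python
def supervisor(t):
    """High-level policy: the desired temperature over time."""
    return 20.0 if t < 50 else 22.0

def low_level_step(temp, setpoint, k_p=0.1):
    """Low-level loop: proportional control towards the setpoint."""
    return temp + k_p * (setpoint - temp)

temp = 15.0
for t in range(100):
    temp = low_level_step(temp, supervisor(t))
print(round(temp, 2))  # converges near the supervisor's final setpoint
```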

2.9.2 Rights for Robots

Some authors have indicated that it should be seriously considered whether current robots must be allocated rights (Gunkel 2018a, 2018b; Danaher forthcoming; Turner 2019). This position seems to rely largely on criticism of the opponents and on the empirical observation that robots and other non-persons are sometimes treated as having rights. In this vein, a “relational turn” has been proposed: If we relate to robots as though they had rights, then we might be well-advised not to search whether they “really” do have such rights (Coeckelbergh 2010, 2012, 2018). This raises the question how far such anti-realism or quasi-realism can go, and what it means then to say that “robots have rights” in a human-centred approach (Gerdes 2016). On the other side of the debate, Bryson has insisted that robots should not enjoy rights (Bryson 2010), though she considers it a possibility (Gunkel and Bryson 2014).

There is a wholly separate issue whether robots (or other AI systems) should be given the status of “legal entities” or “legal persons”, in the sense in which natural persons, but also states, businesses, or organisations, are “entities”: namely, they can have legal rights and duties. The European Parliament has considered allocating such status to robots in order to deal with civil liability (EU Parliament 2016; Bertolini and Aiello 2018), but not criminal liability—which is reserved for natural persons. It would also be possible to assign only a certain subset of rights and duties to robots. It has been said that “such legislative action would be morally unnecessary and legally troublesome” because it would not serve the interest of humans (Bryson, Diamantis, and Grant 2017: 273). In environmental ethics there is a long-standing discussion about legal rights for natural objects like trees (C. D. Stone 1972).

It has also been said that the reasons for developing robots with rights, or artificial moral patients, in the future are ethically doubtful (van Wynsberghe and Robbins 2019). In the community of “artificial consciousness” researchers there is a significant concern whether it would be ethical to create such consciousness since creating it would presumably imply ethical obligations to a sentient being, e.g., not to harm it and not to end its existence by switching it off—some authors have called for a “moratorium on synthetic phenomenology” (Bentley et al. 2018: 28f).

2.10.1 Singularity and Superintelligence

In some quarters, the aim of current AI is thought to be an “artificial general intelligence” (AGI), contrasted to a technical or “narrow” AI. AGI is usually distinguished from traditional notions of AI as a general purpose system, and from Searle’s notion of “strong AI”:

computers given the right programs can be literally said to understand and have other cognitive states. (Searle 1980: 417)

The idea of singularity is that if the trajectory of artificial intelligence reaches up to systems that have a human level of intelligence, then these systems would themselves have the ability to develop AI systems that surpass the human level of intelligence, i.e., they are “superintelligent” (see below). Such superintelligent AI systems would quickly self-improve or develop even more intelligent systems. This sharp turn of events after reaching superintelligent AI is the “singularity” from which the development of AI is out of human control and hard to predict (Kurzweil 2005: 487).

The fear that “the robots we created will take over the world” had captured human imagination even before there were computers (e.g., Butler 1863) and is the central theme in Čapek’s famous play that introduced the word “robot” (Čapek 1920). This fear was first formulated as a possible trajectory of existing AI into an “intelligence explosion” by Irving Good:

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion”, and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control. (Good 1965: 33)

The optimistic argument from acceleration to singularity is spelled out by Kurzweil (1999, 2005, 2012), who essentially points out that computing power has been increasing exponentially, i.e., doubling ca. every 2 years since 1970 in accordance with “Moore’s Law” on the number of transistors, and will continue to do so for some time in the future. He predicted (Kurzweil 1999) that by 2010 supercomputers would reach human computation capacity, that by 2030 “mind uploading” would be possible, and that by 2045 the “singularity” would occur. Kurzweil talks about an increase in computing power that can be purchased at a given cost—but of course in recent years the funds available to AI companies have also increased enormously: Amodei and Hernandez (2018 [OIR]) thus estimate that in the years 2012–2018 the actual computing power available to train a particular AI system doubled every 3.4 months, resulting in a 300,000x increase—not the 7x increase that doubling every two years would have created.

A common version of this argument (Chalmers 2010) talks about an increase in “intelligence” of the AI system (rather than raw computing power), but the crucial point of “singularity” remains the one where further development of AI is taken over by AI systems and accelerates beyond human level. Bostrom (2014) explains in some detail what would happen at that point and what the risks for humanity are. The discussion is summarised in Eden et al. (2012); Armstrong (2014); Shanahan (2015). There are possible paths to superintelligence other than computing power increase, e.g., the complete emulation of the human brain on a computer (Kurzweil 2012; Sandberg 2013), biological paths, or networks and organisations (Bostrom 2014: 22–51).

Despite obvious weaknesses in the identification of “intelligence” with processing power, Kurzweil seems right that humans tend to underestimate the power of exponential growth. Mini-test: If you walked in steps in such a way that each step is double the previous, starting with a step of one metre, how far would you get with 30 steps? (Answer: almost three times the distance to the Moon.) Indeed, most progress in AI is readily attributable to the availability of processors that are faster by orders of magnitude, larger storage, and higher investment (Müller 2018). The actual acceleration and its speeds are discussed in Müller and Bostrom (2016) and Bostrom, Dafoe, and Flynn (forthcoming); Sandberg (2019) argues that progress will continue for some time.
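
The arithmetic behind the mini-test, assuming a mean Earth-Moon distance of roughly 384,400 km:

```python
total_m = sum(2 ** k for k in range(30))   # 1 + 2 + 4 + ... = 2**30 - 1 metres
total_km = total_m / 1000                  # ~1.07 million km walked
moon_km = 384_400                          # mean Earth-Moon distance (approx.)
print(f"{total_km:,.0f} km, {total_km / moon_km:.1f}x the lunar distance")
```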

The participants in this debate are united by being technophiles in the sense that they expect technology to develop rapidly and bring broadly welcome changes—but beyond that, they divide into those who focus on benefits (e.g., Kurzweil) and those who focus on risks (e.g., Bostrom). Both camps sympathise with “transhuman” views of survival for humankind in a different physical form, e.g., uploaded on a computer (Moravec 1990, 1998; Bostrom 2003a, 2003c). They also consider the prospects of “human enhancement” in various respects, including intelligence—often called “IA” (intelligence augmentation). It may be that future AI will be used for human enhancement, or will contribute further to the dissolution of the neatly defined human single person. Robin Hanson provides detailed speculation about what will happen economically in case human “brain emulation” enables truly intelligent robots or “ems” (Hanson 2016).

The argument from superintelligence to risk requires the assumption that superintelligence does not imply benevolence—contrary to Kantian traditions in ethics that have argued higher levels of rationality or intelligence would go along with a better understanding of what is moral and better ability to act morally (Gewirth 1978; Chalmers 2010: 36f). Arguments for risk from superintelligence say that rationality and morality are entirely independent dimensions—this is sometimes explicitly argued for as an “orthogonality thesis” (Bostrom 2012; Armstrong 2013; Bostrom 2014: 105–109).

Criticism of the singularity narrative has been raised from various angles. Kurzweil and Bostrom seem to assume that intelligence is a one-dimensional property and that the set of intelligent agents is totally ordered in the mathematical sense—but neither discusses intelligence at any length in their books. Generally, it is fair to say that despite some efforts, the assumptions made in the powerful narrative of superintelligence and singularity have not been investigated in detail. One question is whether such a singularity will ever occur—it may be conceptually impossible, practically impossible, or may just not happen because of contingent events, including people actively preventing it. Philosophically, the interesting question is whether singularity is just a “myth” (Floridi 2016; Ganascia 2017), and not on the trajectory of actual AI research. This is something that practitioners often assume (e.g., Brooks 2017 [OIR]). They may do so because they fear the public relations backlash, because they overestimate the practical problems, or because they have good reasons to think that superintelligence is an unlikely outcome of current AI research (Müller forthcoming-a). This discussion raises the question whether the concern about “singularity” is just a narrative about fictional AI based on human fears. But even if one does find negative reasons compelling and the singularity not likely to occur, there is still a significant possibility that one may turn out to be wrong. Philosophy is not on the “secure path of a science” (Kant 1791: B15), and maybe AI and robotics aren’t either (Müller 2020). So, it appears that discussing the very high-impact risk of singularity has justification even if one thinks the probability of such singularity ever occurring is very low.

2.10.2 Existential Risk from Superintelligence

Thinking about superintelligence in the long term raises the question whether superintelligence may lead to the extinction of the human species, which is called an “existential risk” (or XRisk): The superintelligent systems may well have preferences that conflict with the existence of humans on Earth, and may thus decide to end that existence—and given their superior intelligence, they will have the power to do so (or they may happen to end it because they do not really care).

Thinking in the long term is the crucial feature of this literature. Whether the singularity (or another catastrophic event) occurs in 30 or 300 or 3000 years does not really matter (Baum et al. 2019). Perhaps there is even an astronomical pattern such that an intelligent species is bound to discover AI at some point, and thus bring about its own demise. Such a “great filter” would contribute to the explanation of the “Fermi paradox” of why there is no sign of life in the known universe despite the high probability of its emerging. It would be bad news if we found out that the “great filter” is ahead of us, rather than an obstacle that Earth has already passed. These issues are sometimes taken more narrowly to be about human extinction (Bostrom 2013), or more broadly as concerning any large risk for the species (Rees 2018)—of which AI is only one (Häggström 2016; Ord 2020). Bostrom also uses the category of “global catastrophic risk” for risks that are sufficiently high up the two dimensions of “scope” and “severity” (Bostrom and Ćirković 2011; Bostrom 2013).

These discussions of risk are usually not connected to the general problem of ethics under risk (e.g., Hansson 2013, 2018). The long-term view has its own methodological challenges but has produced a wide discussion: Tegmark (2017) focuses on AI and human life “3.0” after singularity, while Russell, Dewey, and Tegmark (2015) and Bostrom, Dafoe, and Flynn (forthcoming) survey longer-term policy issues in ethical AI. Several collections of papers have investigated the risks of artificial general intelligence (AGI) and the factors that might make this development more or less risk-laden (Müller 2016b; Callaghan et al. 2017; Yampolskiy 2018), including the development of non-agent AI (Drexler 2019).

2.10.3 Controlling Superintelligence?

In a narrow sense, the “control problem” is how we humans can remain in control of an AI system once it is superintelligent (Bostrom 2014: 127ff). In a wider sense, it is the problem of how we can make sure an AI system will turn out to be positive according to human perception (Russell 2019); this is sometimes called “value alignment”. How easy or hard it is to control a superintelligence depends significantly on the speed of “take-off” to a superintelligent system. This has led to particular attention to systems with self-improvement, such as AlphaZero (Silver et al. 2018).
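
To make the relevance of take-off speed concrete, here is a toy numerical sketch in Python. It contrasts capability added at a fixed rate by outside engineers with capability that is reinvested into self-improvement; all rates and cycle counts are illustrative assumptions, not a model taken from the cited literature.

    # Toy contrast between externally driven and self-improving growth.
    # Rates and cycle counts are arbitrary illustrative assumptions.

    def external_growth(capability, increment=1.0):
        """Engineers add a fixed amount of capability per cycle."""
        return capability + increment

    def self_improvement(capability, reinvest_rate=0.1):
        """The system improves itself in proportion to what it already has."""
        return capability * (1.0 + reinvest_rate)

    slow, fast = 1.0, 1.0
    for cycle in range(100):
        slow = external_growth(slow)
        fast = self_improvement(fast)

    # After 100 cycles: ~101 vs. ~13,781. Under self-improvement, the
    # window in which humans could still exercise control closes quickly.
    print(round(slow), round(fast))

On this crude picture, a fast take-off leaves little time to detect and correct misaligned behaviour, which is one reason self-improving systems receive particular attention.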

One aspect of this problem is that we might decide a certain feature is desirable, but then find out that it has unforeseen consequences so negative that we would not desire that feature after all. This is the ancient problem of King Midas, who wished that all he touched would turn into gold. The problem has been discussed through various examples, such as the “paperclip maximiser” (Bostrom 2003b) or a program that optimises chess performance (Omohundro 2014).
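
The King Midas structure of the problem can be shown with a deliberately crude Python sketch: an optimiser that pursues its stated objective literally and sacrifices everything the objective fails to mention. The resource names and numbers below are invented for illustration; this is not an implementation from Bostrom (2003b) or Omohundro (2014).

    # A literal-minded optimiser: nothing outside the stated objective
    # carries any weight, so everything else gets converted into it.

    world = {"paperclips": 0, "iron_for_tools": 100, "iron_for_food_cans": 100}

    def stated_objective(state):
        # The designers wrote down only part of what they actually valued.
        return state["paperclips"]

    def optimise(state):
        new_state = dict(state)
        for resource in ("iron_for_tools", "iron_for_food_cans"):
            # Every unit of iron becomes a paperclip, because the objective
            # is silent about tools and food cans.
            new_state["paperclips"] += new_state[resource]
            new_state[resource] = 0
        return new_state

    after = optimise(world)
    print(stated_objective(after))  # 200: the objective is maximised
    print(after)  # the unforeseen cost: no iron left for anything else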

Discussions about superintelligence include speculation about omniscient beings, the radical changes on a “latter day”, and the promise of immortality through transcendence of our current bodily form—so sometimes they have clear religious undertones (Capurro 1993; Geraci 2008, 2010; O’Connell 2017: 160ff). These issues also pose a well-known problem of epistemology: Can we know the ways of the omniscient (Danaher 2015)? The usual opponents have already shown up: A characteristic response of an atheist is

People worry that computers will get too smart and take over the world, but the real problem is that they’re too stupid and they’ve already taken over the world. (Domingos 2015)

The new nihilists explain that a “techno-hypnosis” through information technologies has now become our main method of distraction from the loss of meaning (Gertz 2018). Both opponents would thus say we need an ethics for the “small” problems that occur with actual AI and robotics (sections 2.1 through 2.9 above), and that there is less need for the “big ethics” of existential risk from AI (section 2.10).

The singularity thus raises the problem of the concept of AI again. It is remarkable how imagination or “vision” has played a central role since the very beginning of the discipline at the “Dartmouth Summer Research Project” (McCarthy et al. 1955 [OIR]; Simon and Newell 1958). And the evaluation of this vision is subject to dramatic change: in a few decades, we went from the slogans “AI is impossible” (Dreyfus 1972) and “AI is just automation” (Lighthill 1973) to “AI will solve all problems” (Kurzweil 1999) and “AI may kill us all” (Bostrom 2014). This created media attention and public relations efforts, but it also raises the problem of how much of this “philosophy and ethics of AI” is really about AI rather than about an imagined technology. As we said at the outset, AI and robotics have raised fundamental questions about what we should do with these systems, what the systems themselves should do, and what risks they have in the long term. They also challenge the human view of humanity as the intelligent and dominant species on Earth. We have seen issues that have been raised and will have to watch technological and social developments closely to catch the new issues early on, develop a philosophical analysis, and learn from traditional problems of philosophy.

NOTE: Citations in the main text annotated “[OIR]” may be found in the Other Internet Resources section below, not in the Bibliography.

  • Abowd, John M., 2017, “How Will Statistical Agencies Operate When All Data Are Private?”, Journal of Privacy and Confidentiality, 7(3): 1–15. doi:10.29012/jpc.v7i3.404
  • AI4EU, 2019, “Outcomes from the Strategic Orientation Workshop (Deliverable 7.1)”, (June 28, 2019). https://www.ai4eu.eu/ai4eu-project-deliverables
  • Allen, Colin, Iva Smit, and Wendell Wallach, 2005, “Artificial Morality: Top-down, Bottom-up, and Hybrid Approaches”, Ethics and Information Technology , 7(3): 149–155. doi:10.1007/s10676-006-0004-4
  • Allen, Colin, Gary Varner, and Jason Zinser, 2000, “Prolegomena to Any Future Artificial Moral Agent”, Journal of Experimental & Theoretical Artificial Intelligence , 12(3): 251–261. doi:10.1080/09528130050111428
  • Amoroso, Daniele and Guglielmo Tamburrini, 2018, “The Ethical and Legal Case Against Autonomy in Weapons Systems”, Global Jurist , 18(1): art. 20170012. doi:10.1515/gj-2017-0012
  • Anderson, Janna, Lee Rainie, and Alex Luchsinger, 2018, Artificial Intelligence and the Future of Humans , Washington, DC: Pew Research Center.
  • Anderson, Michael and Susan Leigh Anderson, 2007, “Machine Ethics: Creating an Ethical Intelligent Agent”, AI Magazine , 28(4): 15–26.
  • ––– (eds.), 2011, Machine Ethics , Cambridge: Cambridge University Press. doi:10.1017/CBO9780511978036
  • Aneesh, A., 2006, Virtual Migration: The Programming of Globalization , Durham, NC and London: Duke University Press.
  • Arkin, Ronald C., 2009, Governing Lethal Behavior in Autonomous Robots , Boca Raton, FL: CRC Press.
  • Armstrong, Stuart, 2013, “General Purpose Intelligence: Arguing the Orthogonality Thesis”, Analysis and Metaphysics , 12: 68–84.
  • –––, 2014, Smarter Than Us , Berkeley, CA: MIRI.
  • Arnold, Thomas and Matthias Scheutz, 2017, “Beyond Moral Dilemmas: Exploring the Ethical Landscape in HRI”, in Proceedings of the 2017 ACM/IEEE International Conference on Human-Robot Interaction—HRI ’17 , Vienna, Austria: ACM Press, 445–452. doi:10.1145/2909824.3020255
  • Asaro, Peter M., 2019, “AI Ethics in Predictive Policing: From Models of Threat to an Ethics of Care”, IEEE Technology and Society Magazine , 38(2): 40–53. doi:10.1109/MTS.2019.2915154
  • Asimov, Isaac, 1942, “Runaround: A Short Story”, Astounding Science Fiction , March 1942. Reprinted in “I, Robot”, New York: Gnome Press 1950, 1940ff.
  • Awad, Edmond, Sohan Dsouza, Richard Kim, Jonathan Schulz, Joseph Henrich, Azim Shariff, Jean-François Bonnefon, and Iyad Rahwan, 2018, “The Moral Machine Experiment”, Nature , 563(7729): 59–64. doi:10.1038/s41586-018-0637-6
  • Baldwin, Richard, 2019, The Globotics Upheaval: Globalisation, Robotics and the Future of Work , New York: Oxford University Press.
  • Baum, Seth D., Stuart Armstrong, Timoteus Ekenstedt, Olle Häggström, Robin Hanson, Karin Kuhlemann, Matthijs M. Maas, James D. Miller, Markus Salmela, Anders Sandberg, Kaj Sotala, Phil Torres, Alexey Turchin, and Roman V. Yampolskiy, 2019, “Long-Term Trajectories of Human Civilization”, Foresight , 21(1): 53–83. doi:10.1108/FS-04-2018-0037
  • Bendel, Oliver, 2018, “Sexroboter aus Sicht der Maschinenethik”, in Handbuch Filmtheorie , Bernhard Groß and Thomas Morsch (eds.), (Springer Reference Geisteswissenschaften), Wiesbaden: Springer Fachmedien Wiesbaden, 1–19. doi:10.1007/978-3-658-17484-2_22-1
  • Bennett, Colin J. and Charles Raab, 2006, The Governance of Privacy: Policy Instruments in Global Perspective , second edition, Cambridge, MA: MIT Press.
  • Benthall, Sebastian and Bruce D. Haynes, 2019, “Racial Categories in Machine Learning”, in Proceedings of the Conference on Fairness, Accountability, and Transparency - FAT* ’19 , Atlanta, GA, USA: ACM Press, 289–298. doi:10.1145/3287560.3287575
  • Bentley, Peter J., Miles Brundage, Olle Häggström, and Thomas Metzinger, 2018, “Should We Fear Artificial Intelligence? In-Depth Analysis”, European Parliamentary Research Service, Scientific Foresight Unit (STOA), March 2018, PE 614.547, 1–40. [ Bentley et al. 2018 available online ]
  • Bertolini, Andrea and Giuseppe Aiello, 2018, “Robot Companions: A Legal and Ethical Analysis”, The Information Society , 34(3): 130–140. doi:10.1080/01972243.2018.1444249
  • Binns, Reuben, 2018, “Fairness in Machine Learning: Lessons from Political Philosophy”, Proceedings of the 1st Conference on Fairness, Accountability and Transparency , in Proceedings of Machine Learning Research , 81: 149–159.
  • Bostrom, Nick, 2003a, “Are We Living in a Computer Simulation?”, The Philosophical Quarterly , 53(211): 243–255. doi:10.1111/1467-9213.00309
  • –––, 2003b, “Ethical Issues in Advanced Artificial Intelligence”, in Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence, Volume 2 , Iva Smit, Wendell Wallach, and G.E. Lasker (eds), (IIAS-147-2003), Tecumseh, ON: International Institute of Advanced Studies in Systems Research and Cybernetics, 12–17. [ Bostrom 2003b revised available online ]
  • –––, 2003c, “Transhumanist Values”, in Ethical Issues for the Twenty-First Century , Frederick Adams (ed.), Bowling Green, OH: Philosophical Documentation Center Press.
  • –––, 2012, “The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents”, Minds and Machines , 22(2): 71–85. doi:10.1007/s11023-012-9281-3
  • –––, 2013, “Existential Risk Prevention as Global Priority”, Global Policy , 4(1): 15–31. doi:10.1111/1758-5899.12002
  • –––, 2014, Superintelligence: Paths, Dangers, Strategies , Oxford: Oxford University Press.
  • Bostrom, Nick and Milan M. Ćirković (eds.), 2011, Global Catastrophic Risks , New York: Oxford University Press.
  • Bostrom, Nick, Allan Dafoe, and Carrick Flynn, forthcoming, “Policy Desiderata for Superintelligent AI: A Vector Field Approach (V. 4.3)”, in Ethics of Artificial Intelligence , S Matthew Liao (ed.), New York: Oxford University Press. [ Bostrom, Dafoe, and Flynn forthcoming – preprint available online ]
  • Bostrom, Nick and Eliezer Yudkowsky, 2014, “The Ethics of Artificial Intelligence”, in The Cambridge Handbook of Artificial Intelligence , Keith Frankish and William M. Ramsey (eds.), Cambridge: Cambridge University Press, 316–334. doi:10.1017/CBO9781139046855.020 [ Bostrom and Yudkowsky 2014 available online ]
  • Bradshaw, Samantha, Lisa-Maria Neudert, and Phil Howard, 2019, “Government Responses to Malicious Use of Social Media”, Working Paper 2019.2, Oxford: Project on Computational Propaganda. [ Bradshaw, Neudert, and Howard 2019 available online ]
  • Brownsword, Roger, Eloise Scotford, and Karen Yeung (eds.), 2017, The Oxford Handbook of Law, Regulation and Technology , Oxford: Oxford University Press. doi:10.1093/oxfordhb/9780199680832.001.0001
  • Brynjolfsson, Erik and Andrew McAfee, 2016, The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies , New York: W. W. Norton.
  • Bryson, Joanna J., 2010, “Robots Should Be Slaves”, in Close Engagements with Artificial Companions: Key Social, Psychological, Ethical and Design Issues , Yorick Wilks (ed.), (Natural Language Processing 8), Amsterdam: John Benjamins Publishing Company, 63–74. doi:10.1075/nlp.8.11bry
  • –––, 2019, “The Past Decade and Future of Ai’s Impact on Society”, in Towards a New Enlightenment: A Transcendent Decade , Madrid: Turner - BVVA. [ Bryson 2019 available online ]
  • Bryson, Joanna J., Mihailis E. Diamantis, and Thomas D. Grant, 2017, “Of, for, and by the People: The Legal Lacuna of Synthetic Persons”, Artificial Intelligence and Law , 25(3): 273–291. doi:10.1007/s10506-017-9214-9
  • Burr, Christopher and Nello Cristianini, 2019, “Can Machines Read Our Minds?”, Minds and Machines , 29(3): 461–494. doi:10.1007/s11023-019-09497-4
  • Butler, Samuel, 1863, “Darwin among the Machines: Letter to the Editor”, Letter in The Press (Christchurch) , 13 June 1863. [ Butler 1863 available online ]
  • Callaghan, Victor, James Miller, Roman Yampolskiy, and Stuart Armstrong (eds.), 2017, The Technological Singularity: Managing the Journey , (The Frontiers Collection), Berlin, Heidelberg: Springer Berlin Heidelberg. doi:10.1007/978-3-662-54033-6
  • Calo, Ryan, 2018, “Artificial Intelligence Policy: A Primer and Roadmap”, University of Bologna Law Review , 3(2): 180-218. doi:10.6092/ISSN.2531-6133/8670
  • Calo, Ryan, A. Michael Froomkin, and Ian Kerr (eds.), 2016, Robot Law , Cheltenham: Edward Elgar.
  • Čapek, Karel, 1920, R.U.R. , Prague: Aventium. Translated by Peter Majer and Cathy Porter, London: Methuen, 1999.
  • Capurro, Raphael, 1993, “Ein Grinsen Ohne Katze: Von der Vergleichbarkeit Zwischen ‘Künstlicher Intelligenz’ und ‘Getrennten Intelligenzen’”, Zeitschrift für philosophische Forschung , 47: 93–102.
  • Cave, Stephen, 2019, “To Save Us from a Kafkaesque Future, We Must Democratise AI”, The Guardian , 04 January 2019. [ Cave 2019 available online ]
  • Chalmers, David J., 2010, “The Singularity: A Philosophical Analysis”, Journal of Consciousness Studies , 17(9–10): 7–65. [ Chalmers 2010 available online ]
  • Christman, John, 2003 [2018], “Autonomy in Moral and Political Philosophy”, Stanford Encyclopedia of Philosophy (Spring 2018 edition), Edward N. Zalta (ed.), URL = < https://plato.stanford.edu/archives/spr2018/entries/autonomy-moral/ >
  • Coeckelbergh, Mark, 2010, “Robot Rights? Towards a Social-Relational Justification of Moral Consideration”, Ethics and Information Technology , 12(3): 209–221. doi:10.1007/s10676-010-9235-5
  • –––, 2012, Growing Moral Relations: Critique of Moral Status Ascription , London: Palgrave. doi:10.1057/9781137025968
  • –––, 2016, “Care Robots and the Future of ICT-Mediated Elderly Care: A Response to Doom Scenarios”, AI & Society , 31(4): 455–462. doi:10.1007/s00146-015-0626-3
  • –––, 2018, “What Do We Mean by a Relational Ethics? Growing a Relational Approach to the Moral Standing of Plants, Robots and Other Non-Humans”, in Plant Ethics: Concepts and Applications , Angela Kallhoff, Marcello Di Paola, and Maria Schörgenhumer (eds.), London: Routledge, 110–121.
  • Crawford, Kate and Ryan Calo, 2016, “There Is a Blind Spot in AI Research”, Nature , 538(7625): 311–313. doi:10.1038/538311a
  • Cristianini, Nello, forthcoming, “Shortcuts to Artificial Intelligence”, in Machines We Trust , Marcello Pelillo and Teresa Scantamburlo (eds.), Cambridge, MA: MIT Press. [ Cristianini forthcoming – preprint available online ]
  • Danaher, John, 2015, “Why AI Doomsayers Are Like Sceptical Theists and Why It Matters”, Minds and Machines , 25(3): 231–246. doi:10.1007/s11023-015-9365-y
  • –––, 2016a, “Robots, Law and the Retribution Gap”, Ethics and Information Technology , 18(4): 299–309. doi:10.1007/s10676-016-9403-3
  • –––, 2016b, “The Threat of Algocracy: Reality, Resistance and Accommodation”, Philosophy & Technology , 29(3): 245–268. doi:10.1007/s13347-015-0211-1
  • –––, 2019a, Automation and Utopia: Human Flourishing in a World without Work , Cambridge, MA: Harvard University Press.
  • –––, 2019b, “The Philosophical Case for Robot Friendship”, Journal of Posthuman Studies , 3(1): 5–24. doi:10.5325/jpoststud.3.1.0005
  • –––, forthcoming, “Welcoming Robots into the Moral Circle: A Defence of Ethical Behaviourism”, Science and Engineering Ethics , first online: 20 June 2019. doi:10.1007/s11948-019-00119-x
  • Danaher, John and Neil McArthur (eds.), 2017, Robot Sex: Social and Ethical Implications , Boston, MA: MIT Press.
  • DARPA, 1983, “Strategic Computing. New-Generation Computing Technology: A Strategic Plan for Its Development and Application to Critical Problems in Defense”, ADA141982, 28 October 1983. [ DARPA 1983 available online ]
  • Dennett, Daniel C., 2017, From Bacteria to Bach and Back: The Evolution of Minds , New York: W.W. Norton.
  • Devlin, Kate, 2018, Turned On: Science, Sex and Robots , London: Bloomsbury.
  • Diakopoulos, Nicholas, 2015, “Algorithmic Accountability: Journalistic Investigation of Computational Power Structures”, Digital Journalism , 3(3): 398–415. doi:10.1080/21670811.2014.976411
  • Dignum, Virginia, 2018, “Ethics in Artificial Intelligence: Introduction to the Special Issue”, Ethics and Information Technology , 20(1): 1–3. doi:10.1007/s10676-018-9450-z
  • Domingos, Pedro, 2015, The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World , London: Allen Lane.
  • Draper, Heather, Tom Sorell, Sandra Bedaf, Dag Sverre Syrdal, Carolina Gutierrez-Ruiz, Alexandre Duclos, and Farshid Amirabdollahian, 2014, “Ethical Dimensions of Human-Robot Interactions in the Care of Older People: Insights from 21 Focus Groups Convened in the UK, France and the Netherlands”, in International Conference on Social Robotics 2014 , Michael Beetz, Benjamin Johnston, and Mary-Anne Williams (eds.), (Lecture Notes in Artificial Intelligence 8755), Cham: Springer International Publishing, 135–145. doi:10.1007/978-3-319-11973-1_14
  • Dressel, Julia and Hany Farid, 2018, “The Accuracy, Fairness, and Limits of Predicting Recidivism”, Science Advances , 4(1): eaao5580. doi:10.1126/sciadv.aao5580
  • Drexler, K. Eric, 2019, “Reframing Superintelligence: Comprehensive AI Services as General Intelligence”, FHI Technical Report, 2019-1, 1-210. [ Drexler 2019 available online ]
  • Dreyfus, Hubert L., 1972, What Computers Still Can’t Do: A Critique of Artificial Reason , second edition, Cambridge, MA: MIT Press 1992.
  • Dreyfus, Hubert L., Stuart E. Dreyfus, and Tom Athanasiou, 1986, Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer , New York: Free Press.
  • Dwork, Cynthia, Frank McSherry, Kobbi Nissim, and Adam Smith, 2006, “Calibrating Noise to Sensitivity in Private Data Analysis”, in Theory of Cryptography (TCC 2006), Berlin, Heidelberg: Springer, 265–284.
  • Eden, Amnon H., James H. Moor, Johnny H. Søraker, and Eric Steinhart (eds.), 2012, Singularity Hypotheses: A Scientific and Philosophical Assessment , (The Frontiers Collection), Berlin, Heidelberg: Springer Berlin Heidelberg. doi:10.1007/978-3-642-32560-1
  • Eubanks, Virginia, 2018, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor , London: St. Martin’s Press.
  • European Commission, 2013, “How Many People Work in Agriculture in the European Union? An Answer Based on Eurostat Data Sources”, EU Agricultural Economics Briefs , 8 (July 2013). [ European Commission 2013 available online ]
  • European Group on Ethics in Science and New Technologies, 2018, “Statement on Artificial Intelligence, Robotics and ‘Autonomous’ Systems”, 9 March 2018, European Commission, Directorate-General for Research and Innovation, Unit RTD.01. [ European Group 2018 available online ]
  • Ferguson, Andrew Guthrie, 2017, The Rise of Big Data Policing: Surveillance, Race, and the Future of Law Enforcement , New York: NYU Press.
  • Floridi, Luciano, 2016, “Should We Be Afraid of AI? Machines Seem to Be Getting Smarter and Smarter and Much Better at Human Jobs, yet True AI Is Utterly Implausible. Why?”, Aeon , 9 May 2016. URL = < Floridi 2016 available online >
  • Floridi, Luciano, Josh Cowls, Monica Beltrametti, Raja Chatila, Patrice Chazerand, Virginia Dignum, Christoph Luetge, Robert Madelin, Ugo Pagallo, Francesca Rossi, Burkhard Schafer, Peggy Valcke, and Effy Vayena, 2018, “AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations”, Minds and Machines , 28(4): 689–707. doi:10.1007/s11023-018-9482-5
  • Floridi, Luciano and Jeff W. Sanders, 2004, “On the Morality of Artificial Agents”, Minds and Machines , 14(3): 349–379. doi:10.1023/B:MIND.0000035461.63578.9d
  • Floridi, Luciano and Mariarosaria Taddeo, 2016, “What Is Data Ethics?”, Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences , 374(2083): 20160360. doi:10.1098/rsta.2016.0360
  • Foot, Philippa, 1967, “The Problem of Abortion and the Doctrine of the Double Effect”, Oxford Review , 5: 5–15.
  • Fosch-Villaronga, Eduard and Jordi Albo-Canals, 2019, “‘I’ll Take Care of You,’ Said the Robot”, Paladyn, Journal of Behavioral Robotics , 10(1): 77–93. doi:10.1515/pjbr-2019-0006
  • Frank, Lily and Sven Nyholm, 2017, “Robot Sex and Consent: Is Consent to Sex between a Robot and a Human Conceivable, Possible, and Desirable?”, Artificial Intelligence and Law , 25(3): 305–323. doi:10.1007/s10506-017-9212-y
  • Frankfurt, Harry G., 1971, “Freedom of the Will and the Concept of a Person”, The Journal of Philosophy , 68(1): 5–20.
  • Frey, Carl Benedict, 2019, The Technology Trap: Capital, Labour, and Power in the Age of Automation , Princeton, NJ: Princeton University Press.
  • Frey, Carl Benedikt and Michael A. Osborne, 2013, “The Future of Employment: How Susceptible Are Jobs to Computerisation?”, Oxford Martin School Working Papers, 17 September 2013. [ Frey and Osborne 2013 available online ]
  • Ganascia, Jean-Gabriel, 2017, Le Mythe De La Singularité , Paris: Éditions du Seuil.
  • EU Parliament, 2016, “Draft Report with Recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(Inl))”, Committee on Legal Affairs , 10.11.2016. https://www.europarl.europa.eu/doceo/document/A-8-2017-0005_EN.html
  • EU Regulation, 2016/679, “General Data Protection Regulation: Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data and on the Free Movement of Such Data, and Repealing Directive 95/46/Ec”, Official Journal of the European Union , 119 (4 May 2016), 1–88. [ Regulation (EU) 2016/679 available online ]
  • Geraci, Robert M., 2008, “Apocalyptic AI: Religion and the Promise of Artificial Intelligence”, Journal of the American Academy of Religion , 76(1): 138–166. doi:10.1093/jaarel/lfm101
  • –––, 2010, Apocalyptic AI: Visions of Heaven in Robotics, Artificial Intelligence, and Virtual Reality , Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780195393026.001.0001
  • Gerdes, Anne, 2016, “The Issue of Moral Consideration in Robot Ethics”, ACM SIGCAS Computers and Society , 45(3): 274–279. doi:10.1145/2874239.2874278
  • German Federal Ministry of Transport and Digital Infrastructure, 2017, “Report of the Ethics Commission: Automated and Connected Driving”, June 2017, 1–36. [ GFMTDI 2017 available online ]
  • Gertz, Nolen, 2018, Nihilism and Technology , London: Rowman & Littlefield.
  • Gewirth, Alan, 1978, “The Golden Rule Rationalized”, Midwest Studies in Philosophy , 3(1): 133–147. doi:10.1111/j.1475-4975.1978.tb00353.x
  • Gibert, Martin, 2019, “Éthique Artificielle (Version Grand Public)”, in L’Encyclopédie Philosophique , Maxime Kristanek (ed.), accessed: 16 April 2020, URL = < Gibert 2019 available online >
  • Giubilini, Alberto and Julian Savulescu, 2018, “The Artificial Moral Advisor. The ‘Ideal Observer’ Meets Artificial Intelligence”, Philosophy & Technology , 31(2): 169–188. doi:10.1007/s13347-017-0285-z
  • Good, Irving John, 1965, “Speculations Concerning the First Ultraintelligent Machine”, in Advances in Computers 6 , Franz L. Alt and Morris Rubinoff (eds.), New York & London: Academic Press, 31–88. doi:10.1016/S0065-2458(08)60418-0
  • Goodfellow, Ian, Yoshua Bengio, and Aaron Courville, 2016, Deep Learning , Cambridge, MA: MIT Press.
  • Goodman, Bryce and Seth Flaxman, 2017, “European Union Regulations on Algorithmic Decision-Making and a ‘Right to Explanation’”, AI Magazine , 38(3): 50–57. doi:10.1609/aimag.v38i3.2741
  • Goos, Maarten, 2018, “The Impact of Technological Progress on Labour Markets: Policy Challenges”, Oxford Review of Economic Policy , 34(3): 362–375. doi:10.1093/oxrep/gry002
  • Goos, Maarten, Alan Manning, and Anna Salomons, 2009, “Job Polarization in Europe”, American Economic Review , 99(2): 58–63. doi:10.1257/aer.99.2.58
  • Graham, Sandra and Brian S. Lowery, 2004, “Priming Unconscious Racial Stereotypes about Adolescent Offenders”, Law and Human Behavior , 28(5): 483–504. doi:10.1023/B:LAHU.0000046430.65485.1f
  • Gunkel, David J., 2018a, “The Other Question: Can and Should Robots Have Rights?”, Ethics and Information Technology , 20(2): 87–99. doi:10.1007/s10676-017-9442-4
  • –––, 2018b, Robot Rights , Boston, MA: MIT Press.
  • Gunkel, David J. and Joanna J. Bryson (eds.), 2014, Machine Morality: The Machine as Moral Agent and Patient special issue of Philosophy & Technology , 27(1): 1–142.
  • Häggström, Olle, 2016, Here Be Dragons: Science, Technology and the Future of Humanity , Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780198723547.001.0001
  • Hakli, Raul and Pekka Mäkelä, 2019, “Moral Responsibility of Robots and Hybrid Agents”, The Monist , 102(2): 259–275. doi:10.1093/monist/onz009
  • Hanson, Robin, 2016, The Age of Em: Work, Love and Life When Robots Rule the Earth , Oxford: Oxford University Press.
  • Hansson, Sven Ove, 2013, The Ethics of Risk: Ethical Analysis in an Uncertain World , New York: Palgrave Macmillan.
  • –––, 2018, “How to Perform an Ethical Risk Analysis (eRA)”, Risk Analysis , 38(9): 1820–1829. doi:10.1111/risa.12978
  • Harari, Yuval Noah, 2016, Homo Deus: A Brief History of Tomorrow , New York: Harper.
  • Haskel, Jonathan and Stian Westlake, 2017, Capitalism without Capital: The Rise of the Intangible Economy , Princeton, NJ: Princeton University Press.
  • Houkes, Wybo and Pieter E. Vermaas, 2010, Technical Functions: On the Use and Design of Artefacts , (Philosophy of Engineering and Technology 1), Dordrecht: Springer Netherlands. doi:10.1007/978-90-481-3900-2
  • IEEE, 2019, Ethically Aligned Design: A Vision for Prioritizing Human Well-Being with Autonomous and Intelligent Systems (First Version), < IEEE 2019 available online >.
  • Jasanoff, Sheila, 2016, The Ethics of Invention: Technology and the Human Future , New York: Norton.
  • Jecker, Nancy S., forthcoming, Ending Midlife Bias: New Values for Old Age , New York: Oxford University Press.
  • Jobin, Anna, Marcello Ienca, and Effy Vayena, 2019, “The Global Landscape of AI Ethics Guidelines”, Nature Machine Intelligence , 1(9): 389–399. doi:10.1038/s42256-019-0088-2
  • Johnson, Deborah G. and Mario Verdicchio, 2017, “Reframing AI Discourse”, Minds and Machines , 27(4): 575–590. doi:10.1007/s11023-017-9417-6
  • Kahneman, Daniel, 2011, Thinking, Fast and Slow , London: Macmillan.
  • Kamm, Frances Myrna, 2016, The Trolley Problem Mysteries , Eric Rakowski (ed.), Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780190247157.001.0001
  • Kant, Immanuel, 1781/1787, Kritik der reinen Vernunft . Translated as Critique of Pure Reason , Norman Kemp Smith (trans.), London: Palgrave Macmillan, 1929.
  • Keeling, Geoff, 2020, “Why Trolley Problems Matter for the Ethics of Automated Vehicles”, Science and Engineering Ethics , 26(1): 293–307. doi:10.1007/s11948-019-00096-1
  • Keynes, John Maynard, 1930, “Economic Possibilities for Our Grandchildren”. Reprinted in his Essays in Persuasion , New York: Harcourt Brace, 1932, 358–373.
  • Kissinger, Henry A., 2018, “How the Enlightenment Ends: Philosophically, Intellectually—in Every Way—Human Society Is Unprepared for the Rise of Artificial Intelligence”, The Atlantic , June 2018. [ Kissinger 2018 available online ]
  • Kurzweil, Ray, 1999, The Age of Spiritual Machines: When Computers Exceed Human Intelligence , London: Penguin.
  • –––, 2005, The Singularity Is Near: When Humans Transcend Biology , London: Viking.
  • –––, 2012, How to Create a Mind: The Secret of Human Thought Revealed , New York: Viking.
  • Lee, Minha, Sander Ackermans, Nena van As, Hanwen Chang, Enzo Lucas, and Wijnand IJsselsteijn, 2019, “Caring for Vincent: A Chatbot for Self-Compassion”, in Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems—CHI ’19 , Glasgow, Scotland: ACM Press, 1–13. doi:10.1145/3290605.3300932
  • Levy, David, 2007, Love and Sex with Robots: The Evolution of Human-Robot Relationships , New York: Harper & Co.
  • Lighthill, James, 1973, “Artificial Intelligence: A General Survey”, Artificial intelligence: A Paper Symposion , London: Science Research Council. [ Lighthill 1973 available online ]
  • Lin, Patrick, 2016, “Why Ethics Matters for Autonomous Cars”, in Autonomous Driving , Markus Maurer, J. Christian Gerdes, Barbara Lenz, and Hermann Winner (eds.), Berlin, Heidelberg: Springer Berlin Heidelberg, 69–85. doi:10.1007/978-3-662-48847-8_4
  • Lin, Patrick, Keith Abney, and Ryan Jenkins (eds.), 2017, Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence , New York: Oxford University Press. doi:10.1093/oso/9780190652951.001.0001
  • Lin, Patrick, George Bekey, and Keith Abney, 2008, “Autonomous Military Robotics: Risk, Ethics, and Design”, ONR report, California Polytechnic State University, San Luis Obispo, 20 December 2008), 112 pp. [ Lin, Bekey, and Abney 2008 available online ]
  • Lomas, Meghann, Robert Chevalier, Ernest Vincent Cross, Robert Christopher Garrett, John Hoare, and Michael Kopack, 2012, “Explaining Robot Actions”, in Proceedings of the Seventh Annual ACM/IEEE International Conference on Human-Robot Interaction—HRI ’12 , Boston, MA: ACM Press, 187–188. doi:10.1145/2157689.2157748
  • Macnish, Kevin, 2017, The Ethics of Surveillance: An Introduction , London: Routledge.
  • Mathur, Arunesh, Gunes Acar, Michael J. Friedman, Elena Lucherini, Jonathan Mayer, Marshini Chetty, and Arvind Narayanan, 2019, “Dark Patterns at Scale: Findings from a Crawl of 11K Shopping Websites”, Proceedings of the ACM on Human-Computer Interaction , 3(CSCW): art. 81. doi:10.1145/3359183
  • Minsky, Marvin, 1985, The Society of Mind , New York: Simon & Schuster.
  • Misselhorn, Catrin, 2020, “Artificial Systems with Moral Capacities? A Research Design and Its Implementation in a Geriatric Care System”, Artificial Intelligence , 278: art. 103179. doi:10.1016/j.artint.2019.103179
  • Mittelstadt, Brent Daniel and Luciano Floridi, 2016, “The Ethics of Big Data: Current and Foreseeable Issues in Biomedical Contexts”, Science and Engineering Ethics , 22(2): 303–341. doi:10.1007/s11948-015-9652-2
  • Moor, James H., 2006, “The Nature, Importance, and Difficulty of Machine Ethics”, IEEE Intelligent Systems , 21(4): 18–21. doi:10.1109/MIS.2006.80
  • Moravec, Hans, 1990, Mind Children , Cambridge, MA: Harvard University Press.
  • –––, 1998, Robot: Mere Machine to Transcendent Mind , New York: Oxford University Press.
  • Morozov, Evgeny, 2013, To Save Everything, Click Here: The Folly of Technological Solutionism , New York: Public Affairs.
  • Müller, Vincent C., 2012, “Autonomous Cognitive Systems in Real-World Environments: Less Control, More Flexibility and Better Interaction”, Cognitive Computation , 4(3): 212–215. doi:10.1007/s12559-012-9129-4
  • –––, 2016a, “Autonomous Killer Robots Are Probably Good News”, In Drones and Responsibility: Legal, Philosophical and Socio-Technical Perspectives on the Use of Remotely Controlled Weapons , Ezio Di Nucci and Filippo Santoni de Sio (eds.), London: Ashgate, 67–81.
  • ––– (ed.), 2016b, Risks of Artificial Intelligence , London: Chapman & Hall - CRC Press. doi:10.1201/b19187
  • –––, 2018, “In 30 Schritten zum Mond? Zukünftiger Fortschritt in der KI”, Medienkorrespondenz , 20: 5–15. [ Müller 2018 available online ]
  • –––, 2020, “Measuring Progress in Robotics: Benchmarking and the ‘Measure-Target Confusion’”, in Metrics of Sensory Motor Coordination and Integration in Robots and Animals , Fabio Bonsignorio, Elena Messina, Angel P. del Pobil, and John Hallam (eds.), (Cognitive Systems Monographs 36), Cham: Springer International Publishing, 169–179. doi:10.1007/978-3-030-14126-4_9
  • –––, forthcoming-a, Can Machines Think? Fundamental Problems of Artificial Intelligence , New York: Oxford University Press.
  • ––– (ed.), forthcoming-b, Oxford Handbook of the Philosophy of Artificial Intelligence , New York: Oxford University Press.
  • Müller, Vincent C. and Nick Bostrom, 2016, “Future Progress in Artificial Intelligence: A Survey of Expert Opinion”, in Fundamental Issues of Artificial Intelligence , Vincent C. Müller (ed.), Cham: Springer International Publishing, 555–572. doi:10.1007/978-3-319-26485-1_33
  • Newport, Cal, 2019, Digital Minimalism: On Living Better with Less Technology , London: Penguin.
  • Nørskov, Marco (ed.), 2017, Social Robots , London: Routledge.
  • Nyholm, Sven, 2018a, “Attributing Agency to Automated Systems: Reflections on Human–Robot Collaborations and Responsibility-Loci”, Science and Engineering Ethics , 24(4): 1201–1219. doi:10.1007/s11948-017-9943-x
  • –––, 2018b, “The Ethics of Crashes with Self-Driving Cars: A Roadmap, II”, Philosophy Compass , 13(7): e12506. doi:10.1111/phc3.12506
  • Nyholm, Sven, and Lily Frank, 2017, “From Sex Robots to Love Robots: Is Mutual Love with a Robot Possible?”, in Danaher and McArthur 2017: 219–243.
  • O’Connell, Mark, 2017, To Be a Machine: Adventures among Cyborgs, Utopians, Hackers, and the Futurists Solving the Modest Problem of Death , London: Granta.
  • O’Neil, Cathy, 2016, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy , New York: Crown.
  • Omohundro, Steve, 2014, “Autonomous Technology and the Greater Human Good”, Journal of Experimental & Theoretical Artificial Intelligence , 26(3): 303–315. doi:10.1080/0952813X.2014.895111
  • Ord, Toby, 2020, The Precipice: Existential Risk and the Future of Humanity , London: Bloomsbury.
  • Powers, Thomas M. and Jean-Gabriel Ganascia, forthcoming, “The Ethics of the Ethics of AI”, in Oxford Handbook of Ethics of Artificial Intelligence , Markus D. Dubber, Frank Pasquale, and Sunit Das (eds.), New York: Oxford University Press.
  • Rawls, John, 1971, A Theory of Justice , Cambridge, MA: Belknap Press.
  • Rees, Martin, 2018, On the Future: Prospects for Humanity , Princeton: Princeton University Press.
  • Richardson, Kathleen, 2016, “Sex Robot Matters: Slavery, the Prostituted, and the Rights of Machines”, IEEE Technology and Society Magazine , 35(2): 46–53. doi:10.1109/MTS.2016.2554421
  • Roessler, Beate, 2017, “Privacy as a Human Right”, Proceedings of the Aristotelian Society , 117(2): 187–206. doi:10.1093/arisoc/aox008
  • Royakkers, Lambèr and Rinie van Est, 2016, Just Ordinary Robots: Automation from Love to War , Boca Raton, FL: CRC Press, Taylor & Francis. doi:10.1201/b18899
  • Russell, Stuart, 2019, Human Compatible: Artificial Intelligence and the Problem of Control , New York: Viking.
  • Russell, Stuart, Daniel Dewey, and Max Tegmark, 2015, “Research Priorities for Robust and Beneficial Artificial Intelligence”, AI Magazine , 36(4): 105–114. doi:10.1609/aimag.v36i4.2577
  • SAE International, 2018, “Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles”, J3016_201806, 15 June 2018. [ SAE International 2018 available online ]
  • Sandberg, Anders, 2013, “Feasibility of Whole Brain Emulation”, in Philosophy and Theory of Artificial Intelligence , Vincent C. Müller (ed.), (Studies in Applied Philosophy, Epistemology and Rational Ethics, 5), Berlin, Heidelberg: Springer Berlin Heidelberg, 251–264. doi:10.1007/978-3-642-31674-6_19
  • –––, 2019, “There Is Plenty of Time at the Bottom: The Economics, Risk and Ethics of Time Compression”, Foresight , 21(1): 84–99. doi:10.1108/FS-04-2018-0044
  • Santoni de Sio, Filippo and Jeroen van den Hoven, 2018, “Meaningful Human Control over Autonomous Systems: A Philosophical Account”, Frontiers in Robotics and AI , 5(February): 15. doi:10.3389/frobt.2018.00015
  • Schneier, Bruce, 2015, Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World , New York: W. W. Norton.
  • Searle, John R., 1980, “Minds, Brains, and Programs”, Behavioral and Brain Sciences , 3(3): 417–424. doi:10.1017/S0140525X00005756
  • Selbst, Andrew D., Danah Boyd, Sorelle A. Friedler, Suresh Venkatasubramanian, and Janet Vertesi, 2019, “Fairness and Abstraction in Sociotechnical Systems”, in Proceedings of the Conference on Fairness, Accountability, and Transparency—FAT* ’19 , Atlanta, GA: ACM Press, 59–68. doi:10.1145/3287560.3287598
  • Sennett, Richard, 2018, Building and Dwelling: Ethics for the City , London: Allen Lane.
  • Shanahan, Murray, 2015, The Technological Singularity , Cambridge, MA: MIT Press.
  • Sharkey, Amanda, 2019, “Autonomous Weapons Systems, Killer Robots and Human Dignity”, Ethics and Information Technology , 21(2): 75–87. doi:10.1007/s10676-018-9494-0
  • Sharkey, Amanda and Noel Sharkey, 2011, “The Rights and Wrongs of Robot Care”, in Robot Ethics: The Ethical and Social Implications of Robotics , Patrick Lin, Keith Abney and George Bekey (eds.), Cambridge, MA: MIT Press, 267–282.
  • Shoham, Yoav, Raymond Perrault, Erik Brynjolfsson, Jack Clark, James Manyika, Juan Carlos Niebles, … Zoe Bauer, 2018, “The AI Index 2018 Annual Report”, 17 December 2018, Stanford, CA: AI Index Steering Committee, Human-Centered AI Initiative, Stanford University. [ Shoham et al. 2018 available online ]
  • SIENNA, 2019, “Deliverable Report D4.4: Ethical Issues in Artificial Intelligence and Robotics”, June 2019, published by the SIENNA project (Stakeholder-informed ethics for new technologies with high socio-economic and human rights impact), University of Twente, pp. 1–103. [ SIENNA 2019 available online ]
  • Silver, David, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez, Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, Timothy Lillicrap, Karen Simonyan, and Demis Hassabis, 2018, “A General Reinforcement Learning Algorithm That Masters Chess, Shogi, and Go through Self-Play”, Science , 362(6419): 1140–1144. doi:10.1126/science.aar6404
  • Simon, Herbert A. and Allen Newell, 1958, “Heuristic Problem Solving: The Next Advance in Operations Research”, Operations Research , 6(1): 1–10. doi:10.1287/opre.6.1.1
  • Simpson, Thomas W. and Vincent C. Müller, 2016, “Just War and Robots’ Killings”, The Philosophical Quarterly , 66(263): 302–322. doi:10.1093/pq/pqv075
  • Smolan, Sandy (director), 2016, “The Human Face of Big Data”, PBS Documentary, 24 February 2016, 56 mins.
  • Sparrow, Robert, 2007, “Killer Robots”, Journal of Applied Philosophy , 24(1): 62–77. doi:10.1111/j.1468-5930.2007.00346.x
  • –––, 2016, “Robots in Aged Care: A Dystopian Future?”, AI & Society , 31(4): 445–454. doi:10.1007/s00146-015-0625-4
  • Stahl, Bernd Carsten, Job Timmermans, and Brent Daniel Mittelstadt, 2016, “The Ethics of Computing: A Survey of the Computing-Oriented Literature”, ACM Computing Surveys , 48(4): art. 55. doi:10.1145/2871196
  • Stahl, Bernd Carsten and David Wright, 2018, “Ethics and Privacy in AI and Big Data: Implementing Responsible Research and Innovation”, IEEE Security Privacy , 16(3): 26–33.
  • Stone, Christopher D., 1972, “Should Trees Have Standing? Toward Legal Rights for Natural Objects”, Southern California Law Review , 45: 450–501.
  • Stone, Peter, Rodney Brooks, Erik Brynjolfsson, Ryan Calo, Oren Etzioni, Greg Hager, Julia Hirschberg, Shivaram Kalyanakrishnan, Ece Kamar, Sarit Kraus, Kevin Leyton-Brown, David Parkes, William Press, AnnaLee Saxenian, Julie Shah, Milind Tambe, and Astro Teller, 2016, “Artificial Intelligence and Life in 2030”, One Hundred Year Study on Artificial Intelligence: Report of the 2015–2016 Study Panel, Stanford University, Stanford, CA, September 2016. [ Stone et al. 2016 available online ]
  • Strawson, Galen, 1998, “Free Will”, in Routledge Encyclopedia of Philosophy , Taylor & Francis. doi:10.4324/9780415249126-V014-1
  • Sullins, John P., 2012, “Robots, Love, and Sex: The Ethics of Building a Love Machine”, IEEE Transactions on Affective Computing , 3(4): 398–409. doi:10.1109/T-AFFC.2012.31
  • Susser, Daniel, Beate Roessler, and Helen Nissenbaum, 2019, “Technology, Autonomy, and Manipulation”, Internet Policy Review , 8(2): 30 June 2019. [ Susser, Roessler, and Nissenbaum 2019 available online ]
  • Taddeo, Mariarosaria and Luciano Floridi, 2018, “How AI Can Be a Force for Good”, Science , 361(6404): 751–752. doi:10.1126/science.aat5991
  • Taylor, Linnet and Nadezhda Purtova, 2019, “What Is Responsible and Sustainable Data Science?”, Big Data & Society, 6(2): art. 205395171985811. doi:10.1177/2053951719858114
  • Taylor, Steve, et al., 2018, “Responsible AI – Key Themes, Concerns & Recommendations for European Research and Innovation: Summary of Consultation with Multidisciplinary Experts”, June. doi:10.5281/zenodo.1303252 [ Taylor, et al. 2018 available online ]
  • Tegmark, Max, 2017, Life 3.0: Being Human in the Age of Artificial Intelligence , New York: Knopf.
  • Thaler, Richard H. and Cass R. Sunstein, 2008, Nudge: Improving Decisions about Health, Wealth and Happiness , New York: Penguin.
  • Thompson, Nicholas and Ian Bremmer, 2018, “The AI Cold War That Threatens Us All”, Wired , 23 November 2018. [ Thompson and Bremmer 2018 available online ]
  • Thomson, Judith Jarvis, 1976, “Killing, Letting Die, and the Trolley Problem”, Monist , 59(2): 204–217. doi:10.5840/monist197659224
  • Torrance, Steve, 2011, “Machine Ethics and the Idea of a More-Than-Human Moral World”, in Anderson and Anderson 2011: 115–137. doi:10.1017/CBO9780511978036.011
  • Trump, Donald J, 2019, “Executive Order on Maintaining American Leadership in Artificial Intelligence”, 11 February 2019. [ Trump 2019 available online ]
  • Turner, Jacob, 2019, Robot Rules: Regulating Artificial Intelligence , Berlin: Springer. doi:10.1007/978-3-319-96235-1
  • Tzafestas, Spyros G., 2016, Roboethics: A Navigating Overview , (Intelligent Systems, Control and Automation: Science and Engineering 79), Cham: Springer International Publishing. doi:10.1007/978-3-319-21714-7
  • Vallor, Shannon, 2017, Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting , Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780190498511.001.0001
  • Van Lent, Michael, William Fisher, and Michael Mancuso, 2004, “An Explainable Artificial Intelligence System for Small-Unit Tactical Behavior”, in Proceedings of the 16th Conference on Innovative Applications of Artifical Intelligence, (IAAI’04) , San Jose, CA: AAAI Press, 900–907.
  • van Wynsberghe, Aimee, 2016, Healthcare Robots: Ethics, Design and Implementation , London: Routledge. doi:10.4324/9781315586397
  • van Wynsberghe, Aimee and Scott Robbins, 2019, “Critiquing the Reasons for Making Artificial Moral Agents”, Science and Engineering Ethics , 25(3): 719–735. doi:10.1007/s11948-018-0030-8
  • Vanderelst, Dieter and Alan Winfield, 2018, “The Dark Side of Ethical Robots”, in Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society , New Orleans, LA: ACM, 317–322. doi:10.1145/3278721.3278726
  • Veale, Michael and Reuben Binns, 2017, “Fairer Machine Learning in the Real World: Mitigating Discrimination without Collecting Sensitive Data”, Big Data & Society , 4(2): art. 205395171774353. doi:10.1177/2053951717743530
  • Véliz, Carissa, 2019, “Three Things Digital Ethics Can Learn from Medical Ethics”, Nature Electronics , 2(8): 316–318. doi:10.1038/s41928-019-0294-2
  • Verbeek, Peter-Paul, 2011, Moralizing Technology: Understanding and Designing the Morality of Things , Chicago: University of Chicago Press.
  • Wachter, Sandra and Brent Daniel Mittelstadt, 2019, “A Right to Reasonable Inferences: Re-Thinking Data Protection Law in the Age of Big Data and AI”, Columbia Business Law Review , 2019(2): 494–620.
  • Wachter, Sandra, Brent Mittelstadt, and Luciano Floridi, 2017, “Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation”, International Data Privacy Law , 7(2): 76–99. doi:10.1093/idpl/ipx005
  • Wachter, Sandra, Brent Mittelstadt, and Chris Russell, 2018, “Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR”, Harvard Journal of Law & Technology , 31(2): 842–887. doi:10.2139/ssrn.3063289
  • Wallach, Wendell and Peter M. Asaro (eds.), 2017, Machine Ethics and Robot Ethics , London: Routledge.
  • Walsh, Toby, 2018, Machines That Think: The Future of Artificial Intelligence , Amherst, MA: Prometheus Books.
  • Westlake, Stian (ed.), 2014, Our Work Here Is Done: Visions of a Robot Economy , London: Nesta. [ Westlake 2014 available online ]
  • Whittaker, Meredith, Kate Crawford, Roel Dobbe, Genevieve Fried, Elizabeth Kaziunas, Varoon Mathur, … Jason Schultz, 2018, “AI Now Report 2018”, New York: AI Now Institute, New York University. [ Whittaker et al. 2018 available online ]
  • Whittlestone, Jess, Rune Nyrup, Anna Alexandrova, Kanta Dihal, and Stephen Cave, 2019, “Ethical and Societal Implications of Algorithms, Data, and Artificial Intelligence: A Roadmap for Research”, Cambridge: Nuffield Foundation, University of Cambridge. [ Whittlestone 2019 available online ]
  • Winfield, Alan, Katina Michael, Jeremy Pitt, and Vanessa Evers (eds.), 2019, Machine Ethics: The Design and Governance of Ethical AI and Autonomous Systems , special issue of Proceedings of the IEEE , 107(3): 501–632.
  • Woollard, Fiona and Frances Howard-Snyder, 2016, “Doing vs. Allowing Harm”, Stanford Encyclopedia of Philosophy (Winter 2016 edition), Edward N. Zalta (ed.), URL = < https://plato.stanford.edu/archives/win2016/entries/doing-allowing/ >
  • Woolley, Samuel C. and Philip N. Howard (eds.), 2017, Computational Propaganda: Political Parties, Politicians, and Political Manipulation on Social Media , Oxford: Oxford University Press. doi:10.1093/oso/9780190931407.001.0001
  • Yampolskiy, Roman V. (ed.), 2018, Artificial Intelligence Safety and Security , Boca Raton, FL: Chapman and Hall/CRC. doi:10.1201/9781351251389
  • Yeung, Karen and Martin Lodge (eds.), 2019, Algorithmic Regulation , Oxford: Oxford University Press. doi:10.1093/oso/9780198838494.001.0001
  • Zayed, Yago and Philip Loft, 2019, “Agriculture: Historical Statistics”, House of Commons Briefing Paper , 3339(25 June 2019): 1-19. [ Zayed and Loft 2019 available online ]
  • Zerilli, John, Alistair Knott, James Maclaurin, and Colin Gavaghan, 2019, “Transparency in Algorithmic and Human Decision-Making: Is There a Double Standard?”, Philosophy & Technology , 32(4): 661–683. doi:10.1007/s13347-018-0330-6
  • Zuboff, Shoshana, 2019, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power , New York: Public Affairs.

Other Internet Resources

  • AI HLEG, 2019, “ High-Level Expert Group on Artificial Intelligence: Ethics Guidelines for Trustworthy AI ”, European Commission , accessed: 9 April 2019.
  • Amodei, Dario and Danny Hernandez, 2018, “ AI and Compute ”, OpenAI Blog , 16 July 2018.
  • Aneesh, A., 2002, Technological Modes of Governance: Beyond Private and Public Realms , paper in the Proceedings of the 4th International Summer Academy on Technology Studies, available at archive.org.
  • Brooks, Rodney, 2017, “ The Seven Deadly Sins of Predicting the Future of AI ”, on Rodney Brooks: Robots, AI, and Other Stuff , 7 September 2017.
  • Brundage, Miles, Shahar Avin, Jack Clark, Helen Toner, Peter Eckersley, Ben Garfinkel, Allan Dafoe, Paul Scharre, Thomas Zeitzoff, Bobby Filar, Hyrum Anderson, Heather Roff, Gregory C. Allen, Jacob Steinhardt, Carrick Flynn, Seán Ó hÉigeartaigh, Simon Beard, Haydn Belfield, Sebastian Farquhar, Clare Lyle, et al., 2018, “ The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation ”, unpublished manuscript, ArXiv:1802.07228 [Cs].
  • Costa, Elisabeth and David Halpern, 2019, “ The Behavioural Science of Online Harm and Manipulation, and What to Do About It: An Exploratory Paper to Spark Ideas and Debate ”, The Behavioural Insights Team Report, 1-82.
  • Gebru, Timnit, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna Wallach, Hal Daumeé III, and Kate Crawford, 2018, “ Datasheets for Datasets ”, unpublished manuscript, arxiv:1803.09010, 23 March 2018.
  • Gunning, David, 2017, “ Explainable Artificial Intelligence (XAI) ”, Defense Advanced Research Projects Agency (DARPA) Program.
  • Harris, Tristan, 2016, “ How Technology Is Hijacking Your Mind—from a Magician and Google Design Ethicist ”, Thrive Global , 18 May 2016.
  • International Federation of Robotics (IFR), 2019, World Robotics 2019 Edition .
  • Jacobs, An, Lynn Tytgat, Michel Maus, Romain Meeusen, and Bram Vanderborght (eds.), 2019, Homo Roboticus: 30 Questions and Answers on Man, Technology, Science & Art , Brussels: ASP.
  • Marcus, Gary, 2018, “ Deep Learning: A Critical Appraisal ”, unpublished manuscript, 2 January 2018, arxiv:1801.00631.
  • McCarthy, John, Marvin Minsky, Nathaniel Rochester, and Claude E. Shannon, 1955, “ A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence ”, 31 August 1955.
  • Metcalf, Jacob, Emily F. Keller, and Danah Boyd, 2016, “ Perspectives on Big Data, Ethics, and Society ”, 23 May 2016, Council for Big Data, Ethics, and Society.
  • National Institute of Justice (NIJ), 2014, “ Overview of Predictive Policing ”, 9 June 2014.
  • Searle, John R., 2015, “ Consciousness in Artificial Intelligence ”, Google’s Singularity Network, Talks at Google (YouTube video).
  • Sharkey, Noel, Aimee van Wynsberghe, Scott Robbins, and Eleanor Hancock, 2017, “ Report: Our Sexual Future with Robots ”, Responsible Robotics , 1–44.
  • Turing Institute (UK): Data Ethics Group
  • Leverhulme Centre for the Future of Intelligence
  • Future of Humanity Institute
  • Future of Life Institute
  • Stanford Center for Internet and Society
  • Berkman Klein Center
  • Digital Ethics Lab
  • Open Roboethics Institute
  • Philosophy & Theory of AI
  • Ethics and AI 2017
  • We Robot 2018
  • Robophilosophy
  • EUrobotics TG ‘robot ethics’ collection of policy documents
  • PhilPapers section on Ethics of Artificial Intelligence
  • PhilPapers section on Robot Ethics

Related Entries

computing: and moral responsibility | ethics: internet research | ethics: search engines and | information technology: and moral values | information technology: and privacy | manipulation, ethics of | social networking and ethics

Acknowledgments

Early drafts of this article were discussed with colleagues at the IDEA Centre of the University of Leeds, some friends, and my PhD students Michael Cannon, Zach Gudmunsen, Gabriela Arriagada-Bruneau and Charlotte Stix. Later drafts were made publicly available on the Internet and publicised via Twitter and e-mail to all (then) cited authors that I could locate. These later drafts were presented to audiences at the INBOTS Project Meeting (Reykjavik 2019), the Computer Science Department Colloquium (Leeds 2019), the European Robotics Forum (Bucharest 2019), the AI Lunch and the Philosophy & Ethics group (Eindhoven 2019)—many thanks for their comments.

I am grateful for detailed written comments by John Danaher, Martin Gibert, Elizabeth O’Neill, Sven Nyholm, Etienne B. Roesch, Emma Ruttkamp-Bloem, Tom Powers, Steve Taylor, and Alan Winfield. I am grateful for further useful comments by Colin Allen, Susan Anderson, Christof Wolf-Brenner, Rafael Capurro, Mark Coeckelbergh, Yazmin Morlet Corti, Erez Firt, Vasilis Galanos, Anne Gerdes, Olle Häggström, Geoff Keeling, Karabo Maiyane, Brent Mittelstadt, Britt Östlund, Steve Petersen, Brian Pickering, Zoë Porter, Amanda Sharkey, Melissa Terras, Stuart Russell, Jan F Veneman, Jeffrey White, and Xinyi Wu.

Parts of the work on this article have been supported by the European Commission under the INBOTS project (H2020 grant no. 780073).

Copyright © 2020 by Vincent C. Müller <vincent.c.mueller@fau.de>

Review Article: A Review of Future and Ethical Perspectives of Robotics and AI


  • Robotics and Intelligent Systems Group, Department of Informatics, University of Oslo, Oslo, Norway

In recent years, there has been increased attention on the possible impact of future robotics and AI systems. Prominent thinkers have publicly warned about the risk of a dystopian future as the complexity of these systems progresses further. These warnings stand in contrast to the current state of the art of robotics and AI technology. This article reviews work considering both the future potential of robotics and AI systems and the ethical considerations that need to be addressed in order to avoid a dystopian future. References to recent initiatives to outline ethical guidelines, both for the design of systems and for how they should operate, are included.

Introduction

Authors and movie makers have, since the early days of modern technology, been actively predicting how the future would look as more advanced technology appeared. One of the first—later regarded as the father of science fiction—was the French author Jules Gabriel Verne (1828–1905). He published novels about journeys under water, around the world (in 80 days), from the earth to the moon, and to the center of the earth. The remarkable thing is that within 100 years of publishing these ideas, all—except the last—had been made possible by the progression of technology. Although it may have happened independently of Verne, engineers were certainly inspired by his books (Unwin, 2005). In contrast to this mostly positive view of technological progress, many have questioned the negative impact that may lie ahead. One of the first science fiction feature films was Fritz Lang’s 1927 German production Metropolis, set in a futuristic urban dystopian society with machines. More than 180 similar dystopian films have followed, including The Terminator, RoboCop, The Matrix, and A.I. Whether these are motivating or discouraging for today’s researchers in robotics and AI is hard to say, but at least they have put the ethical aspects of technology on the agenda.

Recently, business leaders and academics have warned that current advances in AI may have major consequences for present-day society:

• “Humans, limited by slow biological evolution, couldn’t compete and would be superseded by A.I.” —Stephen Hawking in a BBC interview, 2014.

• AI is our “biggest existential threat,” said Elon Musk during an interview at the AeroAstro Centennial Symposium at the Massachusetts Institute of Technology (2014).

• “I am in the camp that is concerned about super intelligence,” Bill Gates (2015) wrote in an Ask Me Anything interview on the Reddit networking site.

These comments have raised public awareness of the potential future impact of AI technology on society, and of the need for designers of such technology to consider that impact. What authors and movie directors propose about the future probably has less impact than when leading academics and business people raise questions about future technology. These public warnings echo publications like Nick Bostrom’s (2014) book Superintelligence: Paths, Dangers, Strategies, where “superintelligence” is explained as “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest.” The public concern that AI could make humanity irrelevant stands in contrast to the many researchers in the field who are mostly concerned with how to design AI systems; both sides could do well to learn from each other (Müller, 2016a,b). Thus, this article reviews and discusses published work on the possibilities and prospects of AI technology and on how we might take the measures necessary to reduce the risk of negative impacts. This is a broad area to cover in a single article, and opinions and publications on the topic come from many domains, so the article is mostly limited to work relevant to developers of robots and AI.

The Future Potential of Robotics and AI

Many reports predict a huge increase in the number of robots in the future (e.g., MAR, 2015; IFR, 2016; SAE, 2016). In the near future, many of these will be industrial robots. However, robots and autonomous systems are expected to gradually see widespread deployment in society, including self-driving vehicles and service robots at work and at home. The hard question is how quickly this transformation will happen.

The technologies that surround us take many shapes and have different levels of developmental progress and impact on our lives. A coarse categorization could be the following:

• Industrial robots: these have existed for many years and have made a huge impact within manufacturing. They are mostly preprogrammed by a human instructor and consist of a robot arm with a number of degrees of freedom (Nof, 1999).

• Service robots: robots that operate semi- or fully autonomously to perform useful tasks for humans or equipment, excluding industrial automation applications (IFR, 2017). They are currently applied in selected settings such as internal transportation in hospitals, lawn mowing, and vacuum cleaning.

• Artificial intelligence: software that makes technology able to adapt through learning, with the target of making systems able to sense, reason, and act in the best possible way (Tørresen, 2013). There has, in recent years, been a large increase in the deployment of artificial intelligence in a number of business domains, including customer service and decision support.

The technological transition from industrial robots to service robots represents an evolution toward more personalized systems with an increasing degree of autonomy. This implies flexible robots able to perform tasks in an unconstrained, human-centered environment (Haidegger et al., 2013). While the impact of industrial robots has been evident for a number of years, the impact of service robots in workplaces and at home is still to be seen and assessed. Progress in artificial intelligence research will have a major impact on how quickly we see intelligent and autonomous service robots. Some factors that could contribute to this technological progress are covered in Section "When and Where Will the Big Breakthrough Come?," followed by opinions on robot design in Section "How Similar to Humans Should Robots Become?" The possible effects of the coming technological transitions on humans and society, and how best to design future intelligent systems, are discussed in Section "Ethical Challenges and Countermeasures of Developing Advanced Artificial Intelligence and Robots."

When and Where Will the Big Breakthrough Come?

It is difficult to predict where and when a breakthrough in technology will come. Often it happens unexpectedly, unconnected to major initiatives and projects, and something that looks uninteresting or insignificant can prove to be significant. Some may remember trying the first graphical web browsers, such as Mosaic in 1993 (developed at the National Center for Supercomputing Applications (NCSA) at the University of Illinois Urbana-Champaign in the USA). These were slow, and it was not obvious at the time that the web and the Internet could become as large and comprehensive as they are today. However, the Internet and access to it gradually became faster, and browsers became more user friendly. The web has probably become so popular because it is easy to use, provides quick access to information from around the world, and enables free communication with anyone connected. The underlying foundation of the Internet is a scalable technology able to accommodate ever-increasing traffic. For AI, the lack of technology that can handle more complex conditions has been a bottleneck (Folsom-Kovarik et al., 2016).

As the complexity of our problems increases, it becomes more and more difficult to automatically create systems to handle them. Divide-and-conquer helps only to a limited extent. It remains to crack the code of how development and scaling occur in nature (Mitchell, 2009). This applies both to the development of individual agents and to the interaction between several agents. We have a lot of computing power available today, but as long as we do not know how programs should be designed, this power contributes little to effective solutions. Many laws of physics for natural phenomena have been discovered, but we have yet to really understand how complexity arises in nature. Advances in research in this area are likely to have a major impact on AI. Recent progress in training artificial neural networks with many layers (deep learning) is one example of a move in the right direction (Goodfellow et al., 2016).
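As a concrete miniature of the layered learning just mentioned, the sketch below trains a tiny two-layer network on the XOR problem, which no single-layer network can solve. It is an illustrative toy assuming NumPy, not an example taken from Goodfellow et al.; the architecture, data, and learning rate are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: a problem a single-layer network cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two weight layers ("deep" in miniature): 2 inputs -> 4 hidden -> 1 output.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    # Forward pass through both layers.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: squared-error gradient, chain rule layer by layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Full-batch gradient descent step.
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2).ravel())  # approaches [0, 1, 1, 0]
```

Scaling this same idea from a handful of units to millions, trained on large datasets, is what the deep learning literature is about.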

In addition to computational intelligence, robots also need mechanical bodies. Their body parts are currently static once manufactured and put into operation. However, the introduction of 3D printing combined with rapid prototyping opens up the possibility of in-the-field mechanical reconfiguration and adaptation (Lipson and Kurman, 2012).

There are two groups of researchers contributing to advances in AI. One group is concerned with studying biological or medical phenomena and trying to create models that mimic them as closely as possible. In this way, they try to demonstrate that biological mechanisms can be simulated in computers. This is useful, notably for developing more effective medicines and treatments for disease and disability. Many researchers in medicine collaborate with computer scientists on this type of research. One example is how understanding of the ear's behavior has contributed to the development of cochlear implants that give deaf people a sense of sound and the ability to hear almost normally (Torresen et al., 2016).

The second group of researchers focuses more on industrial problem solving and on making engineering systems sound. Here, it is interesting to see whether biology can provide inspiration for more effective methods than those already adopted. Normally, this group works at a higher abstraction level than the former group, who try to determine how best to model mechanisms in biology, but both make mutual use of each other's results. An example is the invention of the airplane, which first became possible when the Wright brothers understood the principle of air pressure and wing shape through wind tunnel studies. Initial experiments with flexible, bird-like wings were unsuccessful, and a level of abstraction above biology was necessary to create robust and functional airplanes.

Given the many recent warnings about AI, Müller and Bostrom (2016) collected opinions from researchers in the field, including highly cited experts, on their view of the future; 170 responses were collected from 549 invitations. The median estimate among respondents was a one in two chance that high-level machine intelligence (defined as "a machine that can carry out most human professions at least as well as a typical human") will be developed around 2040–2050, rising to a nine in ten chance by 2075. These experts expect systems to move on to superintelligence (defined as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest") less than 30 years thereafter. Further, they estimate the chance is about one in three that this development turns out to be "bad" or "extremely bad" for humanity. However, we should not take this as a guarantee: predicting the future is hard, and evaluations of expert forecasts have shown that experts are often wrong (Tetlock, 2017).

How Similar to Humans Should Robots Become?

How similar to a biological human can a robot become? It depends on developments in a number of fields such as AI methods, computing power, vision systems, speech recognition, speech synthesis, human–computer interaction, mechanics, and actuators or artificial muscle fibers. It is definitely an interdisciplinary challenge (Bar-Cohen and Hanson, 2009).

Given that we are able to actually create human-like robots, do we want them? The thought of humanoid robots taking care of us when we get old would probably frighten many. There is also a hypothesis called the uncanny valley (MacDorman and Ishiguro, 2006). It predicts that as robots become more similar to humans, the pleasure of having them around increases only up to a certain point. When they are very similar to humans, this pleasure falls abruptly: such robots might feel like the monstrous characters from sci-fi movies, and the reluctance to interact with them increases. However, comfort increases again as robots become even more similar to humans, which is explained by reduced realism inconsistency (MacDorman and Ishiguro, 2006). This dip and recovery of comfort as a robot becomes more human-like is the "uncanny valley."

Although we fear the lack of human contact that could result from being surrounded by robots, for some tasks many would prefer machines to humans. While most people enjoy helping others, the feeling of being a burden to others is unpleasant, and we derive a sense of dignity from handling our key needs ourselves. Thus, if a machine can help us, we prefer it in some contexts. We see this today with the Internet: rather than asking others how to solve a problem, we seek advice online, and we probably achieve things with machines that we otherwise would not get done. In the same way as Google helps us today with our information needs, robots will help us with our physical needs. Of course, we still need human contact and social interaction, so it is important that technology supports our social needs rather than making us more isolated. Autonomous cars may be one such measure: by enabling the elderly to get out and about more independently, they would support an active social life.

Whether robots look like humans or not is less important than how well they solve the tasks we want them to handle. However, they must be easy to communicate with and easy to train to do what we want. Apple has had great success with innovative mobile products that are easy to use. Both design and usability will be essential for many of us when choosing what types of robot helpers we want in our own homes in the future.

The fact that we are developing human-like robots means that they will have human-like behavior, but not human consciousness. They will be able to perceive, reason, make decisions, and learn to adapt, but will still not have human consciousness and personality. There are philosophical considerations around this question, but based on current AI, it seems unlikely that artificial consciousness will be achieved anytime soon. There are several arguments supporting this conclusion, including that consciousness may only arise and exist in biological matter (Manzotti and Tagliasco, 2008; Edelman et al., 2011; Earl, 2014). Still, robots could, through their learning and adaptation capabilities, become very good at mimicking human consciousness (Manzotti, 2013; Reggia, 2013).

Ethical Challenges and Countermeasures of Developing Advanced Artificial Intelligence and Robots

Ethical perspectives on AI and robotics should be addressed in at least two ways. First, the engineers developing systems need to be aware of possible ethical challenges, including avoiding misuse and allowing for human inspection of the functionality of the algorithms and systems (Bostrom and Yudkowsky, 2014). Second, when moving toward advanced autonomous systems, the systems should themselves be able to perform ethical decision-making to reduce the risk of unwanted behavior (Wallach and Allen, 2009).

An increasing number of autonomous systems working together increases the impact of any erroneous decisions made without human involvement. Several books have been published on computer ethics (also referred to as machine ethics or machine morality). In the book Moral Machines (Wallach and Allen, 2009), a hypothetical scenario is outlined in which "unethical" robotic trading systems contribute to an artificially high oil price, which leads the automated energy-control program to switch from oil to more polluting coal power plants to avoid rising electricity prices. The coal-fired plants cannot tolerate running at full production for long; after some time one explodes, creating a massive power outage with all the consequences that has for life and health. The outage triggers terror alarms at the nearest international airport, resulting in chaos at the airport and collisions between arriving aircraft. The conclusion is that the economic and human cost arose because the automated decision systems were programmed separately, without coordination. This scenario shows that it is especially important to have control mechanisms governing how decision systems interact. Such systems should have mechanisms that automatically limit behavior and also inform operators about conditions deemed to require human review.
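One concrete form such a countermeasure could take is a guard layer between a decision system and its actuators: actions outside hard limits are blocked, and borderline ones are flagged for human review. The sketch below is a minimal illustration of this idea in Python; the action names, thresholds, and helper functions are invented for illustration and are not drawn from Wallach and Allen.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str          # e.g., "switch_to_coal"
    output_mw: float   # requested plant output

# Hypothetical limits; real values would come from safety engineering.
MAX_SAFE_OUTPUT_MW = 800.0
REVIEW_THRESHOLD_MW = 600.0

def notify_operator(action: Action, reason: str) -> None:
    # Placeholder: a real system would page a human operator.
    print(f"OPERATOR REVIEW: {action.name} ({reason})")

def execute(action: Action) -> None:
    # Placeholder for the actual plant-control interface.
    print(f"executing {action.name} at {action.output_mw} MW")

def guarded_execute(action: Action) -> bool:
    """Enforce hard limits automatically; flag borderline cases for humans."""
    if action.output_mw > MAX_SAFE_OUTPUT_MW:
        notify_operator(action, "exceeds hard safety limit; blocked")
        return False
    if action.output_mw > REVIEW_THRESHOLD_MW:
        notify_operator(action, "within limits but unusual; flagged")
    execute(action)
    return True

guarded_execute(Action("switch_to_coal", 750.0))  # flagged, then executed
guarded_execute(Action("switch_to_coal", 900.0))  # blocked
```

The design point is that the limiting logic lives outside any single decision system, so separately programmed systems share one coordinated safety envelope.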

In the book, it is further argued that the advantages of the new technology are at the same time so large that both politicians and the market will welcome them. Thus, it becomes important that morality-based decision-making becomes part of artificial intelligence systems. These systems must be able to evaluate the ethical implications of their possible actions, on several levels, including whether laws would be broken. However, building machines that incorporate all the world's religious and philosophical traditions is not so easy; ethical dilemmas occur frequently.

Most engineers would probably prefer not to develop systems that could hurt someone. Nevertheless, harm can be difficult to predict. We can develop a very effective autonomous driving system that reduces the number of accidents and saves many lives, but if the system takes lives because of certain unpredictable behaviors, it will be socially unacceptable. Nor is it acceptable to create, or grant regulatory approval to, a system that carries a real risk of severe adverse events. We see the effect of this in the relatively slow adoption of autonomous cars. One significant challenge is that of automating moral decisions, such as the possible conflict between protecting a car's passengers and protecting surrounding pedestrians (Bonnefon et al., 2016).

Below follows first an overview of the ethical challenges we face with more intelligent systems and robots in our society, followed by countermeasures against technology risks through machine ethics and designer precautions, respectively.

Ethical Societal Challenges Arising with Artificial Intelligence and Robots

Our society is facing a number of potential challenges from future highly intelligent systems regarding jobs and technology risks:

• Future jobs: People may become unemployed because of automation. This has been a fear for decades, but experience shows that the introduction of information technology and automation has created far more jobs than have been lost (Economist, 2016). Further, many will argue that jobs now are more interesting than the repetitive routine jobs that were common in earlier manufacturing companies. Artificial intelligence systems and robots help industry provide more cost-efficient production, especially in high-cost countries; thus, the need for outsourcing and replacing employees can be reduced. Still, recent reports have argued that in the near future we will see an overall loss of jobs (Schwab and Samans, 2016; Frey and Osborne, 2016), although other researchers mistrust these predictions (Acemoglu and Restrepo, 2016). Fewer jobs and working hours for employees could tend to benefit a small elite rather than all members of our society. One proposal to meet this challenge is a universal basic income (Ford, 2015). Further, current social security and government services rely on the taxation of human labor, and pressure on this system could have major social and political consequences. Thus, we must find mechanisms to support social security in the future; these may be similar to the "robot tax" that was recently considered but rejected by the European Parliament (Prodhan, 2017).

• Future jobs: How much, and in what way, will we work with increased automation? If machines do everything for us, life could in theory become quite dull. Normally, we expect that automating tasks will result in shorter working hours. However, what we see is that the distinction between work and leisure gradually becomes less evident, and we can do our jobs from almost anywhere. Mobile phones and wireless broadband give us the opportunity to work around the clock, and the pressure to stay competitive results in many people today working more than before, although with less physical effort than in jobs of the past. Although artificial intelligence contributes to this trend, we can simultaneously hope that automated agents will take over some of our tasks and thus also give us some leisure time.

• Technology risk: Losing human skills due to technological excellence. For hundreds of years, the foundation of our society has been training humans to make things, to work in, and to understand our increasingly complex world. However, with the introduction of robots and information and communication technology, the need for human knowledge and skills gradually decreases as robots make products faster and more accurately than humans. Further, we can seek knowledge and advice from computers, which lessens our need to train and use our cognitive capabilities for memory, reasoning, decision making, and so on. This could have a major impact on how we interact with the world around us. It would be hard for humans to take over if the technology fails, and challenging to be sure we get the best solution if we depend only on information available on the web. The latter is already a challenge today, given the blurred distinction between expert knowledge and alternative sources on the web. Thus, humans will still need training in the future, both to make the technology work effectively and to retain the competence to make our own judgments about automatic decision making.

• Technology risk: Artificial intelligence can be used for destructive and unwanted tasks. Although mostly remotely controlled today, artificial intelligence is expected to become highly applicable to future military unmanned aircraft (drones) in the air and robots on the ground. This saves lives in the military forces but can, through miscalculation, kill innocent civilians. Similarly, surveillance cameras are useful for many purposes, but many are skeptical of advanced tracking of people using artificial intelligence. It might become possible to track the movement and behavior of a person through a network of interconnected surveillance cameras combined with position information from the person's smartphone. The British author George Orwell (1903–1950) published the novel 1984 in 1949, describing an unpleasant future society under continuous audio and video monitoring by a dictatorial government led by "Big Brother." Today's technology is not far from making this possible, but few fear that it will be used as in 1984 in our democratic societies. Nevertheless, disclosures (e.g., by Edward Snowden in 2013) have shown that governments can leverage technology in the fight against crime and terror at the risk of the innocent being monitored.

• Technology risk: Could successful AI lead to the extinction of mankind? Almost any technology can be misused and cause severe damage if it gets into the wrong hands. As discussed in the introduction, a number of writers and filmmakers have addressed this issue through dramatic scenes where technology gets out of control. However, the development of technology has not so far led to a global catastrophe. Nuclear power plants have gotten out of control, but the largest nuclear power plant accidents, at Chernobyl in Ukraine (then the Soviet Union) in 1986 and Fukushima in Japan in 2011, were due to human and mechanical failure, not the failure of control systems. At Chernobyl, the reactor exploded during an experiment in which too many control rods had been removed; at Fukushima, cooling pumps failed and reactors melted as a result of the earthquake and subsequent tsunami. The lesson of these disasters must be that systems need built-in mechanisms to prevent human error and to help predict the risk of mechanical failure to the extent possible.

Looking back, new technology brings many benefits, and damage often takes a different form than we would first expect. Misuse of technology is always a danger, and it is probably a far greater danger than the technology itself getting out of control. An example is computer software, which today is very useful to us in many ways, while we are also vulnerable to those who abuse the technology to create malicious software such as viruses that infect and damage systems. In 1999, the Melissa virus spread through e-mail, overloading and bringing down the e-mail systems of several large companies such as Intel and Microsoft. A number of people are currently sharing their concerns regarding lethal autonomous weapons systems (Lin et al., 2012; Russell et al., 2015). Others argue that such systems could be better than human soldiers in some situations, if they are programmed never to break agreed laws of war representing the legal requirements and responsibilities of a civilized nation (Arkin et al., 2009).

Programs Undertaking Ethical Decision-Making

The book Moral Machines, which begins with the somewhat frightening scenario discussed earlier in this article, also contains a thorough review of how artificial moral agents can be implemented (Wallach and Allen, 2009), including the use of ethical expertise in program development. It proposes three approaches: formal logical and mathematical ethical reasoning; machine learning methods based on examples of ethical and unethical behavior; and simulation, where one observes what happens when different ethical strategies are followed.

A relevant example is given in the book. Imagine that you go to a bank to apply for a loan. The bank uses an AI-based system for credit evaluation based on a number of criteria. If you are rejected, the question arises as to what the reason is. You may come to believe that it is due to your race or skin color rather than your financial situation. The bank can hide behind saying that the program cannot be analyzed to determine why your loan application was rejected, while at the same time claiming that skin color and race are not used as parameters. A system more open to inspection could, however, show that the residential address was crucial in this case: the chosen criteria can produce effects almost as if unreasonable criteria had been used. It is important to prevent such behavior as much as possible, for example by simulating AI systems to detect possibly unethical actions. However, an important ethical challenge related to this is determining how to perform the simulation, e.g., by whom and to what extent.
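A system that is "open for inspection" can be audited directly. The sketch below, with hypothetical column names and toy data, illustrates two simple checks an auditor might run with pandas: comparing approval rates across protected groups, and testing whether an innocuous field such as residential district acts as a proxy for a protected attribute. It is only a minimal illustration, not a complete fairness audit.

```python
import pandas as pd

# Hypothetical loan-decision log; in practice this would come from the bank.
df = pd.DataFrame({
    "district": ["north", "north", "south", "south", "south", "north"],
    "approved": [1, 1, 0, 0, 0, 1],
    "group":    ["A", "A", "B", "B", "B", "A"],  # protected attribute
})

# 1. Disparate outcomes: approval rate per protected group.
print(df.groupby("group")["approved"].mean())

# 2. Proxy check: does district effectively encode the protected group?
proxy_table = pd.crosstab(df["district"], df["group"], normalize="index")
print(proxy_table)  # strongly skewed rows suggest district is a proxy
```

If the second table shows that each district is dominated by one group, then a model "not using" race can still discriminate through the address, which is exactly the effect the book's example describes.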

It is further argued that all software that will replace human evaluation and social functions should adhere to criteria such as accountability, inspectability, robustness against manipulation, and predictability. All developers should have an inherent desire to create products that deliver the best possible user experience and user safety. It should be possible to inspect an AI system so that, if it comes up with a strange or incorrect action, we can determine the cause and correct the system so the same thing does not happen again. The ability to manipulate the system must be restricted, and the system must behave predictably. The complexity and generality of an AI system influence how difficult it is to satisfy these criteria: it is obviously easier, and more predictable, for a robot to move in a known and limited environment than in new and unfamiliar surroundings.

Developers of intelligent and adaptive systems must, in addition to considering ethical issues in how they design systems, try to give the systems themselves the ability to make ethical decisions (Dennis et al., 2015). This is referred to as computer ethics, where one looks at the possibility of giving the actual machines ethical guidelines. The machines should be able to make ethical decisions using ethical frameworks (Anderson and Anderson, 2011). It is argued that ethical issues are too interdisciplinary for programmers alone to explore: researchers in ethics and philosophy should also be included in the formulation of ethically "conscious" machines targeted at acceptable machine behavior. Michael and Susan Leigh Anderson have collected contributions from both philosophers and AI researchers in the book Machine Ethics (Anderson and Anderson, 2011). The book discusses why and how to include an ethical dimension in machines that will act autonomously. A robot assisting an elderly person at home needs clear guidelines for what is acceptable behavior in monitoring and interacting with the user. Medically important information must be reported, but at the same time the person must be able to maintain privacy. Perhaps video surveillance is desired by the user (or by relatives or others), but it should be clear to the user when and how it happens. An autonomous robot must also be able to adapt to the user's personality to sustain a good dialog.

Other work focuses on the importance of providing robots with internal models to make them self-aware, which can lead to enhanced safety and potentially also ethical behavior (Winfield, 2014). It could also be advantageous for multiple robots to share parts of their internally modeled behavior with each other (Winfield, 2017). Self-awareness concerns either knowledge about one's self (private self-awareness) or about the surrounding environment (public self-awareness) (Lewis et al., 2015), and it is applicable across a number of different application areas (Lewis et al., 2016). The models can be organized in a hierarchical and distributed manner (Demiris and Khadhouri, 2006). Several works apply artificial reasoning to verify whether a robot's behavior satisfies a set of predetermined ethical constraints, which have to a large extent been defined symbolically using logic (Arkin et al., 2012; Govindarajulu and Bringsjord, 2015). However, future systems will probably combine programmed and machine learning approaches (Deng, 2015).

While most work on robot ethics is tested in simulation, some work has been implemented on real robots. An early example was a robot programmed to decide whether, and when, to keep reminding a patient to take medicine, or to accept the patient's decision not to take it (Anderson and Anderson, 2010). The robot (Nao from Aldebaran Robotics) was said to make the following compromises: "Balance three duties: ensuring that the patient receives a possible benefit from taking the medication; preventing the harm that might result from not taking the medication; and respecting the autonomy of the patient (who is assumed to be adult and competent)." The robot notifies the overseer when it reaches the point where the patient could be harmed, or could lose considerable benefit, by not taking the medication. In Winfield et al. (2014), an ethical action selection mechanism in an e-puck mobile robot makes it sometimes choose actions that compromise the robot's own safety in order to prevent a second robot from coming to harm. This represents a contribution toward making robots that are ethical as well as safe.
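The duty balancing described above can be pictured as a small scoring procedure. The following sketch is a loose reconstruction, not the Andersons' actual system (which derived its principle from ethicists' example judgments rather than fixed weights); the numeric duty scores, action names, and threshold are invented for illustration.

```python
# Hypothetical duty scores in [-2, 2]: (benefit, harm_prevention, autonomy).
ACTIONS = {
    "remind_again":    (1, 0, -1),
    "accept_refusal":  (0, 0, 2),
    "notify_overseer": (1, 2, -2),
}
HARM_ALERT_LEVEL = -1  # escalate when refusal is projected to cause real harm

def choose_action(projected_harm: int) -> str:
    """Pick an action balancing benefit, harm prevention, and autonomy."""
    if projected_harm <= HARM_ALERT_LEVEL:
        return "notify_overseer"  # the duty to prevent harm dominates
    # Otherwise pick the action with the best summed duty score.
    return max(ACTIONS, key=lambda a: sum(ACTIONS[a]))

print(choose_action(projected_harm=0))   # -> "accept_refusal" (autonomy wins)
print(choose_action(projected_harm=-2))  # -> "notify_overseer"
```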

Implementing ethical behavior in robots inspired by the simulation theory of cognition has also been proposed (Vanderelst and Winfield, 2017). This approach utilizes internal simulations of a set of behavioral alternatives, allowing the robot to simulate actions and predict their consequences. Using this concept, it has been demonstrated that the humanoid Nao robot can behave according to Asimov's laws of robotics.
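A minimal sketch of this simulation-based selection might look as follows. The toy dictionary stands in for the internal simulation, which in the cited work is a full physics-based world model; the action names and predicted outcomes are invented for illustration.

```python
# Toy internal model: predicted outcome of each candidate action as
# (human_harmed, robot_damaged, goal_progress) -- all assumed values.
PREDICTED = {
    "proceed":      (True,  False, 1.0),  # would knock a person over
    "detour_left":  (False, False, 0.7),
    "block_hazard": (False, True,  0.2),  # robot shields human, takes damage
    "stand_still":  (False, False, 0.0),
}

def select_action() -> str:
    # First-law analogue: discard any action predicted to harm a human.
    safe = {a: v for a, v in PREDICTED.items() if not v[0]}
    # Then prefer self-preservation (third-law analogue), then task progress.
    return max(safe, key=lambda a: (not safe[a][1], safe[a][2]))

print(select_action())  # -> "detour_left"
```

The essential point is that ethics enters as a filter over simulated consequences, before any goal-directed optimization is applied.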

Ethical Guidelines for Robot Developers

Professor and science fiction writer Isaac Asimov (1920–1992) was foresighted enough, already in 1942, to see the need for ethical rules for robot behavior. His three laws (Asimov, 1942) have since often been referenced in the science fiction literature and among researchers who discuss robot morality:

1. A robot may not harm a human being or, through inaction, allow a human being to be injured.

2. A robot must obey orders given by human beings except where such orders would conflict with the first law.

3. A robot must protect its own existence as long as such protection does not conflict with the first or second law.

It has later been argued that such simple rules are not enough to prevent robots from causing harm (Lin et al., 2012). José Maria Galvan and Paolo Dario gave birth to technoethics; the term was used in a talk by Galvan at the workshop "Humanoids: A Techno-ontological Approach" at Waseda University in 2001, organized by Paolo Dario and Atsuo Takanishi, where he spoke about the ethical dimension of technology (Veruggio, 2005). The term roboethics was introduced in 2002 by the Italian robot scientist Gian Marco Veruggio (Veruggio and Operto, 2008). He saw a need for development guidelines for robots that would contribute to progress in human society and help prevent abuse against humanity. Veruggio argues that ethics are needed for robot designers, manufacturers, and users. We must expect that the robots of the future will be smarter and faster than the people they are meant to obey, which raises questions about safety, ethics, and economics. How do we ensure that they are not misused by persons with malicious intent?

Is there any chance that the robots themselves, by understanding that they are superior to humans, would try to enslave us? We are still far from the worst scenarios described in books and movies, yet there is reason to be alert. First, robots are mechanical systems that might unintentionally hurt us. Second, with an effective sensory system, there is a danger that the collected information can be accessed by unauthorized people and made available to others through the Internet. Today this is a problem of intrusion on our computers, but future robots may be vulnerable to hacking as well. This would be a particular challenge for robots that collect a lot of audio and video information from our homes. We would not like to be surrounded by robots unless we are sure that sensor data stay within the robots.

Another problem is that robots could be misused for criminal activities such as burglary. A robot in your own home could be reprogrammed by people with criminal intent, or criminals might have their own robots carry out the theft. Thus, having a home robot connected to the Internet will place great demands on security mechanisms to prevent abuse. Although we must assume that anyone who develops robots and the AI for them has good intentions, it is important that developers also keep possible abuse in mind. These intelligent systems must be designed so that the robots are friendly and kind, while being difficult to abuse for malicious actions.

Part of the robot-ethics discussion concerns military use (see Part III of Lin et al., 2012): applying robots in military activities raises ethical concerns. The discussion is natural for several reasons, including that military applications are an important driving force in technology development. At the same time, military robot technology is not all negative, since it may save lives by replacing human soldiers in danger zones. However, giving robotic military systems too much autonomy increases the risk of misuse, including against civilians.

In 2004, the first international symposium on roboethics was held in Sanremo, Italy. The EU has funded a research program, ETHICBOTS, in which a multidisciplinary team of researchers identified and analyzed techno-ethical challenges in the integration of human and artificial entities. The European Robotics Research Network (EURON) funded the project EURON Roboethics Atelier in 2005, with the goal of developing the first roadmap for roboethics (Veruggio, 2006), that is, a systematic assessment of the ethical issues surrounding robot development. The focus of this project was on human ethics for designers, manufacturers, and users of robots. Here are some examples of recommendations made by the project participants for commercial robots:

• Safety. There must be mechanisms (or opportunities for an operator) to control and limit a robot's autonomy.

• Security. There must be a password or other keys to avoid inappropriate and illegal use of a robot.

• Traceability. As with aircraft, robots should have a "black box" to record and document their own behavior (Winfield and Jirotka, 2017); a minimal sketch combining this with the privacy recommendation below follows the list.

• Identifiability. Robots should have serial numbers and registration numbers, similar to cars.

• Privacy policy. Software and hardware should be used to encrypt and password protect sensitive data that the robot needs to save.
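As anticipated above, the traceability and privacy recommendations can be combined in a single component: an append-only "black box" whose entries are encrypted at rest. The sketch below assumes the third-party cryptography package; the class name and event fields are illustrative, not from the roadmap.

```python
import json
import time

from cryptography.fernet import Fernet  # pip install cryptography

class EthicalBlackBox:
    """Append-only, encrypted log of robot decisions (traceability + privacy)."""

    def __init__(self, key: bytes, path: str = "blackbox.log"):
        self._fernet = Fernet(key)
        self._path = path

    def record(self, event: dict) -> None:
        entry = {"t": time.time(), **event}
        token = self._fernet.encrypt(json.dumps(entry).encode())
        with open(self._path, "ab") as f:  # append-only by convention
            f.write(token + b"\n")

key = Fernet.generate_key()  # in practice, kept in a secure hardware element
box = EthicalBlackBox(key)
box.record({"robot_id": "R-042", "action": "open_door", "sensor": "lidar"})
```

Only the holder of the key (e.g., an accident investigator) can decrypt the records, which is what lets one device satisfy both recommendations at once.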

Studies of the ethical and social implications of robotics continue, and books and articles disseminate recent findings (Lin et al., 2012). It is important to include the user in the design process, and several methodologies have been proposed. Value-sensitive design is one, consisting of three phases: conceptual, empirical, and technical investigations accounting for human values. The investigations are intended to be iterative, allowing the designer to modify the design continuously (Friedman et al., 2006).

The work has continued, including the publication of the Principles of Robotics in 2011 by the Engineering and Physical Sciences Research Council (a UK government agency) (EPSRC, 2011). It proposed regulating robots in the real world with the following rules (Boden et al., 2017; Prescott and Szollosy, 2017):

1. Robots are multiuse tools. Robots should not be designed solely or primarily to kill or harm humans, except in the interests of national security.

2. Humans, not robots, are responsible agents. Robots should be designed and operated, as far as is practicable, to comply with existing laws and fundamental rights and freedoms, including privacy.

3. Robots are products. They should be designed using processes which assure their safety and security.

4. Robots are manufactured artifacts. They should not be designed in a deceptive way to exploit vulnerable users; instead their machine nature should be transparent.

5. The person with legal responsibility for a robot should be attributed.

Further, the British Standards Institution has published the world's first standard on ethical guidelines for the design of robots, BS 8611, in April 2016 (BSI, 2016). It was prepared by a committee of scientists, academics, ethicists, philosophers, and users to provide guidance on potential hazards and protective measures in the design of robots and autonomous systems used in everyday life. This was followed by the IEEE Standards Association initiative on AI and autonomous systems ethics publishing Ethically Aligned Design, version 1, "A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems" (IEEE, 2016; Bryson and Winfield, 2017). It consists of eight sections, each addressing a specific topic related to AI and autonomous systems that has been discussed by a committee of the IEEE Global Initiative. The theme of each section is as follows:

1. General principles.

2. Embedding values into autonomous intelligent systems.

3. Methodologies to guide ethical research and design.

4. Safety and beneficence of artificial general intelligence and artificial superintelligence.

5. Personal data and individual access control.

6. Reframing autonomous weapons systems.

7. Economics/humanitarian issues.

The document is to be revised based on an open hearing with a deadline of April 2017.

Civil law rules for robotics have also been discussed within the European Community, resulting in a published European Parliament resolution (EP, 2017). Furthermore, principles for AI were the target of the Asilomar conference, which gathered leaders in economics, law, ethics, and philosophy for five days dedicated to beneficial AI. It resulted in 23 principles within three categories: research issues; ethics and values; and longer-term issues (Asilomar, 2017). They are published on the web and have since been endorsed by a number of leading researchers and business people. Similarly, the Japanese Society for Artificial Intelligence has published nine ethical guidelines (JSAI, 2017).

All the initiatives above indicate a concern around the world for the future of AI and robotics technology, and a sincere interest in having researchers themselves contribute to the development of technology that is favorable in every way.

Technology may be viewed, and felt, like a wave hitting us whether we want it or not. However, many novel and smart devices have been introduced that, through lack of adoption, were rapidly removed from the market. Thus, through what we buy and use, we have a large impact on which technologies are adopted and sustained in our society. At the same time, we have limited control over unintentional changes to our behavior through the way we adopt and use technology; for example, smartphones and the Internet have in many ways changed the way we live our lives and interact with others. Smartphones have also brought us physically closer to technology than to any living being.

In the future, an even more diverse set of technologies will surround us, including technologies for medical examination, for serving us, and for taking us where we want to go. However, such devices and systems need to behave properly for us to want them close by: if a robot hits us unintentionally or works too slowly, few would accept it. Mechanical robots can, with the help of artificial intelligence, be designed to learn to behave in a friendly, user-adapted way. However, they will need to contain many sensors, similar to our smartphones, and we need some assurance that this data will not be misused. There are also a number of other possible risks and side effects, so the work undertaken in committees around the world (referred to in the previous section) is important and valuable for developing future technology. Still, there is a large divide between current design challenges and science fiction movies' dystopian portrayals of how future technology might impact or even eradicate humanity. The latter probably has a positive effect on our awareness of possible vulnerabilities that should be addressed proactively; we now see this taking place in the many initiatives to define regulations for AI and robots.

Robots for the elderly living at home are a relevant example illustrating some of the opportunities and challenges we are facing. While engineers work on making intelligent and clever robots, it will be up to politicians and governments, through laws and regulation, to limit unwanted changes in society. For example, their decisions will determine staffing requirements for elderly care when less physical work with the elderly is needed. Such decisions should build on studies seeking the best compromise between dignity and independence on the one hand and possible loneliness on the other. At the same time, if robots assume many of our current jobs, people may in general have more free time that could be well spent with the elderly.

A robot arriving in our home can start learning about our behavior and preferences and, like a child, gradually personalize its interactions, leading us to enjoy having it around much as we enjoy having a cat or dog. However, rather than us having to take it out for fresh air, it will take us out, both for fresh air and to see friends as we get old. The adoption of robots within elderly care is unlikely to be a quick transition, so today's elderly do not have to worry about being placed under machine care. Rather, those of us who are younger, including the current developers of elderly care robots, are more likely to be confronted with these robots when we get old. Thus, it is in our own interest to make them user friendly.

This article has presented some perspectives on the future of AI and robotics, reviewing ethical issues related to the development of such technology and of gradually more complex autonomous control. Ethical considerations should be taken into account by designers of robotic and AI systems, and the autonomous systems themselves must be aware of the ethical implications of their actions. Although the gap between the dystopian futures visualized in movies and the current real world may be considered large, there are reasons to be aware of possible technological risks so as to act proactively. It is therefore encouraging, as outlined in this article, that many leading researchers and business people are now involved in defining rules and guidelines to ensure that future technology becomes beneficial and to limit the risks of a dystopian future.

Author Contributions

The author confirms being the sole contributor of this work and approved it for publication.

Conflict of Interest Statement

The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

This work is partially supported by The Research Council of Norway as part of the Engineering Predictability with Embodied Cognition (EPEC) project, under grant agreement 240862; the Multimodal Elderly Care Systems (MECS) project, under grant agreement 247697; and the Collaboration on Intelligent Machines (COINMAC) project, under grant agreement 261645. I am thankful for important article draft comments and language corrections provided by Charles Martin.

[1] https://en.wikipedia.org/wiki/List_of_dystopian_films
[2] http://www.bbc.com/news/technology-30290540
[3] https://www.theguardian.com/technology/2014/oct/27/elon-musk-artificial-intelligence-ai-biggest-existential-threat
[4] http://www.bbc.com/news/31047780
[5] https://www.reddit.com/r/IAmA/comments/2tzjp7/hi_reddit_im_bill_gates_and_im_back_for_my_third/

Acemoglu, D., and Restrepo, P. (2016). “The race between machine and man: implications of technology for growth, factor shares and employment,” in NBER Working Paper No. 22252 . Available at: https://www.nber.org/papers/w22252.pdf


Anderson, M., and Anderson, S. L. (2010). Robot be good. Sci. Am. 303, 72–77. doi:10.1038/scientificamerican1010-72


Anderson, M., and Anderson, S. L. (2011). Machine Ethics . New York: Cambridge University Press.

Arkin, R. C., Ulam, P., and Duncan, B. (2009). An Ethical Governor for Constraining Lethal Action in an Autonomous System . Technical Report GIT-GVU-09-02.

Arkin, R. C., Ulam, P., and Wagner, A. R. (2012). Moral decision making in autonomous systems: enforcement, moral emotions, dignity, trust, and deception. Proc. IEEE 100, 571–589. doi:10.1109/JPROC.2011.2173265

Asilomar. (2017). Available at: https://futureoflife.org/ai-principles/

Asimov, I. (1942). “Runaround,” in Astounding Science Fiction , Vol. 29, No. 1. Available at: http://www.isfdb.org/cgi-bin/pl.cgi?57563

Bar-Cohen, Y., and Hanson, D. (2009). The Coming Robot Revolution . New York: Springer.

Boden, M., Bryson, J., Caldwell, D., Dautenhahn, K., Edwards, L., Kember, S., et al. (2017). Principles of robotics: regulating robots in the real world. Conn. Sci. 29, 124–129. doi:10.1080/09540091.2016.1271400

Bonnefon, J.-F., Shariff, A., and Rahwan, I. (2016). The social dilemma of autonomous vehicles. Science 352, 1573–1576.

Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies . Oxford: Oxford University Press.

Bostrom, N., and Yudkowsky, E. (2014). "The ethics of artificial intelligence," in The Cambridge Handbook of Artificial Intelligence, eds K. Frankish and W. M. Ramsey (Cambridge: Cambridge University Press).

Bryson, J., and Winfield, A. F. (2017). Standardizing ethical design for artificial intelligence and autonomous systems. Computer 50, 116–119. doi:10.1109/MC.2017.154

BSI. (2016). Robots and Robotic Devices. Guide to the Ethical Design and Application of Robots and Robotic Systems, BS 8611. BSI Standards Publications. Available at: http://shop.bsigroup.com/ProductDetail?pid=000000000030320089

Demiris, Y., and Khadhouri, B. (2006). Hierarchical attentive multiple models for execution and recognition of actions. Rob Auton Syst 54, 361–369. doi:10.1016/j.robot.2006.02.003

Deng, B. (2015). Machine ethics: the robot’s dilemma. Nature 523, 20–22. doi:10.1038/523024a

Dennis, L. A., Fisher, M., and Winfield, A. F. T. (2015). Towards verifiably ethical robot behaviour. CoRR abs/1504.03592. Available at: http://arxiv.org/abs/1504.03592

Earl, B. (2014). The biological function of consciousness. Front. Psychol. 5:697. doi:10.3389/fpsyg.2014.00697


Economist. (2016). Artificial intelligence: the impact on jobs – automation and anxiety. Economist . June 25th 2016. Available at: https://www.economist.com/news/special-report/21700758-will-smarter-machines-cause-mass-unemployment-automation-and-anxiety

Edelman, G. M., Gally, J. A., and Baars, B. J. (2011). Biology of consciousness. Front. Psychol. 2:4. doi:10.3389/fpsyg.2011.00004

EP. (2017). European Parliament Resolution of 16 February 2017 with Recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)). Available at: http://www.europarl.europa.eu/sides/getDoc.do?pubRef=-/EP/TEXT+TA+P8-TA-2017-0051+0+DOC+XML+V0/EN

EPSRC. (2011). Principles of Robotics, EPSRC and AHRC Robotics Joint Meeting. Available at: https://www.epsrc.ac.uk/research/ourportfolio/themes/engineering/activities/principlesofrobotics/

Folsom-Kovarik, J. T., Schatz, S., Jones, R. M., Bartlett, K., and Wray, R. E. (2016). AI Challenge Problem: Scalable Models for Patterns of Life , Vol. 35, No. 1. Available at: https://www.questia.com/magazine/1G1-364691878/ai-challenge-problem-scalable-models-for-patterns

Ford, M. (2015). Rise of the Robots: Technology and the Threat of a Jobless Future . New York: Basic Books.

Frey, C. B., and Osborne, M. (2016). Technology at Work v2.0: The Future Is Not What It Used to Be . Oxford Martin School and Citi. Available at: http://www.oxfordmartin.ox.ac.uk/publications/view/2092

Friedman, B., Kahn, P. H. Jr., and Borning, A. (2006). "Value sensitive design and information systems," in Human-Computer Interaction and Management Information Systems: Foundations, eds P. Zhang and D. Galletta (New York: ME Sharpe), 348–372.

Goodfellow, I., Bengio, Y., and Courville, A. (2016). Deep Learning. Cambridge, MA: MIT Press.

Govindarajulu, N. S., and Bringsjord, S. (2015). “Ethical regulation of robots must be embedded in their operating systems,” in A Construction Manual for Robots’ Ethical Systems ed. R. Trappl (Springer), 85–99.

Haidegger, T., Barreto, M., Gonçalves, P., Habib, M. K., Veera Ragavan, S. K., Li, H. (2013). Applied ontologies and standards for service robots. Rob. Auton. Syst. 61, 1215–1223. doi:10.1016/j.robot.2013.05.008

IEEE. (2016). Ethically Aligned Design. IEEE Standards Association. Available at: http://standards.ieee.org/develop/indconn/ec/ead_v1.pdf

IFR. (2016). World Robotics Report, 2016 . International Federation of Robotics.

IFR. (2017). Service Robots. International Federation of Robotics. Available at: http://www.ifr.org/service-robots/

JSAI. (2017). Japanese Society for Artificial Intelligence Ethical Guidelines . Available at: http://ai-elsi.org/wp-content/uploads/2017/05/JSAI-Ethical-Guidelines-1.pdf

Lewis, P. R., Chandra, A., Funmilade, F., Glette, K., Chen, T., Bahsoon, R., et al. (2015). Architectural aspects of self-aware and self-expressive computing systems: from psychology to engineering. IEEE Comput. 48, 62–70. doi:10.1109/MC.2015.235

Lewis, P. R., Platzner, M., Rinner, B., Tørresen, J., and Yao, X. (eds) (2016). Self-Aware Computing Systems . Switzerland: Springer.

Lin, P., Abney, K., and Bekey, G. A. (eds) (2012). Robot Ethics: The Ethical and Social Implications of Robotics. Cambridge, MA: The MIT Press.

Lipson, H., and Kurman, M. (2012). Fabricated: The New World of 3D Printing . Hoboken, US: Wiley Press.

MacDorman, K. F., and Ishiguro, H. (2006). The uncanny advantage of using androids in social and cognitive science research. Interact. Stud. 7, 297–337. doi:10.1075/is.7.3.03mac

Manzotti, R. (2013). Machine consciousness: a modern approach. Nat. Intell. INNS Mag. 2, 7–18.

Manzotti, R., and Tagliasco, V. (2008). Artificial consciousness: a discipline between technological and theoretical obstacles. Artif. Intell. Med. 44, 105–117. doi:10.1016/j.artmed.2008.07.002

MAR. (2015). Robotics 2020 Multi-Annual Roadmap for Robotics in Europe . SPARC Robotics, euRobotics AISBL. Available at: https://eu-robotics.net/sparc/upload/Newsroom/Press/2016/files/H2020_Robotics_Multi-Annual_Roadmap_ICT-2017B.pdf

Mitchell, M. (2009). Complexity: A Guided Tour. New York, NY: Oxford University Press.

Müller, V. C. (ed.) (2016a). Risks of Artificial Intelligence. London: Chapman & Hall – CRC Press, 292.

Müller, V. C. (2016b). "Editorial: risks of artificial intelligence," in Risks of Artificial Intelligence, ed. V. C. Müller (London: CRC Press – Chapman & Hall), 1–8.

Müller, V. C., and Bostrom, N. (2016). “Future progress in artificial intelligence: a survey of expert opinion,” in Fundamental Issues of Artificial Intelligence , ed. V. C. Müller (Berlin: Synthese Library, Springer), 553–571.

Nof, S. Y. (ed.) (1999). Handbook of Industrial Robotics , 2nd Edn. Hoboken, US: John Wiley & Sons, 1378.

Prescott, T., and Szollosy, M. (2017). Ethical principles of robotics special issue. Conn. Sci. 29. Part 1: http://www.tandfonline.com/toc/ccos20/29/2?nav=tocList ; Part 2: http://www.tandfonline.com/toc/ccos20/29/3?nav=tocList

Prodhan, G. (2017). European Parliament Calls for Robot Law, Rejects Robot Tax . Reuters. Available at: http://www.reuters.com/article/us-europe-robots-lawmaking-idUSKBN15V2KM

Reggia, J. A. (2013). The rise of machine consciousness: studying consciousness with computational models. Neural Netw. 44, 112–131. doi:10.1016/j.neunet.2013.03.011

Russell, S., Hauert, S., Altman, R., and Veloso, M. (2015). Robotics: ethics of artificial intelligence. Nature 521, 415–418. doi:10.1038/521415a

SAE. (2016). “Taxonomy and definitions for terms related to driving automation systems for on-road motor vehicles,” in SAE J3016 Standard 2016 (SAE International). Available at: http://standards.sae.org/j3016_201609/

Schwab, K., and Samans, R. (2016). “The Future of Jobs Employment, Skills and Workforce Strategy for the Fourth Industrial Revolution,” in Global Challenge Insight Report (World Economic Forum). Available at: http://www3.weforum.org/docs/WEF_Future_of_Jobs.pdf

Tetlock, P. E. (2017). Expert Political Judgment: How Good Is It? How Can We Know? Princeton, NJ: Princeton University Press.

Tørresen, J. (2013). What is Artificial Intelligence (in Norwegian). Oslo: Universitetsforlaget (Hva-er-bokserien).

Torresen, J., Iversen, A. H., and Greisiger, R. (2016). “Data from Past Patients used to Streamline Adjustment of Levels for Cochlear Implant for New Patients,” in Proc. of 2016 IEEE Symposium Series on Computational Intelligence (SSCI) , eds J. Yaochu and K. Stefanos (Athens: IEEE Conference Proceedings).

Unwin, T. (2005). Jules Verne: negotiating change in the nineteenth century. Sci Fiction Stud XXXII, 5–17. Available at: http://jv.gilead.org.il/sfs/Unwin.html

Vanderelst, D., and Winfield, A. (2017). An architecture for ethical robots inspired by the simulation theory of cognition. Cogn. Syst. Res. Available at: http://eprints.uwe.ac.uk/31758

Veruggio, G. (2005). “The birth of roboethics,” in Proc. of IEEE International Conference on Robotics and Automation (ICRA) (Barcelona: Workshop on Robo-Ethics), 2005.

Veruggio, G. (2006). “The EURON roboethics roadmap,” in 2006 6th IEEE-RAS International Conference on Humanoid Robots , Vol. 2006 (Genova), 612–617.

Veruggio, G., and Operto, F. (2008). “Roboethics: social and ethical implications,” in Springer Handbook of Robotics , eds B. Siciliano and O. Khatib (Berlin, Heidelberg: Springer), 1499–1524.

Wallach, W., and Allen, C. (2009). Moral Machines: Teaching Robots Right from Wrong . New York: Oxford University Press.

Winfield, A. F. (2014). “Robots with internal models: a route to self-aware and hence safer robots,” in The Computer after Me: Awareness and Self-Awareness in Autonomic Systems , 1st Edn, ed. J. Pitt (London: Imperial College Press), 237–252.

Winfield, A. F. (2017). “When robots tell each other stories: the emergence of artificial fiction,” in Narrating Complexity , eds R. Walsh and S. Stepney (Springer). Available at: http://eprints.uwe.ac.uk/30630

Winfield, A. F., Blum, C., and Liu, W. (2014). “Towards an ethical robot: internal models, consequences and ethical action selection,” in Advances in Autonomous Robotics Systems , eds M. Mistry, A. Leonardis, M. Witkowski, and C. Melhuish (Springer), 85–96.

Winfield, A. F., and Jirotka, M. (2017). “The case for an ethical black box,” in Towards Autonomous Robot Systems , ed. Y. Gao (Springer), 1–12. Available at: http://eprints.uwe.ac.uk/31760

Keywords: review, ethics, technology risks, machine ethics, future perspectives

Citation: Torresen J (2018) A Review of Future and Ethical Perspectives of Robotics and AI. Front. Robot. AI 4:75. doi: 10.3389/frobt.2017.00075

Received: 05 April 2017; Accepted: 20 December 2017; Published: 15 January 2018

Copyright: © 2018 Torresen. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Jim Torresen, jimtoer@ifi.uio.no

Research article | Open access | Published: 18 January 2021

Exploring the impact of Artificial Intelligence and robots on higher education through literature-based design fictions

A. M. Cox (ORCID: orcid.org/0000-0002-2587-245X)

International Journal of Educational Technology in Higher Education, volume 18, Article number: 3 (2021)


Artificial Intelligence (AI) and robotics are likely to have a significant long-term impact on higher education (HE). The scope of this impact is hard to grasp, partly because the literature is siloed and partly because the meaning of the concepts themselves is changing. Developments are also surrounded by controversies over what is technically possible, what is practical to implement, and what is desirable, pedagogically or for the good of society. Design fictions that vividly imagine future scenarios of AI or robotics in use offer a means both to explain and to query the technological possibilities. This paper describes the use of a wide-ranging narrative literature review to develop eight such design fictions, capturing the range of potential uses of AI and robots in learning, administration and research. They prompt wider discussion by instantiating issues such as how these technologies might enable the teaching of higher-order skills or change staff roles, as well as exploring the impact on human agency and the nature of datafication.

Introduction

The potential of Artificial Intelligence (AI) and robots to reshape our future has attracted vast interest among the public, government and academia in the last few years. As in every other sector of life, higher education (HE) will be affected, perhaps in a profound way (Bates et al., 2020; DeMartini and Benussi, 2017). HE will have to adapt to educate people to operate in a new economy and potentially for a different way of life. AI and robotics are also likely to change how education itself works, altering what learning is like, the role of teachers and researchers, and how universities work as institutions.

However, the potential changes in HE are hard to grasp for a number of reasons. One reason is that the impact is, as Clay (2018) puts it, "wide and deep," yet the research literature discussing it is siloed. AI and robotics for education are separate literatures, for example. AI for education, learning analytics (LA) and educational data mining also remain somewhat separate fields. Applications to HE research as opposed to learning, such as the robot scientist concept or text and data mining (TDM), are also usually discussed separately. Thus, if we wish to grasp the potential impact of AI and robots on HE holistically, we need to extend our vision across the breadth of these diverse literatures.

A further reason why the potential implications of AI and robots for HE are hard to grasp is that, rather than a single technology, something like AI is an idea or aspiration for how computers could participate in human decision making. Faith in how to do this has shifted across different technologies over time, as have concepts of learning (Roll and Wylie, 2016). Also, because AI and robotics are ideas that have been pursued over many decades, there are some quite mature applications: impacts have already happened. Equally, there are potential applications under development and many more that are only just beginning to be imagined. So, confusingly from a temporal perspective, uses of AI and robots in HE are past, present and future.

Although hard to fully grasp, it is important that a wider understanding and debate is achieved, because AI and robotics pose a range of pedagogic, practical, ethical and social justice challenges. A large body of educational literature explores the challenges of implementing new technologies in the classroom as a change management issue (e.g. as synthesised by Reid, 2014). Introducing AI and robots will not be a smooth process free of challenges and ironies. There is also a strong tradition in the educational literature of critical responses to technology in HE. These typically focus on issues such as the potential of technology to dehumanise the learning experience. They are often driven by fear of commercialisation or of neo-liberal ideologies wrapped up in technology. Similar arguments are developing around AI and robotics. There is a particularly strong concentration of critique around the datafication of HE. Thus the questions around the use of AI and robots are as much about what we should do as about what is possible (Selwyn, 2019a). Yet according to a recent literature review, most current research about AI in learning is from computer science and seems to neglect both pedagogy and ethics (Zawacki-Richter et al., 2019). Research on AI in education (AIEd) has also been recognised for some time to have a WEIRD (western, educated, industrialized, rich and democratic) bias (Blanchard, 2015).

One device to make the use of AI and robots more graspable is fiction, with its ability to help us imagine alternative worlds. Science fiction has already had a powerful influence on creating collective imaginaries of technology and so in shaping the future (Dourish and Bell, 2014). Science fiction has had a fascination with AI and robots, presumably because they enhance or replace defining human attributes: the mind and the body. To harness the power of fiction for the critical imagination, a growing body of work within Human Computer Interaction (HCI) studies adopts the use of speculative or critical narratives to destabilise assumptions through “design fictions” (Blythe, 2017): “a conflation of design, science fact, and science fiction” (Bleecker, 2009: 6). They can be used to pose critical questions about the impact of technology on society and to actively engage wider publics in how technology is designed. This is a promising route for making the impact of AI and robotics on HE easier to grasp. In this context, the purpose of this paper is to describe the development of a collection of design fictions to widen the debate about the potential impact of AI and robots on HE, based on a wide-ranging narrative literature review. First, the paper will explain more fully the design fiction method.

Method: design fictions

There are many types of fiction used in thinking about the future. In strategic planning and in futures studies, scenarios—essentially fictional narratives—are used to encapsulate contrasting possible futures (Amer et al., 2013; Inayatullah, 2008). These are then used collaboratively by stakeholders to make choices about preferred directions. On a more practical level, in designing information systems, traditional design scenarios are short narratives that picture the use of a planned system and that are employed to explain how it could solve existing problems. As Carroll (1999) argues, such scenarios are essentially stories or fictions, and this reflects the fact that system design is inherently a creative process (Blythe, 2017). They are often used to involve stakeholders in systems design. The benefit is that the fictional scenario prompts reflection outside the constraints of trying to produce something that simply works (Carroll, 1999). But they tend to represent a system being used entirely as intended (Nathan et al., 2007). They typically include only immediate stakeholders and immediate contexts of use, rather than thinking about the wider societal impacts of pervasive use of the technology. A growing body of work in the study of HCI refashions these narratives:

Design fiction is about creative provocation, raising questions, innovation, and exploration. (Bleecker, 2009: 7).

Design fictions create a speculative space in which to raise questions about whether a particular technology is desirable, the socio-cultural assumptions built into technologies, the potential for different technologies to make different worlds, our relation to technology in general, and indeed our role in making the future happen.

Design fictions exist on a spectrum between the speculative and the critical. Speculative fictions are exploratory. More radical, critical fictions ask fundamental questions about the organisation of society and are rooted in traditions of critical design (Dunne and Raby, 2001). By definition they challenge technical solutionism: the way that technologies seem to be built to solve a problem that does not necessarily exist, or to ignore the contextual issues that might impact their success (Blythe et al., 2016).

Design fictions can be used in research in a number of ways, where:

Fictions are the output themselves, as in this paper.

Fictions (or an artefact such as a video based on them) are used to elicit research data, e.g. through interviews or focus groups (Lyckvi et al., 2018).

Fictions are co-created with the public as part of a process of raising awareness (e.g. Tsekleves et al., 2017).

For a study of the potential impact of AI and robots on HE, design fictions are a particularly suitable method. They are already used by some authors working in the field, such as Pinkwart (2016), Luckin and Holmes (2017) and Selwyn et al. (2020). As a research tool, design fictions can encapsulate key issues in a short, accessible form. Critically, they have the potential to change the scope of the debate, by shifting attention away from the existing literature and its focus on developing and testing specific AI applications (Zawacki-Richter et al., 2019) to weighing up more or less desirable directions of travel for society. They can be used to pose critical questions that are not being asked by developers because of the WEIRD bias in the research community itself (Blanchard, 2015), to shift focus onto ethical and social justice issues, and to raise doubts based on practical obstacles to widespread adoption. Fictions engage readers imaginatively and on an affective level. Furthermore, because they are explicitly fictions, readers can challenge their assumptions, even getting involved in actively rewriting them.

Design fictions are often individual texts. But collections of fictions create potential for reading against each other, further prompting thoughts about alternative futures. In a similar way, in futures studies, scenarios are often generated around four or more alternatives, each premised on different assumptions (Inayatullah, 2008). This avoids the tendency towards a utopian/dystopian dualism found in some use of fiction (Rummel et al., 2016; Pinkwart, 2016). Thus in this study the aim was to produce a collection of contrasting fictions that surface the range of debates revolving around the application of AI and robotics to HE.

The process of producing fictions is not easy to render transparent.

In this study the foundation for the fictions was a wide-ranging narrative review of the literature (Templier and Paré, 2015). The purpose of the review was to generate a picture of the pedagogic, social, ethical and implementation issues raised by the latest trends in the application of AI and robots to teaching, research and administrative functions in HE, as a foundation for narratives which could instantiate the issues in a fictional form. We know from previous systematic reviews that these types of issue are neglected, at least in the literature on AIEd (Zawacki-Richter et al., 2019). So the chief novelty of the review lay in (a) focusing on social, ethical, pedagogic and management implications, (b) encompassing both AI and robotics as related aspects of automation, and (c) seeking to be inclusive across the full range of functions of HE, including impacts on learning, but also on research and scholarly communications, as well as administrative functions and estates management (the smart campus).

In order to gather references for the review, systematic searches of the ERIC database for relevant terms such as “AI or Artificial Intelligence”, “conversational agent” and “AIED” were conducted. Selection was made for items which either primarily addressed non-technical issues or which themselves contained substantial literature reviews that could be used to gain a picture of the most recent applications. This systematic search was combined with snowballing (also known as pearl growing), using references by and to highly relevant matches to find other relevant material. While typically underreported in systematic reviews, this method has been shown to be highly effective in retrieving more relevant items (Badampudi et al., 2015). Some grey literature was included because there are a large number of reports by governmental organisations summarizing the social implications of AI and robots. Because many issues relating to datafication are foreshadowed in the literature on learning analytics, this topic was also included. In addition, some general literature on AI and robots, while not directly referencing education, was deemed to be relevant, particularly as it was recognised that education might be a late adopter and so impacts would be felt through wider social changes rather than directly through educational applications. Literature reviews which suggested trends in current technologies were included, but items which were detailed reports of the development of technologies were excluded. Items prior to 2016 also tended to be excluded, because the concern was with the latest wave of AI and robots. As a result of these searches, on the order of 500 items were consulted, with around 200 items deemed to be of high relevance. As such there is no claim that this was an “exhaustive” review; rather it should be seen as complementing existing systematic reviews by serving a different purpose. The review also successfully identified a number of existing fictions in the literature that could then be rewritten to fit the needs of the study, such as to apply to HE, to make them more concise or to add new elements (fictions 1, 3, 4).
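
Snowballing is, in effect, an iterative traversal of the citation graph. The sketch below is a minimal illustration of the backward and forward variants under stated assumptions: fetch_references, fetch_citers and is_relevant are hypothetical stand-ins for a citation index lookup and for the human screening judgement, not real APIs.

    from collections import deque

    def fetch_references(item):
        return []  # placeholder: items this item cites (backward direction)

    def fetch_citers(item):
        return []  # placeholder: items citing this item (forward direction)

    def is_relevant(item):
        return False  # placeholder: the human screening judgement

    def snowball(seed_items, max_rounds=3):
        """Backward + forward snowballing ('pearl growing') from seed items."""
        included = set(seed_items)
        frontier = deque(seed_items)
        for _ in range(max_rounds):
            next_frontier = deque()
            while frontier:
                item = frontier.popleft()
                for candidate in fetch_references(item) + fetch_citers(item):
                    if candidate not in included and is_relevant(candidate):
                        included.add(candidate)
                        next_frontier.append(candidate)
            if not next_frontier:
                break  # no new relevant items: the pearl has stopped growing
            frontier = next_frontier
        return included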

As an imaginative act, writing fictions is not reducible to a completely transparent method, although some aspects can be described (Lyckvi et al., 2018). Some techniques to create effective critical designs are suggested by Auger (2013), such as placing something uncanny or unexpected against a backdrop of mundane normality, and a sense of verisimilitude (perhaps achieved through mixing fact and fiction). Fiction 6, for example, exploits the mundane feel of committee meeting minutes to help us imagine the debates that would occur among university leaders implementing AI. A common strategy is to take the implications of a central counterfactual premise to its logical conclusion, asking “what if?” For example, fiction 7 extends existing strategies of gathering data and using chatbots to act on them to their logical extension as a comprehensive system of data surveillance. Another technique used here was to exploit certain genres of writing, such as in fiction 5, where a style of writing drawn from marketing and PR reminds us of the role of EdTech companies in producing AI and robots.

Table 1 offers a summary of the eight fictions produced through this process. The fictions explore the potential of AI and robots in different areas of university activity: in learning, administration and research (Table 1, column 5). They seek to represent some different types of technology (column 2). Some are rather futuristic, but most seem feasible today or in the very near future (column 3). The full text of the fictions and supporting material can be downloaded from the University of Sheffield data repository, ORDA, and used under a CC-BY-SA licence ( https://doi.org/10.35542/osf.io/s2jc8 ). The following sections describe each fiction in turn, showing how it relates to the literature and surfaces relevant issues. Table 2 below summarises the issues raised.

In the following sections each of the eight fictions is described, set in the context of the literature review material that shaped their construction.

AI and robots in learning: Fiction 1, “AIDan, the teaching assistant”

Much of the literature around AI in learning focuses on tools that directly teach students (Baker and Smith, 2019; Holmes et al., 2019; Zawacki-Richter et al., 2019). This includes classes of systems such as:

Intelligent tutoring systems (ITS), which teach course content step by step, taking an approach personalised to the individual. Holmes et al. (2019) differentiate types of Intelligent Tutoring Systems based on whether they adopt a linear, dialogic or more exploratory model.

One emerging area of adaptivity is using sensors to detect the emotional and physical state of the learner, recognising the embodied and affective aspects of learning (Luckin et al., 2016); a further link is being made to how virtual and augmented reality can be used to make the experience more engaging and authentic (Holmes et al., 2019).

Automatic writing evaluation (AWE) tools, which assess and offer feedback on writing style (rather than content), such as learnandwrite, Grammarly and Turnitin’s Revision Assistant (Strobl et al., 2019; Hussein et al., 2019; Hockly, 2019).

Conversational agents (also known as chatbots or virtual assistants), which are AI tools designed to converse with humans (Winkler and Söllner, 2018).

The adaptive pedagogical agent, which is an “anthropomorphic virtual character used in an online learning environment to serve instructional purposes” (Martha and Santoso, 2019).

Many of these technologies are rather mature, such as AWE and ITS. However, there is also a wide range of different types of system within each category: conversational agents, for example, can be designed for short- or long-term interaction, and could act as tutors, engage in language practice, answer questions, promote reflection or act as co-learners. They could be based on text or on verbal interaction (Følstad et al., 2019; Wellnhammer et al., 2020).

Much of this literature reflects the development of AI technologies and their evaluation against other forms of teaching. However, according to a recent review, it is primarily written by computer scientists, mostly from a technical point of view, with relatively little connection to pedagogy or ethics (Zawacki-Richter et al., 2019). In contrast, some authors, such as Luckin and Holmes, seek to move beyond the rather narrow development of tools and their evaluation to envisioning how AI can address the grand challenges of learning in the twenty-first century (Luckin et al., 2016; Holmes et al., 2019; Woolf et al., 2013). According to this vision, many of the inefficiencies and injustices of the current global education system can be addressed by applying AI.

To surface such discussion around what is possible, fiction 1 is based loosely on a narrative published by Luckin and Holmes (2017) themselves. In their paper, they imagine a school classroom ten years into the future from the time of writing, where a teacher is working with an AI teaching assistant. Built into their fiction are the key features of their vision of AI (Luckin et al., 2016); thus emphasis is given to:

AI designed to support teachers rather than replacing them;

Personalisation of learning experiences through adaptivity;

Replacement of one-off assessment by continuous monitoring of performance (Luckin, 2017);

The monitoring of haptic data to adjust learning material to students’ emotional and physical state in real time;

The potential of AI to support learning twenty-first century skills, such as collaborative skills;

Teachers developing skills in data analysis as part of their role;

Students (and parents) as well as teachers having access to data about their learning.

While Luckin and Holmes (2017) acknowledge that the vision of AI sounds a “bit big brother”, it is, as one would expect, essentially an optimistic piece in which all the key technologies they envisage are brought together to improve learning in a broad sense. The fiction developed here retains most of these elements, but reimagined for an HE context, and with a number of other changes:

Reference is also made to rooting teaching in learning science, one of the arguments for AI that Luckin makes in a number of places (e.g. Luckin et al., 2016).

Students develop a long-term relationship with the AI, which is often seen as a desirable aspect of providing AI as a lifelong learning partner (Woolf et al., 2013).

Of course, the more sceptical reader may be troubled by some aspects of this vision, including the potential effects of continuously monitoring performance as a form of surveillance. The emphasis on personalisation of learning through AI has been increasingly questioned (Selwyn, 2019a).

The following excerpt gives a flavour of the fiction:

Actually, I partly picked this Uni because I knew they had AI like AIDan which teach you on principles based in learning science. And exams are a thing of the past! AIDan continuously updates my profile and uses this to measure what I have learned. I have set tutorials with AIDan to analyse data on my performance. Jane often talks me through my learning data as well. I work with him planning things like my module choices too. Some of my data goes to people in the department (like my personal tutor) to student and campus services and the library to help personalise their services.

Social robots in learning: Fiction 2, “Footbotball”

Luckin and Holmes (2017) see AI as instantiated by sensors and cameras built into the classroom furniture. Their AI does not seem to have a physical form, though it does have a human name. But there is also a literature around educational robots, a type of social robot for learning:

a physical robot, in the same space as the student. It has an intelligence that can support learning tasks and students learn by interacting with it through suitable semiotic systems (Catlin et al., 2018).

There is some evidence that learning is better when the learner interacts with a physical entity rather than a purely virtual agent, and there might certainly be benefits where what is learned involves embodiment (Belpaeme et al., 2018). Fiction 2 offers an imaginative account of what learning alongside robots might be like, in the context of university sport rather than within the curriculum. The protagonist describes how he is benefiting from using university facilities to participate in an imaginary sport, footbotball.

Maybe it’s a bit weird to say, but it’s about developing mutual understanding and… respect. Like the bots can sense your feelings too and chip in with a word just to pick you up if you make a mistake. And you have to develop an awareness of their needs too. Know when is the right time to say something to them to influence them in the right direction. When you watch the best teams they are always like talking to each other. But also just moving together, keeping eyes on and moving as a unit.

The protagonist in fiction 2 describes the high-level and employability skills he is learning from a sporting application of robotics. This also reminds us of how the widespread use of AI and robots in wider social contexts may be a key driver for adoption in HE.

Fiction 3: “Criticalbot in conversation”

Both fictions 1 and 2 are glossy science fictions with a strongly futuristic feel and, as in traditional design scenarios, the technology seems to be used as intended by the designer. In contrast, the third fiction is inspired by Bayne’s (2015) description of Teacherbot, a chatbot developed to support a MOOC on elearning and digital cultures. Teacherbot does not masquerade as human. Students on the course are told what it is, and its interactions are clumsy enough to reveal this anyway. Yet Bayne’s (2015) argument is that it offers positive non-human affordances. Students seem to learn from it, partly by reflecting on its place in their learning. Thus fiction 3, echoing the sample transcripts between Teacherbot and real students quoted in Bayne (2015), is a fictional dialogue between a conversational agent and a group of students working on an assignment (itself on the topic of bias in AI). Criticalbot, as its name suggests, is imagined being used to teach the students to be more critical, e.g. by prompting them to think harder about how to read an academic text, raising questions about the authority of authors, and prompting discussion around quotes from a key text.

Student 1: @CriticalBot we have been set the task to research bias in AIED. What do you know about bias in AIED?
CriticalBot: Blanchard is recommended in the reading list.
Student 2: I think he means rtfm.
Student 1: Blanchard?
CriticalBot: Sorry. I should have typed: Blanchard, E. G. (2015). Socio-cultural imbalances in AIED research: Investigations, implications and opportunities. International Journal of Artificial Intelligence in Education, 25(2), 204–228. No one’s perfect.
Student 1: @CriticalBot What is the main point he makes?
CriticalBot: Nice try. Try reading the abstract. Abstracts summarise key points. It may also be useful to read the conclusion section to grasp the author’s main argument before deciding on whether to undertake a closer reading.

As the quotation from the fiction illustrates, echoing Bayne (2015), the conversation in fiction 3 is not necessarily smooth; misunderstandings and conflicts occur. The fiction brings into view the less compliant vision of the student who might wish to game the system, a potential problem with AI which is apparent in the literature on AWE (Hussein et al., 2019). This fiction encapsulates an important alternative potential imaginary of AI, as a simple, low-tech intervention. At the same time, in being designed to promote critical thinking, it can also be seen as teaching a key, high-level skill. This challenges us to ask whether an AI can truly do that, and how.
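
The “simple, low-tech” character of such an agent can be made concrete. Purely as an illustration (this is not Bayne’s Teacherbot, nor any real system), a keyword-rule chatbot in the spirit of Criticalbot might look like this:

    import random
    import re

    # Keyword rules mapping student messages to critical-reading prompts.
    # Invented for illustration; not Bayne's Teacherbot or any deployed system.
    RULES = [
        (r"main point|summar", [
            "Nice try. Try reading the abstract. Abstracts summarise key points.",
            "What does the conclusion suggest the author's main argument is?",
        ]),
        (r"\bbias\b", [
            "Blanchard (2015) is recommended in the reading list.",
            "Whose perspectives might be missing from the studies you have read?",
        ]),
        (r"\bauthor\b", [
            "What do you know about the author's position and possible interests?",
        ]),
    ]

    DEFAULT = "Can you say more? What evidence supports that claim?"

    def criticalbot_reply(message):
        """Return a critical-thinking prompt for the first matching rule."""
        for pattern, prompts in RULES:
            if re.search(pattern, message.lower()):
                return random.choice(prompts)
        return DEFAULT

    print(criticalbot_reply("What is the main point he makes?"))

Exactly because such rules are so shallow, misunderstandings of the kind the transcript shows are inevitable, which is part of what the fiction asks us to notice.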

The intelligent campus: Fiction 4, “The intelligent campus app”

The AIEd literature, with its emphasis on the direct application of AI to learning, accounts for a big block of the literature about AI in higher education, but not all of it. Another rather separate literature exists around the smart or intelligent campus (e.g. JISC, 2018; Min-Allah and Alrashed, 2020; Dong et al., 2020). This is the application of the Internet of Things, and increasingly of AI, to the management of the campus environment. This is often oriented towards estates management, such as monitoring room usage and controlling lighting and heating. But it also encompasses support for wayfinding, attendance monitoring, and ultimately the student experience, so it presents an interesting contrast to the AIEd literature.

The fourth fiction is adapted from a report, each section of which is introduced by quotes from an imaginary day in the life of a student, Leda, who reflects on the benefits of the intelligent/smart campus technologies for her learning experience (JISC, 2018). The emphasis in the report is on:

Data driven support of wayfinding and time management;

Integration of smart campus with smart city features (e.g. bus and traffic news);

Attendance monitoring and delivery of learning resources;

The student also muses about the ethics of the AI. She is presented as a little ambivalent about the monitoring technologies, and, as in Luckin and Holmes (2017), the system is referred to in her own words as potentially “a bit big brother” (JISC, 2018: 9). But ultimately she concludes that the smart campus improves her experience as a student. In this narrative, unlike in the Luckin and Holmes (2017) fiction, the AI is much more in the background and lacks a strong personality. It is a different sort of optimistic vision, geared towards convenience rather than excellence. There is much less of a futuristic feel; indeed one could say that not only does the technology exist to deliver many of the services described, but the services are already available and in use—though perhaps not integrated within one application.

Sitting on the bus I look at the plan for the day suggested in the University app. A couple of timetabled classes; a group work meeting; and there is a reminder about that R205 essay I have been putting off. There is quite a big slot this morning when the App suggests I could be in the library planning the essay – as well as doing the prep work for one of the classes it has reminded me about. It is predicting that the library is going to be very busy after 11AM anyway, so I decide to go straight there.

The fiction seeks to bring out more about the idea of “nudging” to change behaviours, a concept often linked to AI and one whose ethics are queried by Selwyn (2019a). The issue of how AI and robots might impact the agency of the learner recurs across the first four fictions.
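
Technically, the nudging the fiction depicts can be very simple: a historical average of occupancy plus a threshold is enough to generate the app’s suggestion. A minimal sketch, assuming hourly occupancy counts are already being collected (all names and numbers below are invented):

    from statistics import mean

    # Invented historical occupancy counts for the library, keyed by hour of
    # day; a real smart campus would derive these from sensor or Wi-Fi logs.
    HISTORY = {9: [120, 135, 128], 10: [180, 190, 175], 11: [310, 295, 320]}
    CAPACITY = 400

    def predicted_busyness(hour):
        """Average past occupancy at this hour, as a fraction of capacity."""
        counts = HISTORY.get(hour, [])
        return mean(counts) / CAPACITY if counts else 0.0

    def nudge(hour):
        """Turn the prediction into the kind of suggestion the app makes."""
        if predicted_busyness(hour) > 0.7:
            return "The library is predicted to be very busy at %d:00 - go earlier?" % hour
        return "The library should be quiet at %d:00." % hour

    print(nudge(11))  # the behavioural nudge Selwyn's critique targets

That so little machinery is needed is itself part of the point: the ethical questions arise well before anything technically sophisticated is involved.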

AI and robotics in research: Fiction 5, “The Research Management Suite™”

So far in this paper most of the focus has been on the application of AI and robotics to learning. AI also has applications in university research, but this is an area far less commonly considered than learning and teaching. Only 1% of CIOs responding to a survey of HEIs by Gartner had deployed AI for research, compared to 27% for institutional analytics and 10% for adaptive learning (Lowendahl and Williams, 2018). Some AI could be used directly in research, not just to perform analytical tasks but to generate hypotheses to be tested (Jones et al., 2019). The “robot scientist”, being tireless and able to work in a precise way, could carry out many experiments and increase reproducibility (King et al., 2009; Sparkes et al., 2010). It might have the potential to make significant discoveries independently, perhaps by simply exploiting its tirelessness to test every possible hypothesis rather than using intuition to select promising ones (Kitano, 2016).

Another direct application of AI to research is text and data mining (TDM). Given the vast rate of academic publishing, there is a growing need to mine the published literature to offer summaries to researchers or even to develop and test hypotheses (McDonald and Kelly, 2012). Advances in translation also offer the potential to make literature in other languages more accessible, with important benefits.
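
As a toy illustration of what TDM involves at the smallest scale, the sketch below weights terms in a handful of invented abstracts by TF-IDF, the kind of elementary weighting that underlies many summarisation and literature-mining pipelines (it assumes scikit-learn is available; the abstracts are made up):

    from sklearn.feature_extraction.text import TfidfVectorizer

    # Invented abstracts standing in for a mined corpus; real TDM pipelines
    # operate over millions of full-text documents.
    abstracts = [
        "Intelligent tutoring systems personalise feedback for each learner.",
        "Learning analytics dashboards predict student dropout from log data.",
        "Robot scientists automate hypothesis generation and experimentation.",
    ]

    vectorizer = TfidfVectorizer(stop_words="english")
    tfidf = vectorizer.fit_transform(abstracts).toarray()
    terms = vectorizer.get_feature_names_out()

    # Surface each abstract's highest-weighted terms as a crude summary.
    for row in tfidf:
        top = sorted(zip(row, terms), reverse=True)[:3]
        print([term for _, term in top])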

Developments in publishing give us a further insight into how AI might be applied in the research domain. Publishers are investing heavily in AI (Gabriel, 2019). One probable landmark was that in 2019 Springer published the first “machine generated research book” (Schoenenberger, 2019: v): a literature review of research on lithium-ion batteries, written entirely automatically. This does not suggest the end of the academic author, Springer suggest, but does imply changing roles (Schoenenberger, 2019). AI is being applied to many aspects of the publication process: to identify peer reviewers (Price and Flach, 2017), to assist review by checking statistics, to summarise open peer reviews, to check for plagiarism or for the fabrication of data (Heaven, 2018), to assist copy editing, to suggest keywords, and to summarise and translate text. Other tools claim to predict the future citation of articles (Thelwall, 2019). Data about academics and their patterns of collaboration and citation through scientometrics are currently based primarily on structured bibliographic data. The cutting edge is the application of text mining techniques to further analyse research methods, collaboration patterns and so forth (Atanassova et al., 2019). This implies a potential revolution in the management and evaluation of research. It will be relevant to ask what responsible research metrics are in this context (Wilsdon, 2015).

Instantiating these developments, the fifth fiction revolves around a university licensing “Research Management Suite™”, a set of imaginary proprietary tools offering institutional-level support to its researchers to increase, and perhaps measure, their productivity. A flavour of the fiction can be gleaned from this excerpt:

Academic Mentor ™ is our premium meta analysis service. Drawing on historic career data from across the disciplines, it identifies potential career pathways to inform your choices in your research strategy. By identifying structural holes in research fields it enables you to position your own research within emerging research activity, so maximising your visibility and contribution. Mining data from funder strategy, the latest publications, preprints and news sources it identifies emergent interdisciplinary fields, matching your research skills and interests to the complex dynamics of the changing research landscape.

This fiction prompts questions about the nature of the researcher’s role and ultimately about what research is. At what point does the AI become a co-author, because it is making a substantive intellectual contribution to writing a research output, making a creative leap or even securing funding? Given the centrality of research to academic identity this indeed may feel even more challenging than the teaching related scenarios. This fiction also recognised the important role of EdTech companies in how AI reaches HE, partly because of the high cost of AI development. The reader is also prompted to wonder how the technology might disrupt the HE landscape if those investing in these technologies were ambitious newer institutions keen to rise in university league tables.

Tackling pragmatic barriers: Fiction 6, “Verbatim minutes of University AI project steering committee: AI implementation phase 3”

A very large literature around technologies in HE in general focuses on the challenges of implementing them as a change management problem. Reid (2014), for example, seeks to develop a model of the differing factors that block the smooth implementation of learning technologies in the classroom, such as problems with access to the technology and project management challenges, as well as issues around teacher identity. Echoing these arguments, Tsai et al.’s (2017, 2019) work captures why, for all the hype around it, learning analytics has not yet found extensive practical application in HE. Given that AI requires intensive use of data, by extension we can argue that the same barriers will probably apply to AI. Specifically, Tsai et al. (2017, 2019) identify barriers in terms of technical, financial and other resource demands, ethics and privacy issues, failures of leadership, a failure to involve all stakeholders (students in particular) in development, a focus on technical issues and neglect of pedagogy, insufficient staff training, and a lack of evidence demonstrating the impact on learning. There are hints of similar types of challenge around the implementation of administration-focussed applications (Nurshatayeva et al., 2020) and TDM (FutureTDM, 2016).

Reflecting these thoughts, the sixth fiction is an extract from an imaginary committee meeting, in which senior university managers discuss the challenges they are facing in implementing AI. It seeks to surface issues around teacher identity, disciplinary differences and resource pressures that might shape the extensive implementation of AI in practice.

Faculty of Humanities Director: But I think there is a pedagogic issue here. With the greatest of respect to Engineering, this approach to teaching simply does not fit our subject. You cannot debate a poem or a philosophical treatise with a machine.
Faculty of Engineering Director: The pilot project also showed improved student satisfaction. Data also showed better student performance. Less drop outs.
Faculty of Humanities Director: Maybe that’s because…
Vice Chancellor: All areas where Faculty of Humanities has historically had a strategic issue.
Faculty of Engineering Director: The impact on employability has also been fantastic, in terms of employers starting to recognise the value of our degrees now fluency with automation is part of our graduate attributes statement.
Faculty of Humanities Director: I see the benefits, I really do. But you have to remember you are taking on deep seated assumptions within the disciplinary culture of Humanities at this university. Staff are already under pressure with student numbers not to mention in terms of producing world class research! I am not sure how far this can be pushed. I wouldn’t want to see more industrial action.

Learning analytics and datafication: Fiction 7, “Dashboards”

Given the strong relation between “big data” and AI, the claimed benefits and the controversies that already exist around LA are relevant to AI too (Selwyn, 2019a). The main argument for LA is that it gives teachers and learners themselves information to improve learning processes. Advocates talk of an obligation to act. LA can also be used for the administration of admissions decisions and for ensuring retention. Chatbots are now being used to assist applicants through complex admissions processes or to maintain contact to ensure retention, and appear to offer a cheap and effective alternative (Page and Gehlbach, 2017; Nurshatayeva et al., 2020). Gathering more data about HE also promotes public accountability.

However, data use in AI does raise many issues. The greater the dependence on data or data-driven AI, the greater the security issues associated with the technology. Another inevitable concern is legality and the need to abide by appropriate privacy legislation, such as the GDPR in Europe. Clearly linked to this are privacy issues, implying consent, the right to control the use of one’s data and the right to withdraw (Fjeld et al., 2020). Yet a recent study by Jones (2020) found that students knew little of how LA were being used in their institution and did not remember consenting to allowing their data to be used. These would all be recognised as issues by most AI projects.

However, critiques of AI in learning increasingly centre on the datafication of education (Jarke and Breiter, 2019; Williamson and Eynon, 2020; Selwyn, 2019a; Kwet and Prinsloo, 2020). A data-driven educational system has the potential to be used, or experienced, as a surveillance system. “What can be accomplished with data is usually a euphemism for what can be accomplished with surveillance” (Kwet and Prinsloo, 2020: 512). Not only might individual freedoms be threatened by institutions or commercial providers undertaking surveillance of student and teaching staff behaviour, but there is also a chilling effect simply through the fear of being watched (Kwet and Prinsloo, 2020). Students become mere data points, as surveillance becomes intensified and normalised (Manolev et al., 2019). While access to their own learning data could be empowering for students, techniques such as nudging, intended to influence people without their knowledge, undermine human agency (Selwyn, 2019b). Loss of human agency is one of the fears revolving around AI and robots.

Further, a key issue with AI is that although its predictions can be accurate or useful, it is often quite unclear how they were produced. Because AI “learns” from data, even the designers do not fully understand how the results were arrived at, so they are certainly hard to explain to the public. The result is a lack of transparency, and so of accountability, leading to deresponsibilisation.

Much of the current debate around big data and AI revolves around bias, created by using training data that does not represent the whole population and reinforced by a lack of diversity among the designers of the systems. If data is based on existing behaviour, AI is likely to reproduce existing patterns of disadvantage in society unless its design takes social context into account—but datafication is driven by standardisation. It could be argued that focussing on technology diverts attention from the real causes of achievement gaps, which lie in social structures (Macgilchrist, 2019). While often promoted as a means of empowering learners and their teachers, mass personalisation of education redistributes power away from local decision making (Jarke and Breiter, 2019; Zeide, 2017). In the context of AIEd there is potential for assumptions about what should be taught to show very strong cultural bias, in the same way that critics have already argued that plagiarism detection systems impose culturally specific notions of authorship and are marketed in a way that reinforces crude ethnic stereotypes (Canzonetta and Kannan, 2016).
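
One part of the bias concern is at least easy to measure: if a model trained on historical outcomes selects one group at a very different rate from another, it is reproducing the pattern in its training data, and the demographic parity difference captures this in a single number. A minimal sketch on invented data (no real student records are involved):

    import numpy as np

    # Invented predictions (1 = flagged for an offer or intervention) and
    # group membership labels.
    predictions = np.array([1, 1, 0, 1, 1, 0, 0, 0, 1, 0])
    group = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

    def selection_rate(g):
        """Fraction of group g receiving the positive prediction."""
        return predictions[group == g].mean()

    gap = selection_rate("A") - selection_rate("B")
    # 0 would mean equal selection rates; here the gap is 0.8 - 0.2 = 0.6,
    # the kind of disparity a standardised, decontextualised model can encode.
    print("Demographic parity difference:", gap)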

Datafication also produces performativity: the tendency of institutions (and teachers and students) to shift their behaviour towards doing what scores well against the metric, in a league-table mentality. Yet what is measured is often a proxy for learning, or reductive of what learning in its full sense is, critics argue (Selwyn, 2019b). The potential impact is to turn HE further into a marketplace (Williamson, 2019). It is evident that AI developments are often partly a marketing exercise (Lacity, 2017). Edtech companies play a dominant role in developing AI (Williamson and Eynon, 2020). Selwyn (2019a) worries that those running education will be seduced by the glittering promises of techno-solutionism when the technology does not really work. The UK government has invested heavily in gathering more data about HE in order to promote reform in the direction of marketisation and student choice (Williamson and Eynon, 2020). Learning data could also itself increasingly become a commodity, further reinforcing the commercialisation of HE.

Thus fiction 7 explores the potential to gather data about learning on a huge scale, to make predictions based on it, and to take action by conveying information to humans or through chatbots. In the fiction, the protagonist explains an imaginary institutional-level system that is making data-driven decisions about applicants and current students.

Then here we monitor live progress of current students within their courses. We can dip down into attendance, learning environment use, library use, and of course module level performance and satisfaction plus the extra-curricula data. Really low-level stuff some of it. It’s pretty much all there, monitored in real time. We are really hot on transition detection and monitoring. The chatbots are used just to check in on students, see they are ok, nudge things along, gather more data. Sometimes you just stop and look at it ticking away and think “wow!”. That all gets crunched by the system. All the time we feed the predictives down into departmental dashboards, where they pick up the intervention work. Individual teaching staff have access via smart speaker. Meanwhile, we monitor the trend lines up here.

In the fiction, the benefit of being able to monitor and address attainment gaps is emphasised. The protagonist’s description of projects being worked on suggests competing drivers behind such developments, including meeting government targets, cost saving and the potential to make money by reselling educational data.
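
The pipeline the protagonist describes, low-level events crunched into predictions that are "fed down into departmental dashboards", can be caricatured in a few lines. The sketch below is purely illustrative: the fields, weights and threshold are invented, which is precisely the critics’ point that such choices silently encode judgements about what counts as engagement:

    from dataclasses import dataclass

    @dataclass
    class StudentWeek:
        student_id: str
        attendance: float      # fraction of sessions attended, 0-1
        vle_logins: int        # virtual learning environment logins
        library_visits: int

    def risk_score(week):
        """Crude weighted score; higher means more 'at risk'.
        The weights are invented for illustration only."""
        return (0.6 * (1 - week.attendance)
                + 0.3 * (1 if week.vle_logins < 3 else 0)
                + 0.1 * (1 if week.library_visits == 0 else 0))

    def departmental_dashboard(weeks, threshold=0.5):
        """List the students whose scores would trigger an intervention."""
        return [w.student_id for w in weeks if risk_score(w) > threshold]

    print(departmental_dashboard([
        StudentWeek("s1", attendance=0.9, vle_logins=12, library_visits=2),
        StudentWeek("s2", attendance=0.2, vle_logins=1, library_visits=0),
    ]))  # -> ['s2']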

Infrastructure: Fiction 8, “Minnie—the AI admin assistant”

A further dimension of the controversy around AI is its environmental cost and the societal impact of the wider infrastructures needed to support it. Brevini (2020) points out that training a common AI model in computational linguistics can create the equivalent of five times the lifetime emissions of an average US car. This foregrounds the often unremarked environmental impact of big data and AI. It also prompts us to ask questions about the infrastructure required for AI. Crawford and Joler’s (2018) brilliant Anatomy of an AI System reveals that what makes possible the functioning of a physically rather unassuming AI device like the Amazon Echo is a vast global infrastructure based on mass human labour, complex logistics chains and polluting industry.
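
The “five times a car” comparison is traceable to Strubell et al.’s widely cited 2019 estimates; treating their figures as assumptions rather than settled fact, training a large NLP model with neural architecture search was put at roughly 626,000 lbs of CO2-equivalent, against about 126,000 lbs for an average US car over its lifetime, fuel included, so the arithmetic is roughly:

    \frac{\text{training emissions}}{\text{car lifetime emissions}}
      \approx \frac{626{,}000\ \text{lbs CO}_2\text{e}}{126{,}000\ \text{lbs CO}_2\text{e}}
      \approx 5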

The first part of fiction 8 describes a personal assistant based on voice recognition, like Siri, which answers all sorts of administrative questions. The protagonist expresses some unease with how the system works, reflecting the points made by Rummel et al. (2016) about the failure of systems which, despite their potential sophistication, lack nuance and flexibility in their application. There is also a sense of alienation (Griffiths, 2015). The second part of the fiction extends this sense of unease to a wider perspective on the usually invisible but very material infrastructure which AI requires, as captured in Crawford and Joler (2018). In addition, imagery is drawn from Maughan’s (2016) work, in which he travels backwards up the supply chain for consumer electronics, from the surreal landscape of hi-tech docks, through different types of factories, to a huge polluted lake created by mining operations for rare earth elements in China. This perspective queries all the other fictions, with their focus on using technologies or even campus infrastructure, by widening the vision to encompass the global infrastructures required to make AI possible.

The vast effort of global logistics to bring together countless components to build the devices through which we interact with AI. Lorries queuing at the container port as another ship comes in to dock. Workers making computer components in hi-tech factories in East Asia. All dressed in the same blue overalls and facemasks, two hundred workers queue patiently waiting to be scan searched as they leave work at the end of the shift. Exploitative mining extracting non-renewable, scarce minerals for computer components, polluting the environment and (it is suspected) reducing the life expectancy of local people. Pipes churn out a clayey sludge into a vast lake.

Conclusion: using the fictions together

As we have seen, each of the fictions seeks to open up different positive visions or dimensions of debate around AI (summarised in Table 2 below). All implicitly ask questions about the nature of human agency in relation to AI systems and robots, be that empowerment through access to learning data (fiction 1), the power to play against the system (fiction 3), or the hidden effects of nudging (fiction 4) and the reinforcement of social inequalities. Many raise questions about the changing role of staff or the skills required to operate in this environment. They are written in a way that seeks to avoid taking sides, e.g. not always undercutting a utopian view or simply presenting a dark dystopia. Each contains elements that might be inspirational or a cause of controversy. Specifically, they can be read together to suggest tensions between different possible futures. In particular, fictions 7 and 8, and the commercial aspects implied by the presentation of fiction 5, reveal aspects of AI largely invisible in the glossy, strongly positive images of fictions 1 and 2, or the deceptive mundanity of fiction 3. It is also anticipated that the fictions will be read “against the grain” by readers wishing to question what the future is likely to be, or should be, like. This is one of the affordances of their being fictions.

The most important contribution of the paper was the wide-ranging narrative literature review emphasising the social, ethical, pedagogic and management issues raised by automation through AI and robots for HE as a whole. On the basis of the understanding gained from the literature review, a secondary contribution was the development of a collection of eight accessible, repurposable design fictions that prompt debate about the potential role of AI and robots in HE. This prompts us to notice common challenges, such as those around commodification and the changing role of data. It encompasses work written by developers, by those with more visionary views, by those who see the challenges as primarily pragmatic, and by those coming from much more critical perspectives.

The fictions are intended to be used in data collection to elicit staff and student views. They could also be used in teaching to prompt debate among students, perhaps setting them the task of writing new fictions (Rapp, 2020). Students of education could use them to explore the potential impact of AI on educational institutions and to discuss the role of technologies in educational change more generally. The fictions could be used in teaching students of computer science, data science, HCI and information systems in courses about computer ethics, social responsibility and sustainable computing—as well as in those directly dealing with AI. They could also be used in media studies and communications, e.g. to compare them with other future imaginaries in science fiction or to design multimedia creations inspired by such fictions. They might also be used in management studies as a case study of strategizing around AI in a particular industry.

While there is an advantage in seeking to encompass the issues within a small collection of engaging fictions that in total run to less than 5000 words, it must be acknowledged that not every issue is reflected. For example, what is not included is the different ways that AI and robots might be used in teaching different disciplines, such as languages, computer science or history. The many ways that robots might be used in background functions, or might themselves play the role of learner, also require further exploration. Most of the fictions are located in a fairly near future, but there is also potential to develop much more futuristic fictions. These gaps leave room for the development of more fictions.

The paper has explained the rationale for and process of writing design fictions. To the growing literature around design fictions, the paper seeks to contribute by emphasising the use of design fictions as collections, exploiting different narratives, styles and genres of writing to set up intertextual reflections that help us ask questions about technologies in the widest sense.

Availability of data and materials

Data from the project is available from the University of Sheffield repository, ORDA. https://doi.org/10.35542/osf.io/s2jc8 .

Amer, M., Daim, T., & Jetter, A. (2013). A review of scenario planning. Futures, 46, 23–40.


Atanassova, I., Bertin, M., & Mayr, P. (2019). Editorial: mining scientific papers: NLP-enhanced bibliometrics. Frontiers in Research Metrics and Analytics . https://doi.org/10.3389/frma.2019.00002 .

Auger, J. (2013). Speculative design: Crafting the speculation. Digital Creativity, 24 (1), 11–35.

Badampudi, D., Wohlin, C., & Petersen, K. (2015). Experiences from using snowballing and database searches in systematic literature studies. In Proceedings of the 19th International Conference on Evaluation and Assessment in Software Engineering (pp. 1–10).

Baker, T., Smith, L. and Anissa, N. (2019). Educ-AI-tion Rebooted? Exploring the future of artificial intelligence in schools and colleges. NESTA. https://www.nesta.org.uk/report/education-rebooted/ .

Bates, T., Cobo, C., Mariño, O., & Wheeler, S. (2020). Can artificial intelligence transform higher education? International Journal of Educational Technology in Higher Education . https://doi.org/10.1186/s41239-020-00218-x .

Bayne, S. (2015). Teacherbot: interventions in automated teaching. Teaching in Higher Education, 20 (4), 455–467.

Belpaeme, T., Kennedy, J., Ramachandran, A., Scassellati, B., & Tanaka, F. (2018). Social robots for education: A review. https://doi.org/10.1126/scirobotics.aat5954 .

Blanchard, E. G. (2015). Socio-cultural imbalances in AIED research: Investigations, implications and opportunities. International Journal of Artificial Intelligence in Education, 25 (2), 204–228.

Bleecker, J. (2009). Design fiction: A short essay on design, science, fact and fiction. Near Future Lab.

Blythe, M. (2017). Research fiction: storytelling, plot and design. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (pp. 5400–5411).

Blythe, M., Andersen, K., Clarke, R., & Wright, P. (2016). Anti-solutionist strategies: Seriously silly design fiction. Conference on Human Factors in Computing Systems - Proceedings (pp. 4968–4978). Association for Computing Machinery.

Brevini, B. (2020). Black boxes, not green: Mythologizing artificial intelligence and omitting the environment. Big Data & Society, 7 (2), 2053951720935141.

Canzonetta, J., & Kannan, V. (2016). Globalizing plagiarism & writing assessment: a case study of Turnitin. The Journal of Writing Assessment , 9(2). http://journalofwritingassessment.org/article.php?article=104 .

Carroll, J. M. (1999) Five reasons for scenario-based design. In Proceedings of the 32nd Annual Hawaii International Conference on Systems Sciences . HICSS-32. Abstracts and CD-ROM of Full Papers, Maui, HI, USA, 1999, pp. 11. https://doi.org/10.1109/HICSS.1999.772890 .

Catlin, D., Kandlhofer, M., & Holmquist, S. (2018). EduRobot Taxonomy a provisional schema for classifying educational robots. 9th International Conference on Robotics in Education 2018, Qwara, Malta.

Clay, J. (2018). The challenge of the intelligent library. Keynote at What does your eResources data really tell you? 27th February, CILIP.

Crawford, K., & Joler, V. (2018) Anatomy of an AI system , https://anatomyof.ai/ .


Demartini, C., & Benussi, L. (2017). Do Web 4.0 and Industry 4.0 Imply Education X.0? IT Pro , 4–7.

Dong, Z. Y., Zhang, Y., Yip, C., Swift, S., & Beswick, K. (2020). Smart campus: Definition, framework, technologies, and services. IET Smart Cities, 2 (1), 43–54.

Dourish, P., & Bell, G. (2014). “resistance is futile”: Reading science fiction alongside ubiquitous computing. Personal and Ubiquitous Computing, 18 (4), 769–778.

Dunne, A., & Raby, F. (2001). Design noir: The secret life of electronic objects . New York: Springer Science & Business Media.

Fjeld, J., Achten, N., Hilligoss, H., Nagy, A., & Srikumar, M. (2020). Principled artificial intelligence: Mapping consensus in ethical and rights-based approaches to principles for AI. SSRN Electronic Journal . https://doi.org/10.2139/ssrn.3518482 .

Følstad, A., Skjuve, M., & Brandtzaeg, P. (2019). Different chatbots for different purposes: Towards a typology of chatbots to understand interaction design. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). 11551 LNCS , pp. 145–156. Springer Verlag.

FutureTDM. (2016). Baseline report of policies and barriers of TDM in Europe. https://project.futuretdm.eu/wp-content/uploads/2017/05/FutureTDM_D3.3-Baseline-Report-of-Policies-and-Barriers-of-TDM-in-Europe.pdf .

Gabriel, A. (2019). Artificial intelligence in scholarly communications: An elsevier case study. Information Services & Use, 39 (4), 319–333.

Griffiths, D. (2015). Visions of the future, horizon report . LACE project. http://www.laceproject.eu/visions-of-the-future-of-learning-analytics/ .

Heaven, D. (2018). The age of AI peer reviews. Nature, 563, 609–610.

Hockly, N. (2019). Automated writing evaluation. ELT Journal, 73 (1), 82–88.

Holmes, W., Bialik, M., & Fadel, C. (2019). Artificial intelligence in education. Boston, MA: The Center for Curriculum Redesign.

Hussein, M., Hassan, H., & Nassef, M. (2019). Automated language essay scoring systems: A literature review. PeerJ Computer Science . https://doi.org/10.7717/peerj-cs.208 .

Inayatullah, S. (2008). Six pillars: Futures thinking for transforming. foresight, 10 (1), 4–21.

Jarke, J., & Breiter, A. (2019). Editorial: the datafication of education. Learning, Media and Technology, 44 (1), 1–6.

JISC. (2019). The intelligent campus guide. Using data to make smarter use of your university or college estate . https://www.jisc.ac.uk/rd/projects/intelligent-campus .

Jones, E., Kalantery, N., & Glover, B. (2019). Research 4.0 Interim Report. Demos.



How the A.I. That Drives ChatGPT Will Move Into the Physical World

Covariant, a robotics start-up, is designing technology that lets robots learn skills much like chatbots do.


By Cade Metz

Photographs and Video by Balazs Gardi

Cade Metz spent two days at Covariant’s headquarters in Emeryville, Calif., to report this article.

March 11, 2024

Companies like OpenAI and Midjourney build chatbots, image generators and other artificial intelligence tools that operate in the digital world.

Now, a start-up founded by three former OpenAI researchers is using the technology development methods behind chatbots to build A.I. technology that can navigate the physical world.

Covariant, a robotics company headquartered in Emeryville, Calif., is creating ways for robots to pick up, move and sort items as they are shuttled through warehouses and distribution centers. Its goal is to help robots gain an understanding of what is going on around them and decide what they should do next.

The technology also gives robots a broad understanding of the English language, letting people chat with them as if they were chatting with ChatGPT.

The technology, still under development, is not perfect. But it is a clear sign that the artificial intelligence systems that drive online chatbots and image generators will also power machines in warehouses, on roadways and in homes.

Like chatbots and image generators, this robotics technology learns its skills by analyzing enormous amounts of digital data. That means engineers can improve the technology by feeding it more and more data.

Covariant, backed by $222 million in funding, does not build robots. It builds the software that powers robots. The company aims to deploy its new technology with warehouse robots, providing a road map for others to do much the same in manufacturing plants and perhaps even on roadways with driverless cars.


The A.I. systems that drive chatbots and image generators are called neural networks, named for the web of neurons in the brain.

By pinpointing patterns in vast amounts of data, these systems can learn to recognize words, sounds and images — or even generate them on their own. This is how OpenAI built ChatGPT, giving it the power to instantly answer questions, write term papers and generate computer programs. It learned these skills from text culled from across the internet. (Several media outlets, including The New York Times, have sued OpenAI for copyright infringement.)


Companies are now building systems that can learn from different kinds of data at the same time. By analyzing both a collection of photos and the captions that describe those photos, for example, a system can grasp the relationships between the two. It can learn that the word “banana” describes a curved yellow fruit.

OpenAI employed that system to build Sora, its new video generator. By analyzing thousands of captioned videos, the system learned to generate videos when given a short description of a scene, like “a gorgeously rendered papercraft world of a coral reef, rife with colorful fish and sea creatures.”
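One common recipe for this kind of paired training is a contrastive objective: embed images and captions into a shared vector space and reward the model when matching pairs score higher than mismatched ones. The sketch below is a toy illustration of that idea only, not any company's actual system; the random projections stand in for learned encoders, and every name and dimension is invented.

```python
# Toy illustration of contrastive image-caption training. The random
# projections below stand in for real neural-network encoders.
import numpy as np

rng = np.random.default_rng(0)

W_img = rng.normal(size=(16, 8))   # maps 16-dim "image features" into a shared space
W_txt = rng.normal(size=(10, 8))   # maps 10-dim "caption features" into the same space

def normalize(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

images = rng.normal(size=(4, 16))    # a tiny batch of 4 images...
captions = rng.normal(size=(4, 10))  # ...and their 4 matching captions

img_emb = normalize(images @ W_img)
txt_emb = normalize(captions @ W_txt)

# Score every image against every caption; true pairs lie on the diagonal.
# A contrastive loss pushes the diagonal scores above all the others, which
# is how a model comes to associate the word "banana" with banana pictures.
sim = img_emb @ txt_emb.T
probs = np.exp(sim) / np.exp(sim).sum(axis=1, keepdims=True)
loss = -np.log(np.diag(probs)).mean()
print(f"contrastive loss for this batch: {loss:.3f}")
```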


Covariant, founded by Pieter Abbeel, a professor at the University of California, Berkeley, and three of his former students, Peter Chen, Rocky Duan and Tianhao Zhang, used similar techniques in building a system that drives warehouse robots.

The company helps operate sorting robots in warehouses across the globe. It has spent years gathering data — from cameras and other sensors — that shows how these robots operate.

“It ingests all kinds of data that matter to robots — that can help them understand the physical world and interact with it,” Dr. Chen said.


By combining that data with the huge amounts of text used to train chatbots like ChatGPT, the company has built A.I. technology that gives its robots a much broader understanding of the world around them.

After identifying patterns in this stew of images, sensory data and text, the technology gives a robot the power to handle unexpected situations in the physical world. The robot knows how to pick up a banana, even if it has never seen a banana before.

It can also respond to plain English, much like a chatbot. If you tell it to “pick up a banana,” it knows what that means. If you tell it to “pick up a yellow fruit,” it understands that, too.

It can even generate videos that predict what is likely to happen as it tries to pick up a banana. These videos have no practical use in a warehouse, but they show the robot’s understanding of what’s around it.


“If it can predict the next frames in a video, it can pinpoint the right strategy to follow,” Dr. Abbeel said.

The technology, called R.F.M., for robotics foundational model, makes mistakes, much like chatbots do. Though it often understands what people ask of it, there is always a chance that it will not. It drops objects from time to time.


Gary Marcus, an A.I. entrepreneur and an emeritus professor of psychology and neural science at New York University, said the technology could be useful in warehouses and other situations where mistakes are acceptable. But he said it would be more difficult and riskier to deploy in manufacturing plants and other potentially dangerous situations.

“It comes down to the cost of error,” he said. “If you have a 150-pound robot that can do something harmful, that cost can be high.”


As companies train this kind of system on increasingly large and varied collections of data, researchers believe it will rapidly improve.

That is very different from the way robots operated in the past. Typically, engineers programmed robots to perform the same precise motion again and again — like pick up a box of a certain size or attach a rivet in a particular spot on the rear bumper of a car. But robots could not deal with unexpected or random situations.

By learning from digital data — hundreds of thousands of examples of what happens in the physical world — robots can begin to handle the unexpected. And when those examples are paired with language, robots can also respond to text and voice suggestions, as a chatbot would.

This means that like chatbots and image generators, robots will become more nimble.

“What is in the digital data can transfer into the real world,” Dr. Chen said.

Cade Metz writes about artificial intelligence, driverless cars, robotics, virtual reality and other emerging areas of technology.



Artificial Intelligence Essay

500+ Words Essay on Artificial Intelligence

Artificial intelligence (AI) has come into our daily lives through mobile devices and the Internet. Governments and businesses are increasingly making use of AI tools and techniques to solve business problems and improve many business processes, especially online ones. Such developments bring about new realities in social life that may not have been experienced before. This essay on Artificial Intelligence will help students understand the various advantages of using AI and how it has made our lives easier and simpler. At the end, we also describe the future scope of AI and the harmful effects of using it. To get a good command of essay writing, students must practise CBSE essays on different topics.

Artificial Intelligence is the science and engineering of making intelligent machines, especially intelligent computer programs. It is concerned with getting computers to do tasks that would normally require human intelligence. AI systems are basically software systems (or controllers for robots) that use techniques such as machine learning and deep learning to solve problems in particular domains without hard coding all possibilities (i.e. algorithmic steps) in software. As a result, AI is showing promising solutions for industry and business, as well as for our daily lives.
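To make "without hard coding all possibilities" concrete, here is a minimal, hypothetical sketch (using scikit-learn, with invented toy data): rather than writing an explicit rule for every case, the program infers the rule from labelled examples.

```python
# Instead of hand-coding a rule for every possible input, a machine
# learning model infers the decision rule from labelled examples.
# The features, labels and data below are invented for illustration.
from sklearn.linear_model import LogisticRegression

# Each example: [number_of_links, number_of_exclamation_marks] in an email
X = [[0, 0], [1, 0], [7, 9], [8, 6], [0, 1], [9, 8]]
y = [0, 0, 1, 1, 0, 1]  # 0 = ordinary email, 1 = spam

model = LogisticRegression().fit(X, y)
# Classifies unseen emails that were never explicitly "coded" for.
print(model.predict([[6, 7], [1, 1]]))
```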

Importance and Advantages of Artificial Intelligence

Advances in computing and digital technologies have a direct influence on our lives, businesses and social life. They have shaped our daily routines, such as how we use mobile devices and participate on social media. AI systems are among the most influential of these digital technologies. With AI systems, businesses can handle large data sets and quickly supply essential input to their operations. Moreover, businesses can adapt to constant change and become more flexible.

By introducing Artificial Intelligence systems into devices, businesses are automating more and more of their processes. A new paradigm emerges as a result of such intelligent automation, which now dictates not only how businesses operate but also who does the work. Many manufacturing sites can now operate fully automated, with robots and without any human workers. Artificial Intelligence is bringing unheard-of and unexpected innovations to the business world, which many organizations will need to integrate to remain competitive and to lead their markets.

Artificial Intelligence shapes our lives and social interactions through technological advancement. Many AI applications are developed specifically to provide better services to individuals, through mobile phones, electronic gadgets, social media platforms and more. We delegate everyday activities to intelligent applications, such as personal assistants and smart wearable devices. AI systems that operate household appliances help us at home with cooking and cleaning.

Future Scope of Artificial Intelligence

In the future, intelligent machines will replace or enhance human capabilities in many areas. Artificial intelligence is becoming a popular field in computer science because it has enhanced so many human activities. Its applications are having a huge impact on fields that must solve complex problems, such as education, engineering, business, medicine and weather forecasting. The work of many labourers can be done by a single machine. But Artificial Intelligence has another aspect: it can be dangerous for us. If we become completely dependent on machines, it can ruin our lives; we will not be able to do any work ourselves and will become lazy. Another disadvantage is that machines cannot offer human-like feeling. So machines should be used only where they are actually required.

Students must have found this essay on “Artificial Intelligence” useful for improving their essay writing skills. They can get study material and the latest updates on CBSE/ICSE/State Board/Competitive Exams at BYJU’S.


What is artificial general intelligence (AGI)?


You’ve read the think pieces. AI—in particular, the generative AI (gen AI) breakthroughs achieved in the past year or so—is poised to revolutionize not just the way we create content but the very makeup of our economies and societies as a whole. But although gen AI tools such as ChatGPT may seem like a great leap forward, in reality they are just a step in the direction of an even greater breakthrough: artificial general intelligence, or AGI.


AGI is AI with capabilities that rival those of a human. While purely theoretical at this stage, someday AGI may replicate human-like cognitive abilities including reasoning, problem solving, perception, learning, and language comprehension. When AI’s abilities are indistinguishable from those of a human, it will have passed what is known as the Turing test, first proposed by 20th-century computer scientist Alan Turing.

But let’s not get ahead of ourselves. AI has made significant strides in recent years, but no AI tool to date has passed the Turing test. We’re still far from reaching a point where AI tools can understand, communicate, and act with the same nuance and sensitivity as a human — and, critically, understand the meaning behind it. Most researchers and academics believe we are decades away from realizing AGI; a few even predict we won’t see AGI this century (or ever). Rodney Brooks, a roboticist at the Massachusetts Institute of Technology and cofounder of iRobot, believes AGI won’t arrive until the year 2300.

If you’re thinking that AI already seems pretty smart, that’s understandable. We’ve seen gen AI do remarkable things in recent years, from writing code to composing sonnets in seconds. But there’s a critical difference between AI and AGI. Although the latest gen AI technologies, including ChatGPT, DALL-E, and others, have been hogging headlines, they are essentially prediction machines—albeit very good ones. In other words, they can predict, with a high degree of accuracy, the answer to a specific prompt because they’ve been trained on huge amounts of data. This is impressive, but it’s not at a human level of performance in terms of creativity, logical reasoning, sensory perception, and other capabilities. By contrast, AGI tools could feature cognitive and emotional abilities (like empathy) indistinguishable from those of a human. Depending on your definition of AGI, they might even be capable of consciously grasping the meaning behind what they’re doing.

The timing of AGI’s emergence is uncertain. But when it does arrive—and it likely will at some point—it’s going to be a very big deal for every aspect of our lives, businesses, and societies. Executives can begin working now to better understand the path to machines achieving human-level intelligence and making the transition to a more automated world.


What is needed for AI to become AGI?

There are eight capabilities AI needs to master before achieving AGI.

How will people access AGI tools?

Today, most people engage with AI in the same ways they’ve accessed digital power for years: via 2D screens such as laptops, smartphones, and TVs. The future will probably look a lot different. Some of the brightest minds (and biggest budgets) in tech are devoting themselves to figuring out how we’ll access AI (and possibly AGI) in the future. One example you’re likely familiar with is augmented reality and virtual reality headsets, through which users experience an immersive virtual world. Another example would be humans accessing the AI world through implanted neurons in the brain. This might sound like something out of a sci-fi novel, but it’s not. In January 2024, Neuralink implanted a chip in a human brain, with the goal of allowing the human to control a phone or computer purely by thought.

A final mode of interaction with AI seems ripped from sci-fi as well: robots. These can take the form of mechanized limbs connected to humans or machine bases or even programmed humanoid robots.

What is a robot and what types of robots are there?

The simplest definition of a robot is a machine that can perform tasks on its own or with minimal assistance from humans. The most sophisticated robots can also interact with their surroundings.

Programmable robots have been operational since the 1950s. McKinsey estimates that 3.5 million robots are currently in use, with 550,000 more deployed every year. But while programmable robots are more commonplace than ever in the workforce, they have a long way to go before they outnumber their human counterparts. The Republic of Korea, home to the world’s highest density of robots, still employs 100 times as many humans as robots.


But as hardware and software limitations become increasingly surmountable, companies that manufacture robots are beginning to program units with new AI tools and techniques. These dramatically improve robots’ ability to perform tasks typically handled by humans, including walking, sensing, communicating, and manipulating objects. In May 2023, Sanctuary AI, for example, launched Phoenix, a bipedal humanoid robot that stands 5’ 7” tall, lifts objects weighing as much as 55 pounds, and travels three miles per hour—not to mention it also folds clothes, stocks shelves, and works a register.

As we edge closer to AGI, we can expect increasingly sophisticated AI tools and techniques to be programmed into robots of all kinds. Here are a few categories of robots that are currently operational:

  • Stand-alone autonomous industrial robots: Equipped with sensors and computer systems to navigate their surroundings and interact with other machines, these robots are critical components of the modern automated manufacturing industry.
  • Collaborative robots: Also known as cobots, these robots are specifically engineered to operate in collaboration with humans in a shared environment. Their primary purpose is to alleviate repetitive or hazardous tasks. These types of robots are already being used in environments such as restaurant kitchens and more.
  • Mobile robots: Utilizing wheels as their primary means of movement, mobile robots are commonly used for materials handling in warehouses and factories. The military also uses these machines for various purposes, such as reconnaissance and bomb disposal.
  • Human–hybrid robots: These robots have both human and robotic features. This could include a robot with an appearance, movement capabilities, or cognition that resemble those of a human, or a human with a robotic limb or even a brain implant.
  • Humanoids or androids: These robots are designed to emulate the appearance, movement, communicative abilities, and emotions of humans while continuously enhancing their cognitive capabilities via deep learning models. In other words, humanoid robots will think like a human, move like a human, and look like a human.

What advances could speed up the development of AGI?

Advances in algorithms, computing, and data have brought about the recent acceleration of AI. We can get a sense of what the future may hold by looking at these three capabilities:

Algorithmic advances and new robotics approaches. We may need entirely new approaches to algorithms and robots to achieve AGI. One way researchers are thinking about this is by exploring the concept of embodied cognition. The idea is that robots will need to learn very quickly from their environments through a multitude of senses, just like humans do when they’re very young. Similarly, to develop cognition in the same way humans do, robots will need to experience the physical world like we do (because we’ve designed our spaces based on how our bodies and minds work).

The latest AI-based robot systems are using gen AI technologies including large language models (LLMs) and large behavior models (LBMs). LLMs give robots advanced natural-language-processing capabilities like what we’ve seen with generative AI models and other LLM-enabled tools. LBMs allow robots to emulate human actions and movements. These models are created by training AI on large data sets of observed human actions and movements. Ultimately, these models could allow robots to perform a wide range of activities with limited task-specific training.
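At its simplest, "training AI on large data sets of observed human actions" is behavior cloning: fit a function that maps what the robot senses to the action the demonstrator took. The sketch below is a deliberately tiny stand-in; the state and action dimensions are invented, and a linear least-squares "policy" takes the place of the large neural networks real LBMs use.

```python
# Behavior cloning in miniature: learn a state-to-action mapping from a
# log of demonstrations. All dimensions and data here are invented.
import numpy as np

rng = np.random.default_rng(1)

# A pretend demonstration log of 500 (state, action) pairs:
# state = 6 sensor readings, action = 3 motor commands.
states = rng.normal(size=(500, 6))
demonstrator = rng.normal(size=(6, 3))                 # the human's hidden behavior
actions = states @ demonstrator + 0.01 * rng.normal(size=(500, 3))

# Fit a linear "policy" that imitates the demonstrations.
policy, *_ = np.linalg.lstsq(states, actions, rcond=None)

# Given a new observation, the cloned policy proposes an action.
new_state = rng.normal(size=(1, 6))
print("imitated action:", (new_state @ policy).round(2))
```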

A real advance would be to develop new AI systems that start out with a certain level of built-in knowledge, just like a baby fawn knows how to stand and feed without being taught. It’s possible that the recent success of deep-learning-based AI systems may have drawn research attention away from the more fundamental cognitive work required to make progress toward AGI.

Computing advancements. Graphics processing units (GPUs) have made the major AI advances of the past few years possible. Here’s why. For one, GPUs are designed to handle multiple tasks related to visual data simultaneously, including rendering images, videos, and graphics-related computations. Their efficiency at handling massive amounts of visual data makes them useful in training complex neural networks. They also have a high memory bandwidth, meaning faster data transfer. Before AGI can be achieved, similar significant advancements will need to be made in computing infrastructure. Quantum computing is touted as one way of achieving this. However, today’s quantum computers, while powerful, aren’t yet ready for everyday applications. But once they are, they could play a role in the achievement of AGI.

Growth in data volume and new sources of data. Some experts believe 5G mobile infrastructure could bring about a significant increase in data. That’s because the technology could power a surge in connected devices, or the Internet of Things. But, for a variety of reasons, we think most of the benefits of 5G have already appeared. For AGI to be achieved, there will need to be another catalyst for a huge increase in data volume.

New robotics approaches could yield new sources of training data. Placing human-like robots among us could allow companies to mine large sets of data that mimic our own senses to help the robots train themselves. Advanced self-driving cars are one example: data is being collected from cars that are already on the roads, so these vehicles are acting as a training set for future self-driving cars.

What can executives do about AGI?

AGI is still decades away, at the very least. But AI is here to stay—and it is advancing extremely quickly. Smart leaders can think about how to respond to the real progress that’s happening, as well as how to prepare for the automated future. Here are a few things to consider:

  • Stay informed about developments in AI and AGI. Connect with start-ups and develop a framework for tracking progress in AGI that is relevant to your business. Also, start to think about the right governance, conditions, and boundaries for success within your business and communities.
  • Invest in AI now. “The cost of doing nothing,” says McKinsey senior partner Nicolai Müller, “is just too high because everybody has this at the top of their agenda. I think it’s the one topic that every management board has looked into, that every CEO has explored across all regions and industries.” The organizations that get it right now will be poised to win in the coming era.
  • Continue to place humans at the center. Invest in human–machine interfaces, or “human in the loop” technologies that augment human intelligence. People at all levels of an organization need training and support to thrive in an increasingly automated world. AI is just the latest tool to help individuals and companies alike boost their efficiency.
  • Consider the ethical and security implications. This should include addressing cybersecurity, data privacy, and algorithm bias.
  • Build a strong foundation of data, talent, and capabilities. AI runs on data; having a strong foundation of high-quality data is critical to its success.
  • Organize your workers for new economies of scale and skill. Yesterday’s rigid organizational structures and operating models aren’t suited to the reality of rapidly advancing AI. One way to address this is by instituting flow-to-the-work models, where people can move seamlessly between initiatives and groups.
  • Place small bets to preserve strategic options in areas of your business that are exposed to AI developments. For example, consider investing in technology firms that are pursuing ambitious AI research and development projects in your industry. Not all these bets will necessarily pay off, but they could help hedge some of the existential risk your business may face in the future.




‘Full-on robot writing’: the artificial intelligence challenge facing universities

AI is becoming more sophisticated, and some say capable of writing academic essays. But at what point does the intrusion of AI constitute cheating?


“Waiting in front of the lecture hall for my next class to start, and beside me two students are discussing which AI program works best for writing their essays. Is this what I’m marking? AI essays?”

The tweet by historian Carla Ionescu late last month captures growing unease about what artificial intelligence portends for traditional university assessment. “No. No way,” she tweeted. “Tell me we’re not there yet.”

But AI has been banging on the university’s gate for some time now.

In 2012, computer theorist Ben Goertzel proposed what he called the “robot university student test”, arguing that an AI capable of obtaining a degree in the same way as a human should be considered conscious.

Goertzel’s idea – an alternative to the more famous “Turing test” – might have remained a thought experiment were it not for the successes of AIs employing natural language processing (NLP): most famously, GPT-3, the language model created by the OpenAI research laboratory.

Two years ago, computer scientist Nassim Dehouche published a piece demonstrating that GPT-3 could produce credible academic writing undetectable by the usual anti-plagiarism software.

“[I] found the output,” Dehouche told Guardian Australia, “to be indistinguishable from an excellent undergraduate essay, both in terms of soundness and originality. [My article] was initially subtitled, ‘The best time to act was yesterday, the second-best time is now’. Its purpose was to call for an urgent need to, at the very least, update our concepts of plagiarism.”


He now thinks we’re already well past the time when students could generate entire essays (and other forms of writing) using algorithmic methods.

“A good exercise for aspiring writers,” he says, “would be a sort of reverse Turing test: ‘Can you write a page of text that could not have been generated by an AI, and explain why?’ As far as I can see, unless one is reporting an original mathematics theorem and its proof, it is not possible. But I would love to be proven wrong.”

Many others now share his urgency. In news and opinion articles, GPT-3 has convincingly written on whether it poses a threat to humanity (it says it doesn’t), and about animal cruelty in the styles of both Bob Dylan and William Shakespeare.

A 2021 Forbes article about AI essay writing culminated in a dramatic mic-drop: “this post about using an AI to write essays in school,” it explained, “was written using an artificial intelligence content writing tool”.

Of course, the tech industry thrives on unwarranted hype. Last month, S Scott Graham, in a piece for Inside Higher Education, described encouraging students to use the technology for their assignments, with decidedly mixed results. The very best, he said, would have fulfilled the minimum requirements but little more. Weaker students struggled, since giving the system effective prompts (and then editing its output) required writing skills of a sufficiently high level to render the AI superfluous.

“I strongly suspect,” he concluded, “full-on robot writing will always and forever be ‘just around the corner’.”

That might be true, though only a month earlier, Slate’s Aki Peritz concluded precisely the opposite, declaring that “with a little bit of practice, a student can use AI to write his or her paper in a fraction of the time that it would normally take”.

Nevertheless, the challenge for higher education can’t be reduced merely to “full-on robot writing”.

Universities don’t merely face essays or assignments entirely generated by algorithms: they must also adjudicate a myriad of more subtle problems. For instance, AI-powered word processors habitually suggest alternatives to our ungrammatical phrases. But if software can algorithmically rewrite a student’s sentence, why shouldn’t it do the same with a paragraph – and if a paragraph, why not a page?

At what point does the intrusion of AI constitute cheating?

Deakin University’s Prof Phillip Dawson specialises in digital assessment security.

He suggests regarding AI merely as a new form of a technique called cognitive offloading.

“Cognitive offloading,” he explains, is “when you use a tool to reduce the mental burden of a task. It can be as simple as writing something down so you don’t have to try to remember it for later. There have long been moral panics around tools for cognitive offloading, from Socrates complaining about people using writing to pretend they knew something, to the first emergence of pocket calculators.”

Dawson argues that universities should make clear to students the forms and degree of cognitive offloading permitted for specific assessments, with AI increasingly incorporated into higher level tasks.

“I think we’ll actually be teaching students how to use these tools. I don’t think we’re going to necessarily forbid them.”

The occupations for which universities prepare students will, after all, soon also rely on AI, with the humanities particularly affected. Take journalism, for instance. A 2019 survey of 71 media organisations from 32 countries found AI already a “significant part of journalism”, deployed for news gathering (say, sourcing information or identifying trends), news production (anything from automatic fact checkers to the algorithmic transformation of financial reports into articles) and news distribution (personalising websites, managing subscriptions, finding new audiences and so on). So why should journalism educators penalise students for using a technology likely to be central to their future careers?


“I think we’ll have a really good look at what the professions do with respect to these tools now,” says Dawson, “and what they’re likely to do in the future with them, and we’ll try to map those capabilities back into our courses. That means figuring out how to reference them, so the student can say: I got the AI to do this bit and then here’s what I did myself.”

Yet formulating policies on when and where AI might legitimately be used is one thing – and enforcing them is quite another.

Dr Helen Gniel directs the higher education integrity unit of the Tertiary Education Quality and Standards Agency (TEQSA), the independent regulator of Australian higher education.

Like Dawson, she sees the issues around AI as, in some senses, an opportunity – a chance for institutions to “think about what they are teaching, and the most appropriate methods for assessing learning in that context”.

Transparency is key.

“We expect institutions to define their rules around the use of AI and ensure that expectations are clearly and regularly communicated to students.”

She points to ICHM, the Institute of Health Management, and Flinders University as three providers that now have explicit policies, with Flinders labelling the submission of work “generated by an algorithm, computer generator or other artificial intelligence” as a form of “contract cheating”.

But that comparison raises other issues.

In August, TEQSA blocked some 40 websites associated with the more traditional form of contract cheating – the sale of pre-written essays to students. The 450,000 visits those sites received each month suggest a massive potential market for AI writing, as those who once paid humans to write for them turn instead to digital alternatives.

Research by Dr Guy Curtis from the University of Western Australia found respondents from a non-English speaking background three times more likely to buy essays than those with English as a first language. That figure no doubt reflects the pressures heaped on the nearly 500,000 international students taking courses at Australian institutions, who may struggle with insecure work, living costs, social isolation and the inherent difficulty of assessment in a foreign language.

But one could also note the broader relationship between the expansion of contract cheating and the transformation of higher education into a lucrative export industry. If a university degree becomes merely a product to be bought and sold, the decision by a failing student to call upon an external contractor (whether human or algorithmic) might seem like simply a rational market choice.

It’s another illustration of how AI poses uncomfortable questions about the very nature of education.

Ben Goertzel imagined his “robot university student test” as a demonstration of “artificial general intelligence”: a digital replication of the human intellect. But that’s not what NLP involves. On the contrary, as Luciano Floridi and Massimo Chiriatti say, with AI, “we are increasingly decoupling the ability to solve a problem effectively … from any need to be intelligent to do so”.


The new AIs train on massive data sets, scouring vast quantities of information so they can extrapolate plausible responses to textual and other prompts. Emily M Bender and her colleagues describe a language model as a “stochastic parrot”, something that “haphazardly [stitches] together sequences of linguistic forms it has observed in its vast training data, according to probabilistic information about how they combine, but without any reference to meaning”.
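The “stochastic parrot” can be demonstrated at toy scale with a bigram model: record which word follows which in a corpus, then stitch together a new sequence by sampling from those observed continuations, with no reference to meaning. The corpus below is invented for illustration; real language models do the same kind of thing at vastly larger scale.

```python
# A toy "stochastic parrot": generate text purely from probabilistic
# information about which word follows which, with no notion of meaning.
import random
from collections import defaultdict

corpus = ("the essay discusses the model and the model predicts "
          "the next word and the essay repeats the pattern").split()

follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)        # every observed continuation, with repeats

random.seed(0)
word, output = "the", ["the"]
for _ in range(12):
    options = follows.get(word)
    if not options:                  # dead end: no observed continuation
        break
    word = random.choice(options)    # sample in proportion to observed counts
    output.append(word)

print(" ".join(output))              # fluent-looking, meaning-free text
```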

So if it’s possible to pass assessment tasks without understanding their meaning, what, precisely, do the tasks assess?

In his 2011 book For the University: Democracy and the Future of the Institution, the University of Warwick’s Thomas Docherty suggests that corporatised education replaces open-ended and destabilising “knowledge” with “the efficient and controlled management of information”, with assessment requiring students to demonstrate solely that they have gained access to the database of “knowledge” … and that they have then manipulated or “managed” that knowledge in its organisation of cut-and-pasted parts into a new whole.

The potential proficiency of “stochastic parrots” at tertiary assessment throws a new light on Docherty’s argument, confirming that such tasks do not, in fact, measure knowledge (which AIs innately lack) so much as the transfer of information (at which AIs excel).

To put the argument another way, AI raises issues for the education sector that extend beyond whatever immediate measures might be taken to govern student use of such systems. One could, for instance, imagine the technology facilitating a “boring dystopia”, further degrading those aspects of the university already most eroded by corporate imperatives. Higher education has, after all, invested heavily in AI systems for grading, so that, in theory, algorithms might mark the output of other algorithms, in an infinite process in which nothing whatsoever ever gets learned.

But maybe, just maybe, the challenge of AI might encourage something else. Perhaps it might foster a conversation about what education is and, most importantly, what we want it to be. AI might spur us to recognise genuine knowledge, so that, as the university of the future embraces technology, it appreciates anew what makes us human.



Engineering household robots to have a little common sense


From wiping up spills to serving up food, robots are being taught to carry out increasingly complicated household tasks. Many such home-bot trainees are learning through imitation; they are programmed to copy the motions that a human physically guides them through.

It turns out that robots are excellent mimics. But unless engineers also program them to adjust to every possible bump and nudge, robots don’t necessarily know how to handle these situations, short of starting their task from the top.

Now MIT engineers are aiming to give robots a bit of common sense when faced with situations that push them off their trained path. They’ve developed a method that connects robot motion data with the “common sense knowledge” of large language models, or LLMs.

Their approach enables a robot to logically parse a given household task into subtasks, and to physically adjust to disruptions within a subtask so that the robot can move on without having to go back and start the task from scratch — and without engineers having to explicitly program fixes for every possible failure along the way.

“Imitation learning is a mainstream approach enabling household robots. But if a robot is blindly mimicking a human’s motion trajectories, tiny errors can accumulate and eventually derail the rest of the execution,” says Yanwei Wang, a graduate student in MIT’s Department of Electrical Engineering and Computer Science (EECS). “With our method, a robot can self-correct execution errors and improve overall task success.”

Wang and his colleagues detail their new approach in a study they will present at the International Conference on Learning Representations (ICLR) in May. The study’s co-authors include EECS graduate students Tsun-Hsuan Wang and Jiayuan Mao, Michael Hagenow, a postdoc in MIT’s Department of Aeronautics and Astronautics (AeroAstro), and Julie Shah, the H.N. Slater Professor in Aeronautics and Astronautics at MIT.

Language task

The researchers illustrate their new approach with a simple chore: scooping marbles from one bowl and pouring them into another. To accomplish this task, engineers would typically move a robot through the motions of scooping and pouring — all in one fluid trajectory. They might do this multiple times, to give the robot a number of human demonstrations to mimic.

“But the human demonstration is one long, continuous trajectory,” Wang says.

The team realized that, while a human might demonstrate a single task in one go, that task depends on a sequence of subtasks, or trajectories. For instance, the robot has to first reach into a bowl before it can scoop, and it must scoop up marbles before moving to the empty bowl, and so forth. If a robot is pushed or nudged into a mistake during any of these subtasks, its only recourse is to stop and start from the beginning, unless engineers explicitly label each subtask and program or collect new demonstrations of how to recover from each possible failure, so that the robot can self-correct in the moment.

“That level of planning is very tedious,” Wang says.

Instead, he and his colleagues found some of this work could be done automatically by LLMs. These deep learning models process immense libraries of text, which they use to establish connections between words, sentences, and paragraphs. Through these connections, an LLM can then generate new sentences based on what it has learned about the kind of word that is likely to follow the last.

For their part, the researchers found that in addition to sentences and paragraphs, an LLM can be prompted to produce a logical list of subtasks that would be involved in a given task. For instance, if queried to list the actions involved in scooping marbles from one bowl into another, an LLM might produce a sequence of verbs such as “reach,” “scoop,” “transport,” and “pour.”

“LLMs have a way to tell you how to do each step of a task, in natural language. A human’s continuous demonstration is the embodiment of those steps, in physical space,” Wang says. “And we wanted to connect the two, so that a robot would automatically know what stage it is in a task, and be able to replan and recover on its own.”
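In code, that decomposition step amounts to a single prompt. The sketch below is hypothetical: query_llm is a placeholder stub for whatever chat-model client is available, and the prompt wording, parsing, and canned reply are invented for illustration.

```python
# Hypothetical sketch of the subtask-decomposition step. `query_llm` is a
# placeholder for a real chat-model client; its canned reply stands in for
# what an actual LLM might return for the marble-scooping chore.

def query_llm(prompt: str) -> str:
    # Stand-in: a real implementation would call a hosted or local model.
    return "1. reach\n2. scoop\n3. transport\n4. pour"

def decompose(task: str) -> list[str]:
    prompt = ("List, one verb per numbered line, the subtasks a robot "
              f"must perform to {task}.")
    reply = query_llm(prompt)
    return [line.split(".", 1)[1].strip() for line in reply.splitlines()]

print(decompose("scoop marbles from one bowl and pour them into another"))
# -> ['reach', 'scoop', 'transport', 'pour']
```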

Mapping marbles

For their new approach, the team developed an algorithm to automatically connect an LLM’s natural language label for a particular subtask with a robot’s position in physical space or an image that encodes the robot state. Mapping a robot’s physical coordinates, or an image of the robot state, to a natural language label is known as “grounding.” The team’s new algorithm is designed to learn a grounding “classifier,” meaning that it learns to automatically identify what semantic subtask a robot is in — for example, “reach” versus “scoop” — given its physical coordinates or an image view.

“The grounding classifier facilitates this dialogue between what the robot is doing in the physical space and what the LLM knows about the subtasks, and the constraints you have to pay attention to within each subtask,” Wang explains.

The team demonstrated the approach in experiments with a robotic arm that they trained on a marble-scooping task. Experimenters trained the robot by physically guiding it through the task of first reaching into a bowl, scooping up marbles, transporting them over an empty bowl, and pouring them in. After a few demonstrations, the team then used a pretrained LLM and asked the model to list the steps involved in scooping marbles from one bowl to another. The researchers then used their new algorithm to connect the LLM’s defined subtasks with the robot’s motion trajectory data. The algorithm automatically learned to map the robot’s physical coordinates in the trajectories and the corresponding image view to a given subtask.
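A toy stand-in for that grounding step: treat “which subtask am I in?” as ordinary multi-class classification over robot states. Everything below is invented for illustration (3-D gripper positions as the state, scikit-learn’s logistic regression as the model); it is a sketch of the idea, not the team’s implementation.

```python
# Toy grounding classifier: map a robot state (here, an invented 3-D
# gripper position) to the semantic subtask label it belongs to.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
SUBTASKS = ["reach", "scoop", "transport", "pour"]

# Fabricated demonstration data: gripper positions clustered by subtask.
centers = rng.normal(scale=5.0, size=(4, 3))
X = np.vstack([c + rng.normal(scale=0.3, size=(50, 3)) for c in centers])
y = np.repeat(np.arange(4), 50)

grounding = LogisticRegression(max_iter=1000).fit(X, y)

# At run time the robot asks: which subtask does my current state belong to?
current_state = centers[1] + rng.normal(scale=0.3, size=3)
print(SUBTASKS[grounding.predict([current_state])[0]])  # almost surely "scoop"
```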

The team then let the robot carry out the scooping task on its own, using the newly learned grounding classifiers. As the robot moved through the steps of the task, the experimenters pushed and nudged the bot off its path, and knocked marbles off its spoon at various points. Rather than stop and start from the beginning again, or continue blindly with no marbles on its spoon, the bot was able to self-correct, and completed each subtask before moving on to the next. (For instance, it would make sure that it successfully scooped marbles before transporting them to the empty bowl.)

“With our method, when the robot is making mistakes, we don’t need to ask humans to program or give extra demonstrations of how to recover from failures,” Wang says. “That’s super exciting because there’s a huge effort now toward training household robots with data collected on teleoperation systems. Our algorithm can now convert that training data into robust robot behavior that can do complex tasks, despite external perturbations.”

Press mentions

MIT researchers have developed a new technique that uses a large language model to allow robots to self-correct after making a mistake, reports Brian Heater for TechCrunch. “Researchers behind the study note that while imitation learning (learning to do a task through observation) is popular in the world of home robotics, it often can’t account for the countless small environmental variations that can interfere with regular operation, thus requiring a system to restart from square one,” writes Heater. “The new research addresses this, in part, by breaking demonstrations into smaller subsets, rather than treating them as part of a continuous action.”

Artificial Intelligence Essay for Students and Children

Artificial Intelligence refers to the intelligence of machines, in contrast to the natural intelligence of humans and animals. With Artificial Intelligence, machines perform functions such as learning, planning, reasoning, and problem-solving. Most noteworthy, Artificial Intelligence is the simulation of human intelligence by machines. It is probably the fastest-growing development in the world of technology and innovation. Furthermore, many experts believe AI could solve major challenges and crisis situations.

Types of Artificial Intelligence

First of all, Artificial Intelligence is commonly categorized into four types, a scheme proposed by Arend Hintze. The categories are as follows:

Type 1: Reactive machines – These machines can react to situations. A famous example is Deep Blue, the IBM chess program that won against Garry Kasparov, the chess legend. Such machines lack memory and cannot use past experiences to inform future ones; a reactive machine analyses all possible alternatives and chooses the best one.

Type 2: Limited memory – These AI systems are capable of using past experiences to inform future ones. A good example is self-driving cars. Such cars have decision-making systems: the car takes actions like changing lanes based on recent observations, but there is no permanent storage of these observations.

Type 3: Theory of mind – This refers to understanding others. Above all, this means understanding that others have their own beliefs, intentions, desires, and opinions. However, this type of AI does not exist yet.

Type 4: Self-awareness – This is the highest and most sophisticated level of Artificial Intelligence. Such systems have a sense of self. Furthermore, they have awareness, consciousness, and emotions. This type of technology does not yet exist, and it would certainly be a revolution.

Applications of Artificial Intelligence

First of all, AI has significant uses in healthcare. Companies are trying to develop technologies for quick diagnosis. Robot-assisted surgeries are already taking place, and researchers aim for systems that can operate on patients with less human supervision. Another notable healthcare technology is IBM Watson.

Artificial Intelligence in business would significantly save time and effort. Robotic process automation can take over repetitive human business tasks. Furthermore, machine learning algorithms help in serving customers better, and chatbots provide immediate responses and service to customers.

AI can greatly increase the rate of work in manufacturing. A huge number of products can be manufactured with AI, and the entire production process can take place without human intervention. Hence, a lot of time and effort is saved.

Artificial Intelligence has applications in various other fields, including the military, law, video games, government, finance, the automotive industry, auditing, and art. Hence, it is clear that AI has a massive number of different applications.

To sum it up, Artificial Intelligence looks all set to be the future of the world. Experts believe AI will become part and parcel of human life soon. AI would completely change the way we view our world. With Artificial Intelligence, the future seems intriguing and exciting.

Artificial Intelligence Act: MEPs adopt landmark law  

  • Safeguards on general purpose artificial intelligence  
  • Limits on the use of biometric identification systems by law enforcement  
  • Bans on social scoring and AI used to manipulate or exploit user vulnerabilities  
  • Right of consumers to launch complaints and receive meaningful explanations  

On Wednesday, Parliament approved the Artificial Intelligence Act that ensures safety and compliance with fundamental rights, while boosting innovation.

The regulation, agreed in negotiations with member states in December 2023, was endorsed by MEPs with 523 votes in favour, 46 against and 49 abstentions.

It aims to protect fundamental rights, democracy, the rule of law and environmental sustainability from high-risk AI, while boosting innovation and establishing Europe as a leader in the field. The regulation establishes obligations for AI based on its potential risks and level of impact.

Banned applications

The new rules ban certain AI applications that threaten citizens’ rights, including biometric categorisation systems based on sensitive characteristics and untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases. Emotion recognition in the workplace and schools, social scoring, predictive policing (when it is based solely on profiling a person or assessing their characteristics), and AI that manipulates human behaviour or exploits people’s vulnerabilities will also be forbidden.

Law enforcement exemptions

The use of remote biometric identification (RBI) systems by law enforcement is prohibited in principle, except in exhaustively listed and narrowly defined situations. “Real-time” RBI can only be deployed if strict safeguards are met, e.g. its use is limited in time and geographic scope and subject to specific prior judicial or administrative authorisation. Such uses may include, for example, a targeted search for a missing person or preventing a terrorist attack. Using such systems after the fact (“post-remote RBI”) is considered a high-risk use case, requiring judicial authorisation linked to a criminal offence.

Obligations for high-risk systems

Clear obligations are also foreseen for other high-risk AI systems (due to their significant potential harm to health, safety, fundamental rights, environment, democracy and the rule of law). Examples of high-risk AI uses include critical infrastructure, education and vocational training, employment, essential private and public services (e.g. healthcare, banking), certain systems in law enforcement, migration and border management, justice and democratic processes (e.g. influencing elections). Such systems must assess and reduce risks, maintain use logs, be transparent and accurate, and ensure human oversight. Citizens will have a right to submit complaints about AI systems and receive explanations about decisions based on high-risk AI systems that affect their rights.

Transparency requirements

General-purpose AI (GPAI) systems, and the GPAI models they are based on, must meet certain transparency requirements, including compliance with EU copyright law and publishing detailed summaries of the content used for training. The more powerful GPAI models that could pose systemic risks will face additional requirements, including performing model evaluations, assessing and mitigating systemic risks, and reporting on incidents.

Additionally, artificial or manipulated images, audio or video content (“deepfakes”) need to be clearly labelled as such.

Measures to support innovation and SMEs

Regulatory sandboxes and real-world testing will have to be established at the national level, and made accessible to SMEs and start-ups, to develop and train innovative AI before its placement on the market.

During the plenary debate on Tuesday, the Internal Market Committee co-rapporteur Brando Benifei (S&D, Italy) said: “We finally have the world’s first binding law on artificial intelligence, to reduce risks, create opportunities, combat discrimination, and bring transparency. Thanks to Parliament, unacceptable AI practices will be banned in Europe and the rights of workers and citizens will be protected. The AI Office will now be set up to support companies to start complying with the rules before they enter into force. We ensured that human beings and European values are at the very centre of AI’s development”.

Civil Liberties Committee co-rapporteur Dragos Tudorache (Renew, Romania) said: “The EU has delivered. We have linked the concept of artificial intelligence to the fundamental values that form the basis of our societies. However, much work lies ahead that goes beyond the AI Act itself. AI will push us to rethink the social contract at the heart of our democracies, our education models, labour markets, and the way we conduct warfare. The AI Act is a starting point for a new model of governance built around technology. We must now focus on putting this law into practice”.

The regulation is still subject to a final lawyer-linguist check and is expected to be finally adopted before the end of the legislature (through the so-called corrigendum procedure). The law also needs to be formally endorsed by the Council.

It will enter into force twenty days after its publication in the Official Journal, and be fully applicable 24 months after its entry into force, except for: bans on prohibited practices, which will apply six months after the entry into force date; codes of practice (nine months after entry into force); general-purpose AI rules including governance (12 months after entry into force); and obligations for high-risk systems (36 months).

The Artificial Intelligence Act responds directly to citizens’ proposals from the Conference on the Future of Europe (COFE), most concretely to proposal 12(10) on enhancing EU’s competitiveness in strategic sectors, proposal 33(5) on a safe and trustworthy society, including countering disinformation and ensuring humans are ultimately in control, proposal 35 on promoting digital innovation, (3) while ensuring human oversight and (8) trustworthy and responsible use of AI, setting safeguards and ensuring transparency, and proposal 37 (3) on using AI and digital tools to improve citizens’ access to information, including persons with disabilities.

TechBullion

Unlocking the Potential: How Artificial Intelligence Revolutionizes Robot Learning

Introduction

Innovation in the realm of robotics has always been at the forefront of technological advancement. However, with the integration of artificial intelligence (AI), this evolution has taken an exponential leap forward. Today, we delve into the transformative power of AI in propelling robot learning to new heights.

The Fusion of AI and Robotics: A Paradigm Shift

The convergence of AI and robotics marks a paradigm shift in how machines perceive, learn, and interact with the world around them. Unlike traditional programmed robots, AI-powered machines possess the ability to adapt, learn from experience, and continuously improve their performance.

Enhanced Perception and Sensing Capabilities

One of the key areas where AI has revolutionized robot learning is in perception and sensing. Through advanced algorithms and sensor technologies, AI-enabled robots can perceive and interpret their environment with unprecedented accuracy. From identifying objects and obstacles to understanding complex spatial relationships, these robots can navigate dynamic environments with ease.

Adaptive Learning and Autonomous Decision-Making

Another remarkable aspect of AI-driven robot learning is adaptive learning and autonomous decision-making. By leveraging machine learning algorithms, robots can analyze vast amounts of data in real time, allowing them to adapt their behavior and decision-making processes on the fly. This capability is particularly valuable in scenarios where robots must operate in unpredictable or rapidly changing conditions, such as disaster response or exploration missions.

Efficiency and Optimization in Task Execution

AI-powered robots excel in efficiency and optimization when it comes to task execution. Through continuous learning and refinement, these machines can streamline processes, minimize errors, and maximize productivity. Whether it’s in manufacturing, logistics, or service industries, AI-driven robots are reshaping the way tasks are performed, leading to significant cost savings and operational efficiencies.

Human-Robot Collaboration: A Synergistic Partnership

Contrary to the fear of robots replacing humans, AI has paved the way for a new era of human-robot collaboration. By augmenting human capabilities with robotic assistance, tasks can be completed faster, safer, and with greater precision. This collaborative approach not only enhances productivity but also creates new opportunities for innovation and creativity.

Challenges and Considerations in AI-Powered Robot Learning

While the potential of AI-powered robot learning is immense, it is not without its challenges and considerations. One such challenge is ensuring the ethical and responsible use of AI, particularly in sensitive areas such as healthcare and security. Additionally, there are concerns surrounding job displacement and the socio-economic implications of widespread automation. Addressing these challenges will require a concerted effort from researchers, policymakers, and industry stakeholders.

Looking Ahead: The Future of AI in Robotics

As we look to the future, the possibilities of AI in robotics are limitless. From personalized service robots to intelligent companions, the integration of AI promises to redefine our relationship with machines. However, realizing this vision will require continued investment in research, development, and education to ensure that AI-driven robotics benefits society as a whole.

In conclusion, the marriage of artificial intelligence and robotics represents a watershed moment in technological innovation. By harnessing the power of AI, we can unlock the full potential of robots, propelling them to new heights of learning, adaptability, and efficiency. As we navigate this transformative journey, it is essential to embrace the opportunities that AI presents while remaining mindful of the ethical and societal implications. Together, we can shape a future where AI-powered robots enhance our lives and inspire generations to come.

Frontiers in Robotics and AI

Augmented Reality Meets Artificial Intelligence in Robotics: A Systematic Review

Nikolaos Doulamis, National Technical University of Athens, Greece

Umair Rehman, University of Waterloo, Canada

Associated Data

The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author.

Recently, advancements in computational machinery have facilitated the integration of artificial intelligence (AI) into almost every field and industry. This fast-paced development in AI and sensing technologies has stirred an evolution in the realm of robotics. Concurrently, augmented reality (AR) applications are providing solutions to a myriad of robotics applications, such as demystifying robot motion intent and supporting intuitive control and feedback. In this paper, research papers combining the potentials of AI and AR in robotics over the last decade are presented and systematically reviewed. Four sources were used for data collection: Google Scholar, the Scopus database, the International Conference on Robotics and Automation (ICRA) 2020 proceedings, and the references and citations of all identified papers. A total of 29 papers were analyzed from two perspectives: a theme-based perspective showcasing the relation between AR and AI, and an application-based analysis highlighting how the robotics application was affected. These two sections are further categorized based on the type of robotics platform and the type of robotics application, respectively. We analyze the work done and highlight some of the prevailing limitations hindering the field. Results also explain how AR and AI can be combined to solve the model-mismatch paradigm by creating a closed feedback loop between the user and the robot. This forms a solid base for increasing the efficiency of the robotic application and enhancing the user’s situational awareness, safety, and acceptance of AI robots. Our findings affirm the promising future for robust integration of AR and AI in numerous robotic applications.

Introduction

Artificial intelligence (AI) is the science of empowering machines with human-like intelligence ( Nilsson, 2009 ). It is a broad branch of computer science that mimics human capabilities of functioning independently and intelligently ( Nilsson, 1998 ). Although AI concepts date back to the 1950s when Alan Turing proposed his famous Turing test ( Turing, 1950 ), its techniques and algorithms were abandoned for a while as the computational power needed was still insufficient. Recently, the advent of big data and the Internet of Things (IoT), supercomputers, and cheap accessible storage have paved the way for a long-awaited renaissance in artificial intelligence. Currently, research in AI is involved in many domains including robotics ( Le et al., 2018 ; Gonzalez-Billandon et al., 2019 ), natural language processing (NLP) ( Bouaziz et al., 2018 ; Mathews, 2019 ), and expert systems ( Livio and Hodhod, 2018 ; Nicolotti et al., 2019 ). It is becoming ubiquitous in almost every field that requires humans to perform intelligent tasks like detecting fraudulent transactions, diagnosing diseases, and driving cars on crowded streets.

Specifically, in the field of robotics, AI is optimizing a robot’s autonomy in planning tasks and interacting with the world. The AI robot offers a greater advantage over the conventional robot that can only apply pre-defined reflex actions ( Govers, 2018 ). AI robots can learn from experience, adapt to an environment, and make reasonable decisions based on their sensing capabilities. For example, research is now leveraging AI’s learning algorithms to make robots learn the best path to take for different cases ( Kim and Pineau, 2016 ; Singh and Thongam, 2019 ), NLP for an intuitive human-robot interaction ( Kahuttanaseth et al., 2018 ), and deep neural networks to develop an understanding of emotional intents in human-robot interactions (HRI) ( Chen et al., 2020a ; Chen et al., 2020b ). Computer vision is also another field of AI that has enhanced the perception and awareness of robots. It combines machine learning with image capture and analysis to support robot navigation and automatic inspection. This ability of a robot to possess self-awareness is facilitating the field of HRI ( Busch et al., 2017 ).

The field of robotics has also benefited from the rising technology of augmented reality (AR). AR expands a user’s physical world by augmenting his/her view with digital information ( Van Krevelen and Poelman, 2010 ). AR devices are used to support the augmented interface and are classified into eye-wear devices like head-mounted displays (HMD) and glasses, handheld devices like tablets and mobile phones, and spatial projectors. Two other extended reality (XR) technologies exist that we need to distinguish from AR, and they are virtual reality (VR) and mixed reality (MR). VR is a system that, compared to AR which augments information on a live view of the real world, simulates a 3D graphical environment totally different from the physical world, and enables a human to naturally and intuitively interact with it ( Tzafestas, 2006 ). MR combines AR and VR, meaning that it merges physical and virtual environments ( Milgram and Kishino, 1994 ). Recently, the research sector witnessed a booming activity of integrating augmented reality in supporting robotics applications ( Makhataeva and Varol, 2020 ). These applications include robot-assisted surgery (RAS) ( Pessaux et al., 2015 ; Dickey et al., 2016 ), navigation and teleoperation ( Dias et al., 2015 ; Papachristos and Alexis, 2016 ; Yew et al., 2017 ), socially assistive robots ( Čaić et al., 2020 ), and human-robot collaboration ( Gurevich et al., 2015 ; Walker et al., 2018 ; Makhataeva et al., 2019 ; Wang and Rau, 2019 ). AR has also revolutionized the concepts of human-robot interaction (HRI) by providing a user-friendly medium for perception, interaction, and information exchange ( De Tommaso et al., 2012 ).

The preceding discussion affirms that the benefits of combining AI and AR in robotics are manifold, and special attention should be given to such efforts. There are several review papers highlighting the integration of augmented reality into robotics from different perspectives, such as human-robot interaction ( Green et al., 2008 ; Williams et al., 2018 ), industrial robotics ( De Pace et al., 2020 ), robot-assisted surgery ( L. Qian et al., 2020 ), and others ( Makhataeva and Varol, 2020 ). Similarly, there exist papers addressing the potential of integrating artificial intelligence in robotics, as reviewed in Loh (2018) , De Pace et al. (2020) and Tussyadiah (2020) . A recent review ( Makhataeva and Varol, 2020 ) summarizes the work done at the intersection of AR and robotics, yet it only mentions how augmented reality has been used within the context of robotics and does not touch on the intelligence in the system from different perspectives as highlighted in this paper. Similarly, another systematic review ( Norouzi et al., 2019 ) presented the convergence of three technologies: augmented reality, intelligent virtual agents, and the Internet of Things (IoT). However, it did not focus on robotics as the main intelligent system and even excluded agents with physical manifestations such as humanoid robots. Consequently, this paper systematically reviews literature published over the past 10 years at the intersection of AI, AR, and robotics. The purpose of this review is to compile what has been previously done, analyze how augmented reality is supporting the integration of artificial intelligence in robotics and vice versa, and suggest prospective research opportunities. Ultimately, we contribute to future research by building a foundation on the current state of AR and AI in robotics, specifically addressing the following research questions:

  • 1) What is the current state of the field on research incorporating both AR and AI in Robotics?
  • 2) What are the various elements and disciplines of AR and AI used and how are they intertwined?
  • 3) What are some of the current applications that have benefited from the inclusion of AR and AI? And how were these applications affected?

To the best of our knowledge, this is the first literature review combining AR and AI in robotics where papers are systematically collected, reviewed, and analyzed. A categorical analysis is presented, where papers are classified based on which technology supports the other, i.e., AR supporting AI or vice versa, all under the hood of robotics. We also classify papers into their respective robotic applications (for example, grasping) and explain how each application was improved. Research questions 1 and 2 are answered in Results , and research question 3 is answered in Discussion .

The remainder of the paper is organized according to the following sections: Methods, which specifies the survey methodology adopted as well as inclusion and exclusion criteria, Results, which presents descriptive statistics and analysis on the total number of selected papers in this review (29 papers), Discussion, which presents an analysis on each paper from different perspectives, and finally Concluding Remarks, which highlights key findings and proposes future research.

This paper follows a systematic approach in collecting literature. We adopt the systematic approach set forth in Pickering and Byrne (2014) , which is composed of 15 steps as illustrated in Figure 1 .

Figure 1. The adopted systematic approach in this review paper.

Steps 1 and 2 were explicitly identified in the Introduction. This section outlines the keywords used (step 3) and the databases used (step 4).

Search Strategy and Data Sources

Regarding keywords, this review targets papers that combine augmented reality with artificial intelligence in robotics. The first source used was Google Scholar, denoted by G. Initially, we excluded the words surgery and education (search keys G1, G2, and G3) to narrow down the total number of output papers, since there are already several papers reviewing AI robots in surgical applications ( Loh, 2018 ; Andras et al., 2020 ; Bhandari et al., 2020 ) and AI in education ( Azhar et al., 2020 ; Chen et al., 2020a ; Chen et al., 2020b ). Then, search keys G4 and G5 (where we re-included the terms “surgery” and “education”) were used to cover a wider angle; they returned a large number of papers, of which we scrutinized only the first 35 pages. The second source of information was the Scopus database, denoted by S, upon which two search keys were used, S1 and S2, and the third was the ICRA 2020 proceedings. Finally, the references and citations of the corresponding selected outputs from these three sources were checked.

The time range of this review includes papers spanning the years 2010 to 2020. The process of paper collection for search keys G1, G2, G3, G4, S1, and S2 started on June 30 and ended on July 21, 2020. The G5 search key was explored between August 11 and August 20, 2020, and the ICRA 2020 proceedings were explored between August 20 and August 31, 2020.

Study Selection Criteria

The selection process was as follows. First, duplicates, patents, and non-English papers were excluded. Then, some papers were directly excluded by scanning their titles, while others were further evaluated by looking into their abstracts and keywords and downloading those that were relevant. Downloaded papers were then scanned by quickly going over their headers, sub-headers, figures, and conclusions. Starting from a total of 1,200, 329, and 1,483 papers from Google Scholar, the Scopus database, and the ICRA proceedings respectively, the selected papers were funneled down to 13, 8, and 3, respectively. After that, we looked into the references and citations of these 24 papers and selected a total of five additional papers. The inclusion and exclusion criteria were as follows:

Exclusion Criteria

  • Papers with non-English content
  • Duplicate papers
  • Patents

Inclusion Criteria

  • The application should directly involve a robot.
  • Artificial intelligence is involved in the robotics application. Although the terms artificial intelligence and machine learning are used interchangeably in this paper, most of the cited work is more accurately a machine learning application. Artificial intelligence remains the broader concept of machines acting with intelligence and thinking as humans, with machine learning being the subset of algorithms mainly concerned with developing models based on data in order to identify patterns and make decisions.
  • An augmented reality technology is utilized in the paper.

The process flow is also illustrated in Figure 2 according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines ( Moher et al., 2009 ).

Figure 2. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) chart.

A total of 29 papers were selected and individually examined by checking their abstracts and conclusions and analyzing their main content. This section presents how the collected literature was classified into categories and presents some descriptive statistics visualized through figures and tables.

Categorization

There are two parallel categorizations in this paper: a theme-based categorization and an application-based categorization. Initially, all papers were grouped into two clusters based on a theme-based grouping of how AR and AI serve each other in a certain robotics application. The two distinguished clusters were as follows: AR supports AI, and AI supports AR. Each of these clusters is further explained below along with the total number of papers per group.

AR Supports AI (18 Papers)

This cluster groups papers in which a certain augmented reality visualization facilitates the integration of artificial intelligence in robotics. An example is an augmented reality application which provides visual feedback that aids in AI robot performance testing.

AI Supports AR (11 Papers)

Papers in which the output of AI algorithms and neural networks support an accurate display of augmented reality markers and visualizations.

Another remarkable pattern was noted among the 29 papers in terms of the specific robotics application that this AR-AI alliance is serving. Consequently, a parallel categorization of the 29 reviewed articles was realized, and three clusters were distinguished as follows:

Learning (12 Papers)

A robot learns to achieve a certain task, and the task is visualized to the human using AR. This category combines papers on learning from demonstration (LFD) and learning to augment human performance.

Planning (8 Papers)

A robot intelligently plans a certain path, task, or grasp, and the user can visualize robot information and feedback through AR.

Perception (9 Papers)

A robot depends on AI vision algorithms to localize itself or uses object detection and recognition to perceive the environment. AR serves here in identifying the robot’s intent.

Statistical Data

For the sake of analyzing historical and geographical aspects of the reviewed topic, Figures 3 and 4 present the yearly and regional distribution of reviewed papers, respectively. Historically, the number of publications integrating AR and AI in robotics applications increased significantly between 2010 and 2020 (the 2020 count already matched that of 2019 even though the year had not yet ended), demonstrating the growing interest in combining the capabilities of AR and AI to solve many challenges in robotics applications. Regionally, the United States is the leading country in terms of the number of published articles, followed by Germany. Note that we only considered the country of the first author for each paper.

Figure 3. The growing rate of published papers addressing our target topic over time.

Figure 4. The distribution of reviewed papers over their countries of origin.

Additional quantitative data are detailed in Table 1 . For each article, the table identifies five types of information: the AR technology and platform, the type of robot platform, the AI algorithm used, and the cluster (from each of the two categorizations) to which it belongs. Overall, the most commonly used AR component is the HMD (48% of papers), mainly Microsoft HoloLens ( Microsoft HoloLens, 2020 ), Oculus Rift ( Oculus, 2021 ), or custom-designed headsets. This is followed by desktop-based monitors (28%) and AR applications on handheld tablets and mobile phones (21%). Projection-based spatial AR was the least implemented (3%), which can be explained by the added complexity of the setup and the lack of mobility. The Unity3D game engine was the most commonly used for developing AR applications and visualizations, in comparison to Unreal Engine. Other options were using the Tango AR features supported by the Google Tango tablet or creating applications from scratch using the OpenGL graphics library. Regarding the type of robot used, aerial robots, such as UAVs and drones, were the least utilized (13%) in comparison to mobile robots (48%) and robotic arms (39%). Deep neural networks were the most investigated in the literature (52%), along with other state-of-the-art machine learning algorithms. Furthermore, the majority of papers were involved in creating visualizations that support AI integration in robotics, rather than implementing AI to enhance the augmented reality application in robotics.

Table 1. Descriptive elements on the type of AR component, robotics platform, AI component, and categorization for all reviewed papers.

Another set of distinctive features were extracted through analyzing three attributes, mainly the type of robot platform used, the type of AR technology employed, and the nature of the AI method performed, for each of the three robotics applications. The results are depicted in Figure 5 . The majority of papers (around 70%) that fall under the “Learning category” were using robot arms and manipulators as their robot platform. This is mainly because the Learning category reviews the learning from demonstration application, which is historically more common for industrial robotics applications in which a user demonstrates the trajectory of the end effector (EE) of a robot arm ( Billard et al., 2008 ; Mylonas et al., 2013 ; Zhu and Hu, 2018 ) than in the context of mobile robots ( Simões et al., 2020 ) or aerial robots ( Benbihi et al., 2019 ). On the other hand, around 70% of reviewed papers targeting robot “Perception” applications were using mobile robots. The reason is that vision-based localization algorithms are usually more ubiquitous for mobile robots ( Bonin-Font et al., 2008 ) compared to the other two platforms. The three robot platforms were almost equally distributed in the “Planning” category with a relatively higher prevalence of mobile robots.

Figure 5. The quantity distribution of three factors (robot platform, AR technology, and AI method) over the three robot applications: Learning, Planning, and Perception.

Regarding the type of AR hardware/technology used, it was noted that the HMD was the most commonly used for all robotics applications covered, followed by the tablet or the desktop-based monitor. Spatial AR, or projection-based AR, was the least commonly used given its rigidness in terms of mobility and setup. As for the used AI, there was a variety of methods used, including regression, support vector machine (SVM), and Q-learning. However, neural networks, including YOLO and SSD deep neural networks, were the more commonly used across the three robotics applications. Neural networks were utilized in 42, 25, and 80% of the reviewed papers in the learning, planning, and perception categories, respectively.

Augmented reality technology has created a new paradigm for human-robot interaction. By enabling a human-friendly visualization of how a robot is perceiving its environment, an improved human-in-the-loop model can be achieved ( Sidaoui et al., 2019 ; Gong et al., 2017 ). The use of AR technology for robotics has been elevated by the aid of several tools, mainly Vuforia Engine ( Patel et al., 2019 ; Makita et al., 2021 ; Comes et al., 2021 ), RosSharp ( Kästner and Lambrecht, 2019 ; Rosen et al., 2019 ; Qiu et al., 2021 ), ARCore ( Zhang et al., 2019 ; Chacko et al., 2020 ; Mallik and Kapila, 2020 ), and ARKit ( Feigl et al., 2020 ; McHenry et al., 2021 ). ARCore and ARKit have enhanced the AR experience for motion tracking, environmental understanding, and light estimation, among other features. RosSharp provides open-source software for communication between ROS and Unity, which has greatly facilitated the use of AR for robot applications and provides useful, easy-access functionalities such as publishing and subscribing to topics and transferring URDF files.
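
As a small illustration of the publish/subscribe pattern these tools rely on, the ROS side of such a bridge might look like the following rospy sketch; the topic name and update rate are assumptions for the example, and the Unity side (e.g. via RosSharp) is omitted.

```python
# ROS-side sketch: publish the robot pose for an AR client to visualize.
# Topic name, frame, and rate are illustrative, not from any cited paper.
import rospy
from geometry_msgs.msg import PoseStamped

def publish_pose():
    rospy.init_node("ar_pose_publisher")
    pub = rospy.Publisher("/robot/pose", PoseStamped, queue_size=10)
    rate = rospy.Rate(30)  # 30 Hz keeps the AR overlay reasonably smooth
    while not rospy.is_shutdown():
        msg = PoseStamped()
        msg.header.stamp = rospy.Time.now()
        msg.header.frame_id = "map"
        # ... fill msg.pose from the robot's localization here ...
        pub.publish(msg)
        rate.sleep()

if __name__ == "__main__":
    publish_pose()
```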

In this section, 29 papers are analyzed in two parallel categorizations as explained in Results : a theme-based analysis capturing the relation between AR and AI in a robotic context (AR supports AI, AI supports AR) and an application-based analysis focusing on how the robotic application itself was improved. We have also compiled a qualitative table ( Table 2 ) highlighting several important aspects of each paper. The highlighted aspects include the type of robot used, the nature of the experiment and the number of human subjects, the human-robot interaction aspect, and the advantages, disadvantages, and limitations of integrating AR and AI.

Table 2. Qualitative information and analysis of each paper.

Theme-Based Analysis

The two themes highlighted here depend on the nature of the AR-AI alliance. Consequently, 18 papers in which an augmented reality technology is facilitating the integration of AI to robotics are reviewed under the “AR supports AI” theme, and 11 papers in which AI has been integrated to enhance the AR experience for a certain robotics application are reviewed under the “AI supports AR” theme.

AR Supports AI

In this cluster, augmented reality is used as an interface to facilitate AI, such as visualizing the output of AI algorithms in real-time. Papers are grouped depending on the type of robotic platform used: mobile robots, robotic arms, or aerial robots. Some papers contain both and are categorized based on the more relevant type.

Mobile Robots

An AR interface was developed in El Hafi et al. (2020) for an intelligent robotic system to improve the interaction of service robots with non-technical employees and customers in a retail store. The robot performs unsupervised learning to autonomously form multimodal place categorization from a user’s language command inputs and associates them to spatial concepts. The interface provided by an HMD enables the employee to monitor the robot’s training in real-time and confirm its AI status.

After investigating possible interfaces that allow user-friendly interactive teaching of a robot’s virtual borders ( Sprute et al., 2019a ), the authors in Sprute et al. (2019b) used a Google-Tango tablet to develop an AR application which prompts the user to specify virtual points on a live video of the environment from the tablet’s camera. The used system incorporates a Learning and Support Module which learns from previous user-interactions and supports users through recommending new virtual borders. The borders will be augmented on the live stream and the user can directly select and integrate them to the Occupancy Grid Map (OGM).
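
In spirit, integrating such user-drawn borders into an occupancy grid map amounts to marking the selected cells as occupied so that planners route around them; the following sketch illustrates the idea with assumed grid indexing and cell values.

```python
# Illustrative sketch: fold user-drawn virtual borders into an occupancy
# grid map (OGM). Cell values follow the common ROS convention; the grid
# size and border cells are invented for the example.
import numpy as np

FREE, OCCUPIED = 0, 100

def add_virtual_border(grid: np.ndarray, cells: list) -> np.ndarray:
    """Mark user-selected (row, col) cells as occupied."""
    bordered = grid.copy()
    for row, col in cells:
        bordered[row, col] = OCCUPIED
    return bordered

grid = np.full((50, 50), FREE, dtype=np.int8)
border = [(25, c) for c in range(10, 40)]  # a straight border across a room
grid = add_virtual_border(grid, border)
```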

An augmented reality framework was proposed in Muvva et al. (2017) to provide a cost-effective medium for teaching a robot an optimal policy using Q-learning. The authors used ODG R-7 glasses to augment virtual objects at locations specified by fiducial markers. A CMU Pixy sensor was used to detect both physical and virtual objects.
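
For reference, the tabular Q-learning update at the heart of such a setup is compact; the state and action spaces below are generic placeholders rather than details from the paper.

```python
# Minimal tabular Q-learning update (generic placeholders, not paper details).
import numpy as np

n_states, n_actions = 25, 4          # e.g. a 5x5 grid world with 4 moves
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.9              # learning rate, discount factor

def q_update(s: int, a: int, reward: float, s_next: int) -> None:
    # Bellman backup toward the best estimated next-state value.
    Q[s, a] += alpha * (reward + gamma * Q[s_next].max() - Q[s, a])

# One illustrative transition: action 1 from state 0 leads to state 1.
q_update(s=0, a=1, reward=0.0, s_next=1)
```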

An AR mobile application was developed in Tay et al. (n.d.) that can inform the user of specific motion abnormalities of a Turtlebot, predict their causes, and indicate future failure. This information will be augmented on the live video of a mobile phone and sent to the user via email. The system uses the robot’s IMU data to train a gradient boosting algorithm which classifies the state of the motor into fault conditions indicating the level of balancing of the robot (tilting). This system decreases the downtime of the robot and the time spent on troubleshooting.

The authors in Corotan and Irgen-Gioro (2019) investigated the capabilities of augmented reality (ARCore) as an all-in-one solution for localization, indoor routing, and obstacle detection. The application runs on a Google Pixel smartphone, which acts as both the controller (through a three-view user interface) and the sensor. Using its onboard localization features, an optimal path is planned from a starting position to an end position based on a Q-learning algorithm.

Omidshafiei et al. (2016) implemented an AR environment that provides visual feedback of hidden information to assist users in hardware prototyping and testing of learning and planning algorithms. In this framework, a ceiling-mounted projection system augments the physical environment in the laboratory with specific mission-related features, such as visualizing the state observation probabilities. In this system, the tracking of mobile and aerial robots is based on motion-capture cameras. Similarly, Hastie et al. (2018) presented the MIRIAM interface developed by the ORCA Hub: a user-centered interface that supports on-demand explainable AI through natural language processing and AR visualizations.

Robotic Arms

An Android mobile AR application was developed in Dias et al. (2020) as a training interface for a multi-robot system to perform a task variant. The tablet acts as a data collection interface based on the captured input demonstrations of several users. The application visualizes detected robots (using AR markers) and enables each user to construct a toy building of their choice through sequential tasks. Deep Q-learning ( Hester et al., 2017 ) has been employed to learn from the sequence of user demonstrations, predict valid variants for the given complex task, and achieve this task through a team of robots. The accuracy achieved in task prediction was around 80%.

The authors in Warrier and Devasia (2018) implemented a Complex Gaussian Process Regression model to learn the intent of a novice user during his/her teaching of the End Effector (EE) position trajectory. A Kinect camera captures the user’s motion, and an AR HMD visualizes the desired trajectory versus the demonstrated trajectory, which allows the operator to estimate the error (i.e., difference between the two trajectories) and correct accordingly. This approach was tested by a single operator and showed a 20% decrease in the tracking errors of demonstrations compared to manual tracking.
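
To illustrate the flavor of this approach, a plain Gaussian process regression over a demonstrated trajectory can be fit in a few lines; note the paper uses a Complex Gaussian Process model, so the sketch below (with synthetic data) only conveys the general idea.

```python
# Sketch: smooth a noisy demonstrated end-effector coordinate with GPR.
# Plain GPR stands in for the paper's Complex Gaussian Process model,
# and the demonstration data is synthetic.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

t = np.linspace(0, 1, 20).reshape(-1, 1)  # time along the demonstration
ee_x = np.sin(2 * np.pi * t).ravel() + 0.05 * np.random.randn(20)  # noisy EE x

gpr = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-2)
gpr.fit(t, ee_x)

# The predictive mean smooths the novice demonstration; the standard
# deviation flags where the intent estimate is least certain.
mean, std = gpr.predict(t, return_std=True)
```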

AR solutions were investigated in Ong et al. (2010) to correct for model mismatches in programming by demonstration (PBD), where the authors used an HMD as a feedback and data collection interface for robot path planning in an unknown environment. The user moves a virtual robot (a probe with AR markers) along a desired 3D curve with a consistent orientation, while evaluating the drawn curve using AR. The collected data points are then fed to a three-stage curve learning method, which increased the accuracy of the desired curve. The system was further enhanced in Fang et al. (2013) by considering robot dynamics, namely the end effector (EE) orientation. Once the output curve is generated, a collision-free volume (CFV) is displayed and augmented on a desktop screen, and the user can select control points for EE orientation. Some limitations were found in the proposed interface, such as the difficulty of aligning the virtual robot with the interactive tool, occluding markers or moving them out of the camera’s view, and selecting inclination angles that are not within range, causing the EE to disappear from the display. Consequently, the AR visual cues were further developed for robust HRI in Fang et al. (2014) , such as the use of virtual cones to define the orientation range of the EE, colors to distinguish dataset points, control points, and points outside the range of the CFV, and an augmented path rendered by a set of selected control points.

A HoloLens HMD was also used in Liu et al. (2018) as an AR interface in the teaching process of interpretable knowledge to a 7-DoF Baxter robot. The full tree of robot TF coordinate frames and latent force data were augmented on the physical robot. The display also lets the user turn on the robot’s learned knowledge, represented by a “Temporal And-Or graph,” which presents live feedback of the current knowledge and the future states of the robot.

A semi-automatic object labeling method was developed in De Gregorio et al. (2020) based on an AR pen and a 2D tracking camera system mounted on the arm. In this method, a user first outlines objects with virtual boxes using an AR pen (covered with markers) and a robot acquires different camera poses through scanning the environment. These images are used to augment bounding boxes on a GUI which enables the user to refine them.

The authors in Gadre (2018) implemented a training interface facilitated by Microsoft HoloLens for learning from demonstration. The user can control the EE position by clicking commands on a transparent sphere augmented on the EE and use voice commands to start and end the recording of the demonstration. By clicking on the sphere at a specific EE position, the user stores it as a critical point (CP), and the system augments a transparent hologram of the robot at that position as a visual reminder of all saved CPs. The saved CPs are then used to learn a Dynamic Movement Primitive (DMP).

A spatial programming by demonstration (PBD) system called GhostAR was developed in Cao et al. (2019) , which captures the real-time motion of the human, feeds it to a dynamic time warping (DTW) algorithm that maps it to an authored human motion, and outputs corresponding robot actions in a human-led robot-assist scenario. The captured human motions and the corresponding robot actions are saved and visualized to the user, who can observe the complete demonstration with saved AR ghosts of both the human and the robot and interactively edit robot actions to clarify user intent.

The authors in Zhang et al. (2020) created the Dex-Net deep grasp planner, a distributed open-source pipeline that can predict 100 potential grasps from an object’s depth image based on a pre-trained Grasp Quality CNN. The grasp with the highest quality value is overlaid on the object’s depth map and visualized on the object through an AR application interface provided by ARKit. The system was able to produce optimal grasps in cases where the top-down approach does not capture the object’s complex geometry.

An AR assistive-grasping system that can be used by impaired individuals in cluttered scenes was implemented in Weisz et al. (2017). The system is driven by a surface electromyography (sEMG) input device (a facial muscle signal) and can be evaluated using a desktop-based augmented reality display of the grasping process. The interface allows visualization of the planned grasp. The probabilistic roadmap planner (Kavraki et al., 1996) was used to verify the reachability of an object, and a k-nearest-neighbor (KNN) classifier classifies objects as reachable or unreachable.

The authors in Chakraborti et al. (2017) proposed combining AR technology with electroencephalographic (EEG) signals to enhance human-robot collaboration, specifically in shared workspaces. Two AR interaction modalities were implemented via an HMD: the first facilitates human-in-the-loop task planning, while the other enhances situational awareness. By observing emotions inferred from EEG signals, the robot can be trained through reinforcement learning to understand the user's preferences and learn human-aware task planning.

Aerial Robots

A teleoperation system was developed in Zein et al. (2020) that recognizes specific desired motions from the user's joystick input and accordingly suggests auto-completing the predicted motion through an augmented user interface. The proposed system was tested in Gazebo using a simulated Parrot AR.Drone 2.0 and outperformed manual steering by 14.8, 16.4, and 7.7% on the average distance, time, and Hausdorff metrics, respectively.

The authors in Bentz et al. (2019) implemented a system in which an aerial collaborative robot feeds head-motion data from a human performing a multitasking job to an expectation-maximization algorithm that learns which environment views have the highest visual interest to the user. The co-robot is then directed to capture these relevant views with its camera, and an AR HMD supplements the human's field of view with these views when needed.

Overall, the advantages of augmented reality in facilitating the integration of AI into robotics applications are manifold. AR technologies provide a user-friendly and intuitive medium to visualize the learning process and the robot's live learned state. They also give the robot a medium to share its present and future intent, such as its perceived knowledge and the actions planned by its AI algorithms. Although AR HMDs - such as the Microsoft HoloLens and Oculus Rift - are the most commonly used devices for intuitive HRI, they still have limitations, such as a narrow field of view (FOV) and impractical weight. Other AR interfaces include mobile phones, tablets, and desktop displays. The latter are more practical in simulations; otherwise, the user must split attention between the actual robot and the augmented display. Tablets and mobile phones are generally more intuitive but impractical in situations where the user needs both hands. Spatial AR, also known as projection-based AR, is less used due to its mobility restrictions.

AI Supports AR

In this cluster, AI contributes to a more accurate and reliable augmented reality application or interface, for example by applying deep learning to detect obstacles in the robot's path. Papers are again grouped by the type of robotic platform used.

The authors in Ghiringhelli et al. (2014) implemented an AR overlay on the camera view of a multi-robot system. The system supports three types of information: textual, symbolic, and spatially situated. While the first two reveal insights about the internal state of each robot without considering its orientation or camera perspective, spatially situated information depends on how the robot perceives its surrounding environment and is augmented on each robot in its own frame of reference. Properly augmenting this information depends on a visual tracking algorithm that identifies robots from the blinking code of an onboard RGB LED.

In Wang et al. (2018), the authors used deep learning to obtain the location of a target in the robot's view. The robot first runs simultaneous localization and mapping (SLAM) to localize itself and map the place in an urban search and rescue scenario. Once the robot detects a target in the area, an AR marker is placed at its global coordinates and displayed to the user on the augmented remote screen. Even when the detected target is out of view, the marker's location is updated according to its position relative to the robot.

The authors in Kastner et al. (2020) developed a markerless calibration method between a HoloLens HMD and a mobile robot. The point-cloud data acquired from the 3D depth sensor of the AR device are fed into a modified neural network based on VoteNet. Although the approach was feasible in terms of accurately localizing the robot and augmenting it with a 3D bounding box, the intensive live processing of point-cloud data was very slow: the user had to stay still for two seconds while the neural network processed the incoming data, which can be impractical and lead to a poor user experience.

Alternatively, Kästner et al. (2020) investigated using the 2D RGB data provided by the HoloLens instead, which is faster to process than 3D data and can be applied to any AR device. An SSPE neural network was deployed to localize the six-DoF pose of the robot, and the resulting bounding boxes are augmented for the user, who can evaluate the live training process. This method is around 3% less accurate than the depth-based one but almost 97% faster.

The authors in Puljiz et al. (2019) reviewed the referencing and object-detection methods used in robotics in general and the referencing methods currently used between a robot and an HMD in particular. Based on this, the authors proposed three referencing algorithms for this domain: Semi-Automatic One Shot, Automatic One Shot, and Automatic Continuous. While trials of the proposed automatic methods (based on neural networks) are still in their infancy, a detailed implementation of Semi-Automatic referencing (ICP and Super4PCS algorithms) was tested on a KUKA KR-5 robot. With minimal user input - positioning a cube (a seed hologram) on the base of the robot and rotating its z-axis toward the robot's front - the referenced robot is augmented on the actual one via the Microsoft HoloLens display.

An AR teleoperation interface for a KUKA lightweight robot was implemented in Gradmann et al. (2018) using a Google Tango tablet. The interface allows the user to change the robot's joint configuration, move the tool center point, and grasp and place objects. The application provides a preview of the robot's future location by augmenting a corresponding virtual robot in the new joint configuration. Object detection uses the Tango's built-in depth and RGB cameras and is based on the DBSCAN algorithm.

The authors in Chu et al. (2008) used a Tongue Drive System as input to an assistive grasping system facilitated through an AR interface. The system implements the YOLO neural network (Redmon et al., 2016) for object detection and a deep grasp algorithm (Chu and Vela, 2018) for detecting the graspable locations of each object. This information (bounding boxes and grasp lines) is then augmented on the objects within the user's FOV. Furthermore, a virtual menu presents the user with the robot affordances that can be performed.

A teleoperation surveillance system composed of an unmanned ground vehicle (UGV) and an unmanned aerial vehicle (UAV) was proposed in Sawarkar et al. (2016) for hostile environments. The IMU measurements of a VR goggle control the rotation of a camera mounted on each vehicle. The live video stream is processed by a CNN to detect individuals and estimate the probability that each is a terrorist. This information is then augmented for the user through the goggle.

As implied in the literature, artificial intelligence techniques are a strong means to robust visualization and improved user experience. Traditional techniques for augmenting information on objects or targets mainly rely on fiducial AR markers, which are impractical in new environments such as urban search and rescue (USAR) scenarios. On one hand, deep learning can improve the robot's perception of its environment so that objects are detected and the related information is properly augmented on each. On the other hand, it can be used to localize the robot itself and reveal information during live operation. A key consideration for these systems is their processing requirements versus the capabilities of current hardware.

Application-Based Analysis

This section focuses on the areas in which AR and AI were applied. In other words, we explain how the challenges of a given robotics application - such as learning from demonstration and robot localization - were addressed by leveraging resources from augmented reality and artificial intelligence. We divide this into three main headings: Learning (12 papers), Planning (8 papers), and Perception (9 papers). Tables 3, 4, and 5 summarize the advantages as well as the disadvantages and limitations of each method in the three subheadings, respectively.

The advantages as well as the disadvantages and limitations of each method in the Learning sub-heading.

The advantages as well as the disadvantages and limitations of each method in the Planning sub-heading.

The advantages as well as the disadvantages and limitations of each method in the Perception sub-heading.

Learning

In general terms, a robot is said to learn from its environment or from the human if it can develop novel skills from past experience and adapt to the situation at hand. Based on the collected literature, we divide the scope of learning into two basic paradigms: learning from demonstration and learning to augment human performance.

Learning From Demonstration

Robot learning from demonstration (LFD) is described as the ability of a robot to learn a policy - a mapping between the robot's world state and the needed actions - by utilizing a dataset of user-demonstrated behavior (Argall et al., 2009). This dataset is called the training dataset, and it is formally composed of pairs of observations and actions. The training channel is therefore a bottleneck in such applications, and this is where augmented reality comes in very handy. AR interfaces can act as a means of demonstrating the required behavior and, more importantly, improve the overall process by demystifying user intent. The user can intuitively understand the "robot intent" (i.e., how the robot is interpreting his/her demonstration). In turn, AI can be used for the robot to learn the "user intent" (i.e., understand what the user wants the robot to perform and adapt accordingly) and visualize this intent through AR. The following analysis clarifies this within the context of LFD.

In Ong et al. (2010) and Fang et al. (2013, 2014), data points of the demonstrated trajectory (of a virtual robot) are collected, edited, and visualized through an HMD/GUI, allowing the user to intuitively clarify his/her intended trajectory. These demonstrations are first parameterized using a Piecewise Linear Parameterization (PLP) algorithm, then fed to a Bayesian neural network (BNN), and finally reparameterized. The authors compared error metrics and demonstrated that the proposed three-stage curve learning method (PLP, BNN, and reparameterization) improved the accuracy of the output curve considerably faster than the basic approach. Similarly, the authors in Gadre (2018) used the Microsoft HoloLens as a data-collection interface for demonstrating a desired curve to a real Baxter robot. The interface allows the user to interactively control a teleoperation sphere augmented on the robot EE. The environment is modeled as a Markov decision process, and the agent (robot) learns a Dynamic Movement Primitive based on the user-defined critical points; data from demonstrations were processed through a least-squares function. Although this methodology provides an intuitive interface for collecting training data, it was prone to errors because the real robot and the hologram did not always line up, causing inaccurate representation of locations. Furthermore, the system was only tested by a single expert demonstrator.
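
For concreteness, the sketch below shows the core machinery of a 1-D discrete DMP of the kind learned from such demonstrations: a forcing term is regressed from a demonstrated trajectory onto phase-driven radial basis functions and then replayed by integrating the transformation system. This is a minimal illustration with assumed gains (alpha, beta, alpha_x) and a unit time constant, not the implementation used in Gadre (2018).

```python
import numpy as np

def fit_dmp(y_demo, dt, n_basis=20, alpha=25.0, beta=6.25, alpha_x=3.0):
    """Fit a 1-D discrete DMP: compute the forcing term that would reproduce
    the demo, then regress it onto phase-driven radial basis functions."""
    y0, g = y_demo[0], y_demo[-1]
    yd = np.gradient(y_demo, dt)                     # demo velocity
    ydd = np.gradient(yd, dt)                        # demo acceleration
    t = np.arange(len(y_demo)) * dt
    x = np.exp(-alpha_x * t / t[-1])                 # canonical phase, 1 -> 0
    f_target = ydd - alpha * (beta * (g - y_demo) - yd)
    c = np.exp(-alpha_x * np.linspace(0, 1, n_basis))    # RBF centers in phase
    h = n_basis ** 1.5 / c                               # RBF widths
    psi = np.exp(-h * (x[:, None] - c) ** 2)             # basis activations
    # Per-basis weighted least squares of f_target against the phase variable
    w = np.array([(x * psi[:, i] * f_target).sum() /
                  ((x ** 2 * psi[:, i]).sum() + 1e-12) for i in range(n_basis)])
    return w, c, h, y0, g, t[-1]

def rollout_dmp(w, c, h, y0, g, T, dt, alpha=25.0, beta=6.25, alpha_x=3.0):
    """Replay the learned primitive by Euler-integrating the DMP dynamics."""
    y, yd, x, traj = y0, 0.0, 1.0, []
    for _ in np.arange(0.0, T, dt):
        psi = np.exp(-h * (x - c) ** 2)
        f = (psi * w).sum() / (psi.sum() + 1e-12) * x    # learned forcing term
        yd += (alpha * (beta * (g - y) - yd) + f) * dt   # transformation system
        y += yd * dt
        x += -alpha_x * x * dt                           # canonical system
        traj.append(y)
    return np.array(traj)

# demo = np.sin(np.linspace(0, np.pi, 200))              # toy demonstration
# params = fit_dmp(demo, 0.005); repro = rollout_dmp(*params, dt=0.005)
```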

In Warrier and Devasia (2018), the authors trained a kernel-based regression model to predict the desired EE trajectory from a database of human-motor dynamics. By observing the human-motor actions collected with a Microsoft Kinect camera, the model can infer the user's intended trajectory. A single trial allows the robot to infer a new desired trajectory, which is then visualized to the user through the HoloLens against the actually demonstrated trajectory. This allows the user to spatially correct the error by moving their hand (tracked using the Skeleton Tracking routine) to minimize the distance between the demonstrated and desired trajectories. Alternatively, the authors in Liu et al. (2018) captured demonstrations by tracking hand-object interactions with a LeapMotion sensor. After manually segmenting the captured data into groups of atomic actions (such as pinch, twist, and pull), the data are used to train a modified version of the unsupervised learning algorithm ADIOS (Automatic Distillation of Structure). This induces a Temporal And-Or Graph (AOG), a stochastic structural model that provides a hierarchical representation of entities. The AR interface then allows the user to interactively guide the robot without any physical interaction, for example by dragging the hologram of the virtual robot to a new pose.

In Cao et al. (2019), the human motion is captured through the AR elements (an Oculus Rift and two Oculus Touch controllers) and saved as ghost holograms. Dynamic time warping is used to infer the human motion in real time from a previously compiled library of authored human motions. The workflow of the proposed system consists of five modes: the Human Authoring Mode, in which demonstrations are recorded; the Robot Authoring Mode, in which the user interactively authors the collaborative robot task; the Action Mode, in which the user performs the new collaborative task; and the Observation and Preview Modes, for visualizing saved holograms and an animation of the whole demonstration.
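
A minimal sketch of the DTW step is shown below: a captured motion is mapped to the closest authored motion by dynamic-time-warping distance. The feature representation (raw pose samples) and the library structure are assumptions for illustration; GhostAR's actual implementation details differ.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two motion traces,
    each an (n, d) array of poses sampled over time."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def match_motion(live_motion, authored_library):
    """Map a captured human motion to the closest authored demonstration,
    so the corresponding robot action can be triggered."""
    names = list(authored_library)
    dists = [dtw_distance(live_motion, authored_library[k]) for k in names]
    return names[int(np.argmin(dists))]
```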

A tablet was used in Dias et al. (2020) for data collection, prompting the user to construct a toy building by controlling a multi-robot system consisting of two mobile robots that carry blocks of different types and one robot arm for pick and place; a grid of state cells represents the workspace. Given that the user can select among 135 possible actions to construct the toy, the application stores this data to train a DNN model. The model computes the posterior probability of the uncertain action (how the user is building the structure), predicting the move with the highest probability given the current state of the occupancy grid. Although the model performed successful task variants in 80% of the trials, the authors indicated that further work is needed to improve the prediction of sequential actions and to investigate more complex tasks.

Learning to Augment Human Performance

Machine learning opens a great avenue for improving the quality and efficiency of tasks performed by humans, such as maintenance and troubleshooting, multitasking work, or even teleoperation. Here, AI is used to interpret data and provide suggestions that augment (improve) human performance of the task at hand. In the following, we analyze the literature from this perspective, focusing on how each application was improved.

Multitasking is improved in Bentz et al. (2019), where data from an HMD are fit to a model that identifies views of interest to the human, directs an aerial co-robot to capture these views, and augments them onto the user's display. The input data are head poses collected with a VICON motion capture system. A function modeled as a mixture of Gaussians receives this data and estimates the human's visual interest via expectation maximization (EM). Although the average time to complete the primary task increased by around 10–16 s, the head motions recorded throughout the experiment were reduced by around 0.47 s per subject.
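
The visual-interest model can be illustrated by fitting a mixture of Gaussians with EM, as sketched below using scikit-learn; the gaze features (yaw, pitch), the synthetic samples, and the component count are hypothetical stand-ins for the paper's setup.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Hypothetical head-gaze samples (yaw, pitch in radians) logged from the HMD;
# directions the user checks repeatedly form dense clusters in this space.
gaze = np.vstack([rng.normal([0.1, -0.2], 0.05, (200, 2)),
                  rng.normal([1.2, 0.4], 0.05, (120, 2))])

# EM fits a mixture of Gaussians; each component is a candidate "view of
# interest", weighted by how often the user's gaze falls there.
gmm = GaussianMixture(n_components=2, random_state=0).fit(gaze)
views_of_interest = gmm.means_     # directions for the aerial co-robot to film
importance = gmm.weights_          # relative visual interest of each view
print(views_of_interest, importance)
```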

In Tay et al. (n.d.), the authors investigated two machine learning models trained on IMU sensor data of a Turtlebot to predict possible motor failures. SAS Visual Data Modelling and Machine Learning (VDMML) was used to test whether a random forest model or gradient boosting would better track the balance (tilting) of the robot. Gradient boosting was chosen as it showed a lower average squared prediction error, with 315 generated decision trees and a maximum leaf size of 426.
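
Since the paper's model selection was done in SAS VDMML, the sketch below reproduces the same comparison with scikit-learn on synthetic IMU-like data; the feature layout, target, and data are hypothetical, and only the selection-by-squared-error idea mirrors the paper.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Hypothetical IMU log: rows hold accelerometer/gyro features; the target is
# the robot's tilt a few steps ahead (a precursor of motor failure).
rng = np.random.default_rng(42)
X = rng.normal(size=(2000, 6))                          # 6 IMU channels
y = 0.8 * X[:, 0] + np.sin(X[:, 3]) + rng.normal(0, 0.1, 2000)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for model in (RandomForestRegressor(random_state=0),
              GradientBoostingRegressor(random_state=0)):
    mse = mean_squared_error(y_te, model.fit(X_tr, y_tr).predict(X_te))
    print(f"{type(model).__name__}: average squared error = {mse:.4f}")
```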

An "Autocomplete" framework was proposed in Zein et al. (2020) to support novice users in teleoperating complex systems such as drones. The system takes the human's joystick input, predicts the actually intended teleoperation command, and shares it with the user through an augmented reality interface. The model is an SVM trained on 794 motion examples to classify the input motion as one of a library of motion primitives, currently lines, arcs, 3D helices, and sine motions.
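
A rough sketch of such a primitive classifier is shown below; the feature extraction, the synthetic trajectories, and the trajectory format are assumptions made for illustration, since only the classifier type and the primitive library are given in the paper.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

PRIMITIVES = ["line", "arc", "helix", "sine"]   # library from the paper

def features(traj):
    """Fixed-length features for an (n, 3) joystick trajectory: per-axis
    velocity statistics plus net displacement (a hypothetical choice)."""
    vel = np.diff(traj, axis=0)
    return np.concatenate([vel.mean(0), vel.std(0), traj[-1] - traj[0]])

def synth(kind, rng):
    """Crude synthetic stand-ins for demonstrated motions."""
    t = np.linspace(0, 1, 50)
    z = np.zeros_like(t)
    path = {"line":  np.c_[t, t, z],
            "arc":   np.c_[np.cos(np.pi * t), np.sin(np.pi * t), z],
            "helix": np.c_[np.cos(4 * np.pi * t), np.sin(4 * np.pi * t), t],
            "sine":  np.c_[t, np.sin(4 * np.pi * t), z]}[kind]
    return path + rng.normal(scale=0.02, size=path.shape)

rng = np.random.default_rng(0)
X = [features(synth(k, rng)) for k in PRIMITIVES for _ in range(30)]
y = [k for k in PRIMITIVES for _ in range(30)]
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, y)
print(clf.predict([features(synth("helix", rng))]))    # -> ['helix']
```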

In this section, two learning paradigms were discussed: robot learning from demonstration (LFD) and robot learning to augment human performance. The presented literature affirms that AR and AI will be extensively integrated in these two robotics applications in the near future. In the former, AR serves as a user-friendly training interface and holds great potential for swarm mobile robotics, as multiple users can more easily train a multi-robot system. In the context of manipulators and robotic arms, visualizing demonstrations in real time allows the user to understand trajectories, correct errors, and introduce new constraints to the system. In the latter, there is a growing avenue to employ AI in robotic applications that understand user instructions (for the task at hand) and employ AR to visualize what the robot has understood and interactively ask the user for feedback. This has great potential in complex applications where multiple factors concurrently affect the process, such as teleoperating unmanned aerial vehicles (UAVs) or controlling mobile robots in dynamic environments, as in USAR.

Planning

This cluster groups papers in which AI is integrated to improve task planning, path planning, and grasping.

Task Planning

In Chakraborti et al. (2017), a system for human-aware task planning was proposed featuring an "Augmented Workspace," which allows the robot to visualize its intent, such as its current planning state, and a "Consciousness Cloud," which learns the intent of the human collaborator from EEG signals while the task is executed. The cloud is two-fold: an SVM model classifies input EEG signals into specific robot commands, and a Q-learning model learns the human's preferences from task-coupled emotions (mainly stress and excitement levels) in order to plan accordingly. Although results on novice users were promising, the authors noted that the system's benefit might decrease drastically for experienced individuals and proposed studying this as future work.

Path Planning

Optimal path planning through reinforcement learning was done in Muvva et al. (2017) in a working environment combining both physical and AR (virtual) obstacles. The environment is represented as a Markov decision process, and depth-first search (DFS) was used to obtain a sub-optimal solution. The robot is then trained to find the optimal path in a grid world using Q-learning, which returns the path as the learned optimal policy. Similarly, in Corotan and Irgen-Gioro (2019), the robot learns the shortest path to its destination using Q-learning while relying solely on ARCore's localization and object-avoidance capabilities. However, the authors concluded that the robot's dependence on a single ARCore-supported input (essentially the camera of a smartphone mounted on the robot) is inefficient: whenever anything obstructs the sensor, the robot loses its localization and routing performance.
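
The tabular Q-learning loop behind such grid-world formulations can be sketched as follows; the grid, rewards, and hyperparameters are illustrative assumptions rather than those of either paper.

```python
import numpy as np

# Grid world where some cells are blocked; in Muvva et al.'s setting the
# blocked cells could be either physical or AR (virtual) obstacles.
GRID = np.array([[0, 0, 0, 0],
                 [0, 1, 0, 1],      # 1 = obstacle
                 [0, 0, 0, 0]])
GOAL = (2, 3)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right

def step(s, a):
    r, c = s[0] + ACTIONS[a][0], s[1] + ACTIONS[a][1]
    if not (0 <= r < GRID.shape[0] and 0 <= c < GRID.shape[1]) or GRID[r, c]:
        return s, -1.0                         # bump: stay put, small penalty
    return (r, c), (10.0 if (r, c) == GOAL else -0.1)

Q = np.zeros(GRID.shape + (len(ACTIONS),))     # one Q-value per (cell, action)
rng = np.random.default_rng(0)
for _ in range(500):                           # training episodes
    s = (0, 0)
    for _ in range(100):                       # cap episode length
        # epsilon-greedy action selection
        a = int(rng.integers(4)) if rng.random() < 0.2 else int(np.argmax(Q[s]))
        s2, r = step(s, a)
        Q[s][a] += 0.5 * (r + 0.9 * Q[s2].max() - Q[s][a])   # TD update
        s = s2
        if s == GOAL:
            break
# The greedy policy w.r.t. Q now encodes the learned obstacle-free path.
```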

Grasping

A deep AR grasp-planning system was proposed in Zhang et al. (2020) that utilizes the ARKit platform to collect point-cloud data of the object to grasp and to visualize the planned grasp vector overlaid on the object's depth map. The pipeline has five stages: recording RGB images of the object, extracting the point cloud using structure from motion (SFM), cleaning the data using RANSAC and KNN, transforming the data into an artificial depth map, and finally feeding this map to a pre-trained GQ-CNN. Although this methodology was efficient at detecting optimal grasps in cases where the traditional top-down approach fails, its downside is the long data-collection time (2 min per object).
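
The point-cloud cleaning stage can be illustrated with a plain-numpy RANSAC plane segmentation, sketched below, which removes the dominant support plane so that only object points feed the later stages; the thresholds and iteration counts are assumptions, not the paper's values.

```python
import numpy as np

def ransac_plane(points, n_iter=200, thresh=0.005, seed=0):
    """Find the dominant plane (e.g., the table top) in an (n, 3) point
    cloud with RANSAC and return a boolean inlier mask, so plane points can
    be removed before building the artificial depth map."""
    rng = np.random.default_rng(seed)
    best_mask = np.zeros(len(points), dtype=bool)
    for _ in range(n_iter):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                      # degenerate (collinear) sample
            continue
        dist = np.abs((points - p0) @ (normal / norm))   # point-plane distance
        mask = dist < thresh
        if mask.sum() > best_mask.sum():     # keep the plane with most inliers
            best_mask = mask
    return best_mask

# object_points = cloud[~ransac_plane(cloud)]   # keep only off-plane points
```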

The authors in Chu et al. (2018) also investigated AR and AI solutions for grasping, specifically grasping controlled by a Tongue Drive System (TDS). The input is RGB-D images from the META AR glasses, and the output is a set of potential grasp predictions, each represented by a 5D grasp rectangle augmented on the target object. Before applying the deep grasp algorithm (Chu et al., 2018), YOLO (Redmon et al., 2016) is first applied to the RGB-D data to generate 2D bounding boxes, which are then lifted to 3D bounding boxes for localization. The system achieved results competitive with state-of-the-art TDS manipulation tasks.

By using grasp-quality measures in Weisz et al. (2017) that take into account the uncertainty of grasp acquisition and the object's local geometry in a cluttered scene, the system can robustly perform grasps that match the user's intent. The presented human-in-the-loop system was tested on both healthy and impaired individuals, and subjects successfully grasped 82% of the objects. However, subjects found the grasp-refinement phase difficult, mainly because they lacked knowledge of the gripper's friction properties.

Based on the literature presented, we foresee several opportunities for the use of AR and AI in future planning and manipulation tasks. This can result in a paradigm shift in collaborative human-in-the-loop frameworks, where AI can add the needed system complexity and AR can bridge the gap for the user to understand that complexity. For example, the challenges of assistive robotic manipulators for people with disabilities (Graf et al., 2004; Chen et al., 2013) can be mitigated, and the integration of new input modalities into grasp planning can be facilitated. At the same time, in all planning frameworks, attention should be paid to the added mental load of AR visualizations, which might obstruct the user in some cases or even hinder efficient performance.

Perception

This cluster groups papers in which AI is integrated for robot and environment perception through object detection or localization.

Object Detection

In Sawarkar et al. (2016), the data received from the IP camera mounted on the UGV are first de-noised using a Gaussian filter, then processed with two algorithms for detecting individuals: an SVM trained on HOG features and a Haar cascade classifier. These algorithms detect the human anatomy and select it as the ROI, which is then fed to a CNN trained to recognize individuals holding several types of guns. Once the data are processed, the detected human is augmented with a colored bounding box and a percentage representing his/her probability of being a terrorist.
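
The person-detection stage maps naturally onto OpenCV's built-in HOG descriptor with its pre-trained pedestrian SVM, sketched below; the frame source and the downstream CNN threat classifier are assumed, so this is an illustration of the HOG+SVM step rather than the authors' exact pipeline.

```python
import cv2

# OpenCV ships a HOG descriptor with a pre-trained linear SVM for pedestrian
# detection; each detected ROI would then be passed to the threat-classifying
# CNN (not shown here).
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = cv2.imread("ugv_frame.jpg")                 # hypothetical UGV frame
frame = cv2.GaussianBlur(frame, (5, 5), 0)          # de-noising, as in the paper
rois, _ = hog.detectMultiScale(frame, winStride=(8, 8))
for (x, y, w, h) in rois:                           # draw one box per detection
    cv2.rectangle(frame, (int(x), int(y)), (int(x + w), int(y + h)),
                  (0, 255, 0), 2)
```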

In Wang et al. (2018), an automatic target-detection mode was developed for the AR system based on semi-supervised object segmentation with a convolutional neural network; the segmentation algorithm used is One-Shot Video Object Segmentation (OSVOS). The methodology is limited in that the chosen algorithm was prone to errors, especially when no target is in view. Furthermore, post-processing of the results was needed unless the user manually specified whether a target was within view.

In De Gregorio et al. (2020), the authors compared the results of two object-detection CNNs, YOLO and SSD, on a dataset they generated using ARS, an AR-based semi-automatic object self-annotating method. The proposed method enabled the annotation of nine sequences of around 35,000 frames in 1 h, whereas manual annotation usually takes around 10 h for 1,000 frames, greatly improving the data-annotation process. Furthermore, both recall and precision increased by around 15% compared to manual labeling. In El Hafi et al. (2020), the authors developed a method to form spatial concepts from multimodal inputs: image features obtained by an AlexNet-based CNN (Krizhevsky et al., 2012), self-location information from a Monte Carlo localizer, and word information obtained from a speech recognition system.

To reduce the time spent restricting the workspace of mobile co-robots, the authors in Sprute et al. (2019b) developed a learning and support system that learns from previous user-defined virtual borders and recommends similar ones that can be selected directly through an AR application. The system uses a perception module based on RGB cameras and applies a deep learning algorithm (ResNet101) to semantically segmented images of previous user interactions. Its limitations are mainly due to occlusion by furniture or a camera setup that does not cover the whole area.

The DBSCAN algorithm was used in Gradmann et al. (2018) to detect objects for a pick-and-place task. Objects are clustered according to the depth and color information provided by the depth camera of the Google Tango tablet. AR provides a live visual interface of the detected objects and a preview of the robot's intent (its future position). 82% of pick-and-place tasks with different object positions were performed successfully, although the algorithm's runtime can be impractical for some applications.
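
A minimal sketch of depth-and-color clustering with scikit-learn's DBSCAN is shown below; the feature scaling, weighting, and parameters are assumptions chosen for illustration, not values reported in the paper.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_objects(xyz, rgb, eps=0.05, min_samples=30):
    """Cluster per-pixel features into objects: 3D position (meters) plus
    RGB color, with color scaled down so spatial distance dominates."""
    feats = np.hstack([xyz, 0.001 * rgb.astype(float)])
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(feats)
    # Each non-negative label is one detected object; -1 marks noise points.
    return [xyz[labels == k].mean(axis=0) for k in np.unique(labels) if k != -1]
```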

Robot Localization

To localize the robots and properly augment information on each robot in a multi-robot system, the authors in Ghiringhelli et al. (2014) used an active marker (one blinking RGB LED per robot) imaged by a fixed camera overlooking the robots' environment. Each LED blinks in a predefined pattern alternating two colors (blue and green). Bright objects are first detected by a fast frame-based beacon-detection algorithm. The detected objects are then filtered, first by evaluating a track-quality index and then by a linear binary model that classifies the tracked RGB points as either blue or green, based on logistic-regression learning of the blue and green color features during calibration.
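
The color-classification step amounts to a linear binary classifier on RGB features; a sketch with hypothetical calibration samples follows.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Calibration: RGB samples of the LED imaged as blue (class 0) and green
# (class 1). The values here are hypothetical placeholders.
X_cal = np.array([[30, 40, 200], [25, 60, 190], [20, 210, 50], [35, 220, 40]])
y_cal = np.array([0, 0, 1, 1])
clf = LogisticRegression().fit(X_cal, y_cal)

def decode_blink(tracked_rgb_sequence):
    """Classify each tracked point's color over time; the resulting bit
    string can be matched against each robot's predefined blinking pattern."""
    return clf.predict(np.asarray(tracked_rgb_sequence))
```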

The authors in Puljiz et al. (2019) presented a review of approaches that can potentially be used for referencing between a robot and an AR HMD, such as training a neural network to estimate the joint positions of a robot manipulator from RGB data (Heindl et al., 2019). This was actually done in Kästner et al. (2020) to localize the six-DoF pose of a mobile robot while evaluating the training process through the AR interface. The authors compared two state-of-the-art neural networks, SSPE and BetaPose, previously trained on real and artificial datasets; the artificial dataset is based on a 3D robot model generated with Unreal Engine and annotated using the NDDS plugin tool. Both networks, given a live video stream from the HoloLens, predicted an accurate 3D pose of the robot, with SSPE being 66% faster. Estimating the pose from depth-sensor data was investigated in Kastner et al. (2020). The authors also developed an open-source 6D annotation tool for 2D RGB images.

In this section, almost all the literature integrates AI to improve the AR experience, whether by innovating robust calibration methods or by improving the tracking and object-detection capabilities of AR systems. This provides insight into what has been done, and what can still be done, to achieve a smooth integration of augmented reality applications. These methods remain limited in their robustness to ambient conditions such as lighting, and increased computational time is still impractical for some applications. However, this can be mitigated in the future as hardware power constantly improves and cloud computing becomes ubiquitous.

The Ethical Perspective of Robotics and AI

As robots become ubiquitous, ethical considerations arise, ranging from liability to privacy. The notion of a robot's ability to make ethical decisions was first framed in Wallach and Allen (2009), yet the need to set rules for robot morality was foreseen much earlier in Asimov's fiction. Several organizations are trying to set guidelines and standards for such systems; we mention the IEEE 7010-2020 standard on ethically aligned design. The ethical challenges arising from complex intelligent systems span civilian and military use. Several areas of concern have emerged, ranging from discrimination and bias to privacy and surveillance. Service robots, which are designed to accompany humans at home or work, present some of the greatest concerns, as they serve in private and proprietary environments. Currently, the AI capabilities possessed by robots are still relatively limited, with robots only capable of simple navigation tasks or simple decisions. However, as the research field evolves, robots will be able to perform much more complex tasks with a greater level of intelligence. Therefore, there is a moral obligation for ethical considerations to evolve with the technology.

Concluding Remarks

This paper provided a systematic review of the robotics literature employing artificial intelligence (AI) algorithms and augmented reality (AR) technology. A total of 29 papers were selected and analyzed from two perspectives: a theme-based analysis featuring the relation between AR and AI, and an application-based analysis focusing on how this relation has affected the robotics application. The 29 papers were further clustered by type of robotic platform and type of robotics application, respectively. The major insights drawn from this review are summarized below.

Augmented reality is a promising tool for facilitating the integration of AI into numerous robotics applications. To counter the increased complexity of understanding AI systems, AR offers an intuitive way of visualizing the robot's internal state and its live training process. This is done by augmenting live information for the user via an HMD, a desktop-based GUI, a mobile phone, or a spatial projection system, and it has proved to improve several applications, such as learning-by-demonstration tasks, grasping, and planning. Learning from demonstration for robot manipulators is a field that has greatly benefited from the integration of AR and AI as an intuitive and user-friendly method of teaching, as done in Fang et al. (2014) and Liu et al. (2018). AR has also served as a user-friendly interface for asking the user to accept or reject an AI output, such as recommending to "Autocomplete" a predicted trajectory or suggesting a faster mapping of new virtual borders. We suspect the use of AR could contribute to the general public's acceptance of and trust in AI-enabled robots, as it can explicitly reveal the robot's decision-making process and intentions. This has the potential to increase not only the efficiency of robotic systems but also their safety.

To improve the AR experience, accurate and reliable calibration and object-localization methods are needed, and the literature shows that artificial intelligence is a viable element supporting this notion in robotics applications. AR markers are widely used but are limited in dynamic environments and in cases of occlusion. Deep neural networks for object detection and robot localization seem the most promising option for unstructured robotic environments (De Gregorio et al., 2020; El Hafi et al., 2020), although they depend more heavily on computational power and some methods remain computationally demanding. However, progress in hardware and cloud computing is making AI more viable in such scenarios. We suspect that AI will increasingly be used for context and situational awareness in addition to the detection of objects and events, capabilities that would further enrich AR-displayed content.

The potential of integrating these two elements in robotics applications is manifold and provides a means of deciphering the traditional human-robot model mismatch. Specifically, in the context of human-robot collaboration, AI can be used to understand the real user intent, filtered from the tasks the robot traditionally perceives, as in the work of Zein et al. (2020). At the same time, AR can visualize the robot's understanding of the user's intent, as in the work of Ghiringhelli et al. (2014), providing a closed feedback loop within the model-mismatch paradigm. The combination of these technologies will empower the next phase of human-robot interfacing and interaction. This is an area that highlights the importance of AI working side by side with humans instead of being perceived as a substitute for them.

This study confirms the many benefits of integrating AR and AI in robotics and reveals that the field is fertile, with a striking surge in scholarly work to be expected. This result aligns with the current trend of incorporating more AI in robotics (Dimitropoulos et al., 2021). After the outbreak of COVID-19, the demand to replace humans with smart robots has become critical in some fields (Feizi et al., 2021), affirming this trend. Similarly, AR technology is currently on the rise, with broad applications spanning education (Samad et al., 2021), medicine (Mantovani et al., 2020), and even sports (da Silva et al., 2021). As AR- and AI-related technologies evolve, their integration will bring numerous advantages to every application in robotics as well as other technological fields.

Despite the well-developed resources, some limitations need to be addressed for a powerful implementation of AR and AI in robotics. For example, AR devices are still hardware-limited, and some do not support advanced graphical processing, which challenges the real-time implementation of computationally intensive AI algorithms on AR devices. Current methods rely on external remote servers for heavy computation, which might be impractical in some cases. Furthermore, vision-based approaches that track objects using AR markers are prone to errors, and their performance drops sharply under occlusion or challenging lighting conditions. Further improvements in AR hardware are needed in processing power, battery life, and weight - all elements required for extended AR use.

Future work can apply new out-of-the-box AI techniques to improve the AR experience with tracking methods that are robust in dynamic situations. Additional work is needed in AI to better understand human preferences as to "how," "when," and "what" AR visual displays are shown to the user while debugging or performing a collaborative task with a robot. This can be framed as a robot fully understanding the "user intent" and showing the user only relevant information through an intuitive AR interface. Similarly, AR holds potential for integrating AI into complex robotics applications, such as grasping in highly cluttered environments, detecting targets and localizing robots in dynamic environments and urban search and rescue, and teleoperating UAVs with intelligent navigation and path planning. In the future, AI and AR will be ubiquitous and robust in robotics - taken for granted in a robotic system, just like networking today.

The major limitation of this systematic review is the potential underrepresentation of some papers combining AR, AI, and robotics. Given the choice of search terms identified in Methods, research papers that do not contain one of the specified keywords, but instead use a synonym or express the concept implicitly, may be incompletely documented.

Data Availability Statement

Author Contributions

ZB performed the literature search, data analysis, and wrote the draft. IE came up with the idea of this article, advised on the review and analysis, and critically revised the manuscript.

Funding

The research was funded by the University Research Board (URB) at the American University of Beirut.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s Note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Abbreviations

AR, Augmented Reality; MR, Mixed Reality; AI, Artificial Intelligence; HMD, Head Mounted Display; GUI, Graphical User Interface; PBD, Programming by Demonstration; OGM, Occupancy Grid Map; CNN, Convolutional Neural Network; FOV, Field of View; KNN, K-Nearest Neighbor; SVM, Support Vector Machine; EEG, Electroencephalographic; SFM, Structure from Motion; RANSAC, Random Sample Consensus; YOLO, You Only Look Once; SSD, Single Shot Detector; ADIOS, Automatic Distillation of Structure; RGB, Red Green Blue; SVD, Singular Value Decomposition; MDP, Markov Decision Process; DTW, Dynamic Time Warping; EE, End Effector; DMP, Dynamic Movement Primitive; CP, Critical Point; ROS, Robot Operating System; GQ-CNN, Grasp Quality CNN; UGV, Unmanned Ground Vehicle; DBSCAN, Density-Based Spatial Clustering of Applications with Noise.

• Andras I., Mazzone E., van Leeuwen F. W. B., De Naeyer G., van Oosterom M. N., Beato S., et al. (2020). Artificial Intelligence and Robotics: a Combination that Is Changing the Operating Room. World J. Urol. 38, 2359–2366. doi:10.1007/s00345-019-03037-6
• Argall B. D., Chernova S., Veloso M., Browning B. (2009). A Survey of Robot Learning from Demonstration. Robotics Autonomous Syst. 57, 469–483. doi:10.1016/j.robot.2008.10.024
• Azhar H., Waseem T., Ashraf H. (2020). Artificial Intelligence in Surgical Education and Training: a Systematic Literature Review. Arch. Surg. Res. 1, 39–46.
• Benbihi A., Geist M., Pradalier C. (2019). "Learning Sensor Placement from Demonstration for UAV Networks," in 2019 IEEE Symposium on Computers and Communications (ISCC), Barcelona, 1–6. doi:10.1109/ISCC47284.2019.8969582
• Bentz W., Dhanjal S., Panagou D. (2019). "Unsupervised Learning of Assistive Camera Views by an Aerial Co-robot in Augmented Reality Multitasking Environments," in 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, 3003–3009. doi:10.1109/ICRA.2019.8793587
• Bhandari M., Zeffiro T., Reddiboina M. (2020). Artificial Intelligence and Robotic Surgery: Current Perspective and Future Directions. Curr. Opin. Urol. 30, 48–54. doi:10.1097/MOU.0000000000000692
• Billard A., Calinon S., Dillmann R., Schaal S. (2008). "Robot Programming by Demonstration," in Springer Handbook of Robotics. Editors Siciliano B., Khatib O. (Berlin, Heidelberg: Springer), 1371–1394. doi:10.1007/978-3-540-30301-5_60
• Bonin-Font F., Ortiz A., Oliver G. (2008). Visual Navigation for Mobile Robots: A Survey. J. Intell. Robot. Syst. 53, 263–296. doi:10.1007/s10846-008-9235-4
• Bouaziz J., Mashiach R., Cohen S., Kedem A., Baron A., Zajicek M., et al. (2018). How Artificial Intelligence Can Improve Our Understanding of the Genes Associated with Endometriosis: Natural Language Processing of the PubMed Database. Biomed. Res. Int. 2018, 1–7. doi:10.1155/2018/6217812
• Busch B., Grizou J., Lopes M., Stulp F. (2017). Learning Legible Motion from Human-Robot Interactions. Int. J. Soc. Robotics 9, 765–779. doi:10.1007/s12369-017-0400-4
• Čaić M., Avelino J., Mahr D., Odekerken-Schröder G., Bernardino A. (2020). Robotic versus Human Coaches for Active Aging: An Automated Social Presence Perspective. Int. J. Soc. Robotics 12, 867–882. doi:10.1007/s12369-018-0507-2
• Cao Y., Wang T., Qian X., Rao P. S., Wadhawan M., Huo K., Ramani K. (2019). "GhostAR: A Time-Space Editor for Embodied Authoring of Human-Robot Collaborative Task with Augmented Reality," in Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology (UIST '19), New Orleans, LA (New York: ACM), 521–534. doi:10.1145/3332165.3347902
• Chacko S. M., Granado A., Kapila V. (2020). An Augmented Reality Framework for Robotic Tool-Path Teaching. Proced. CIRP 93, 1218–1223. doi:10.1016/j.procir.2020.03.143
• Chakraborti T., Sreedharan S., Kulkarni A., Kambhampati S. (2017). Alternative Modes of Interaction in Proximal Human-In-The-Loop Operation of Robots. arXiv preprint arXiv:1703.08930.
• Chen L., Chen P., Lin Z. (2020a). Artificial Intelligence in Education: A Review. IEEE Access 8, 75264–75278. doi:10.1109/ACCESS.2020.2988510
• Chen L., Su W., Wu M., Pedrycz W., Hirota K. (2020b). A Fuzzy Deep Neural Network with Sparse Autoencoder for Emotional Intention Understanding in Human-Robot Interaction. IEEE Trans. Fuzzy Syst. 28, 1. doi:10.1109/TFUZZ.2020.2966167
• Chen T. L., Ciocarlie M., Cousins S., Grice P. M., Hawkins K., Kaijen Hsiao K., et al. (2013). Robots for Humanity: Using Assistive Robotics to Empower People with Disabilities. IEEE Robot. Automat. Mag. 20, 30–39. doi:10.1109/MRA.2012.2229950
• Chu F.-J., Vela P. (2018). Deep Grasp: Detection and Localization of Grasps with Deep Neural Networks.
• Chu F.-J., Xu R., Vela P. A. (2018). Real-world Multi-Object, Multi-Grasp Detection. arXiv preprint arXiv:1802.00520. doi:10.1109/lra.2018.2852777
• Chu F.-J., Xu R., Zhang Z., Vela P. A., Ghovanloo M. (2008). The Helping Hand: An Assistive Manipulation Framework Using Augmented Reality and Tongue-Drive Interfaces. Annu. Int. Conf. IEEE Eng. Med. Biol. Soc. 4, 2158–2161. doi:10.1109/EMBC.2018.8512668
• Comes R., Neamtu C., Buna Z. L. (2021). "Work-in-Progress-Augmented Reality Enriched Project Guide for Mechanical Engineering Students," in 2021 7th International Conference of the Immersive Learning Research Network (iLRN), 1–3. doi:10.23919/iLRN52045.2021.9459247
• Corotan A., Irgen-Gioro J. J. Z. (2019). "An Indoor Navigation Robot Using Augmented Reality," in 2019 5th International Conference on Control, Automation and Robotics (ICCAR), Beijing, 111–116. doi:10.1109/ICCAR.2019.8813348
• da Silva A. M., Albuquerque G. S. G., de Medeiros F. P. A. (2021). "A Review on Augmented Reality Applied to Sports," in 2021 16th Iberian Conference on Information Systems and Technologies (CISTI), 1–6. doi:10.23919/CISTI52073.2021.9476570
• De Gregorio D., Tonioni A., Palli G., Di Stefano L. (2020). Semiautomatic Labeling for Deep Learning in Robotics. IEEE Trans. Automat. Sci. Eng. 17, 611–620. doi:10.1109/TASE.2019.2938316
• De Pace F., Manuri F., Sanna A., Fornaro C. (2020). A Systematic Review of Augmented Reality Interfaces for Collaborative Industrial Robots. Comput. Ind. Eng. 149, 106806. doi:10.1016/j.cie.2020.106806
• De Tommaso D., Calinon S., Caldwell D. G. (2012). A Tangible Interface for Transferring Skills. Int. J. Soc. Robotics 4, 397–408. doi:10.1007/s12369-012-0154-y
• Dias A., Wellaboda H., Rasanka Y., Munasinghe M., Rodrigo R., Jayasekara P. (2020). "Deep Learning of Augmented Reality Based Human Interactions for Automating a Robot Team," in 2020 6th International Conference on Control, Automation and Robotics (ICCAR), Singapore, 175–182. doi:10.1109/ICCAR49639.2020.9108004
• Dias T., Miraldo P., Gonçalves N., Lima P. U. (2015). "Augmented Reality on Robot Navigation Using Non-central Catadioptric Cameras," in 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 4999–5004. doi:10.1109/IROS.2015.7354080
• Dimitropoulos K., Daras P., Manitsaris S., Fol Leymarie F., Calinon S. (2021). Editorial: Artificial Intelligence and Human Movement in Industries and Creation. Front. Robot. AI 8, 712521. doi:10.3389/frobt.2021.712521
• El Hafi L., Isobe S., Tabuchi Y., Katsumata Y., Nakamura H., Fukui T., et al. (2020). System for Augmented Human-Robot Interaction through Mixed Reality and Robot Training by Non-experts in Customer Service Environments. Adv. Robotics 34, 157–172. doi:10.1080/01691864.2019.1694068
• Fang H. C., Ong S. K., Nee A. Y. C. (2014). Novel AR-based Interface for Human-Robot Interaction and Visualization. Adv. Manuf. 2, 275–288. doi:10.1007/s40436-014-0087-9
• Fang H. C., Ong S. K., Nee A. Y. C. (2013). Orientation Planning of Robot End-Effector Using Augmented Reality. Int. J. Adv. Manuf. Technol. 67, 2033–2049. doi:10.1007/s00170-012-4629-7
• Feigl T., Porada A., Steiner S., Löffler C., Mutschler C., Philippsen M. (2020). "Localization Limitations of ARCore, ARKit, and Hololens in Dynamic Large-Scale Industry Environments," in Proceedings of VISIGRAPP (1: GRAPP), 307–318. doi:10.5220/0008989903070318
• Feizi N., Tavakoli M., Patel R. V., Atashzar S. F. (2021). Robotics and AI for Teleoperation, Tele-Assessment, and Tele-Training for Surgery in the Era of COVID-19: Existing Challenges, and Future Vision. Front. Robot. AI 8, 610677. doi:10.3389/frobt.2021.610677
• Gadre S. Y. (2018). Teaching Robots Using Mixed Reality. Brown University, Department of Computer Science.
• Ghiringhelli F., Guzzi J., Di Caro G. A., Caglioti V., Gambardella L. M., Giusti A. (2014). "Interactive Augmented Reality for Understanding and Analyzing Multi-Robot Systems," in 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2014), Chicago, IL, 1195–1201. doi:10.1109/IROS.2014.6942709
• Gong L., Gong C., Ma Z., Zhao L., Wang Z., Li X., Jing X., Yang H., Liu C. (2017). "Real-time Human-In-The-Loop Remote Control for a Life-Size Traffic Police Robot with Multiple Augmented Reality Aided Display Terminals," in 2017 2nd International Conference on Advanced Robotics and Mechatronics (ICARM), 420–425. doi:10.1109/ICARM.2017.8273199
• Gonzalez-Billandon J., Aroyo A. M., Tonelli A., Pasquali D., Sciutti A., Gori M., et al. (2019). Can a Robot Catch You Lying? A Machine Learning System to Detect Lies during Interactions. Front. Robot. AI 6, 64. doi:10.3389/frobt.2019.00064
• Govers F. X. (2018). Artificial Intelligence for Robotics: Build Intelligent Robots that Perform Human Tasks Using AI Techniques. Packt Publishing Limited.
• Mylonas G. P., Giataganas P., Chaudery M., Vitiello V., Darzi A., Guang-Zhong Yang (2013). "Autonomous eFAST Ultrasound Scanning by a Robotic Manipulator Using Learning from Demonstrations," in 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, 3251–3256. doi:10.1109/IROS.2013.6696818
• Gradmann M., Orendt E. M., Schmidt E., Schweizer S., Henrich D. (2018). Augmented Reality Robot Operation Interface with Google Tango.
• Graf B., Hans M., Schraft R. D. (2004). Care-O-bot II-Development of a Next Generation Robotic Home Assistant. Autonomous Robots 16, 193–205. doi:10.1023/B:AURO.0000016865.35796.e9
• Green S. A., Billinghurst M., Chen X., Chase J. G. (2008). Human-Robot Collaboration: A Literature Review and Augmented Reality Approach in Design. Int. J. Adv. Robotic Syst. 5, 1. doi:10.5772/5664
• Gurevich P., Lanir J., Cohen B. (2015). Design and Implementation of TeleAdvisor: a Projection-Based Augmented Reality System for Remote Collaboration. Comput. Supported Coop. Work 24, 527–562. doi:10.1007/s10606-015-9232-7
• Hakky T., Dickey R., Srikishen N., Lipshultz L., Spiess P., Carrion R. (2016). Augmented Reality Assisted Surgery: a Urologic Training Tool. Asian J. Androl. 18, 732. doi:10.4103/1008-682X.166436
• Hastie H., Lohan K., Chantler M., Robb D. A., Ramamoorthy S., Petrick R., et al. (2018). The ORCA Hub: Explainable Offshore Robotics through Intelligent Interfaces. arXiv preprint arXiv:1803.02100.
• Heindl C., Zambal S., Ponitz T., Pichler A., Scharinger J. (2019). 3D Robot Pose Estimation from 2D Images. arXiv preprint arXiv:1902.04987.
• Hester T., Vecerik M., Pietquin O., Lanctot M., Schaul T., Piot B., et al. (2017). Deep Q-Learning from Demonstrations. arXiv preprint arXiv:1704.03732.
• Kästner L., Dimitrov D., Lambrecht J. (2020). A Markerless Deep Learning-Based 6 Degrees of Freedom Pose Estimation for Mobile Robots Using RGB Data. arXiv preprint arXiv:2001.05703.
• Kahuttanaseth W., Dressler A., Netramai C. (2018). "Commanding Mobile Robot Movement Based on Natural Language Processing with RNN Encoder-Decoder," in 2018 5th International Conference on Business and Industrial Research (ICBIR), Bangkok, 161–166. doi:10.1109/ICBIR.2018.8391185
• Kastner L., Frasineanu V. C., Lambrecht J. (2020). "A 3D-Deep-Learning-Based Augmented Reality Calibration Method for Robotic Environments Using Depth Sensor Data," in 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, 1135–1141. doi:10.1109/ICRA40945.2020.9197155
• Kästner L., Lambrecht J. (2019). "Augmented-Reality-Based Visualization of Navigation Data of Mobile Robots on the Microsoft Hololens - Possibilities and Limitations," in 2019 IEEE International Conference on Cybernetics and Intelligent Systems (CIS) and IEEE Conference on Robotics, Automation and Mechatronics (RAM), Bangkok, 344–349. doi:10.1109/CIS-RAM47153.2019.9095836
• Kavraki L. E., Svestka P., Latombe J.-C., Overmars M. H. (1996). Probabilistic Roadmaps for Path Planning in High-Dimensional Configuration Spaces. IEEE Trans. Robot. Automat. 12, 566–580. doi:10.1109/70.508439
• Kim B., Pineau J. (2016). Socially Adaptive Path Planning in Human Environments Using Inverse Reinforcement Learning. Int. J. Soc. Robotics 8, 51–66. doi:10.1007/s12369-015-0310-2
• Krizhevsky A., Sutskever I., Hinton G. E. (2012). "ImageNet Classification with Deep Convolutional Neural Networks," in Advances in Neural Information Processing Systems. Editors Pereira F., Burges C. J. C., Bottou L., Weinberger K. Q. (Red Hook, NY: Curran Associates, Inc.), 25, 1097–1105.
• Le T. D., Huynh D. T., Pham H. V. (2018). "Efficient Human-Robot Interaction Using Deep Learning with Mask R-CNN: Detection, Recognition, Tracking and Segmentation," in 2018 15th International Conference on Control, Automation, Robotics and Vision (ICARCV), Singapore, 162–167. doi:10.1109/ICARCV.2018.8581081
• Liu H., Zhang Y., Si W., Xie X., Zhu Y., Zhu S.-C. (2018). "Interactive Robot Knowledge Patching Using Augmented Reality," in 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, QLD, 1947–1954. doi:10.1109/ICRA.2018.8462837
• Livio J., Hodhod R. (2018). AI Cupper: A Fuzzy Expert System for Sensorial Evaluation of Coffee Bean Attributes to Derive Quality Scoring. IEEE Trans. Fuzzy Syst. 26, 3418–3427. doi:10.1109/TFUZZ.2018.2832611
• Loh E. (2018). Medicine and the Rise of the Robots: a Qualitative Review of Recent Advances of Artificial Intelligence in Health. BMJ Leader 2, 59–63. doi:10.1136/leader-2018-000071
• Makhataeva Z., Varol H. (2020). Augmented Reality for Robotics: A Review. Robotics 9, 21. doi:10.3390/robotics9020021
• Makhataeva Z., Zhakatayev A., Varol H. A. (2019). "Safety Aura Visualization for Variable Impedance Actuated Robots," in 2019 IEEE/SICE International Symposium on System Integration (SII), 805–810. doi:10.1109/SII.2019.8700332
• Makita S., Sasaki T., Urakawa T. (2021). Offline Direct Teaching for a Robotic Manipulator in the Computational Space. Ijat 15, 197–205. doi:10.20965/ijat.2021.p0197
• Mallik A., Kapila V. (2020). "Interactive Learning of Mobile Robots Kinematics Using ARCore," in 2020 5th International Conference on Robotics and Automation Engineering (ICRAE), Singapore, 1–6. doi:10.1109/ICRAE50850.2020.9310865
• Mantovani E., Zucchella C., Bottiroli S., Federico A., Giugno R., Sandrini G., et al. (2020). Telemedicine and Virtual Reality for Cognitive Rehabilitation: a Roadmap for the COVID-19 Pandemic. Front. Neurol. 11, 926. doi:10.3389/fneur.2020.00926
• Mathews S. M. (2019). "Explainable Artificial Intelligence Applications in NLP, Biomedical, and Malware Classification: A Literature Review," in Intelligent Computing, Advances in Intelligent Systems and Computing. Editors Arai K., Bhatia R., Kapoor S. (Cham: Springer International Publishing), 1269–1292. doi:10.1007/978-3-030-22868-2_90
• McHenry N., Spencer J., Zhong P., Cox J., Amiscaray M., Wong K., Chamitoff G. (2021). "Predictive XR Telepresence for Robotic Operations in Space," in 2021 IEEE Aerospace Conference (50100), 1–10.
• Measurable Augmented Reality for Prototyping Cyberphysical Systems: A Robotics Platform to Aid the Hardware Prototyping and Performance Testing of Algorithms (2016). IEEE Control. Syst. 36, 65–87. doi:10.1109/MCS.2016.2602090
• Microsoft HoloLens (2020). Mixed Reality Technology for Business. Available at: https://www.microsoft.com/en-us/hololens (accessed November 1, 2020).
• Milgram P., Kishino F. (1994). A Taxonomy of Mixed Reality Visual Displays. IEICE Trans. Inf. Syst. E77-D (12), 1321–1329.
• Moher D., Liberati A., Tetzlaff J., Altman D. G., The PRISMA Group (2009). Preferred Reporting Items for Systematic Reviews and Meta-Analyses: The PRISMA Statement. PLoS Med. 6, e1000097. doi:10.1371/journal.pmed.1000097
• Muvva V. V. R. M. K. R., Adhikari N., Ghimire A. D. (2017). "Towards Training an Agent in Augmented Reality World with Reinforcement Learning," in 2017 17th International Conference on Control, Automation and Systems (ICCAS), Jeju, 1884–1888. doi:10.23919/ICCAS.2017.8204283
• Nicolotti L., Mall V., Schieberle P. (2019). Characterization of Key Aroma Compounds in a Commercial Rum and an Australian Red Wine by Means of a New Sensomics-Based Expert System (SEBES)-An Approach to Use Artificial Intelligence in Determining Food Odor Codes. J. Agric. Food Chem. 67, 4011–4022. doi:10.1021/acs.jafc.9b00708
• Nilsson N. J. (1998). Artificial Intelligence: A New Synthesis. Elsevier.
• Nilsson N. J. (2009). The Quest for Artificial Intelligence: A History of Ideas and Achievements. Cambridge: Cambridge University Press. doi:10.1017/CBO9780511819346
• Norouzi N., Bruder G., Belna B., Mutter S., Turgut D., Welch G. (2019). "A Systematic Review of the Convergence of Augmented Reality, Intelligent Virtual Agents, and the Internet of Things," in Artificial Intelligence in IoT. Editor Al-Turjman F. (Cham: Springer International Publishing), 1–24. doi:10.1007/978-3-030-04110-6_1
• Oculus | VR Headsets & Equipment (2021). Available at: https://www.oculus.com/ (accessed November 1, 2020).
• Ong S. K., Chong J. W. S., Nee A. Y. C. (2010). A Novel AR-based Robot Programming and Path Planning Methodology. Robotics and Computer-Integrated Manufacturing 26, 240–249. doi:10.1016/j.rcim.2009.11.003
• Papachristos C., Alexis K. (2016). "Augmented Reality-Enhanced Structural Inspection Using Aerial Robots," in 2016 IEEE International Symposium on Intelligent Control (ISIC), Buenos Aires, 1–6. doi:10.1109/ISIC.2016.7579983
• Patel J., Xu Y., Pinciroli C. (2019). "Mixed-Granularity Human-Swarm Interaction," in 2019 International Conference on Robotics and Automation (ICRA), Montreal, 1059–1065. doi:10.1109/ICRA.2019.8793261
• Pessaux P., Diana M., Soler L., Piardi T., Mutter D., Marescaux J. (2015). Towards Cybernetic Surgery: Robotic and Augmented Reality-Assisted Liver Segmentectomy. Langenbecks Arch. Surg. 400, 381–385. doi:10.1007/s00423-014-1256-9
  • Pickering C., Byrne J. (2014). The Benefits of Publishing Systematic Quantitative Literature Reviews for PhD Candidates and Other Early-Career Researchers . Higher Education Res. Development 33 , 534–548. 10.1080/07294360.2013.841651 [ CrossRef ] [ Google Scholar ]
  • Puljiz D., Riesterer K. S., Hein B., Kröger T., 2019. Referencing between a Head-Mounted Device and Robotic Manipulators . ArXiv190402480 Cs. [ Google Scholar ]
  • Qian L., Wu J. Y., DiMaio S. P., Navab N., Kazanzides P. (2020). A Review of Augmented Reality in Robotic-Assisted Surgery . IEEE Trans. Med. Robot. Bionics 2 , 1–16. 10.1109/TMRB.2019.2957061 [ CrossRef ] [ Google Scholar ]
  • Qiu S., Liu H., Zhang Z., Zhu Y., Zhu S.-C. (2020). “ Human-Robot Interaction in a Shared Augmented Reality Workspace ,” in 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Presented at the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 11413–11418. 10.1109/IROS45743.2020.9340781 [ CrossRef ] [ Google Scholar ]
  • Redmon J., Divvala S., Girshick R., Farhadi A., 2016. You Only Look once: Unified, Real-Time Object Detection . ArXiv150602640 Cs. [ Google Scholar ]
  • Rosen E., Whitney D., Phillips E., Chien G., Tompkin J., Konidaris G., et al. (2019). Communicating and Controlling Robot Arm Motion Intent through Mixed-Reality Head-Mounted Displays . Int. J. Robotics Res. 38 , 1513–1526. 10.1177/0278364919842925 [ CrossRef ] [ Google Scholar ]
  • Samad S., Nilashi M., Abumalloh R. A., Ghabban F., Supriyanto E., Ibrahim O. (2021). Associated Advantages and Challenges of Augmented Reality in Educational Settings: A Systematic Review . J. Soft Comput. Decis. Support. Syst. 8 , 12–17. [ Google Scholar ]
  • Sawarkar A., Chaudhari V., Chavan R., Zope V., Budale A., Kazi F., 2016. HMD Vision-Based Teleoperating UGV and UAV for Hostile Environment Using Deep Learning . ArXiv160904147 Cs. [ Google Scholar ]
  • Sidaoui A., Zein M. K., Elhajj I. H., Asmar D. (2019). “ A-SLAM: Human In-The-Loop Augmented SLAM ,” in 2019 International Conference on Robotics and Automation (ICRA). Presented at the 2019 International Conference on Robotics and Automation (Montreal: ICRA; ), 5245–5251. 10.1109/ICRA.2019.8793539 [ CrossRef ] [ Google Scholar ]
  • Simões M. A. C., da Silva R. M., Nogueira T. (2020). A Dataset Schema for Cooperative Learning from Demonstration in Multi-Robot Systems . J. Intell. Robot. Syst. 99 , 589–608. 10.1007/s10846-019-01123-w [ CrossRef ] [ Google Scholar ]
  • Singh N. H., Thongam K. (2019). Neural Network-Based Approaches for mobile Robot Navigation in Static and Moving Obstacles Environments . Intel Serv. Robotics 12 , 55–67. 10.1007/s11370-018-0260-2 [ CrossRef ] [ Google Scholar ]
  • Sprute D., Tönnies K., König M. (2019a). A Study on Different User Interfaces for Teaching Virtual Borders to Mobile Robots . Int. J. Soc. Robotics 11 , 373–388. 10.1007/s12369-018-0506-3 [ CrossRef ] [ Google Scholar ]
  • Sprute D., Viertel P., Tonnies K., Konig M. (2019b). “ Learning Virtual Borders through Semantic Scene Understanding and Augmented Reality ,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Presented at the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China (IEEE; ), 4607–4614. 10.1109/IROS40897.2019.8967576 [ CrossRef ] [ Google Scholar ]
  • Tay Y. Y., Goh K. W., Dares M., Koh Y. S., Yeong C. F. (). Augmented Reality (AR) Predictive Maintenance System with Artificial Intelligence (AI) for Industrial Mobile Robot 12 .
  • Turing A. M. (1950). I.-Computing Machinery and Intelligence . Mind New Ser. LIX , 433–460. 10.1093/mind/lix.236.433 [ CrossRef ] [ Google Scholar ]
  • Tussyadiah I. (2020). A Review of Research into Automation in Tourism: Launching the Annals of Tourism Research Curated Collection on Artificial Intelligence and Robotics in Tourism . Ann. Tourism Res. 81 , 102883. 10.1016/j.annals.2020.102883 [ CrossRef ] [ Google Scholar ]
  • Tzafestas C. S. (2006). “ Virtual and Mixed Reality in Telerobotics: A Survey ,” in Industrial Robotics: Programming, Simulation and Applications (London: IntechOpen; ). 10.5772/4911 [ CrossRef ] [ Google Scholar ]
  • Van Krevelen D. W. F., Poelman R. (2010). A Survey of Augmented Reality Technologies, Applications and Limitations . Ijvr 9 , 1–20. ISSN 1081-1451 9. 10.20870/IJVR.2010.9.2.2767 [ CrossRef ] [ Google Scholar ]
  • Walker M., Hedayati H., Lee J., Szafir D. (2018). “ Communicating Robot Motion Intent with Augmented Reality ,” in Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction. Presented at the HRI ’18: ACM/IEEE International Conference on Human-Robot Interaction, Chicago IL USA (Chicago: ACM; ), 316–324. 10.1145/3171221.3171253 [ CrossRef ] [ Google Scholar ]
  • Wallach W., Allen C. (2009). Moral Machines: Teaching Robots Right from Wrong . New York: Oxford University Press. 10.1093/acprof:oso/9780195374049.001.0001 [ CrossRef ] [ Google Scholar ]
  • Wang B., Rau P.-L. P. (2019). Influence of Embodiment and Substrate of Social Robots on Users' Decision-Making and Attitude . Int. J. Soc. Robotics 11 , 411–421. 10.1007/s12369-018-0510-7 [ CrossRef ] [ Google Scholar ]
  • Wang R., Lu H., Xiao J., Li Y., Qiu Q. (2018). “ The Design of an Augmented Reality System for Urban Search and Rescue ,” in IEEE International Conference on Intelligence and Safety for Robotics (ISR). Presented at the 2018 IEEE International Conference on Intelligence and Safety for Robotics (ISR), Shenyang (IEEE; ), 267–272. 10.1109/IISR.2018.8535823 [ CrossRef ] [ Google Scholar ]
  • Warrier R. B., Devasia S. (2018). “ Kernel-Based Human-Dynamics Inversion for Precision Robot Motion-Primitives ,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Presented at the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid (IEEE; ), 6037–6042. 10.1109/IROS.2018.8594164 [ CrossRef ] [ Google Scholar ]
  • Weisz J., Allen P. K., Barszap A. G., Joshi S. S. (2017). Assistive Grasping with an Augmented Reality User Interface . Int. J. Robotics Res. 36 , 543–562. 10.1177/0278364917707024 [ CrossRef ] [ Google Scholar ]
  • Williams T., Szafir D., Chakraborti T., Ben Amor H. (2018). “ Virtual, Augmented, and Mixed Reality for Human-Robot Interaction ,” in Companion of the 2018 ACM/IEEE International Conference on Human-Robot Interaction. Presented at the HRI ’18: ACM/IEEE International Conference on Human-Robot Interaction, Chicago IL USA (Daegu: ACM; ), 403–404. 10.1145/3173386.3173561 [ CrossRef ] [ Google Scholar ]
  • Yew A. W. W., Ong S. K., Nee A. Y. C. (2017). Immersive Augmented Reality Environment for the Teleoperation of Maintenance Robots . Proced. CIRP 61 , 305–310. 10.1016/j.procir.2016.11.183 [ CrossRef ] [ Google Scholar ]
  • Zein M. K., Sidaoui A., Asmar D., Elhajj I. H. (2020). “ Enhanced Teleoperation Using Autocomplete ,” in 2020 IEEE International Conference on Robotics and Automation (ICRA). Presented at the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France (IEEE; ), 9178–9184. 10.1109/ICRA40945.2020.9197140 [ CrossRef ] [ Google Scholar ]
  • Zhang H., Ichnowski J., Avigal Y., Gonzales J., Stoica I., Goldberg K. (2020). “ Dex-Net AR: Distributed Deep Grasp Planning Using a Commodity Cellphone and Augmented Reality App ,” in 2020 IEEE International Conference on Robotics and Automation (ICRA). Presented at the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France (IEEE; ), 552–558. 10.1109/ICRA40945.2020.9197247 [ CrossRef ] [ Google Scholar ]
  • Zhang X., Yao X., Zhu Y., Hu F. (2019). An ARCore Based User Centric Assistive Navigation System for Visually Impaired People . Appl. Sci. 9 , 989. 10.3390/app9050989 [ CrossRef ] [ Google Scholar ]
  • Zhu Z., Hu H. (2018). Robot Learning from Demonstration in Robotic Assembly: A Survey . Robotics 7 , 17. 10.3390/robotics7020017 [ CrossRef ] [ Google Scholar ]
