AI UX | Make Machines Help Humans

  • Overview
  • Transcript

In this episode, Daria talks about how people in her study prefer a relationship with intelligent systems in which the system is in a subordinate role, and how attributes that seem human-like in machines, such as independent thinking, make people uncomfortable. You'll learn why you should not make machines human, but instead make them capable of helping humans.

Keep up to date with the AI UX YouTube Playlist

Subscribe to the Intel Software YouTube Channel

ADDITIONAL RESOURCES:

Loi, D. (2018). Intelligent, Affective Systems: People's Perspective & Implications. In Proceedings of CHIuXiD 2018, Yogyakarta, Jakarta, Malang, Indonesia.

Loi, D., Raffa, G., & Arslan Esme, A. (2017). Design for Affective Intelligence. In 7th Affective Computing and Intelligent Interaction Conference (ACII), San Antonio, TX.

Bostrom, N., & Yudkowsky, E. (2014). The ethics of artificial intelligence. In The Cambridge Handbook of Artificial Intelligence. Cambridge University Press.

Chen, S. (2017). AI Research Is in Desperate Need of an Ethical Watchdog. Retrieved 14 October 2017.

Gershgorn, D. (2017, August 30). The age of AI surveillance is here. Quartz.

This is AI:UX, a miniseries focused on 10 guidelines created to assist you in the design and development of AI-based systems. I'm Daria Loi, an Intel researcher. And today, I will talk about guideline number four: do not make machines human. Make them capable of helping humans. 

In my research, people articulated that they prefer a relationship with intelligent systems where the system is in a subordinate role. In other words, they want to be in charge, with no ambiguity on who is in control. 

The system should always ask before acting, unless otherwise specified or authorized. Attributes in machines that seem humanlike, like independent thinking, make people uncomfortable. 

So here are a few tips to address this matter. Design helper systems with clear power boundaries. Avoid designing systems that behave, or are perceived, as assuming or arrogant, the way a human might be. If emotion recognition is your focus, address emotions in context, and build into the system ways to educate people about the capability. 

Do not underestimate people's skepticism about the reliability of such usages. And finally, consider using emotional understanding to help people and to connect with others. 

While conducting this research, one participant said, "It's hard to have something be that much in control of your life when you're already not in control as it is, almost like you are in jail when you're at home." Surely, this is not the sentiment we want to inspire. 

When reflecting on humanlike machines, another participant said, half jokingly, "This intimidates me. It's like having a controlling husband." 

Quotes like these are a good indication that we should prioritize designing AI systems that are capable of helping humans, rather than systems that try to be humanlike. 

Thanks for watching. Don't forget to like this video and subscribe. I will see you next week on Tuesday for more AI:UX.