
AI UX: Ethics and Brand Trust


This episode explores why trust and ethics must play a key role in the development of AI systems.

AI UX YouTube* Playlist

Subscribe to the YouTube* Channel for Intel® Software

Additional Resources:

Loi, D. 2018. Intelligent, Affective Systems: People's Perspective & Implications. Proceedings of CHIuXiD 2018, Yogyakarta, Jakarta, and Malang, Indonesia.

Loi, D., Raffa, G., & Arslan Esme, A. 2017. Design for Affective Intelligence. 7th Affective Computing and Intelligent Interaction Conference (ACII), San Antonio, TX.

Bostrom, N., & Yudkowsky, E. 2014. The Ethics of Artificial Intelligence. In The Cambridge Handbook of Artificial Intelligence. Cambridge University Press.

Chen, S. 2017. AI Research Is in Desperate Need of an Ethical Watchdog. Retrieved 14 October 2017.

PwC. 2017. Global Artificial Intelligence Study: Sizing the Prize.

Gershgorn, D. 2017. The Age of AI Surveillance Is Here. Quartz.

This is AI UX, a miniseries focused on ten guidelines created to assist you in the design and development of AI-based systems. I'm Daria Loi, an Intel researcher. Today, I will talk about the first guideline: take a firm, unambiguous ethical stand, and be a trusted brand.

Intelligent systems have been, and will continue to be, subject to intense scrutiny, and that scrutiny will only grow as mainstream awareness, standards, and governmental mandates increase. Because of this, trust plays a key role when it comes to AI systems.

When I was conducting research and interviews, subjects often expressed that they would only purchase a product if it was from a company they trusted—a company with a proven record of keeping things private, with no security breaches or scams. Brand trust has the power to make or break someone's willingness to consider a smart system. Earning a customer's trust comes with important responsibilities.

Imagine a scenario in which you convince yourself to purchase a product from a company you trust, but later on you realize that the product has broken your trust. You would be disappointed primarily in the company, not the product. Your negative association would be with the brand.

When it comes to AI, it is important that you take a firm, unambiguous ethical stand in order to be a trusted brand. Companies and individuals must not only promote ethical practices; they must build them into what they design. You must be firm in your ethics and ensure that they are present throughout your work.

At an individual level, if you have clarity on your own ideology and ethical stand around AI systems, then you are better equipped to take action when needed. Imagine, for instance, finding yourself in a brainstorming meeting during which a colleague proposes ideas that appear to conflict with your belief system. If you have clarity on your own ethical stand, you are better equipped to counterbalance the discussion in clear, uncompromising, articulate, yet productive ways.

Ultimately, the responsibility to develop healthy AI futures does not rest only on the shoulders of companies and institutions. It also rests on the shoulders of individuals, individuals like you and me, who must take on this responsibility proactively and daily. Thanks for watching. Don't forget to like this video and subscribe. I will see you next week, on Tuesday, for more AI UX.