
AI UX: Design Socially Trusted and Trustworthy Platforms


A socially trusted and trustworthy platform is free of security and privacy concerns and is designed to be capable of protecting its users' information. This episode of AI UX provides some insight into how and why you should design with this in mind.

AI UX YouTube* Playlist

Subscribe to the YouTube* Channel for Intel® Software

Harari, Y.N., 2017. Are We About to Witness the Most Unequal Societies in History? The Guardian, June 23, 2017.

Loi, D., 2018. Intelligent, Affective Systems: People's Perspective & Implications. Proceedings of CHIuXiD 2018, Yogyakarta, Jakarta, Malang, Indonesia.

Loi, D., Raffa, G., & Arslan Esme, A., 2017. Design for Affective Intelligence. 7th Affective Computing and Intelligent Interaction Conference, San Antonio, TX.

Bostrom, N., & Yudkowsky, E., 2014. The Ethics of Artificial Intelligence. In The Cambridge Handbook of Artificial Intelligence. Cambridge University Press.

Chen, S. AI Research Is in Desperate Need of an Ethical Watchdog. Accessed October 14, 2017.

Gershgorn, D., 2017. The Age of AI Surveillance Is Here. Quartz.

This is AI UX, a miniseries focused on ten guidelines created to assist you in the design and development of AI-based systems. I'm Daria Loi, an Intel researcher. And today, I will give you five tips to follow guideline number three: to design socially trusted and trustworthy platforms.

A platform is socially trusted and trustworthy when it is free of security and privacy concerns and is both designed and recognized as capable of protecting its users' information. Here are a few tips to achieve such a platform.

First, ensure your platform addresses privacy and hacking concerns upfront. This can be achieved by offering data protection services and warranties bundled in the product.

Second, build a checks-and-balances mechanism into the fabric of the system, along with asymmetric encryption of all data.
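To make the checks-and-balances idea concrete, here is a minimal, hypothetical sketch of an access gate in which no single component can release user data on its own. All class and reviewer names are illustrative assumptions, not a real API; the encryption itself would come from a vetted cryptography library, not hand-rolled code.

```python
# Hypothetical checks-and-balances gate: releasing a piece of user data
# requires approval from a quorum of distinct, independent reviewers.

class DataAccessGate:
    """Releases a record only when enough independent parties approve."""

    def __init__(self):
        self.approvals = {}  # request_id -> set of approving parties

    def approve(self, request_id, reviewer):
        # Record one reviewer's approval; duplicates collapse in the set.
        self.approvals.setdefault(request_id, set()).add(reviewer)

    def is_released(self, request_id, quorum=2):
        # The balance: access needs a quorum of distinct reviewers, so no
        # single subsystem can unilaterally expose user data.
        return len(self.approvals.get(request_id, set())) >= quorum

gate = DataAccessGate()
gate.approve("req-42", "privacy-service")
print(gate.is_released("req-42"))   # one approval is not enough
gate.approve("req-42", "user-consent-manager")
print(gate.is_released("req-42"))   # quorum reached
```

The design choice here is that the gate counts distinct parties, not raw approvals, so a compromised component re-approving its own request gains nothing.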

Third, data types should be separated. Only the user's system should have the ability to assemble data into a cohesive picture. Think for instance of existing brokerage models used to pay for purchases without providing credit card details to the seller. In those cases, a product can be purchased and the seller can receive payment because of the intermediary. The same thinking could be applied to data, meaning that an application could provide a service to the user by dealing with a data broker instead of getting access to all user data to perform the action.
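The brokerage model above can be sketched in a few lines: the application asks the broker a narrow question and receives only the answer, never the raw user data. This is a hypothetical illustration; the class and field names are assumptions for the example, not a real service.

```python
# Hypothetical data-broker sketch: the broker holds the user's data and
# answers narrow queries, so the application never sees the data itself.

class DataBroker:
    """Holds user data; answers specific questions instead of sharing it."""

    def __init__(self, user_data):
        self._user_data = user_data  # never handed out directly

    def is_over(self, age_limit):
        # Yes/no answer without revealing the user's actual age.
        return self._user_data["age"] >= age_limit

class StreamingApp:
    """Consumes broker answers; the raw profile never reaches it."""

    def __init__(self, broker):
        self._broker = broker

    def can_show_mature_content(self):
        return self._broker.is_over(18)

broker = DataBroker({"age": 34, "address": "...", "card": "..."})
app = StreamingApp(broker)
print(app.can_show_mature_content())  # the answer, not the data
```

Just as the payment intermediary keeps the card number away from the seller, the broker keeps the address and card fields entirely out of the application's reach.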

Fourth, make AI motivations and actions transparent, and offer multiple ways for users to provide feedback to the system. The system should also explain how and when that feedback will be acted upon.
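One way to honor that commitment is to return an explicit, user-visible plan with every piece of feedback. This is a minimal hypothetical sketch; the channel names, review window, and wording are assumptions for illustration only.

```python
# Hypothetical feedback channel that records each item together with an
# explicit statement of how and when it will be acted upon.

from datetime import date, timedelta

class FeedbackChannel:
    def __init__(self):
        self.items = []

    def submit(self, text, channel="in-app"):
        # Store the feedback plus a user-facing plan, so the system can
        # show people exactly what happens to what they told it.
        item = {
            "text": text,
            "channel": channel,  # e.g. "in-app", "voice", "email"
            "review_by": date.today() + timedelta(days=7),
            "plan": "reviewed by the personalization team within 7 days",
        }
        self.items.append(item)
        return item

fb = FeedbackChannel()
receipt = fb.submit("Stop recommending horror movies", channel="voice")
print(receipt["plan"])
```

Supporting several values of `channel` reflects the tip's point about offering multiple feedback paths, while the returned receipt is what makes the system's follow-up transparent.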

Fifth, explain where data is stored in an accessible and transparent way: where it will go, how long it will be kept, who will be able to access it, and why.
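Those four questions can be captured in a small, per-dataset disclosure record that the platform shows its users. The field names and example values below are illustrative assumptions, not a real schema.

```python
# Hypothetical disclosure record answering: where is the data stored,
# for how long, who can access it, and why.

from dataclasses import dataclass

@dataclass
class DataDisclosure:
    data_type: str
    stored_where: str
    retention_days: int
    accessible_to: tuple
    purpose: str

    def summary(self):
        # One plain-language sentence the user can actually read.
        return (f"{self.data_type}: kept in {self.stored_where} for "
                f"{self.retention_days} days; visible to "
                f"{', '.join(self.accessible_to)} ({self.purpose}).")

voice = DataDisclosure(
    data_type="voice recordings",
    stored_where="on-device storage",
    retention_days=30,
    accessible_to=("you", "the on-device assistant"),
    purpose="to improve wake-word accuracy",
)
print(voice.summary())
```

The point of the sketch is that each answer is a required field: a dataset cannot exist in the system without stating its location, retention period, audience, and purpose.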

There are so many opportunities to create trusted and trustworthy platforms that provide peace of mind and ensure that data is safe. One of my interviewees asked, "It's always listening. How secure is it? What do they do to protect you?" When he says "they," he means us, the designers. People are trusting us to design with their safety and protection in mind.

Thanks for watching. Don't forget to like this video and subscribe. I will see you next week on Tuesday for more AI UX.