Privacy protections are best engineered during the early stages of development as we transition to the ambient computing era; doing so requires collaboration between social scientists and software engineers.
“As we develop these [ambient computing] solutions we want them to be ethical, and that goes a long way to making sure that we design our technology for real people who live in real communities. We want to make sure that these technologies are useful and viable for everyone.”
— Heather Patterson, senior research scientist, Privacy and Ethics, Intel
The growing presence of artificial intelligence (AI)—deployed for convenience and utility in our homes, offices, automobiles, and cities—presents a potential threat to the essential principles of individual, group, and community privacy rights, particularly as ambient computing becomes ubiquitous. Creating an equitable balance between the benefits enabled by these solutions and the privacy concerns of those contributing the data requires forethought and planning.
For AI solutions and ambient computing to flourish in coming years, individuals who share personal information across the Internet will need access to reasonable choices and visibility into the processes and policies involved. The technologies that support ambient computing should be designed from the ground up to honor and respect this intent. Designing the interfaces linking humans and machines while maintaining an ethical perspective will help protect individual privacy options and ensure visibility into processes.
Social scientists, privacy scholars, and interface designers at Intel are tackling a fundamental question of the fourth industrial revolution: how to balance the escalating presence of ambient computing and AI-enabled solutions with the privacy concerns and rights of those living in smart homes, commuting into smart cities, and adopting personalized compute technologies. A complex matrix of issues confronts companies actively developing solutions in this sector. These issues include the legal requirements of individuals and corporations that handle personal data, as well as the privacy questions surrounding AI-enabled solutions that track and observe human behavior.
Recognizing that this is the ideal time to establish policies and frameworks to drive the next generation of technology, Intel is approaching this challenge from multiple angles, addressing the human concerns and the legal protections that every individual needs, as well as the technological issues that arise as we embed intelligence into a broad array of objects and surroundings.
The term ambient computing first took hold at the tail end of the last century, but its promise has only come into focus in the last few years. A slate of enabling technologies—including the Internet of Things, machine learning, analytics, natural language processing, visual computing, neural networks, virtual reality, augmented reality, and more—has finally moved from the theoretical to the practical. Complementary advances in computer hardware capabilities and in software architectures and algorithms have spurred artificial intelligence projects, many of which have been nurtured and supported by the Intel® AI Developer Program and the collaborative contributions of Intel® Developer Zone (Intel® DZ) members.
“Ambient computing covers applications incorporating machine learning and other forms of artificial intelligence and is characterized by human-like cognitive and behavioral capabilities and contextual awareness. It creates a digital environment in which companies integrate technology seamlessly and invisibly into everything around us, maximizing usefulness while minimizing demands on our attention.”1
— Gary Grossman, futurist and communications marketing executive, Edelman
As the presence of ambient computing systems becomes more pervasive—managed by actors ranging from governments to private enterprise—mechanisms will increasingly be needed to place limits on the disclosure of personal information, including how long this information can legally be retained. In an era when people’s activities, monetary transactions, preferences, physical locations, and so on will be more easily tracked, recorded, and processed through analytics algorithms, the need for selective filtering and protections against invasive monitoring becomes essential to personal privacy.
“AI has no morals and ethics, but—used wrongly—it can amplify our biases.”2
— Chris Heilman, senior program manager, Intel
Human interfaces serve as the conduit to AI-guided services, capturing and relaying information from a variety of sources. Toolkits for developing these interfaces encompass natural language processing, computer vision, sentiment analysis, speech conversion and analysis, and moderation of group discussions. From an ethics perspective, one imperative that should guide interface design is transparency: people using these interfaces, willingly providing personal data, must be given the opportunity to understand what information is being collected, where it goes, how it will be used and by whom, and how long it will be stored.
The pace of technology change can sometimes be so rapid that there is little time to assess the risks, consider the societal impact, and weigh the prospective consequences of deploying new tools and technologies. The scientists, engineers, futurists, planners, and system architects tasked with launching solutions and bringing them to market are typically, by nature, focused on the positive aspects of solutions, motivated by the drive to create something new and exciting.
For those emerging technologies, however, that have sweeping influence on a wide range of human activities, oversight and critical analysis of the human factors involved should play an important part in the design process, rather than something that is added on after the solution is nearly complete. This applies in particular to technologies that enable ambient computing: Internet of Things solutions, systems using artificial intelligence, context-aware mobile computing, and smart home implementations.
“Most of us view personalization and privacy as desirable things, and we understand that enjoying more of one means giving some of the other. To have goods, services, and promotions tailored to our circumstances and desires, we need to divulge information about ourselves to corporations, governments, or other outsiders. Such tradeoffs have always been part of our lives as consumers and citizens. But now, thanks to the ‘net, we’re losing our ability to understand and control these tradeoffs—to choose consciously and with awareness of the consequences, what information about ourselves we disclose and what we don’t. Incredibly detailed data about our lives are being harvested from online databases without our knowledge, much less our approval.”3
— Nicholas Carr, technology author
Collectively, we’ve learned from experience that successful technology models factor in critical concerns early in the development process. Security is the obvious example: a consideration that is often flawed if not integrated into a product’s blueprints, whether at the level of integrated circuits, hardware platforms, software frameworks, network operations, or storage devices. A single weak point in the security matrix can become the vulnerability that undermines an entire solution.
To avoid this type of design oversight in AI-powered consumer technologies, particularly in those solutions being developed for the smart home, Intel has heavily invested in strategy and product requirement definition research conducted by senior Intel social scientists. This research team includes privacy scholar Heather Patterson, cultural anthropologist Alexandra Zafiroglu, experience architect Faith McCreary, and user experience researcher Yen-ning Chang.
The extensive body of work completed by these social scientists ranges from ethnography in North America, Europe, and Asia to large-scale qualitative surveys in the US and a multicountry quantitative survey. This research generated the idea of homes shifting from black boxes (in which service providers, retailers, utilities, and other companies know little detail of the exact activities happening in homes) to glass houses (as householders adopt technologies that enable ambient computing). In the glass-houses vision, householders regulate the types of data that are created and stored. They also determine how and when information is shared and used to provide services. Within this framework, privacy becomes a key concern for technology design and adoption.
“Our homes are becoming instrumented glass houses where even the most intimate and personal acts may leave data footprints that companies providing services (and potentially others) can access. As homes become instrumented with data-generating technologies, existing information boundaries will be tested, and householders will take on the burden of creating new boundaries on information about their home lives.”
— Alexandra Zafiroglu, principal engineer, Internet of Things, Intel Corporation
The Privacy in Glass Houses project surveyed the sentiments and concerns of smart home-savvy householders across three geographical regions (the United States, Germany, and People’s Republic of China) with respect to smart home adoption.
“Overall,” Heather explained, “we found that over three-quarters of research participants flagged privacy as a significant barrier to smart home adoption, second only to concerns about the utility. This was true in all three geographical regions we surveyed. When we drilled down a little more, we learned that our participants’ privacy concerns clustered into three areas. First, we saw concerns about the creepiness of being watched or surveilled—of people not knowing when they might be watched or listened to, or by whom. This speaks to the importance of clearly communicating monitoring and data collection practices to end-users—for example, when cameras or microphones or other ambient computing tools are active.”
“Second,” Heather continued, “was a concern about exposing particular aspects of their lives to outsiders, such as behaviors that may occur in bathrooms, bedrooms, or children’s rooms. This teaches us that technology designers need to proceed more cautiously when working in these spaces—that a great deal of nuance must be brought to technology designs. And, third, participants expressed concern about diminishing personal autonomy—of smart services making decisions on their behalf that they would rather make for themselves, even if that labor was intended to make life more comfortable or efficient. This indicates that designers need to be sensitive to creating affirmative opportunities for end-users to stay in the loop; to be active participants with technology, rather than the objects of it.”
Figure 1 shows the primary smart home adoption concerns, ranked and linked to individual geographical regions. Blue represents the US; orange, the People’s Republic of China; and gold, Germany.4
Figure 1. Top concerns that are slowing adoption of smart home technology.
These projects have produced several published papers.
The Intel social scientists studying ambient computing developments—led by Heather, Alexandra, and Faith—worked closely with Mohammad Reza Haghighat (Moh), a senior principal engineer at Intel, to generate prototypes directly tied to the research findings. Moh has been actively working on the core technologies necessary to enable ambient computing around the world, including interoperability and discoverability.
The discussion between the social scientists and the technologists at Intel moved toward exploring a device that could tell you which agencies or organizations were collecting information about you within your environment. “We had envisioned the home as the unit, not the office,” Heather said, “but this would work in any environment.”
The privacy agent would provide alerts whenever an outside entity is collecting information about a person or the household. Within a controlled environment, such as a home, the agent would provide the opportunity to say, for example: You can collect the camera data from my bedroom between the hours of 10 am and 4 pm, but not at any other time of day. Controls might provide a mute button on the cameras in the house or on audio devices, such as always-listening virtual voice assistants. Sharing could be blocked entirely during specified times of day. Individuals might even have the option to go back and delete earlier information, if that option were made possible legally and technically. Information collected by any services that come into the home would carry an expiration date.
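The agent’s behavior described above amounts to rule-based, deny-by-default access control over household data sources. The sketch below illustrates that idea in Python; the class and rule names are hypothetical, not an Intel API.

```python
from datetime import time

# Illustrative sketch of the privacy-agent concept: rules grant collection
# rights per data source, and anything not explicitly granted is denied.
class PrivacyAgent:
    def __init__(self):
        self.rules = {}  # data source -> (start, end) permitted window

    def allow_collection(self, source, start, end):
        """Permit collection from `source` only between `start` and `end`."""
        self.rules[source] = (start, end)

    def may_collect(self, source, at):
        """Deny by default; permit only inside an explicitly granted window."""
        window = self.rules.get(source)
        if window is None:
            return False
        start, end = window
        return start <= at <= end

agent = PrivacyAgent()
# "You can collect the camera data from my bedroom between 10 am and 4 pm."
agent.allow_collection("bedroom_camera", time(10, 0), time(16, 0))

print(agent.may_collect("bedroom_camera", time(12, 30)))  # inside the window
print(agent.may_collect("bedroom_camera", time(22, 0)))   # outside the window
print(agent.may_collect("kitchen_mic", time(12, 30)))     # no rule: denied
```

The deny-by-default stance mirrors the aspiration in the research: the householder, not the service provider, is the party who must affirmatively open an information flow.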
“Of course,” Heather said, “there are complex legal and technical barriers to implementing these kinds of solutions. In our planning, we were being creative and aspirational. If anything were possible, how might we want to reshape the ways in which information flows through a system, prioritizing the people who are the subjects, senders, and sometimes recipients of that information? What would this model look like?”
The future of ambient computing and the emergence of wide-scale AI solutions can be put in perspective and understood by tracing the history of computing over several decades. The infrastructure foundations being established today to enable ambient computing—emphasizing interoperability and standards-based communication—evolved over two prior computing eras, during which device connectivity expanded from a handful of connections to an environment of tens of billions of connected devices. The challenge to universal device communication in an era of ambient computing becomes establishing a standardized framework that allows any device to communicate its nature, purpose, and capabilities to any other device.
In talking with Moh about the paradigm shift that ambient computing represents, he points to the success of the open web platform as a medium of communication across the Internet. “What made the web the successful enterprise that it is today—and made it the foundation of the Internet economy—is openness and universal standardization.”
“If you go back to the history of computing,” Moh continued, “we had the Internet from the ’70s. We had the networking protocols in the late ’60s and early ’70s. But up until the mid ’90s nothing major came from that. I have collected the data on connected devices in the world throughout history. Looking at the data (see Figure 3), you see that in 1983 (which is almost a decade after the Internet), there were about 500 connected devices.” As shown in Figure 2, the number of Internet hosts has also risen sharply since 1981.
Figure 2. Growth in the number of Internet hosts and web hosts.
Source: * https://ftp.isc.org/www/survey/reports/2011/01/
Source: Intel Science and Technology Center for Pervasive Computing
Figure 3. Number of connected devices, projected through 2020 (in billions).
As you come to 1994, Moh points out, the rise becomes like a hockey stick and just jumps—not linearly, but quadratically.
In the earlier stages of computer history, in the ’70s and mid ’80s, you could connect computers to each other in a basic way, using the file transfer protocol (FTP) to access machines and download files. Messaging was available and in use, primarily in government and university settings. In these early days, however, interoperability was lacking: there was no universal higher-level language, and low-level communication protocols were basic and limited.
*Courtesy of Allen Wirfs-Brock: http://www.wirfs-brock.com/allen/posts/622
Figure 4. The path toward ambient computing.
As the web evolved, we gained interoperability and discoverability. “Discoverability,” Moh said, “in the sense that when you put up a page today, you don’t have to do anything. Web crawlers come and find you and they know what your page is all about. They can read it and see what information you have, how you are related to other pages, based on the links to other pages and the links they have connected to you. This foundation of search technology and crawling has become very successful.”
With active discoverability engaged, before the user even types a query, the answer and where it can be found are already accessible. This is something that is missing from IoT. Interoperability and discoverability have generated a great deal of discussion within standardization bodies, including the World Wide Web Consortium (W3C), the same consortium that maintains web standards. Its Web of Things Working Group is working to counteract fragmentation in IoT solutions, a step toward devising standards that initially support IoT solutions and then lead toward ambient computing environments.
The Open Connectivity Foundation (OCF) is another organization working to enable secure and reliable device discovery and connectivity through a standard communications platform, open source framework, and bridging specification. “The goal,” Moh said, “is to come up with a universal, agreed-upon standard, in which you describe properties about things. Once you have that, you can also describe other aspects of this information, deriving details about an individual that can interact with a personalized privacy agent that enforces policies about what personal information can or cannot be disclosed about the individual.”
“There are huge, transformative opportunities not only for mobile operators but for all businesses if we can overcome the fragmentation of the IoT. As stewards of the Open Web Platform, W3C is in a unique position to create the royalty-free and platform-independent standards needed to achieve this goal.”
— Jeff Jaffe, W3C CEO, speaking at the Mobile World Congress 2017
The ultimate vision of ambient computing—while ambitious and far reaching—depends on an unprecedented degree of collaborative work to organize and apply universal standards that promote maximum interoperability across sensors, mobile devices, cameras, AI systems, network gateways, and intelligent objects in homes, automobiles, factories, and citywide infrastructures. Groundwork in IoT connectivity has made inroads into standardization but does not address the magnitude of what is needed to make the ambient computing vision mainstream. Building an extensive mesh of interoperable smart things opens a panorama of opportunities to serve human needs, improve daily lives, and bring pleasure and convenience to many different activities.
“Right now, developers write something with hard code, the assumption being that this one application wants to interact with that one device. But, what are the types of information needed for interaction? Where are the properties making it clear what the device can do? What we are lacking is a universal thin layer that every smart thing would implement. It would say, these are my properties. These are my actions. And these are my states. Everything would then exchange this information, across the Internet, so that when smart things want to talk to each other, they can. All of these things become interoperable.”
—Moh Haghighat, senior principal engineer, Intel
Moh’s team prototyped a device based on the Google* Physical Web, a compact, lightweight device running on a coin cell battery that can communicate information about things. “Every second, it just broadcasts a URL. That is all that it does. Inside that URL, you can put information about that thing. Then it says, for example, ‘Oh, this is a dog and it is my dog and my name is Moh and this is my phone number.’ Everywhere this dog goes, it has one of these things around its neck. If the dog gets lost, someone can sense it with a phone and they can call you.”
“Or, for another example,” he continued, “if it is a vending machine, it can have a page with information about what is currently in the vending machine. Because the vending machine has actuators, you can select it on your phone and say, ‘Oh, I want a cold can of soda,’ and then you click and pay and everything is taken care of. Essentially, your vending machine becomes part of the web. It becomes discoverable.”
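The beacon Moh describes behaves much like a Physical Web (Eddystone-URL) beacon, which compresses a URL into a payload small enough to broadcast every second. The sketch below is a plausible reconstruction of that encoding, not the team’s actual code; the example URL is hypothetical and the byte tables are abbreviated from the public Eddystone-URL format.

```python
# Encode a URL into a compact advertisement frame, Eddystone-URL style:
# one frame-type byte (0x10), one TX-power byte, one URL-scheme byte,
# then the URL with common substrings replaced by single expansion bytes.
SCHEMES = {"http://www.": 0x00, "https://www.": 0x01,
           "http://": 0x02, "https://": 0x03}
EXPANSIONS = {".com/": 0x00, ".org/": 0x01, ".net/": 0x03,
              ".com": 0x07, ".org": 0x08, ".net": 0x0A}

def encode_url(url, tx_power=-20):
    """Return the broadcast frame for `url` (raises if it cannot fit)."""
    frame = bytearray([0x10, tx_power & 0xFF])
    for prefix in sorted(SCHEMES, key=len, reverse=True):
        if url.startswith(prefix):
            frame.append(SCHEMES[prefix])
            url = url[len(prefix):]
            break
    else:
        raise ValueError("URL must start with http:// or https://")
    while url:
        for token in sorted(EXPANSIONS, key=len, reverse=True):
            if url.startswith(token):
                frame.append(EXPANSIONS[token])  # one byte for ".com/" etc.
                url = url[len(token):]
                break
        else:
            frame.append(ord(url[0]))  # ordinary character, one byte
            url = url[1:]
    if len(frame) > 20:  # must fit in a small BLE advertisement payload
        raise ValueError("URL too long to broadcast")
    return bytes(frame)

frame = encode_url("https://example.com/dog")
print(len(frame), frame.hex())
```

The tight length budget is why the prototype broadcasts only a URL: anything richer (the dog’s name, the owner’s phone number) lives on the page behind the URL, not in the beacon itself.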
Moh explained that privacy could be a central part of this approach. Things are equipped to describe their own properties and their own actions. The vending machine can identify itself as a vending machine, broadcast its description, and itemize its range of actions. It could also carry information about privacy, including whether the device is certified. When your personalized privacy assistant sees that device, it can verify whether the device has been certified—whether it is truly genuine—when it gives you information about the way it uses your information.
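Putting the thin layer and the certification check together, each smart thing could publish a self-description that any consumer parses. The sketch below is illustrative; the field names are hypothetical, though the shape loosely resembles a W3C Web of Things Thing Description.

```python
import json

# A minimal sketch of the "universal thin layer": a smart thing lists its
# properties, actions, and states, plus hypothetical privacy metadata that
# a personalized privacy assistant could inspect before trusting the device.
vending_machine = {
    "name": "lobby-vending-machine",
    "properties": {"inventory": ["cola", "water"], "temperature_c": 4},
    "actions": ["dispense", "accept_payment"],
    "states": ["idle", "dispensing", "out_of_service"],
    "privacy": {"certified": True, "data_retention_days": 30},
}

description = json.dumps(vending_machine)  # what the device would publish

def can_perform(thing_json, action):
    """Any consumer can parse the description and discover capabilities."""
    return action in json.loads(thing_json)["actions"]

def is_certified(thing_json):
    """A privacy assistant's check: trust only certified data practices."""
    return json.loads(thing_json).get("privacy", {}).get("certified", False)

print(can_perform(description, "dispense"))
print(can_perform(description, "play_music"))
print(is_certified(description))
```

Because the description is plain structured text exchanged over the web, no application needs hard-coded knowledge of any particular device, which is exactly the interoperability gap Moh identifies.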
Moh’s project team completed the first prototype of this model with the help of two interns, one from Carnegie Mellon University and the other from UC Irvine. One intern focused on privacy policy and the other on the implementation. The prototype used the Physical Web, as well as a mobile phone running Android* and macOS*, and would pass information about things to identify what they can do and what you could use them for.
The research work and prototyping completed by the team during this period resulted in two published papers.
“The key point about this,” Moh noted, “is that interoperability, discoverability, and privacy are a major part of the ambient computing vision. By addressing privacy concerns, you remove the fears that people might have about the technology. If people don’t trust it, they’re not going to use it.”
Where do the latest manifestations of AI fit into the discussion of privacy and ambient computing? Machine learning, deep learning, reinforcement learning, and related disciplines are the technologies currently shaping AI, and they play important roles in supporting the ambient computing model. With privacy considerations built into the learning models, protections could be deeper and integral to the sharing of information.
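The article does not name a specific technique for building privacy into learning pipelines, but one widely used approach is differential privacy, sketched minimally below. Calibrated Laplace noise masks any single household’s contribution to an aggregate statistic; the usage data here is invented for illustration.

```python
import random

def laplace_noise(scale):
    """Sample Laplace(0, scale) as the difference of two exponentials."""
    return random.expovariate(1 / scale) - random.expovariate(1 / scale)

def private_mean(values, lower, upper, epsilon):
    """Differentially private mean of values clipped to [lower, upper]."""
    clipped = [min(max(v, lower), upper) for v in values]
    true_mean = sum(clipped) / len(clipped)
    # One record can shift the mean by at most this much (its sensitivity).
    sensitivity = (upper - lower) / len(clipped)
    return true_mean + laplace_noise(sensitivity / epsilon)

# Hypothetical example: daily hours of smart-speaker use in 1,000 homes.
random.seed(0)
usage = [random.uniform(0, 6) for _ in range(1000)]
print(round(private_mean(usage, 0, 6, epsilon=1.0), 2))
```

The appeal for ambient computing is that the service provider learns the aggregate it needs (average usage) while the noise makes it statistically difficult to infer whether any one household’s data is in the dataset.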
The latest Glass Houses work by Intel social scientists Alexandra and Yen defines home-specific sensing inputs and inferencing tailored to home activities and conditions, and to householders’ values and goals, so that the ambient compute and broader AI capabilities Intel creates will enable experiences valued by—and not threatening to—householders. By prioritizing sensing and sense-making capabilities that follow the social rules and cultural values of home life detailed in Alex and Yen’s work, Intel will help ensure that AI innovation balances the tangible benefits enabled by these solutions with the privacy rights of those creating and sharing their data with service providers.
Figure 5. Machine learning in the cloud.
“My mandate at Intel has always been to bring the stories of everyone outside the building inside the building and make them count. You have to understand people to build the next generation of technology.”5
— Genevieve Bell, ethnographer, Senior Fellow, Intel
2. Heilman, Chris. Artificial Intelligence for Humans. Intel. April 2018.
3. Carr, Nicholas. Utopia is Creepy: And Other Provocations. W.W. Norton and Company. 2016.
4. Patterson, Heather M., Alexandra Zafiroglu, Yen Chang, and Faith McCreary. Privacy in Glass Houses research portfolio. Intel. Presented at DevCon 2017.
5. Singer, Natasha. Intel’s sharp-eyed social scientist. Financial Review. 2014.
Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice.
Notice revision #20110804