Imagine a smart city. Maybe you have a favorite city in mind that could manage its traffic in a better, more efficient manner. Maybe the smart city you imagine is more futuristic, drawn from a video game or your own imagination. In science fiction, artificially intelligent city managers are sometimes evil and sometimes benevolent. As real-world artificial intelligence becomes more heavily discussed (and, depending on who you ask, closer to reality), people are starting to ask: where is the “soul” in the machine when it comes to AI and the Internet of Things?
In a seminar hosted by the Institute of Electrical and Electronics Engineers (IEEE), an industry association, IoT and AI professionals addressed this question and discussed how AI and IoT could work together in the future. The full presentation can be found here.
Panelists B.C. “Heavy” Biermann, an educator and artist, and Heather Schlegel, a futurist and social scientist, sat down with moderator Jay Iorio, the Innovation Director at IEEE. They discussed the moral questions of living in a world overseen by a decision-making AI manager. What does it mean for personal identity if everyone is “plugged in”? Is human augmentation the natural next step after fitness trackers? To what degree do people need to protect their personal data?
One of the central questions was whether design can be inherently ethical, and how ethical design will need to be managed in an age of artificial intelligence. Schlegel said this comes down to how people use the technology, which is often not the way its designers expected: designers need to watch how people use their products in the real world and be open to solving those users’ problems.
As artificial intelligence becomes smarter, though, its governance might move out of the maker’s hands entirely. Organizations like OpenAI are already working on standards and best practices for AI, and Schlegel recommended a similar approach: a standards organization that can enable healthy competition between companies. Biermann recommended involving the UN in order to create a global roadmap.
“We don’t quite have the answers yet in terms of long-term social cognitive consequences of AR, VR and AI,” Biermann said.
“We’re going to have to re-think what ethics needs with this technology,” Schlegel said. Who owns an artificial intelligence once it gains autonomy? We haven’t had to find out yet, although some of the questions the panelists posed in relation to true artificial intelligence apply at a smaller scale to the smartphones and Twitter bots of today. Artificial systems need to get both smaller and more responsive to human needs, the panelists said. For example, ads might play only when a consumer has signaled that they’re actively shopping for something. Effectively, “shopper” and “non-shopper” would be two different digital identities for the same individual person.
But the future of AI could go beyond targeted advertising and personal assistants. Schlegel said that people need to aim higher – and that after we stop “infantilizing” the technology, we’ll find completely new ways to use it.