Artificial intelligence (AI) has already proven revolutionary in industrial applications such as automated farming and agriculture. For example, IoT devices collect weather, soil, plant, and nutrient data points, and AI systems process this data for informed decision making. In fact, AI-outfitted tractor and drone controls reliably boost crop yields, reduce waste, and make both agribusiness and family-scale farming more sustainable. Similar gains are being realized in manufacturing. Visit designworldonline.com/design-guide-library for our cybersecurity installment detailing other AI solutions for connected enterprises.
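To make that data-to-decision flow concrete, consider a loose, entirely hypothetical Python sketch of the kind of rule an AI-assisted irrigation controller might automate. The field names and thresholds below are invented for illustration and are not drawn from any specific product.

# Hypothetical sketch: combine IoT field readings into an irrigation decision.
# All field names and thresholds here are invented for illustration.
from dataclasses import dataclass

@dataclass
class FieldReading:
    soil_moisture_pct: float   # volumetric soil moisture, percent
    rain_forecast_mm: float    # forecast rainfall over the next 24 hours
    crop_stage: str            # e.g. "germination", "vegetative", "flowering"

def should_irrigate(reading: FieldReading) -> bool:
    """Return True when the soil is dry and little rain is expected."""
    moisture_floor = 30.0 if reading.crop_stage == "flowering" else 22.0
    return (reading.soil_moisture_pct < moisture_floor
            and reading.rain_forecast_mm < 5.0)

print(should_irrigate(FieldReading(18.5, 1.2, "vegetative")))  # True

Real systems replace the hand-tuned thresholds with models trained on historical yield and weather data, but the basic shape of the pipeline is the same: sensor readings in, an actionable decision out.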
But many — including Quad9 chief security officer Danielle Deibler — are focused on how AI is already a tool for cybercrime perpetrated on individuals as well as general cyber … creepiness. Quad9 is an open domain name system (DNS) recursive service aimed at giving individuals and organizations free security and online privacy.
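For readers curious what using such a recursive resolver looks like in practice, the short Python sketch below (assuming the third-party dnspython library) sends a lookup through Quad9's published public resolver addresses, 9.9.9.9 and 149.112.112.112; the service is designed to refuse to resolve domains it flags as malicious, so a blocked lookup simply fails.

# Minimal sketch of querying Quad9's public recursive resolvers.
# Assumes the third-party dnspython package (pip install dnspython).
import dns.resolver

resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["9.9.9.9", "149.112.112.112"]  # Quad9 primary/secondary

answer = resolver.resolve("example.com", "A")
for record in answer:
    print(record.address)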
“We’re a privacy-focused nonprofit organization offering an opt-in service, and users don’t disclose who they are to us. Though we don’t have any information that’s even remotely personally identifiable, even at an organizational level, we do know that manufacturers, large municipalities, government organizations, educational institutions, and IT administrators use our service,” Deibler explained in a recent interview with Design World.
Especially concerning to Deibler are AI-generated emulations of living persons — complete with voice and realistic avatar — and the unique threats these pose to personal cybersecurity. “We’re not really thinking about these avatars now. But how might we be represented via AI depictions extracted from public sources on the internet — and what ownership do we have over our own image, thoughts, and speech?”
It’s enough to make a gal want to delete her entire social-media presence.
But in the U.S., we’re not seeing legislation to protect identities. “Consider how we have a data breach every other week, and people just take it in stride … they’ve gotten used to it. They say, ‘I’ll just buy dark-web monitoring and shut down the ability for people to open credit cards in my name.’ If we can’t protect a basic social-security number though, we’re certainly not going to be able to protect against what commercial organizations are going to be able to do with images and things like that,” Deibler adds.
Unaddressed ethics issues surrounding openly available AI also abound. “These technologies get released and enter public consciousness and then everybody realizes, ‘Oh my God, this is a huge deal. We should do something about this,’” says Deibler.
Society must define how we want to leverage this type of technology because AI will drive the next industrial revolution … except the changes it spurs will come far faster than those of past revolutions, Deibler asserts. Should people need to give permission before their content is used as training data for a given model? Should AI-generated content be required to be labeled as such?
“Having AI show me 15 ways to visualize something with a dataset behind it is a lot more useful to me than fake pictures of ex-presidents — because with those all you’re doing is making me try to figure out what’s real and what’s not,” says Deibler. “Instead, give me something that really adds to my productivity by leveraging an existing dataset that I know is pretty good.”
Indeed, there’s much more that can be done to make AI beneficial for humanity — with well-designed rollouts rather than what we have now: surprise drops that hit the news cycle and suddenly change everything while we have nothing in place to manage them.