ON BRITNEY SPEARS, ARTIFICIAL INTELLIGENCE, AND AUTOMATION
Recently, Britney Spears has been in the news over the court-ordered conservatorship imposed on her in the wake of her infamous 2008 mental health breakdown. As details of the story emerged, the public reacted with astonishment at an arrangement that essentially limits her freedom to make her own choices and live her life as she sees fit. Her conservators decide, among other things, whether she can work, when she can leave her house, and even how she spends her own money.
Most people reacted to this news with disbelief and shock, partly at the measures themselves but also at how long she'd been subjected to them. People saw the curtailment of her individual choices as inherently unfair and unjust, because it strikes at a person's autonomy or agency: the ability to make choices about one's own life, something rightly considered a fundamental human right.
Likewise, there has been a proliferation of stories about artificial intelligence (AI), highlighting both the promise and the peril it brings. It's long been recognized that the most troubling moral problems associated with certain uses of AI involve privacy, bias and discrimination, and the role of human judgment. Perhaps the most commonly cited example of AI bias is the algorithm known as COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), which is used in some U.S. court systems to help predict how likely a defendant is to reoffend.
Such uses of AI software, in which it essentially decides the fate or future direction of individual human lives with little if any transparency about the methods and algorithms involved, are widely recognized as morally problematic. That's because, in the end, we recognize them as an infringement on individual autonomy, striking at the power to decide for ourselves. It's the same violation of individual agency that people recognize in the case of Britney Spears; in her case the decisions are made by a court-appointed conservator, and in the COMPAS case essentially by AI software.
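To make the idea of algorithmic bias concrete, consider a minimal, purely illustrative audit of a recidivism-risk classifier. The sketch below is not COMPAS itself and uses invented data and group labels; it simply shows how one might compare error rates across groups, the kind of disparity critics have reported in such tools.

```python
# Hypothetical audit of a binary risk classifier (illustrative data only).
# Each record is (group, predicted_high_risk, actually_reoffended).
records = [
    ("group_a", True,  False), ("group_a", True,  True),
    ("group_a", False, False), ("group_a", True,  False),
    ("group_b", False, False), ("group_b", True,  True),
    ("group_b", False, False), ("group_b", False, True),
]

def false_positive_rate(rows):
    """Share of people who did NOT reoffend but were still flagged high risk."""
    did_not_reoffend = [r for r in rows if not r[2]]
    if not did_not_reoffend:
        return 0.0
    flagged = [r for r in did_not_reoffend if r[1]]
    return len(flagged) / len(did_not_reoffend)

for group in ("group_a", "group_b"):
    rows = [r for r in records if r[0] == group]
    print(group, f"false positive rate: {false_positive_rate(rows):.2f}")

# A large gap between groups would signal disparate impact: one group
# being wrongly flagged as high risk far more often than another.
```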
And yet, the potential uses of AI are nearly limitless. There are a growing number of legitimate applications in manufacturing and automation, where AI software can analyze massive data sets and identify patterns that humans can't see, improving many manufacturing and industrial processes. For example, AI is helping robotic systems through better data collection and analysis, which can improve productivity and performance.
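As a rough illustration of that kind of pattern-finding, the sketch below flags unusual readings in a stream of machine sensor data. The readings and the two-standard-deviation threshold are invented for illustration, not taken from any real system.

```python
import statistics

# Illustrative vibration readings from a hypothetical machine sensor.
# The spike near the end is the sort of pattern an operator might miss
# in a large data stream but a simple statistical check can surface.
readings = [0.51, 0.49, 0.52, 0.50, 0.48, 0.53, 0.47, 0.95, 0.50, 0.49]

mean = statistics.mean(readings)
stdev = statistics.stdev(readings)

# Flag any reading more than two standard deviations from the mean
# (the factor of 2.0 is an arbitrary choice for this sketch).
anomalies = [
    (i, value) for i, value in enumerate(readings)
    if abs(value - mean) > 2.0 * stdev
]
print("Anomalous readings:", anomalies)
```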
In the end, it's up to the human designers, engineers, and other stakeholders to figure out which uses of AI are proper and which are not. To that end, industry groups such as the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems have been doing the work of addressing these and other ethical considerations in the design and implementation of AI systems.
Such work is necessary to guide design into the future, so that we can identify and avoid uses of AI with harmful social effects, such as those that arise from facial recognition software, or uses with a coercive or manipulative dimension that undermines human autonomy.
Ultimately AI, like fire, is a tool. So it's up to us, its human caretakers, to make wise decisions about how best to use it, in what circumstances, and what not to do with it.