While AI has been talked about for some time, most workers haven’t had to interface with it much, and when they are forced to answer ethical questions alone, some will inevitably fail, and you’re left with a brittle company. Finger-pointing and firings abound when legal and societal consequences crop up. Which is why, like most things in life, the best remedy is to be proactive: create policies and regulations to prevent mistakes and misunderstandings.
At the moment, AI regulations are the Wild West. But technology companies are doing their due diligence to avoid the same mistakes as Silicon Valley social media companies, who charged ahead with the notion they’d build now, regulate later. Much like Einstein’s letter to President Roosevelt to warn of the dangers of the Nazi’s work to create an atomic bomb, technology companies are coming to congress with a plea: help us regulate AI.
Until we have those regulations in place (and even after), every company will need to address ethical concerns from a structural level. And the best tools to do so are an ethics board and constitutional models. Before diving into the role of an ethics board and constitutional models at your enterprise, let’s examine what the private and public sector is already doing to promote ethical AI.
The Latest Developments in AI Ethics
World leaders are scrambling to keep up with developments in artificial intelligence, pledging billions of dollars in governmental funding, calling dozens of hearings, and announcing over 20 national AI strategy plans. The White House, for example, has created the Blueprint for an AI Bill of Rights, which includes “five principles and associated practices to help guide the design, use, and deployment of automated systems to protect the rights of the American public in the age of artificial intelligence.”
In the private sector, IBM cites the Belmont Report as a guide to ethical AI. The Belmont Report was published in 1979 as a guideline for the protection of human research subjects. Nearly a quarter of a decade later, the principles still apply in the realm of AI. (It is fair to say that society is currently engaging in a global-scale experiment with AI.) The tenants of the Belmont Report list respect for the autonomy of individuals, beneficence (an oath to “do no harm”), and fairness and equality.
Researchers are also working to create evaluation benchmarks to uncover undesirable results from AI systems. These tests give companies a running start to protect enterprises and the end users of their products from harmful and/or erroneous generated content. And now major players like Google are introducing more complex model evaluations for what are classified as “extreme risks”: the extent that a model possesses “dangerous capabilities” that threaten security, exert a negative influence, or evade oversight, and to what extent the model is prone to applying its capabilities to cause harm.