WHAT WE THINK

How to Set Guardrails for Ethical AI At An Enterprise

AIGenerative AI
21-Aug-23

Generative AI is transforming businesses—and bringing new opportunities and risks to companies around the world. With the ability to outsource repetitive tasks, workers can be more efficient and productive. They are freed to tackle challenging problems and find new ways to differentiate their business from competitors. It’s an exciting time to be in product development… and a scary one, too. As companies tread into uncharted territory of generative AI, everyday developers are underprepared to face the ethical dilemmas that await them:

While AI has been talked about for some time, most workers haven’t had to interface with it much, and when they are forced to answer ethical questions alone, some will inevitably fail, and you’re left with a brittle company. Finger-pointing and firings abound when legal and societal consequences crop up. Which is why, like most things in life, the best remedy is to be proactive: create policies and regulations to prevent mistakes and misunderstandings.

At the moment, AI regulations are the Wild West. But technology companies are doing their due diligence to avoid the same mistakes as Silicon Valley social media companies, who charged ahead with the notion they’d build now, regulate later. Much like Einstein’s letter to President Roosevelt to warn of the dangers of the Nazi’s work to create an atomic bomb, technology companies are coming to congress with a plea: help us regulate AI.

Until we have those regulations in place (and even after), every company will need to address ethical concerns from a structural level. And the best tools to do so are an ethics board and constitutional models. Before diving into the role of an ethics board and constitutional models at your enterprise, let’s examine what the private and public sector is already doing to promote ethical AI.

The Latest Developments in AI Ethics

World leaders are scrambling to keep up with developments in artificial intelligence, pledging billions of dollars in governmental funding, calling dozens of hearings, and announcing over 20 national AI strategy plans. The White House, for example, has created the Blueprint for an AI Bill of Rights, which includes “five principles and associated practices to help guide the design, use, and deployment of automated systems to protect the rights of the American public in the age of artificial intelligence.”

In the private sector, IBM cites the Belmont Report as a guide to ethical AI. The Belmont Report was published in 1979 as a guideline for the protection of human research subjects. Nearly a quarter of a decade later, the principles still apply in the realm of AI. (It is fair to say that society is currently engaging in a global-scale experiment with AI.) The tenants of the Belmont Report list respect for the autonomy of individuals, beneficence (an oath to “do no harm”), and fairness and equality.

Researchers are also working to create evaluation benchmarks to uncover undesirable results from AI systems. These tests give companies a running start to protect enterprises and the end users of their products from harmful and/or erroneous generated content. And now major players like Google are introducing more complex model evaluations for what are classified as “extreme risks”: the extent that a model possesses “dangerous capabilities” that threaten security, exert a negative influence, or evade oversight, and to what extent the model is prone to applying its capabilities to cause harm.

Why Your Company Needs an AI Ethics Board

Just as it is difficult to keep up with the many advances as of late in AI technologies, developments to promote ethical AI practices are changing quickly, too. Nearly every individual at your company may knowingly or unknowingly face an ethical decision while using AI technologies in their daily work and it is unrealistic and inefficient to expect each person to research and develop ethical guidelines for their own role.

If your company is endorsing AI products internally or from a third-party, you need an ethics board to review them and promote ethical AI. This group would ideally meet on a regular basis, develop best practices for the enterprise, and have the responsibility to distribute this knowledge and accountability throughout the organization. Two members of the Centre for the Governance of AI and a member of Stanford University put together a guide for creating an AI ethics board, which you may find helpful to start your journey.

Here are the primary responsibilities of an AI ethics board:
To Fight Bias

AI models are trained by people. Any one person has biases, regardless of whether they’re aware of it. The more humans with varying perspectives making decisions collectively, the more well-rounded the solution. Ergo, less bias. This is one of the ways an AI ethics board can help ensure the AI models created by your company are as minimally biased as possible.

An AI Ethics board would safeguard your AI models from harmful biases, which largely fit into three categories:

Pre-existing Bias

Has its roots in the social institutions, practices, and attitudes of our world. AI models are trained on data that reflects our society, which, unfortunately, can be biased. For example, if a facial recognition system is trained on data that has a disproportionate number of images of certain racial or ethnic groups, it may struggle to accurately recognize individuals from underrepresented groups.

Technical Bias

Refers to biases that arise from algorithms, architectures, or design choices made during development and implementation. For example: an airport faces limitations in terms of screen space available for displaying incoming and departing flights. These flights are sorted using algorithms, which can unintentionally result in biases against international flights.

Emergent Bias

Arises when new data is added to continue training the model from users that may be biased. It’s incredibly tricky to validate data coming in so quickly on a global scale. Sometimes this bias from users can be unintentional—sometimes, malicious. (Take Microsoft’s chatbot turned Nazi via Twitter, for example.)

Defend Against Legal Action

Github’s copilot will autocomplete code for developers. Chat GPT generates code, too. This is a timesaver for developers—and also a potential hazard. Sometimes generative AI platforms may spit out code under a certain license elsewhere. The developer isn’t aware of this, plugs it into their product base code and… now they’re in legal trouble.

One remedy for companies is to set up their own chat application using an off-the-shelf LLM (large language model), which has been trained on appropriately-licensed code. A less fool-proof alternative is training employees how to use discretion when using third party AI applications.

To Check For Hallucinations

Any output from generative AI is an approximation. It can, at times, say things that aren’t grounded in any basis of reality or simply don’t exist. Earlier this year, two lawyers were in hot water in a Manhattan federal court when they used ChatGPT to produce research for a court filing… only to discover the cases ChatGPT generated didn’t exist.

The ethics board needs to inform your staff that not everything you get from an AI model is necessarily accurate. You still need manual reviews. You can’t just copy paste and call it a day because you’re ethically liable for the result.

To Give Moral Nuance to Design Choices

Let’s say you’re designing an AI-informed braking system for cars. The design will need to balance smooth braking for rider comfort with hard braking for passenger safety (and potentially, the safety of pedestrians, vehicle passengers, and animals on the road.)

To Bolster Your Security

Whereas social media companies used to own data on you, AI gives that power to everybody. Everyone can download an AI model and use it in an application if you know enough about programming. (And if you don’t, you can just ask an AI for help.)

Pretty much everyone’s data, at some point, to some extent, has been leaked. AI models may aid bad actors to infiltrate networks in a fraction of the time a human can alone. And they can employ several of them at once.

Your company may have multiple openings for vulnerabilities at once. You may not have the resources to pay people to find every single security hole in your system.  The fundamental mechanism of any AI model is prediction so it can detect things that are maybe not necessarily findable by humans in a short amount of time. So AI can help you here, but you will need oversight on how to fight back.

Why Your Company Needs a Constitutional Model

Constitutional models are another stopgap to help you ensure your AI models are ethical. They function sort of like our prefrontal cortex: a filter for what stays in our head and what exits our mouth.

You have your foundational model, built by the enterprise, to enable users to tackle whatever problems you built it to solve. And then you have a constitutional model (IBM’s constitutional model is Watson X), which provides a values system of sorts, that filters out the results of the first model behind the scenes before generating a response. Constitutional models are helpful to protect users from using your model to seek legal advice or inappropriate content.

What the Future Holds for Ethical AI

ChatGPT and other AI models are outperforming many humans on scholarly tests like the GMAT, bar exam, GRE, and more. It’s becoming industry standard for leading technology companies to put out research papers with metrics for how well their models perform such tests. As a whole, generative AI platforms are improving their scores over time. But most companies have barely scratched the surface when it comes to testing how platforms answer ethical questions, which seem to still be relegated to the human domain. Recreating moral constructs in AI models is not something we’ve mastered yet.

AI is generating an amazing renaissance, upending the way enterprises operate at their core level. Like every large scale invention, there are growing pains. Proactive guardrails can protect enterprises from potential risks to poise your products and society at large, for success using AI.


 

Want to know your organization's AI readiness?

Let’s see
how you Rank!

Take the assessment now→


ABOUT THE AUTHOR

Tono Nogueras is a software engineering generalist specializing in front-end and A.I. development. With a combined 12 years of marketing communications and software engineering experience, Tono’s understanding of tech reaches a unique intersection where innovative product design, branding philosophy, and user experience collide to build battle-tested applications for clients ranging from startups to Fortune 500. His insatiable love of learning leaves him looking toward the future for a better, educated, and sustainable world.

Connect with Tono on LinkedIn

Connect

Let’s make it happen.

The path to your next great product, invention or software application starts here. The first step is starting the conversation. Simply fill out this form, and we’ll reach out immediately.

"*" indicates required fields

This site is protected by reCAPTCHA and the Google Privacy Policy and Terms of Service apply.