3 Essential Steps for Ethical AI
How to apply ethics to artificial intelligence for a more secure future
There are two schools of thought when it comes to the future of artificial intelligence (AI):
- The utopian view: Intelligent systems will usher in a new age of enlightenment where humans are free from work to pursue more noble goals. AI systems will be programmed to cure disease, settle disputes fairly and augment our human existence only in ways that benefit us.
- The apocalyptic view: Intelligent systems will steal our jobs, surpass humans in evolution, become war machines and prioritize a distant future over current needs. Our dubious efforts to control them will only reveal our own shortcomings and inferior ability to apply morality to technology we cannot control.
As with most things, the truth is probably somewhere in the middle.
Regardless of where you fall on this spectrum, it’s important to consider how humans might influence AI as the technology evolves. One idea is that humans will largely form the conscience or moral fabric of AI. But how would we do that? And how can we apply ethics to AI to help prevent the worst from happening?
The human-AI relationship
The power of deep learning systems is that they determine their own parameters or features. Just give them a task or purpose, point them at the data, and they handle the rest. For example, the autotune capability in SAS for Machine Learning and Deep Learning can find well-performing model settings on its own. But people are still the most critical part of the process.
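The autotune internals aren't shown here, so the sketch below illustrates the general idea instead, using scikit-learn's GridSearchCV as a stand-in: the human supplies only the task, the data and a search space, and the tuner selects good settings on its own.

```python
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_digits(return_X_y=True)

# The human supplies the task, the data and a search space;
# the tuner selects the best settings by cross-validation.
search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [50, 100, 200], "max_depth": [5, 10, None]},
    cv=3,
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```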
“Humans solve problems, not machines,” explains Mary Beth Ainsworth, AI and Language Analytics Strategist at SAS. “Machines can surface the information needed to solve problems and then be programmed to address that problem in an automated way – based on the human solution provided for the problem.”
While future AI systems might be able to gather their own data, most current systems rely on humans to provide the inputs: the data, the desired result, and the learning setup – such as the reward definitions used in reinforcement learning. When you ask the algorithm to figure out the best way to achieve that result, you have no idea how the machine will solve the problem. You just know it will likely be more efficient than you are.
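To make that division of labor concrete, here is a toy reinforcement learning sketch – the environment, states and reward are all hypothetical. The human defines only the goal (reach state 4) and its reward; the Q-learning agent discovers the policy by itself.

```python
import random

random.seed(0)  # reproducible toy run

# Toy world: states 0..4 on a line; actions move left or right.
# The human defines only the reward (reach state 4), not the path.
N_STATES = 5
ACTIONS = [-1, +1]
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.3

for _ in range(300):                                  # training episodes
    s = 0
    while s != N_STATES - 1:
        if random.random() < epsilon:                 # explore
            a = random.choice(ACTIONS)
        else:                                         # exploit
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s_next == N_STATES - 1 else 0.0   # human-chosen reward
        best_next = max(Q[(s_next, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s_next

# The learned policy: move right (+1) from every non-terminal state.
print({s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)})
```

Nothing in the code tells the agent which action is "right" at any state; the reward definition alone shapes its behavior, which is exactly why that human-supplied definition carries ethical weight.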
Given this current relationship between humans and AI, we can take a number of steps to more ethically control the outcome of AI projects. Let’s start with these three.
Step 1 for AI ethics: Provide the best data
AI algorithms are trained on a set of data that is used to inform or build the algorithm. If your algorithm is identifying a whale as a horse, you clearly need to provide more data about whales (and horses). Likewise, if your algorithm is identifying animals as humans, you need to provide more data about a more diverse set of humans. If your algorithm is making inaccurate or unethical decisions, it may mean there wasn't sufficient data to train the model, or that the reinforcement learning setup wasn't appropriate for the desired result.
Of course, it’s also possible that humans have, perhaps unwittingly, injected their unethical values into the system via biased data selection or badly assigned reinforcement values. Overall, we have to make sure the data and inputs we provide are painting a complete and correct picture for the algorithms.
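One practical way to check whether the data paints a complete picture is to audit model performance per class rather than in aggregate. A minimal sketch, using a stock scikit-learn dataset as a stand-in for the whale/horse example:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# A stock dataset stands in for the whale/horse example.
X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Per-class precision and recall expose classes the training data
# under-serves, which overall accuracy alone would hide.
print(classification_report(y_te, model.predict(X_te)))
```

A class with noticeably lower precision or recall than the rest is a cue to collect more, or more representative, data for it.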
Step 2 for AI ethics: Provide the proper oversight
Establish a system of governance with clear owners and stakeholders for all AI projects. Define which decisions you’ll automate with AI and which ones will require human input. Assign responsibility for all parts of the process with accountability for AI errors, and set clear boundaries for AI system development. This includes monitoring and auditing algorithms regularly to ensure bias is not creeping in and the models are still operating as intended.
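As one illustration of that kind of recurring audit, the sketch below uses hypothetical decision data and a hypothetical threshold to compute a simple demographic parity gap in approval rates and escalate when it exceeds a governance-defined limit.

```python
import pandas as pd

# Hypothetical automated decisions, tagged with a protected group.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0],
})

# Compare approval rates across groups (demographic parity).
rates = decisions.groupby("group")["approved"].mean()
gap = rates.max() - rates.min()
print(rates)
print(f"demographic parity gap: {gap:.2f}")

THRESHOLD = 0.2    # a limit the governance owners would set
if gap > THRESHOLD:
    print("ALERT: escalate to the accountable owner for review")
```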
Whether it's the data scientist or a dedicated hands-on ethicist, someone should be responsible for AI policies and protocols, including compliance. Perhaps one day all organizations will establish a chief AI ethicist role. But regardless of the title, somebody has to be responsible for determining whether the output and performance stay within a given ethical framework.
Just as we've always needed governance, traceability, monitoring and refinement for standard analytics, we need them for AI. The consequences are far greater with AI, though, because the machines can start to ask the questions and define the answers themselves.
Step 3 for AI ethics: Consider ramifications of new technologies
In order for individuals to enforce policies, the technology must allow humans to make adjustments. Humans must be able to select and adjust the training data, control the data sources and choose how the data is transformed. Likewise, AI technologies should support robust governance, including data access and the ability to guide the algorithms when they are incorrect or operating outside of ethically defined boundaries.
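A minimal sketch of such a human control point, with hypothetical names and thresholds: decisions that fall outside the defined boundary, or below a confidence floor, are routed to a person instead of being automated.

```python
# A sketch of a human-in-the-loop gate (all names and thresholds
# are hypothetical, chosen for illustration).
CONFIDENCE_FLOOR = 0.90                  # below this, a human decides
ALLOWED_ACTIONS = {"approve", "deny"}    # the ethically defined boundary

def route_decision(action: str, confidence: float) -> str:
    """Automate only decisions that are in-bounds and high-confidence."""
    if action not in ALLOWED_ACTIONS:
        return "blocked: outside the defined boundary"
    if confidence < CONFIDENCE_FLOOR:
        return "queued for human review"
    return f"automated: {action}"

print(route_decision("approve", 0.97))    # automated
print(route_decision("deny", 0.62))       # queued for human review
print(route_decision("escalate", 0.99))   # blocked
```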
There's no way to anticipate every potential scenario with AI, but it's important to consider the possibilities and put controls in place for positive and negative reinforcement. For example, introducing new, even competing, goals can reward decisions that are ethical and flag unethical decisions as wrong or misguided. An AI system designed to place equal weight on quality and efficiency would produce different results than a system focused entirely on efficiency, as the sketch below shows. Further, designing an AI system with several independent and conflicting goals could build additional accountability into the system.
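The sketch below makes the quality-versus-efficiency point concrete with hypothetical scores: the same two candidate decisions rank differently when quality is weighted equally with efficiency versus not at all.

```python
# Hypothetical candidate decisions scored on two competing goals.
candidates = {
    "option_1": {"quality": 0.9, "efficiency": 0.5},
    "option_2": {"quality": 0.4, "efficiency": 0.95},
}

def best(w_quality: float, w_efficiency: float) -> str:
    """Pick the candidate with the highest weighted score."""
    return max(
        candidates,
        key=lambda c: (w_quality * candidates[c]["quality"]
                       + w_efficiency * candidates[c]["efficiency"]),
    )

print(best(0.5, 0.5))   # equal weights -> option_1 (quality wins)
print(best(0.0, 1.0))   # efficiency only -> option_2
```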
Don’t avoid AI ethics
AI can enhance automobile safety and diagnose cancer – but it can also choose targets for cruise missiles. All AI capabilities have considerable ethical ramifications that need to be discussed from multiple points of view. How can we ensure ethical systems for AI aren’t abused?
The three steps above are just a beginning. They’ll help you start the hard conversations about developing ethical AI guidelines for your organization. You may be hesitant to draw these ethical lines, but we can’t avoid the conversation. So don’t wait. Start the discussion now so you can identify the boundaries, how to enforce them and even how to change them, if necessary.