AI Ethics

What it is and why it matters

AI ethics connects artificial intelligence (AI) with the practice of ethics – the principles of right and wrong that define what humans ought to do. Why is this so important? Because AI technology allows machines to mimic human decision making and intelligently automate tasks. To do this safely, we need guidelines to ensure AI systems are designed and deployed in alignment with fundamental human values like privacy, inclusion, fairness and protection of individual rights.

History of AI ethics: "Can machines think?"

Artificial intelligence has dominated headlines and captured public imagination in recent years – but its roots stretch back decades. AI research began in the 1950s, with early pioneers developing foundational concepts and algorithms.

Recent advancements in computing power, big data and machine learning techniques propelled AI into the mainstream, making the impact of AI more visible and tangible in our daily lives. Consider the rapid adoption of generative AI (GenAI) – a type of AI technology that goes beyond predicting to generate new data as its primary output.

With all the hype around GenAI, it’s easy to assume our generation is the first to ponder heavy questions about biased data, accountability, algorithmic fairness and the societal impacts of intelligent machines. But concerns about the downstream effects of AI and smart machines stretch back to the dawn of digital computing.

It began with visionaries like Alan Turing grappling with the philosophical and ethical implications of artificial intelligence. The questions that kept Turing awake – the nature of consciousness and the potential for machines to think and learn – continue to resonate and evolve in modern discourse on AI ethics.

Trustworthy AI advice from an expert

“Trust in AI has to start before the first line of code is written.” That’s just one nugget of advice around AI ethics from Reggie Townsend, Vice President of the SAS Data Ethics Practice. Watch this short video to hear other great tips Townsend has shared with audiences across the world.

AI ethics in today’s world

Adhering to AI ethics fosters transparency, accountability and trust. But navigating the ethics of artificial intelligence requires grappling with complex moral, legal and societal implications of data and AI. Discover how experts approach these critical challenges in responsible AI development and deployment.

The ethics of data and AI deployment

Tech meets ethics when AI models are deployed. Learn what questions to ask during development, and see how to account for transparency, trustworthiness and societal impact as you move from development to real-world use.

Trustworthy data and AI governance

As AI revolutionizes our world, the need for ethical AI practices soars. In this e-book, you can explore the risks of AI, learn strategies for building trustworthy AI systems, and discover how to effectively integrate AI ethics principles into your business.

What is synthetic data?

Although data is more widely available than ever, good-quality data can be difficult (and costly) to obtain, challenging to protect and short on variety. Yet training AI models relies on large, diverse and authentic data sets. See how synthetic data helps solve the problem of data “suitability.”

AI anxiety: Calm in the face of change

Is AI keeping you up at night? You're not alone. Learn to identify the root causes of your AI anxiety – from job concerns to ethical dilemmas – and find practical strategies to navigate the AI revolution with confidence.

NIH is breaking barriers in health research with diverse data sets

A one-size-fits-all approach to medical research is limited. Our bodies are all different – with variations based on where you live, what you eat, your genetic makeup, lifetime exposures and more. At the National Institutes of Health (NIH), the All of Us Research Program is out to change what types of data are collected and how they’re shared for biomedical research. By building broad data sets that reflect the rich diversity of people across the US, the program is bringing in many who were previously underrepresented in research. It’s all about making research more trustworthy and keeping AI and analytics transparent and ethical.

Which industries are using AI ethics?

From autonomous vehicles to AI chatbots and now AI agents, decisions made by AI affect humans to varying degrees. As such, AI ethics is a critical consideration across multiple industries, including Big Tech companies. Today, many organizations recognize the importance of having ethical frameworks to guide their AI applications and AI initiatives, mitigate potential risks and build trust with stakeholders.

Banking

AI is integral to financial services. Credit scoring systems use algorithms to analyze data and assess creditworthiness. Fraud detection uses advanced machine learning algorithms to scan transactions and adapt to new fraud patterns in real time. But AI can amplify biases if left unchecked. For example, AI models trained on historical financial data can perpetuate existing inequalities, leading to unfair treatment in loan approvals, credit scoring, housing applications, employment and insurance pricing. Explainability features and bias and fairness metrics can help, along with regulatory guidance and human oversight. In turn, banks can be catalysts for prosperity and equitable access.

Health care

As AI revolutionizes health care – from genetic testing to personalized cancer treatments to chatbot diagnostics – it brings a host of ethical questions. Patient data, needed for AI training, demands fortress-like protection. But robust security can’t shield against biased algorithms that can amplify health care disparities. “Black box” decision making raises other concerns. What happens if machines make life-altering decisions without transparency? If AI makes a mistake, who is responsible? Tech innovators, health care professionals, patients and policymakers must work together to create guidelines that protect patients without stifling progress. This is how we can responsibly and ethically unlock AI's full potential in health care.

Insurance

Insurance companies collect a wide range of data – from customer information in applications and policies to data streaming from sensors on self-driving cars. Collecting information in near-real time lets insurers understand an individual’s needs better and provide them with a superior personalized experience. But protecting and governing personal data as it’s used to make decisions is essential to maintaining trust. To avoid concerns with privacy or a lack of sufficient data, some insurers are using synthetic data in their pricing, reserving and actuarial modeling tasks. Whatever their approach, insurers must establish and adhere to an AI ethics framework to ensure the models fed by their data deliver fair and unbiased decisions.

Public sector

Public sector workers are dedicated to protecting and improving the lives of the people they serve. As they respond to citizen needs, many are using AI to be more productive and effective. For example, GenAI techniques can analyze historical data, public sentiment and other indicators, then generate recommendations to reduce congestion or fine-tune resource allocation. But AI use is not risk-free. It’s vital to develop and deploy AI models with fairness and transparency, to incorporate government regulations across all initiatives, and to overcome today’s rapid spread of misinformation. Being able to build ethical AI systems that protect and strengthen individuals’ rights is vital for helping the public sector fulfill its mission.

Curious about AI ethics in other industries?

Explore a wide range of considerations around AI ethics in manufacturing and agriculture. 

How AI ethics works: Understanding the ethics of artificial intelligence

AI ethics operates at the intersection of technology, philosophy and social science. To be successful in using this powerful technology, we must embed ethical considerations into every stage of the AI life cycle – from data collection and algorithm design to deployment and monitoring. Let's delve into some of the key principles.

Human centricity

AI systems that prioritize human needs and values are more likely to be adopted, trusted and effectively used. By embedding human centricity as we develop and implement AI, organizations can create more responsible, effective and socially beneficial AI systems that complement human intelligence and creativity.

Techniques and approaches to implement human centricity include:

  • Human-in-the-loop (integrating human judgment at crucial points in AI processes, especially high-stakes decisions).
  • Participatory design.
  • Ethical impact assessments.
  • Adaptive AI (systems that adjust their behavior based on human feedback and changing contexts).

As AI evolves, maintaining a human-centric approach will be crucial to creating AI systems that benefit society while respecting individual rights and dignity.

Fairness and accountability

One key aspect of AI ethics is ensuring fair, unbiased results. Consider this example: If your algorithm misidentifies animals as humans, you likely need to train it on more data representing a more diverse set of humans. If your algorithm is making inaccurate or unethical decisions, it may mean there wasn’t sufficient data to train the model, or that the reinforcement applied during learning wasn’t appropriate for the desired result.

Humans have, sometimes unintentionally, inserted unethical values into AI systems due to biased data selection or badly assigned reinforcement values. One of the first technical steps to ensuring AI ethics is developing fairness metrics and debiasing techniques. Demographic parity and equalized odds measure algorithmic fairness. Reweighing training data and adversarial debiasing can help mitigate learned biases.
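
As a minimal sketch of these metrics – assuming binary predictions and a single binary protected attribute, not a production implementation – demographic parity and equalized odds can be computed directly from model outputs:

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Difference in positive-prediction rates between two groups."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equalized_odds_difference(y_true, y_pred, group):
    """Largest gap in false-positive or true-positive rates across groups."""
    gaps = []
    for label in (0, 1):  # label 0 -> false-positive rate, label 1 -> true-positive rate
        mask = y_true == label
        rate_a = y_pred[mask & (group == 0)].mean()
        rate_b = y_pred[mask & (group == 1)].mean()
        gaps.append(abs(rate_a - rate_b))
    return max(gaps)

# Illustrative data: binary predictions, true labels and group membership.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

print(demographic_parity_difference(y_pred, group))      # 0.0 -> equal selection rates
print(equalized_odds_difference(y_true, y_pred, group))  # ~0.33 -> error-rate gap
```

A value near zero on both metrics suggests the model treats the two groups similarly; which metric matters most depends on the use case, since the two can conflict.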

But one-and-done is not enough. Regular audits, combined with diverse representation in AI development teams, will help maintain fairness and accountability throughout the AI system's life cycle. These cannot be one-off conversations; they must be a continuous and integral part of our discourse.

Transparency and explainability

Transparency and explainability are crucial for building trust, complying with AI regulations and attaining ethical validation. Transparent, explainable AI allows developers to identify and address biases or errors in the decision-making process and empowers end users to make informed decisions based on factors that influence the AI output.

Nutrition labels for AI models

Nutrition labels on food packaging provide transparency into the ingredients, nutrition and preparation of your favorite snacks. Similarly, model cards are transparent "nutrition labels" for AI models. They give stakeholders visibility into a model's purpose, performance, limitations and ethical considerations in a standardized, accessible format.
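
As a rough sketch of the concept, a model card can be as simple as structured metadata published alongside the model. The field names below are illustrative, not a formal schema (published frameworks such as "Model Cards for Model Reporting" define their own fields):

```python
import json

# A minimal, illustrative model card as plain structured metadata.
# All field names and values here are hypothetical.
model_card = {
    "model_name": "credit_risk_v2",
    "purpose": "Estimate probability of default for retail loan applicants",
    "training_data": "Loan applications, 2018-2023, North America",
    "performance": {"auc": 0.81, "evaluated_on": "held-out 2023 test set"},
    "fairness": {"demographic_parity_difference": 0.03},
    "limitations": [
        "Not validated for small-business lending",
        "Accuracy degrades for applicants with thin credit files",
    ],
    "intended_users": ["credit analysts", "model risk reviewers"],
    "ethical_considerations": "Adverse decisions require human review",
}

print(json.dumps(model_card, indent=2))  # publish alongside the model
```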

Techniques for explaining complex models

Modelers can use multiple techniques to explain the predictions of complex machine learning models, helping demystify the model’s decision-making process. Examples of these techniques include (one is sketched in code after this list):

  • LIME (Local Interpretable Model-Agnostic Explanations).
  • SHAP (SHapley Additive exPlanations).
  • ICE plots (Individual Conditional Expectation).
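
For instance, here is a minimal SHAP sketch, assuming the open-source shap and scikit-learn packages are installed and using a toy regression model rather than a production one:

```python
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Train a simple tree-based model on a public toy data set.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:5])

# Each row attributes one prediction to the input features; together with
# the explainer's expected value, the contributions sum to the model output.
print(shap_values.shape)  # (5, 10): 5 predictions x 10 features
```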

Model developers can also use natural language processing (NLP) to generate human-readable explanations of model decisions. NLP can translate complex statistical outputs into clear, contextual narratives that are accessible and easy to interpret for developers and users. Read about five key questions to ask when developing trustworthy AI models.

Privacy and security

The intertwined pillars of privacy and security ensure sensitive data is protected throughout the AI life cycle. Privacy-preserving AI techniques enable organizations to harness large data sets while safeguarding individual information. Security measures defend against malicious attacks and unauthorized access.

As businesses move toward a decentralized data model, federated learning techniques provide scale and flexibility while solving several privacy and security problems. For example, federated learning techniques allow organizations to train models without sharing raw data – reducing data movement (and therefore risk of exposure).
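
To make the idea concrete, here is a minimal federated-averaging sketch in plain NumPy – a toy linear model, not a production federated learning framework – in which parties exchange only model parameters, never raw data:

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0, 0.5])

# Three parties, each holding local data that never leaves their premises.
local_data = []
for _ in range(3):
    X = rng.normal(size=(100, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=100)
    local_data.append((X, y))

w = np.zeros(3)      # shared global model (linear regression weights)
for _ in range(50):  # federated-averaging rounds
    local_weights = []
    for X, y in local_data:
        w_local = w.copy()
        grad = X.T @ (X @ w_local - y) / len(y)  # one local gradient step
        w_local -= 0.1 * grad
        local_weights.append(w_local)
    # Only model parameters are shared and averaged -- never the raw data.
    w = np.mean(local_weights, axis=0)

print(w)  # approaches true_w without centralizing any raw data
```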

Other helpful techniques for privacy and security include:

  • Homomorphic encryption (allows computation on encrypted data without decrypting the original).
  • Differential privacy (concealing individual data by adding controlled noise; see the sketch after this list).
  • Adversarial training and input sanitization.
  • Robust access control and authentication protocols.
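
As a minimal sketch of differential privacy – covering only a single mean query via the Laplace mechanism; real deployments must also budget privacy loss across repeated queries – the core idea looks like this:

```python
import numpy as np

def dp_mean(values, epsilon, lower, upper, rng=None):
    """Differentially private mean via the Laplace mechanism.

    Clipping each value to [lower, upper] bounds the sensitivity of the
    mean at (upper - lower) / n, so Laplace noise scaled to
    sensitivity / epsilon gives epsilon-differential privacy for this
    single query.
    """
    if rng is None:
        rng = np.random.default_rng()
    clipped = np.clip(values, lower, upper)
    sensitivity = (upper - lower) / len(values)
    return clipped.mean() + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Hypothetical salary data; the noisy mean hides any one person's value.
salaries = np.array([48_000, 52_000, 61_000, 75_000, 150_000])
print(dp_mean(salaries, epsilon=1.0, lower=0, upper=200_000))
```

Smaller epsilon values add more noise and give stronger privacy guarantees; choosing epsilon is a policy decision as much as a technical one.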

Robustness

Robust AI systems perform consistently and accurately under various conditions, including unexpected inputs or environmental changes. Robustness is crucial for maintaining reliability and trust in real-world applications.

Techniques to enhance robustness include:

  • Adversarial training involves exposing models to malicious input during training to improve resilience.
  • Ensemble methods entail combining multiple learning algorithms to improve stability and performance (see the sketch after this list).
  • Regularization techniques help prevent overfitting and underfitting, improve generalization, and balance model complexity with performance.
  • Ongoing performance monitoring and model updates help maintain accuracy.
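
For example, here is a minimal ensemble sketch with scikit-learn – a soft-voting classifier over three deliberately different learners, trained on synthetic data purely for illustration:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

# Synthetic binary classification data for illustration.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Soft voting averages predicted probabilities across diverse learners,
# which tends to smooth out the idiosyncratic errors of any one model.
ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("nb", GaussianNB()),
    ],
    voting="soft",
)

print(cross_val_score(ensemble, X, y, cv=5).mean())  # cross-validated accuracy
```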

Trustworthy AI software should incorporate various methods for managing algorithms and monitoring their degradation over time. Ultimately, AI ethics creates a framework of governance, technical solutions and organizational practices that align AI development and deployment with human values and society’s best interests.

Navigating 6 unique ethical challenges of generative AI

Artificial intelligence has always raised ethical questions, but GenAI – with its ability to generate new data – has escalated these concerns, presenting unprecedented risks and challenges that organizations and society urgently need to address.

Consider these examples of how GenAI can:

  • Take deepfakes to a new level (such as in social media posts).
  • Trample intellectual property rights.
  • Destroy trust in digital information.
  • Exacerbate bias and discrimination.
  • Have negative psychological and social impacts.
  • Create an accountability and governance quagmire.

The role of governance in ethical AI

A governance framework forms the backbone of an ethical AI implementation. These structures establish clear lines of responsibility and accountability throughout the AI life cycle.

A comprehensive governance strategy should define the decision-making processes – including human oversight – and assign specific roles for AI project management.

At some point, this may include assigning roles for AI ethics officers or committees responsible for policy development, compliance monitoring and ethical audits. Regular algorithm assessments and bias checks are crucial components of these governance systems, ensuring AI models remain aligned with ethical standards and organizational values.

As AI capabilities expand, the role of governance becomes even more critical. The potential for AI systems to independently formulate questions and generate answers underscores the need for robust oversight mechanisms. Consider, for example, the implications of AI hallucinations.

By implementing stringent governance protocols, your organization can harness AI's power while mitigating risks and maintaining ethical integrity in an increasingly autonomous technological landscape.

“Trustworthy and responsible AI is about more than diminishing the negative; it's also about accentuating AI's great potential to enable more productive and equitable societies.”

Reggie Townsend, Vice President, SAS Data Ethics Practice

The future of AI ethics

As AI evolves, so will the field of AI ethics. Emerging technologies like quantum computing and neuromorphic AI will present new ethical challenges and opportunities. Policymakers, industry leaders and researchers must collaborate to develop adaptive ethical frameworks that can keep pace with rapid technological advancements.

International cooperation will be crucial in establishing global standards for AI ethics. For example, the EU AI Act, OECD AI Principles and the UNESCO Recommendation on the Ethics of Artificial Intelligence are all paving the way for a more unified approach to ethical AI governance.

The fundamental questions AI tools and technology raise about our relationship with computers will continue to evolve. Debates around how AI will affect the future of work – and whether (or when) a technological singularity could occur – are ongoing.

Education and awareness will also play a vital role in shaping the future of AI ethics. By fostering a culture of ethical awareness among AI developers, business leaders and the general public, we can ensure that the benefits of AI are realized responsibly and equitably.

As we stand on the cusp of an AI-driven future, embracing AI ethics is first and foremost a moral imperative. It's also a strategic necessity for businesses that hope to build a sustainable, trustworthy and beneficial AI ecosystem for generations to come.

Next steps

See how to develop AI responsibly, consistently and in a trustworthy way.

Empowering AI ethics innovation

SAS® Viya® is a comprehensive platform for developing and deploying ethical AI solutions. With built-in features for model explainability, bias detection and governance, it allows you to harness the power of AI while adhering to the highest ethical standards.