GDPR and AI: Friends, foes or something in between?
By: Kalliopi Spyridaki, Chief Privacy Strategist, SAS Europe
When discussing artificial intelligence (AI) policies, it’s hard not to talk about the General Data Protection Regulation (GDPR) at the same time. That’s because the GDPR has had the most impact of any law globally in terms of creating a more regulated data market – while data is the key ingredient for AI applications. Certainly, the GDPR and AI confluence raises intriguing issues in policy-related conversations. In EU policymaking, AI and machine learning are the new hype as Europe aims to be a global leader in AI adoption. But the core of AI discussions and potential new regulations fall outside the scope of the GDPR.
Consider the Communication on AI for Europe, which the European Commission published in April 2018. The document purports to reflect the appropriate legal and ethical framework to create an environment of trust and accountability and to ensure Europe develops and uses AI in accordance with its values. Several countries in the EU (and globally) are adopting national policies setting out ambitious plans for AI adoption.
But what is the interaction between the GDPR and AI? And how are AI systems regulated by the EU GDPR? Are the two friends, foes – or something in between?
Profiling and automated, individual decision making
A set of specific provisions within the GDPR affects AI-based decisions on individuals, particularly those related to automated decision making and profiling. Many of these are contained in Article 22. But the intent of the GDPR around these provisions is not always clear. This is due partly to the technical complexity of the issue, and partly to the extensive negotiations and compromises that took place during the legislative process. The result is sometimes a misunderstanding of what the General Data Protection Regulation truly requires. To get it right, it’s essential to have an accurate reading of the letter of the law while accounting for the intention of the legislator.
Without going into a full legal analysis, here are a few observations on these provisions.
Article 22 is a general restriction on automated decision making and profiling. It only applies when a decision is based solely on automated processing – including profiling – which produces legal effects or similarly significantly affects the data subject. The qualifiers “solely” and “legal effects or similarly significantly affects” set a high threshold for triggering the restrictions in this article.
Moreover, the stricter GDPR requirements of Article 15 are specifically linked to automated individual decision making and profiling that fall within the narrow scope of Article 22. They cover:
- The “existence” of automated decision making, including profiling.
- “Meaningful information about the logic involved.”
- “The significance and the envisaged consequences of such processing” for the individual.
The bottom line: If Article 22 does not apply, these additional obligations do not apply, either.
Despite the narrow applicability of Article 22, the GDPR includes a handful of provisions that apply to all profiling and automated decision making (such as those related to the right to access and the right to object). Finally, to the extent that profiling and automated decision making include the processing of personal data, all GDPR provisions apply – including, for instance, the principles of fair and transparent processing.
The European data protection authorities have issued guidelines on how to interpret these provisions. While the guidelines may be helpful in some respects, the courts will eventually provide the legally binding interpretation of these rules.
Meaningful information about the logic involved versus algorithmic ‘explainability’
One of the most frequently debated topics in the context of GDPR and AI discussions relates to the so-called GDPR “right to explanation.” Despite common misinterpretations, the GDPR does not actually refer to or establish a right to explanation that extends to the “how” and the “why” of an automated individual decision.
“Meaningful information about the logic involved” in relation to Article 22 of the GDPR should be understood as information about the algorithmic method used, rather than an explanation of the rationale behind an automated decision. For example, if a loan application is refused, the GDPR may require the controller to provide information about the input data related to the individual and the general parameters set in the algorithm that enabled the automated decision. But it would not require disclosure of the source code, or an explanation of how and why that specific decision was made.
On the other hand, algorithmic transparency, accountability and the explainability of AI systems are discussed outside the GDPR. We need further research and reflection on these issues.
Setting aside intellectual property issues around the source code, would explaining the algorithm itself really be helpful to individuals? It would probably be more meaningful to provide information on the data that was used as input for the algorithm, as well as information about how the output of the algorithm was used in relation to the individual decision. Ensuring data quality, addressing algorithmic biases, and applying and improving methods around code interpretability that help reconstruct the algorithm can all play a key role in fair and ethical use of artificial intelligence.
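To make that distinction concrete, here is a minimal, illustrative sketch in Python (using scikit-learn; the loan features, training data and model are invented for the example and do not represent any real scoring system). It shows the kind of disclosure discussed above – the applicant’s own input data and the general parameters the model applies – as opposed to handing over source code or a causal account of the decision.

```python
# Illustrative sketch only: a hypothetical loan-approval model, used to show what
# disclosing "input data + general parameters" could look like. Features, training
# data and weights are invented for the example.
import numpy as np
from sklearn.linear_model import LogisticRegression

FEATURES = ["annual_income_keur", "debt_ratio", "years_employed"]

# Toy historical applications (feature values) and outcomes (1 = approved, 0 = refused).
X_train = np.array([
    [60, 0.20, 10],
    [25, 0.55,  1],
    [45, 0.30,  5],
    [18, 0.70,  0],
    [80, 0.15, 12],
    [30, 0.60,  2],
])
y_train = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

def decision_disclosure(applicant: dict) -> dict:
    """Assemble what a controller might disclose for one automated decision:
    the individual's input data and the model's general parameters (per-feature
    weights) - not the source code and not a why-explanation of the decision."""
    x = np.array([[applicant[f] for f in FEATURES]])
    approved = bool(model.predict(x)[0])
    return {
        "decision": "approved" if approved else "refused",
        "input_data_used": applicant,                    # data about the individual
        "general_parameters": {f: round(float(w), 3)     # weights applied by the model
                               for f, w in zip(FEATURES, model.coef_[0])},
    }

# Example: a low-income, high-debt applicant. The printed disclosure shows the
# decision, the input data used, and the general model parameters.
print(decision_disclosure({"annual_income_keur": 22, "debt_ratio": 0.65, "years_employed": 1}))
```

In practice, a controller would pair such a disclosure with the measures noted above on data quality, bias testing and interpretability.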
New rules on AI and machine learning
The European Commission’s recently adopted Communication on AI acknowledges the potential impact on AI development of the GDPR and of other EU legislative proposals, such as the Free Flow of Non-Personal Data Regulation, the ePrivacy Regulation and the Cybersecurity Act. Notably, the paper announces that AI ethics guidelines will be developed at the EU level by the end of 2018. Safety and liability are two other major areas addressed.
The AI community needs to prepare for more EU laws that regulate specific issues that are core to AI use and development. For example, the recently proposed law on promoting fairness and transparency for business users of online intermediation services includes provisions requiring transparency around the algorithms used for ranking, as well as access to data. Another example is the recently proposed review of the EU Consumer Rights Directive. It foresees that contracts concluded in online marketplaces should provide information on the main parameters and underlying algorithms used to determine the ranking of offers presented to a consumer doing an online search.
GDPR and AI: Neither friends nor foes
The GDPR and AI are neither friends nor foes. The GDPR does in some cases restrict – or at least complicate – the processing of personal data in an AI context. But it may eventually help create the trust that is necessary for AI acceptance by consumers and governments as we continue to progress toward a fully regulated data market. When all is said and done, GDPR and AI are lifelong partners. Their relationship will mature and solidify as we see more AI and data-specific regulations arising in Europe, and globally.
SAS for Personal Data Protection: The 5 pillars
- Access – Access data where it lives, when you need it. Integrated data management technologies help you securely access your data and connect to any source – anytime, anywhere – with virtual, user-based access for simplicity and security.
- Identify – Quickly find personal data with out-of-the-box logic rules. Thousands of prepackaged rules help identify sensitive data, backed by easy-to-use data quality, governance and security management tools.
- Govern – Establish, maintain and monitor enterprisewide rules and policies. Align business and IT with standard terms and a glossary, establish processes and collaborate effectively. Built-in tools let you measure success, track progress and monitor trends.
- Protect – Implement safeguards and apply privacy-specific measures. Know who’s looking at personal data, and make sure only the right people see it – as you authenticate, authorize, secure, audit and monitor users.
- Audit – Know if you’re in compliance, and create reports to prove it. Audit information and create reports to share with authorities by keeping detailed usage records, spotting correlations and identifying exceptions quickly and easily.