Draft guidance on the AI auditing framework webinar


Hello everybody. Welcome to the ICO’s AI auditing framework webinar. My name is Lisa Tighe, Lead Communications Officer here at the ICO. Joining me today are Reuben Binns, our Postdoctoral Research Fellow in AI, and Alister Pearson, Senior Policy Officer. The ICO has been working to develop a framework which can be used to audit AI. This project aims to help organizations navigate the data protection pitfalls that might occur when developing and implementing AI systems. It will also shape the way in which the ICO regulates this space. Last week we launched our public consultation on our draft guidance on the AI auditing framework. In the webinar today we’ll be looking at this guidance in more detail, discussing how our thinking has evolved since 2019 and how it may influence the ICO’s work in the future. We will also discuss why it’s important for us to consult with you on the guidance, and the next steps of the project. We’ll be answering questions at the end of the webinar, so please send anything you wish to ask via email to [email protected] or tweet them to us @ICOnews. So without further ado, I’ll hand over to Alister.

Thanks Lisa.
I wanted to start by providing some background to the draft guidance that was published last week: to explain why we have produced this guidance, what we have done so far, how this guidance relates to other ICO work on AI, who the guidance is for, and how the guidance is structured. I will then hand over to Reuben, who will talk about what we say in the guidance.

So why have we produced this draft guidance? We see new uses of AI every day, from healthcare to recruitment to commerce and beyond. We recognize that AI will bring huge benefits to organizations and individuals, but also risks. We therefore made AI one of our top three strategic priorities and decided to develop a framework for auditing AI compliance with data protection obligations. The framework has two distinct outputs. The first is auditing tools and procedures, which will be used by our investigation and assurance teams when assessing the compliance of organizations using AI. The second is this detailed guidance on AI and data protection for organizations, which outlines our thinking to help organizations audit the compliance of their own AI systems. This guidance aims to inform you of what we think constitutes best practice for data protection compliant AI. It is not a statutory code, and there will be no penalty if you don’t follow our recommendations, provided you find another way to comply with the law.

What have we done so far? In December 2018 the ICO hired its first Postdoctoral Research Fellow in AI, Reuben, to research and develop the AI auditing framework and conduct further in-depth research activities in AI and machine learning. Between March and October 2019 we ran an initial call for input into the framework. During this time we published 15 blogs designed to initiate discussion and debate on some of the risks to rights and freedoms that AI can pose.
We’ve been developing and expanding these blogs and have converted them into this more formal draft guidance. We’ve engaged extensively with stakeholders to understand how organizations are using AI on the ground, and we’ve used these insights to produce the practical and realistic hypothetical examples included in the draft guidance, to help illustrate the risks that AI creates or exacerbates.

How does this guidance relate to other ICO work on AI? There are several initiatives across the ICO with links to AI. These include the Sandbox, which is supporting organizations to use personal data in innovative and safe ways; most of the current entries use AI in some way. In our investigations we’re looking at facial recognition technology in public spaces, both by law enforcement and by companies in the private sector. We also host a Regulators and AI Working Group, an informal network set up by the ICO for UK regulators to share information and align approaches to the regulation of AI, and Project ExplAIn, which produced guidance on how organizations can best explain the use of AI to individuals. The draft guidance on the AI auditing framework is designed to complement the ExplAIn guidance; we recommend reading both in tandem. Most of this work can be traced back to our award-winning report on big data, AI, machine learning, and data protection.
So who is the draft guidance aimed at? There are two broad audiences that the guidance primarily targets. First, those with a compliance focus, including data protection officers, general counsel, risk managers, and the ICO’s own auditors; we will also utilize this guidance ourselves in the exercise of our audit functions under data protection legislation. Second, technology specialists, including machine learning developers and data scientists, software developers and engineers, and cyber security and IT risk managers.

How is the guidance structured? We have divided it into four parts, corresponding to different data protection principles and rights. Part one addresses issues that primarily relate to the accountability principle; sections in this part include data protection impact assessments, controller-processor relationships, and assessing and justifying trade-offs. Part two covers lawfulness, fairness, and transparency, covering lawful basis, statistical accuracy, and bias and discrimination. Part three covers the principles of security and data minimization in AI systems. Finally, part four covers how you can facilitate the exercise of individuals’ rights over their personal data in your AI systems, and rights relating to solely automated decisions.

We have also structured the draft guidance to include a risk statement and some examples of controls you can adopt wherever a subsection heading relates to a risk that AI creates or exacerbates. The controls are broken down into three categories: preventative controls, detective controls, and corrective controls. They are designed to provide you with practical suggestions. These controls have no statutory basis and are included as what we believe constitutes best practice. I shall now hand over to Reuben, who will discuss the content of the guidance in more detail.
Thanks Alister. We have divided the guidance into four sections. The first section covers governance and accountability. In this section we cover a range of different topics relating to the accountability principle. We start out by talking about what we mean by approaching AI governance from a risk-based perspective. We then move into topics around how to conduct a data protection impact assessment, and we found that, while the usual best practices for a DPIA will still apply to AI, there are some unique challenges when it comes to assessing the data protection impacts of AI systems, which we cover in the section on DPIAs.

We then talk about understanding controller-processor relationships in AI. In many cases the AI supply chain may be quite complicated, with multiple different organizations involved at different stages, so this section sets out a number of considerations for where you may be a provider or a procurer of an AI system, and the considerations which would affect your status as a data controller, a data processor, or in some cases a joint data controller.

In this section we also discuss AI-related trade-offs and how you should deal with them. In many cases there are trade-offs to be struck when you’re designing an AI system: in increasing, for instance, statistical accuracy you may be decreasing privacy, or in increasing fairness you may be decreasing privacy. We discuss these kinds of trade-offs and give some advice on how to strike the right balance when you’re designing AI systems that involve them. We also mention issues around outsourcing, and how you can assess trade-offs, controllership, and accountability when you’re outsourcing different parts of your AI system.
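To make the trade-off point concrete, here is a minimal sketch in Python using scikit-learn. It is our illustration rather than anything taken from the guidance: it treats added random noise as a crude stand-in for a privacy-enhancing measure and shows how increasing it can reduce statistical accuracy.

```python
# Toy illustration (not from the guidance): a crude privacy/accuracy
# trade-off, where adding noise to training features (a stand-in for
# a privacy-enhancing technique) degrades model accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for noise_scale in [0.0, 0.5, 1.0, 2.0]:
    rng = np.random.default_rng(0)
    X_noisy = X_train + rng.normal(0, noise_scale, X_train.shape)
    model = LogisticRegression(max_iter=1000).fit(X_noisy, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"noise scale {noise_scale:.1f} -> test accuracy {acc:.3f}")
```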
The next section relates to the lawfulness, fairness, and transparency principles. First of all, we focus on how you can identify the different purposes of processing data in an AI system; this is necessary before you can figure out what the appropriate lawful basis might be. One of the things we talk about is the distinction between training an AI system, where you’re processing personal data in order to develop a statistical model that will allow you to make predictions or classifications, which might be one purpose with a particular lawful basis, and the separate purpose of actually deploying a model to make predictions or classifications about people. Where your model is deployed in the real world and is having impacts, there may be a different purpose and a different lawful basis that you need to consider.
We then move on to statistical accuracy. We begin by distinguishing statistical accuracy, which in the context of AI refers to the extent to which your system gets the correct answer in response to new cases it’s asked to make a prediction or classification about, from the accuracy principle in data protection, which is different because it relates to whether the data that you process about an individual is inaccurate as a matter of fact. We distinguish these two, and we say that statistical accuracy is important in your AI system to ensure that you don’t end up processing data about someone in a way that’s unfair. We define different kinds of accuracy, and we talk about the breakdown between things like false positives and false negatives, where the costs of errors to you as an organization and to data subjects may be different. We talk about how you can define and prioritize these different accuracy measures, and we give a set of questions you should ask yourself and measures you can deploy to ensure that you’ve got the right level of statistical accuracy.
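As an illustration of that breakdown (our sketch, not an example from the guidance itself), the following uses scikit-learn to separate out false positives and false negatives, whose costs can then be weighed differently.

```python
# Minimal sketch: breaking statistical accuracy down into false
# positives and false negatives, whose costs to the organization
# and to data subjects may differ.
from sklearn.metrics import confusion_matrix, precision_score, recall_score

y_true = [0, 0, 0, 1, 1, 1, 1, 0, 1, 0]   # actual outcomes
y_pred = [0, 1, 0, 1, 0, 1, 1, 0, 0, 0]   # model's predictions

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"false positives: {fp}, false negatives: {fn}")
print(f"precision: {precision_score(y_true, y_pred):.2f}")  # sensitive to FPs
print(f"recall:    {recall_score(y_true, y_pred):.2f}")     # sensitive to FNs
```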
We then talk about the risks of bias and discrimination when you’re using AI systems. In many cases, because of imbalances in data, or because data may reflect real-world structural injustice or discrimination, your AI system may end up replicating or repeating those patterns of discrimination. We talk about the reasons why that might be, the ways you can measure and assess whether it’s happening, and the processing of special category data where that might be necessary in order to assess the potential discriminatory impacts of your system. Again, we talk about the kinds of risks, how to identify them, and examples of controls that you can put in place to mitigate them.
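One simple way to start measuring this, sketched below with hypothetical column names rather than anything prescribed in the guidance, is to compare positive prediction rates across a protected group. This is only one of many possible fairness metrics.

```python
# Hypothetical sketch: checking whether positive prediction rates
# differ across a protected group (column names are illustrative).
import pandas as pd

df = pd.DataFrame({
    "group":      ["A", "A", "A", "B", "B", "B", "B", "A"],
    "prediction": [1,   0,   1,   0,   0,   1,   0,   1],
})

rates = df.groupby("group")["prediction"].mean()
print(rates)
# Demographic parity difference: the gap in positive-outcome rates.
print("disparity:", abs(rates["A"] - rates["B"]))
```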
Moving on to the next part of the guidance, this focuses on security and data minimization, two important principles in data protection. There are security risks which are exacerbated by AI: common security risks which you may be familiar with in other contexts, but which are made harder to deal with in various ways as a result of AI. One of them is that AI requires data to be copied, transformed, and moved around in lots of ways, which may mean it’s harder to maintain proper records and access controls. Another potential challenge is the introduction of new kinds of software and code into your IT infrastructure, necessary in order to build AI systems, which may introduce new risks. We also talk about new risks that are introduced by AI systems themselves. One of them is the fact that in some cases it may be possible to infer personal data from AI models, so we talk about why that might be and what measures you can take to mitigate against that.
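On the record-keeping challenge above, a minimal sketch of an audit trail for data copies and transformations might look like the following. This is illustrative only; the guidance does not prescribe any particular mechanism, and the names here are hypothetical.

```python
# Illustrative sketch only: a minimal audit trail recording when and
# why training data is copied or transformed, since AI workflows move
# data around in ways that make record-keeping harder.
from datetime import datetime, timezone

audit_log = []

def record(action, dataset, purpose):
    audit_log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "action": action,          # e.g. "copy", "transform", "delete"
        "dataset": dataset,
        "purpose": purpose,
    })

record("copy", "applicants_2020.csv", "create training set")
record("transform", "applicants_2020.csv", "remove direct identifiers")
for entry in audit_log:
    print(entry)
```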
We also talk in this section about data minimization. We consider the different steps in the AI development and deployment process, and the different measures that you can deploy to ensure that you’re not collecting more data than you need, and that you are processing it in a data-minimizing way.
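As a toy example of one possible minimization measure (our sketch, not a recommendation from the guidance), you might check which features a model actually needs and stop collecting the rest.

```python
# Toy sketch: one data-minimization measure is to check which
# features a model actually needs, and stop collecting the rest.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif

X, y = make_classification(n_samples=500, n_features=20,
                           n_informative=5, random_state=0)

selector = SelectKBest(f_classif, k=5).fit(X, y)
kept = selector.get_support(indices=True)
print("features worth collecting:", kept)  # the rest could be dropped
```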
The final section deals with individual rights in the context of AI systems. There are a range of individual rights that you need to protect at different stages of the AI development process, but because of the way that AI systems are developed and deployed, personal data may often need to be processed and managed in unusual ways, which may raise additional challenges when it comes to responding to individual rights requests. For instance, it may be harder to understand when and how individual rights apply to training data, or to data that’s used in the deployment of a system. We talk about a range of mechanisms that you can deploy to ensure that you can effectively respond to individuals when they seek to exercise those rights.
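For instance, handling an erasure request against training data might look something like the hypothetical sketch below; the identifiers and structure are illustrative only, not from the guidance.

```python
# Hypothetical sketch: honouring an erasure request against training
# data by locating and removing the data subject's rows; the model
# may then need retraining (identifiers here are illustrative).
import pandas as pd

training_data = pd.DataFrame({
    "subject_id": [101, 102, 103, 101],
    "feature":    [0.2, 0.7, 0.1, 0.9],
})

def erase_subject(df, subject_id):
    remaining = df[df["subject_id"] != subject_id].copy()
    needs_retraining = len(remaining) != len(df)
    return remaining, needs_retraining

training_data, retrain = erase_subject(training_data, 101)
print(training_data)
print("retrain model:", retrain)
```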
Turning to next steps: the consultation period started a couple of weeks ago and will end on the 1st of April, which hopefully gives you time to put together your consultation responses. As outlined at the beginning, the guidance is aimed at two different audiences, risk practitioners and technology experts, so we particularly want to hear from people who work in these roles. We’re also interested in seeking the views of people in senior management roles: those whose responsibilities include setting your organization’s risk appetite and signing off on the deployment of AI systems. In addition to the consultation, we’re also going to continue to develop tools for risk practitioners and tools for our own audit and investigations teams, so the feedback that we get as a result of this consultation will also inform our own operational procedures and investigations.
Okay, thanks very much both; that was a really interesting and detailed look at the guidance itself. Just a reminder that you can email in any questions or tweet us @ICOnews. We have had some questions come in, so let’s get into those. One of the first ones: will the ICO be providing a summary of the AI auditing framework that is easily understandable by senior management? What would be particularly useful are simple statements of who needs to do what, and when.

I think we’re getting some way towards that in the guidance that we’ve got, and we will definitely take that on board. I think we’ve got the "what" in the boxes at the end of each section, which set out what risk controls you can put in place, but we could definitely do more in terms of specifying which kinds of roles should be doing which of those actions, so we’ll definitely take that on board, thanks.
Another question we’ve got: could you provide an example of an AI audit work plan, to help understand the questions you’ll be asking in practice?

For this we can give a certain amount of information, and much of that is already in this guidance. For various reasons we can’t reveal everything about how our assurance and investigations teams work, because that may prejudice their ability to do their job, but we can certainly include the broad kinds of considerations that we would be taking into account when we do that.
Someone has sent us in a bit of a worked example. They’ve said: say a company is using a vendor’s tool to screen CVs, and the vendor’s algorithms are continuously updated based on improvements from multiple clients. How would you approach the audit in this case, and who would get audited? Would it be the third-party vendor, or the company as a controller, or both? How would that work in practice?

I think it depends on the nature of the individual case, of course, but in theory it could be any of those organizations, because they’re probably all processing personal data in some way or another, so we have the ability to assess any of them. In practice it would depend on who is involved in which bits of the processing, and which bits we’re concerned about.
One more question we’ve got: are you seeking, or would you be happy to receive, more substantial responses in addition to the online questionnaire? Absolutely, yes. I think the questionnaire is there to get in the specific questions that we had, but we very much welcome other considerations which we may not have thought to ask about, so please do send in anything else. And people can just send that in to [email protected]? Yes.
While we see if any more questions come through: is there anything in particular that you’re looking for people to focus on, or areas you’re particularly interested in getting feedback on?

On the guidance, I think we’re looking for practical feedback. If organizations have a chance to sort of pilot the draft guidance against some actual real-world examples, it would be really beneficial for us to see how this draft works in practice, and to see what works well and what doesn’t work well, to help us improve the final version. So that means taking the guidance and applying it across a variety of sectors and sizes of organization and seeing how it works for them.

Yes, and we also want to hear from a wide range of different sectors. We’ve covered a lot of them in the examples in the draft guidance, but there are other sectors we would be interested in hearing from, to see, one, how they’re using AI to develop products in their sector, and two, how they’re trying to overcome some of the risks that we cover in the guidance.
Okay, and we’ve got another question in: can you tell us how you could quantitatively assess the fairness of a machine learning model, in terms of what metrics to use and what thresholds to set?

We’ve given some thought to this in the section on bias and discrimination, and we outline a range of different metrics there. I think the problem is that it’s so context-specific that we can’t say, across all sectors and all contexts, that this should be the threshold. We think that’s something each organization will need to assess for themselves in their own context, weighing up the various risks to the rights and freedoms of data subjects and considering how these things work in their own setting. So we wouldn’t be able to give specific thresholds, but we do think it’s important that these issues are given consideration, and they should be documented in your data protection impact assessments. So it’s probably not about us giving the threshold; it’s more for organizations to consider these issues themselves, in the same way that you would other elements of data protection law, and come up with their own baselines.

Okay, so I think that’s all we’ve got time for,
but you can continue to send in your questions to [email protected] throughout the consultation process and we’ll try to get back to you with responses. All the information you need is available at ico.org.uk/aiconsultation. It’s open until the 5th of April... sorry, it’s the 1st of April, not the 5th of April, and we really look forward to hearing your views, whether that’s on the consultation itself or, as Reuben says, if you want to send in a more detailed response or some examples to [email protected], you are more than welcome to do that as well. So thank you very much for listening.
