On March 15, 2023, the UK Information Commissioner’s Office (“ICO”) published an updated version of its guidance on AI and data protection (the “updated guidance”), following requests from UK industry to clarify requirements for fairness in AI.
The key updates are summarized as follows:
- The updated guidance has been restructured around the data protection principles. According to the ICO, this structure “makes editorial and operational sense” and will make future updates to the guidance more efficient.
- A new section has been inserted detailing what an organization should assess when conducting a data protection impact assessment (“DPIA”) on AI. For example, the DPIA should include evidence of consideration of “less risky alternatives” to achieve the same purpose and why those alternatives were not chosen.
- A new chapter has been added containing a high-level description of the transparency principle as it applies to AI. For example, the updated guidance confirms that, when personal data is collected directly from a data subject, the data subject should be provided notice if that personal data is to be used to train the AI model. This chapter is supplementary to the ICO’s existing guidance on explaining decisions made with AI.
- A new chapter has been added regarding ensuring lawfulness in AI. The new content in this chapter relates to AI and inferences, affinity groups and special category data. For example, the updated guidance notes that it may be possible, using AI, to infer or guess details about a person, and that such inferences may constitute special category data. According to the updated guidance, an inference is likely to be special category data if an organization can (or intends to) infer relevant information about an individual with a reasonable degree of certainty, or intends to treat someone differently on the basis of the inference (even if the inference is not made with a reasonable degree of certainty).
- A new chapter has been added regarding ensuring fairness in AI. The new content in this chapter includes information on, e.g., the data protection approach to fairness; how fairness applies to AI and a non-exhaustive list of legal provisions to consider; the difference between fairness, algorithmic fairness, bias and discrimination; and processing personal data for bias mitigation.
- A new annex has been added related to fairness in the AI lifecycle. The annex details data protection fairness considerations across the AI lifecycle, from problem formulation to decommissioning. It also sets out why fundamental aspects of building AI may have an impact on fairness, identifies the different sources of bias that can lead to unfairness and lists possible mitigation measures.
According to the ICO, the updated guidance “supports the UK government’s vision of a pro-innovation approach to AI regulation and more specifically its intention to embed considerations of fairness into AI.” The ICO also noted that the guidance will require further updates in the future to keep up with the “fast pace of technological developments” and confirmed it will be supporting the implementation of the UK government’s forthcoming White Paper on AI Regulation.
Author of this post: Hunton Andrews Kurth LLP