The UK Information Commissioner’s Office has launched a new AI and biometrics strategy to support further innovation while protecting the public from potential harms, along with research on public impressions of the technologies.
The strategy establishes how the ICO will develop a statutory code of practice for organizations that develop or deploy AI and ensure that police use of facial recognition is fair and proportionate. It also sets expectations for the use of automated decision-making (ADM) systems and for ensuring that “AI foundation models” are developed lawfully.
Commissioner John Edwards notes that the focus on AI and biometrics is not new, citing regulatory actions over the use of biometrics for student lunch payments and tracking employee time and attendance.
“Our research shows that people expect to understand when and how AI systems affect them, and they are worried about the consequences when these technologies go wrong – such as being incorrectly identified by facial recognition, or losing a job opportunity through erroneous ADM,” Edwards says.
The strategy reviews the actions taken by the ICO on AI and biometrics so far, including opinions on live facial recognition use by the private sector and law enforcement. Action is necessary, it says, because of a lack of regulatory certainty and confidence in both the public and private sectors, and because public trust is being undermined by a lack of transparency about how personal information is used.
Areas of strategic focus therefore include transparency and explainability, the risk of bias and discrimination, and the rights and redress mechanisms available to people.
The strategy sets out six action items the ICO plans to carry out. Setting aside those dealing with ADM and foundation models, though, only two are relevant to biometrics providers. The ICO has committed to “(s)upport and ensure the proportionate and rights-respecting use of facial recognition technology by the police” through published guidance, audits and advice to the government on proposed legal changes, and to “(a)nticipate and act on emerging AI risks,” for which the strategy refers to agentic AI and emotion inference.
Revealing public attitudes to biometrics
A 47-page Revealing Reality report, “ICO: Understanding the UK public’s views and experiences of biometric technologies,” shows a moderate increase in public awareness of biometrics use compared to 2019 research from the Ada Lovelace Institute. Methodological differences between the two studies prompt a caution against direct comparison, though statistics from both are presented side by side in the new report.
Ninety percent of respondents are familiar with the most common biometric modalities, according to the report, but fewer are comfortable with them: 73 percent for fingerprints and 65 percent for facial recognition.
Police use of facial recognition draws slightly less support than voluntary use, at 63 percent. Support is higher for specific purposes like finding missing people (78 percent) and retrospective suspect identification (77 percent). Public comfort with police use is lowest for operator-initiated facial recognition, at 61 percent.
The current regulation of police facial recognition use is considered appropriate by just 48 percent of those surveyed, though most of the rest hold a neutral view and only 10 percent slightly or strongly disagree. Agreement was nearly universal that all UK police should follow the same rules when using FRT (91 percent), but only 42 percent said they believe that is the case today.
“Many AI tools are still in the early stages of maturity – and while they may seem simple to implement, they can introduce novel risks when used to address complex social challenges,” Edwards continues. “I urge organisations to take appropriate care and use our guardrails – such as guidance, innovation services and DPIAs – to develop and deploy this technology responsibly and on a foundation of trust, protecting your customers and your reputation.”