New Zealand Police have put new rules into effect and signed a charter agreement outlining their responsibilities after it was revealed staff tried out an emerging facial recognition technology without proper authorisation this year.
In May, RNZ reported police staff had undertaken a two-month trial of Clearview AI - an American facial recognition system which scans images people have uploaded to the internet and matches them against a target image - without properly clearing it with police leadership or the Privacy Commissioner.
Police Commissioner Andrew Coster, who assumed the role on April 3, was seemingly unaware of the trial. The Office of the Privacy Commissioner also confirmed they were not informed.
"The policy released today acknowledges that emergent technologies can have an important part to play in modern policing, and it supports staff to innovate in their work," Coster said today.
"It also recognises that the use of emergent technologies can have privacy, security and ethical implications, which must be carefully weighed before such technologies are trialled or introduced."
The policy will apply to all police staff, as well as contractors, and includes guidance for staff who are approached by companies with offers to trial technology such as facial recognition.
Police also said they have signed a new Algorithm Charter, which, among other things, requires police to understand the limitations of AI and emergent technologies, identify and manage bias and regularly peer review algorithms to ensure privacy, ethics and human rights are safeguarded.
The charter also commits police to ensuring that humans continue to have oversight over decisions made by AI, including providing a channel for people to challenge decisions made or informed by those systems.
Police also committed to clearly explaining how any algorithms they use work, in plain English, and to making information about those processes available, as well as publishing information about how data is collected, secured, and stored by those systems.
Privacy Commissioner John Edwards called today's moves "a really positive step forward".
"I am encouraged by the approach that the new Police Commissioner is taking towards greater transparency and governance associated with the use of algorithms and new technologies," he said.
"Having been involved in the foundational documents that sit behind the algorithm charter, I am delighted to see agencies signing up to this document and look forward to the improved transparency and accountability that I hope will come with it."
'BETTER LATE THAN NEVER'
Dr Andrew Chen, an expert on emergent technology and artificial intelligence and a Research Fellow at Koi Tū: The Centre for Informed Futures, said the police's signing of the charter and establishment of the policy was "a step in the right direction" - but also warned "the proof is in the pudding".
"Overall, this is a case of better late than never - it is probably late - it's taken them a while to come up with this stuff," Chen said.
"Now we have to see if these policies and charters will be reflected in the operational work of police."
Chen said police's plan to establish an external advisory group was encouraging, and he hoped it could be established before the end of the year, though he anticipated it would likely take longer.
The use of tools like artificial intelligence and facial recognition was "threading a needle" for police, he said, because those technologies have great potential to do good, but also potential to do harm.
"They could also go the other way and never do anything new because it's too risky," Chen said.
"It's about 'how do we find ways to use this technology in a good way - and if we can't, maybe we shouldn't use it just yet'."
There was a growing perception among the New Zealand public that technologies like AI and facial recognition are "creepy", Chen said, making it imperative that police develop a social licence to use these technologies before implementing them - or even trialling them.
He said it was good that police had committed to transparency around how these systems work, but noted that some AI and emergent technology companies could potentially withhold the exact workings of their services on commercial grounds.
"Police might have to consider how much they can explain, and I think you can go quite far to explain how a system might work without necessarily revealing the exact source, or the data that's been used to train it - that can go a long way in terms of giving people confidence."