How the NZ government will regulate AI

Rather than have a specific AI law, existing regulations will be updated over time to govern appropriate uses of artificial intelligence


The New Zealand Government plans to regulate the use of artificial intelligence (AI) algorithms by progressively incorporating AI controls into existing regulations and legislation as they are amended and updated, rather than having any specific regulation to control the use of AI.

Last month, the AI Forum of New Zealand — representing organisations from across New Zealand's artificial intelligence ecosystem — published a set of principles designed to help build public trust in the development and use of AI in New Zealand. It marked a new development in a long-running debate in New Zealand about the regulation of AI.

In response to questions from Computerworld on this latest development, a spokesperson for the Department of Internal Affairs said, “AI informs a number of government services and processes and is covered off by a multitude of existing regulations or legislation. As acts are updated and amended, AI and algorithm use will be incorporated into those bills. At this stage, the Government has not made any decisions to explicitly regulate the use of AI.”

The NZ Government’s strategy for AI

These initiatives will form part of the NZ Government’s Strategy for a Digital Public Service, released in November 2019 and described as “a call to action for the public service to operate in the digital world in a more modern and efficient way — delivering the outcomes that … New Zealand needs.”

The spokesperson said: “The strategy for a Digital Public Service identifies the need to establish strong digital foundations that can be used across the public sector. This includes, for example, determining appropriate policy and regulatory requirements for emerging technologies, such as AI. This will ensure new technologies are adopted lawfully, safely, transparently, and with the continued support of the public.

“The strategy … confirms that our government will ensure the human rights that apply offline continue to be recognised and protected in the digital environment.”

The current round of activity around controlling the use of AI algorithms started in April 2018, when the University of Otago called for government use of AI analytics to be regulated. Then, in May 2019, the New Zealand Law Foundation warned against the unregulated use of artificial intelligence algorithms by government. In response, in November 2019, the Government announced it would examine regulation of AI with the World Economic Forum (WEF).

WEF collaboration on AI

The spokesperson said the NZ Government had been invited to work with the WEF’s San Francisco-based Centre for the Fourth Industrial Revolution on the development of new and agile regulatory approaches for AI.

“As a small country, New Zealand is seen by WEF as being an ideal test bed for this kind of agile thinking,” the spokesperson said. “We have a small, stable democracy, with a government that can move quickly. We are well-connected, both internally, across our government and wider society, and we have strong relationships with other countries. We are seen as a leading digital nation.”

Lofred Madzou, WEF’s project lead for artificial intelligence, tells Computerworld, “A draft roadmap for policymakers to help shape thinking when regulating AI has been developed by the project team. This roadmap includes a tools and approaches section that suggests policymakers look at a range of soft and hard regulatory tools and levers.

“Work is underway within the New Zealand government on identifying options for the regulation of government algorithms. This work, while a New Zealand government piece, will provide a useful case study to the project as it will be testing elements of this high level roadmap.

“Work with the global community since the creation of the roadmap has further refined the scoping and the emphasis has moved from the high-level roadmap to being about the gathering of evidence and development of tools. The community workshops held by the project team have recommended that the innovative approaches and tools should concentrate on the areas of national conversations about AI ethics and values, assessment and options for a centre of excellence.”

Algorithm charter coming

In another Government AI development, Statistics New Zealand has released a draft algorithm charter that it says, “commits government agencies to use algorithms in a fair, ethical, and transparent way.”

The charter would apply to ‘operational algorithms’ as defined in Statistics NZ’s Algorithm assessment report, released in October 2018: “These impact significantly on individuals or groups. … [They] interpret or evaluate information (often using large or complex data sets) that result in, or materially inform, decisions that impact significantly on individuals or groups…[and] may use personal information about the individuals or groups concerned, but do not need to do so exclusively.”

The charter is expected to be finalised in the second half of 2020. Public submissions on the draft closed on 31 December 2019; Statistics NZ has published a summary of these and provides updates on the project’s progress on its website.

Questions on private sector AI use

The Law Foundation’s warnings on AI were contained in a report prepared by the University of Otago’s Artificial Intelligence and Law in New Zealand Project (AILNZP).

The report’s co-author, associate professor Alistair Knott, told Computerworld that the same issues existed in the private sector, but the report had chosen to focus only on government use because the question of regulation was easier to answer in this narrower context.

“The same questions arise for private-sector uses of decision systems or classifiers. How accurate are they? Do they perform the same way for all different groups in society? In other words, is there any bias? There are other questions of interest. Are the people using these machines actually in charge? Or are the machines on some form of autopilot? You might have the illusion that a person is in the loop when in fact they're just taking the recommendation of the system, especially when the system gets sufficiently good. So are there protocols for making sure that people are awake to the system making mistakes?”

The university is now working on Phase 2 of the Law Foundation project with the results due out later in 2020.

Knott said it would focus on the employment issues thrown up by the use of AI. “How will human work change? Which jobs might be lost? How might new jobs be created? What effects might the widespread introduction of AI have on whole professions? What aspects of those professions would be at risk of being lost if there's widespread automation?”

Copyright © 2020 IDG Communications, Inc.
