AI in healthcare neglected area in New Zealand law

Researcher warns of potential issues in the technology’s medical usage, while other reports raise concerns about NZ’s healthcare leadership.

Despite the use of algorithms in medical practice surfacing a raft of legal issues, there is scant attention being paid to how New Zealand laws may need to change to accommodate the benefits—as well as curb the dangers—of artificial intelligence in healthcare.

That’s the view of Canterbury University PhD candidate Chris Boniface, who interviewed 200 people for his thesis on the impact of AI on New Zealand healthcare. He’s also written a paper for the New Zealand Law Journal examining medical negligence in the age of AI.

A report produced by the AI Forum and Precision Driven Health about AI in New Zealand healthcare in 2019 also highlighted the need for a regulatory framework and associated policies for AI and data controls in health.

New Zealand not driving its own healthcare direction

New Zealand appears to be following, rather than leading, AI developments in healthcare, says Boniface. “This is conjecture, but having talked to people it seems New Zealand is a little more hesitant to invest in these sorts of things, until they see results. A lot of the discussion in New Zealand around AI tends to happen in the context of ‘Europe is doing this, Asia is doing this, this works so let’s focus on this approach’. Where we let people go through the problems a little before us and then we go, ‘We can make that work better’.”

New Zealand’s health IT infrastructure came under fire in a scathing June 2020 review of the national health system, which noted that “the system needs a clear long term digital and data strategy and plan to ensure a cohesive, effective modern health and disability system.”

Boniface says what’s required is a “broad spectrum look” across AI in healthcare, which takes into account all the issues. “There’s a lot of little chats about little things and little issues, but I feel like they are dealing with them in a very isolated capacity. Whereas the nature of AI lends itself to quite a broad overlapping series of problems and I think a more general interpretive approach would be more useful. A more holistic look at things because then you can come up with a more creative solution.”

The Law Commission, the Ministry of Health and Pharmac are three organisations that Boniface suggests would be well-equipped to undertake this work. The Ministry of Health has to date published guidelines for developing and using algorithms in healthcare.

Law Commission president Amokura Kawharu told Computerworld New Zealand that “the use of AI across a range of contexts raises many novel legal issues. We are aware of the recent doctoral research, as well as current and proposed law reform projects in overseas law reform agencies concerning the use of AI. Here at the commission in New Zealand, we are undertaking work in the areas of litigation procedure, the use of DNA in criminal proceedings, and succession law. We’ll soon begin new projects on surrogacy and decision-making capacity. That takes our law reform programme out to 2022 and beyond. Given this, we are not currently in a position to review the legal issues concerning the use of AI. Nor therefore are we in a position to comment on those issues.”

The potential issues with AI in healthcare

Boniface says diagnostics is the area in New Zealand healthcare where most AI applications are found, especially in radiology. That’s when an image-based scan goes through an AI system first before being referred to a medical professional. “Currently, it will still go to a person because we haven’t really entrusted control to a system. But in future, in theory, you wouldn’t need to go to a person at all,” he says.

In his paper, Boniface points out the issues that can arise when AI is introduced into healthcare. These include the fact that AI has the potential to learn, altering its processes over time to become more effective. This can make it impossible to identify the factors behind a particular decision. So, while the use of AI may result in a better medical outcome, learning from the decisions made would be impossible.

In addition, it’s problematic to hold an individual responsible for the actions of an AI system where the decision-making is opaque. For instance, if a device changes over time, then the creator of that device may not be liable. Moreover, machines that make medical decisions can’t be held responsible, or punished for their actions, in the way that human medical practitioners can.

“In most technology, especially in a medical context, you regulate the technology and then bring in the technology. The tech doesn’t change. Whereas in AI the algorithms and who’s controlling them—what they do, what they’re capable of—evolve over time. This creates a shifting goal post,” Boniface says. “I don’t think it is a doomsday scenario—of course not—but I do think AI opens doors to different problems that we aren’t necessarily aware of.”

One of the most basic issues is over control of health data, and what the trade-off is when taking advantage of AI techniques to improve healthcare. “The way machine learning works is it requires massive data sets which are usually hosted by companies like Google. Which means they are not in New Zealand, so there are jurisdictional problems. Also, the control over the data is afforded to a company that is not under the same scrutiny, necessarily,” he says.

“Which means patient data is being taken overseas and used potentially for things beyond what they necessarily wanted or were aware of. Whereas conventional patient data is held domestically, and you have quite an extensive degree of control over it. The problem with that being, if you regulate that you don’t afford yourself the benefits of these technologies,” Boniface says.

The report by the AI Forum and Precision Driven Health also noted that barriers to implementation of AI include the “inflexibility of legacy technology systems.” Boniface says this is an issue in other jurisdictions as well, notably in the UK, where its National Health Service had looked to AI to improve the delivery of healthcare. “They [NHS] pushed for a lot of research, they brought in a lot of international companies, but their system is just so archaic compared to others that there is no progress,” he says.

Copyright © 2020 IDG Communications, Inc.