
Monday, September 16, 2024

Bronwyn Howell: The Precautionary Principle, Safety Regulation, and AI: This Time, It Really Is Different


Key Points
  • Generative pretrained transformers (GPTs)—such as large language models like ChatGPT, Claude, and Llama—come from a different computing paradigm than do traditional “big data” artificial intelligence models.
  • Traditional risk-management frameworks, developed from the precautionary principle to address the risks of big data AI models and underpinning current AI regulations, are not well suited to managing GPT risks.
  • Using case-based regulation for specific applications rather than generic, overarching regulation is likely a more effective way to manage GPT AI risks.

The precautionary principle (PP) holds that in the face of scientific uncertainty about the outcomes of deploying a new technology, and especially when serious or irreversible damage could occur, a cautionary approach is justified—“better to be safe than sorry”—which necessitates strictly regulating the technology’s release.

The PP has long been important in managing risks associated with technological innovations for which explicit scientific knowledge is lacking.1 The PP has been criticized for lacking a sound logical foundation,2 and it could distort regulatory priorities, justify protectionist measures, and stifle further innovation.3 Yet it has found favor in a number of policy and regulatory areas,4 notably product and health safety, environmental risk management, and, recently, artificial intelligence.

This PP approach is exemplified by the strict processes under which new pharmaceutical drugs and therapeutic treatments are developed and deployed. In the US, the Food and Drug Administration requires new drugs to be tested extensively, both in the laboratory and in controlled, supervised settings with human subjects, before it grants market approval.5 Furthermore, once a drug is deployed, continued surveillance is required because not all possible consequences can be known or anticipated when it is released on the market.6 Moreover, the burden of proving that the intervention meets acceptable safety standards, and potentially the liability in the event of unexpected harm, lies with the developer.7

The PP has been foundational in the European Union legislative context: It was enshrined in the 1992 Maastricht Treaty and is included in Article 191 of the Treaty on the Functioning of the European Union.8 It has thus shaped the risk-management approach that characterizes product safety and environmental legislation and, now, the Artificial Intelligence Act9 and its adjunct, the AI Liability Directive.10

The act classifies AI applications according to their anticipated risk level and then specifies the processes they must complete before they can be marketed to or used by EU citizens. “High-risk” applications must undergo a rigorous testing and approval process before they are permitted on the market. Once on the market, their operators must monitor them extensively, report on them, and run them responsibly, implementing shutdown processes in the event of significant unexpected harm. Operators of other, lower-risk applications need only abide by a less rigorous disclosure process. However, all AI model operators will be subject to the liability directive if unexpected harm occurs. Operators of high-risk AI systems will be held accountable under strict liability, while other AI operators will face fault-based liability, with a presumption of fault on the part of the operator unless it can prove it abided by its duty of care.

This article was first published by the American Enterprise Institute HERE; the full report can be accessed HERE.

Dr Bronwyn Howell is a programme director at the School of Management at Victoria University and an adjunct scholar at the American Enterprise Institute. 

Notes

1. Didier Bourguignon, The Precautionary Principle: Definitions, Applications and Governance, European Parliamentary Research Service, December 2015, https://data.europa.eu/doi/10.2861/821468.

2. Giandomenico Majone, “What Price Safety? The Precautionary Principle and Its Policy Implications,” Journal of Common Market Studies 40, no. 1 (March 2002): 89–109, https://oeclass.aua.gr/eclass/modules/document/file.php/AOA105/PRECAUTIONARY%20PRINCIPLE/precautionary%20principle%20and%20its%20policy%20implications.pdf.

3. D. Kriebel et al., “The Precautionary Principle in Environmental Science,” Environmental Health Perspectives 109, no. 9 (September 2001): 871–76, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1240435.

4. Organisation for Economic Co-operation and Development, Risk and Regulatory Policy: Improving the Governance of Risk, 2010, https://www.oecd.org/content/dam/oecd/en/publications/reports/2010/04/risk-and-regulatory-policy_g1ghc5f1/9789264082939-en.pdf.

5. N. A. Doerr-MacEwen and M. E. Haight, “Tailoring the Precautionary Principle to Pharmaceuticals in the Environment: Accounting for Experts’ Concerns,” in Sustainable Development and Planning II, ed. A. G. Kungolos, C. A. Brebbia, and E. Beriatos (Ashurst, UK: WIT Press, 2005), 1:281–91, https://www.witpress.com/Secure/elibrary/papers/SPD05/SPD05028FU1.pdf.

6. Maxwell J. Smith, Ana Komparic, and Alison Thompson, “Deploying the Precautionary Principle to Protect Vulnerable Populations in Canadian Post-Market Drug Surveillance,” Canadian Journal of Bioethics 3, no. 1 (2020): 110–18, https://www.erudit.org/fr/revues/bioethics/2020-v3-n1-bioethics05237/1070232ar.

7. Kriebel et al., “The Precautionary Principle in Environmental Science”; and Forrest L. Tozer and John E. Kasik, “The Medical-Legal Aspects of Adverse Drug Reactions,” Clinical Pharmacology & Therapeutics 8, no. 5 (1967): 637–46, https://ascpt.onlinelibrary.wiley.com/doi/10.1002/cpt196785637.

8. Bourguignon, The Precautionary Principle.

9. EU Artificial Intelligence Act, “The Act Texts,” https://artificialintelligenceact.eu/the-act.

10. Tambiama Madiega, “Artificial Intelligence Liability Directive,” European Parliamentary Research Service, February 2023, https://www.europarl.europa.eu/RegData/etudes/BRIE/2023/739342/EPRS_BRI(2023)739342_EN.pdf.

2 comments:

Anonymous said...

What a shame none of this applied to the great Covid injection experiment!

KP said...

"What a shame none of this applied to the great Covid injection experiment! "

Exactly! I thought I was reading a tongue-in-cheek criticism of the PP in relation to big pharma, a parody of sorts given the extreme criminal censorship we now live under.

"The PP has long been important in managing risks associated with technological innovations that have no explicit scientific knowledge."

Well, THAT never happened with mRNA injections!

"This PP approach is characterized in, for example, the strict processes under which new pharmaceutical drugs and therapeutic treatments are developed and deployed. In the US, the Food and Drug Administration requires new drugs to be extensively tested in laboratory and controlled and supervised settings among human subjects before it gives market approval.5 Furthermore, once deployed, continued surveillance is required because not all possible consequences can be known or anticipated when a drug is released on the market.6 Moreover, the burden of proof that the intervention meets acceptable safety standards and, potentially, liability in the event of unexpected harm lies with the developer.7"

...and THAT is completely laughable seeing as we still have elevated levels of non-Covid excess deaths that paralleled the release of the jabs.

You are right about the PP, Bronwyn, but your example shows the complete contempt it is treated with by Govt and big business.