Could your minister be replaced by AI?

As Artificial Intelligence (AI) continues to be used in more commercial applications, questions are emerging surrounding its potential uses…and whether these are ethical.

To scan recent news, you would be forgiven for thinking you were reading a synopsis of Black Mirror episodes.

An upcoming United Nations conference will explore the potential role of AI in meeting the Sustainable Development Goals (SDGs).

Meanwhile, Victoria has become the latest Australian state to ban the use of AI chatbots in schools.

With ChatGPT proving popular, and Google eyeing a product to compete with it, there is some discussion over what the future holds for AI, including whether or not it might replace certain jobs.

In a report published by CBS News, Jason Boehmig, co-founder and CEO of Ironclad, suggested that AI might replace certain legal drafting jobs.

“The dynamic that happens to lawyers now is there is way too much work to possibly get done, so they make an artificial distinction between what they will work on and what will be left to the wayside,” he said.

Columbia Business School Professor Oded Netzer added, “There are parts of a legal document that humans need to adapt to a particular situation, but 90 percent of the document is copy pasted.”

“There is no reason why we would not have the machine write these kinds of legal documents. You may need to explain first in English the parameters, then the machine should be able to write it very well,” he said.

“The less creative you need to be, the more it should be replaced.”

However, Professor Netzer told CBS MoneyWatch that the technology may not lead to mass retrenchments.

“In terms of jobs, I think it’s primarily an enhancer than full replacement of jobs,” he said.

“Coding and programming is a good example of that. It actually can write code quite well.”

As the University of Sydney’s Uri Gal points out, there are a number of potentially concerning issues regarding AI when it comes to privacy and data retention. Writing in particular about ChatGPT in a piece for The Conversation, he argues:

The data collection used to train ChatGPT is problematic for several reasons.

First, none of us were asked whether OpenAI could use our data. This is a clear violation of privacy, especially when data are sensitive and can be used to identify us, our family members, or our location.

Even when data are publicly available their use can breach what we call textual integrity. This is a fundamental principle in legal discussions of privacy. It requires that individuals’ information is not revealed outside of the context in which it was originally produced.

Also, OpenAI offers no procedures for individuals to check whether the company stores their personal information, or to request it be deleted. This is a guaranteed right in accordance with the European General Data Protection Regulation (GDPR) – although it’s still under debate whether ChatGPT is compliant with GDPR requirements.

This “right to be forgotten” is particularly important in cases where the information is inaccurate or misleading, which seems to be a regular occurrence with ChatGPT.

So with the above considerations in mind, could AI be something that one day stands in place of ministers?

Well, hold off on calling that meeting just yet.

Along with potential ethical issues such as plagiarism, there are indications that AI may not be able to replace human writers (including ministers), at least at this stage of the technology’s development.

One of the key indicators here is that the technology, while good at reproducing information already available, struggles with navigating more complex terrain.

While much has been made of AI’s potential for plagiarism, and Turnitin is set to make big money from new tools that can detect chatbot writing in university assignments, students may not get away with its use for long. As a Vice article explores, university professors are starting to pick up on students’ use of the technology.

Darren Hicks is assistant professor of philosophy at Furman University. In a recent Facebook post, he said he saw a ChatGPT-generated essay on ‘Hume and the paradox of horror’.

“The first indicator that I was dealing with AI was that, despite the syntactic coherence of the essay, it made no sense,” Dr Hicks said.

In an interview with Motherboard, Hicks said the essay gave itself away.

“It was wrong, but it was confident and it was clearly written,” he said. “If I didn’t know the material better, it would have looked good. And that, that’s a weird combination of flags which I’d never seen before.”

Rev. Dr Niall McKay is an Educator for Lifelong Learning for the NSW and ACT Synod.

He told Insights he had seen some anxiety regarding AI’s potential for disruption.

“I’ve seen a bit of anxiety, especially in people who work for universities and academic institutions,” he said.

“Some of this is simple fear of the unknown, but others are seriously concerned about how they will deal with the ‘brave new world’ of ChatGPT and what that will mean for shaping teaching and evaluating student work.”

“More sophisticated reflections acknowledge that more aspects of “white collar” professions will be affected by technology now – whereas the brunt of technological innovations in the 20th century has upended jobs requiring more manual labour – just look at video of a Tesla factory to see how few people it takes to build a car these days. What will it mean when a computer can write our legal briefs, or business letters, or even our newspaper articles?”

When Insights asked whether or not AI might be able to replace ministers, Rev. Dr McKay was more sceptical.

“Depends if they train it to drink too many cups of tea and some bad Sunday morning coffee,” he said.

“But seriously, not anytime soon, especially if ChatGPT is the example.”

“It is simply not able to interact with the whole person and the whole community, picking up on culture, history and human interactions in all their varied forms.”

“Choosing appropriate music for worship and then praying with a dying person? I don’t even know how to think about (how) AI (might) be this adaptable. But, if the question is, can AI do some tasks that ministers currently do – then sure. I would hope so. And I would hope that we use the technology at our disposal as best we can in ministry.”

Rev. Dr McKay said the major ethical question AI prompts is whether it will be used in just ways.

“Without getting into questions of AI consciousness and autonomy and Skynet etc., which ChatGPT is not a harbinger of, perhaps the biggest philosophical and ethical question for us at this stage of AI development is that of the ‘just transition’.”

“For it’s not whether or not AI is a good thing, but rather how it can be adopted and adapted in ways which are most empowering and life-giving. In the West, at least, we haven’t been terribly good at our embrace of technology, but maybe we can have another go now that a new segment of the workforce may be disrupted.”
