Friday, April 28, 2023

Cam Slater: How ChatGPT Analysed a David Fisher Article

On ANZAC Day I popped into the supermarket and banged into a former journalist who is a good mate. We worked on a few stories together, most notably the Darren Hughes story that led to his demise as an MP.

We got to chatting about the parlous state of journalism in New Zealand and various other things. He said to me that he loves using ChatGPT to analyse journalists’ work and suggested I try it.

So I did, and chose to look at a recent article from David Fisher. I mean, who better, right?

So I asked ChatGPT, “Analyse the following article from the perspective of a journalism tutor and outline issues with it”, and pasted in the full text of his latest article.

Here is what ChatGPT thinks about David Fisher’s article (the emphasis is mine):

As a journalism tutor, I would say that the article appears to be well researched and raises important concerns about the use of surveillance technology by the New Zealand police. However, there are several issues with the article that could be improved:
1. Lack of sources: While the article does cite some sources, such as the Herald and the Privacy Impact Assessment, there are several claims made without any clear attribution. For example, the article states that “Police appear to be increasingly embracing powerful technology-led ways to keep our community safe without putting in place systems that will earn public confidence.” Who says that? It would be helpful to have more specific sources for such claims.
2. Lack of balance: The article appears to take a strongly critical stance towards the New Zealand police and their use of surveillance technology. While there may be legitimate concerns about this, it would be useful to also include the perspective of the police themselves, as well as any experts who support the use of such technology.
3. Unclear structure: The article jumps around between different points without a clear overall structure. It would be helpful to have a clearer introduction that sets out the main points the article will cover, as well as subheadings to separate different sections.
4. Confusing language: At times, the article uses overly complex language that may be difficult for readers to understand. For example, the article states that the police are “increasingly engaging with facial recognition technology without heeding a report that recommended a range of steps which would safeguard the use of the system.” This could be simplified to something like “the police are using facial recognition technology without taking steps to ensure it is used safely and responsibly.”

Overall, the article raises important concerns about the use of surveillance technology by the New Zealand police. However, there are several issues with the article that could be improved to make it clearer and more balanced.

Bwahahaha… a robot AI schools David Fisher on how to write articles properly.

This will be a fun, if occasional, series about mainstream journalists and their lack of skill.

AI tools are fun to use and I can see a future where robot journalists can actually do a better job at analysis than human journalists.

It is certainly something I am keen to fund in order to expand the capabilities of The BFD, and especially to develop our NewsDesk with the assistance of technology, so we can better compete with the multi-million-dollar news corporations polluting the media at the moment.

Cam Slater is a New Zealand-based blogger, best known for his role in Dirty Politics and publishing the Whale Oil Beef Hooked blog, which operated from 2005 until it closed in 2019. This article was first published HERE


JamesA said...

Interesting. I used the same ChatGPT question on a number of opinion pieces across several news outlets including a few from Breaking Views.

It was critical of all of them.

A useful tool to help confirm your own bias against an opinion you may wish to disagree with.

I use ChatGPT (named Chatty on my iPhone) regularly when seeking data etc., as it scrapes relevant info from the internet very quickly. Unfortunately, it fails when Google decides to censor particular information disliked by its left-leaning masters. If I'm aware of possible censorship, I will ask Chatty to drill down with specific questions relating to the data. Presumably, it then goes off on a major data hunt, scanning websites with lower rankings. Nine times out of ten, Chatty will respond with an apology for its earlier answer and correct it.

Anonymous said...

It would be fun to put in one of those half-English, half-Māori govt announcement muddles and see what ChatGPT thinks of them.
Maybe it can make sense of them, coz we have no bloody idea what they're saying.

robert Arthur said...

Anything from the Teaching Council should cause it to haemorrhage.

Unknown said...

Cam Slater - a question. Is it reasonable to ask:

1/- if this "new AI medium" may have a "linkage into the House of Google" (the one that produces that famous deodorant - Censorship);

2/- whether one can "trust" any printed opinion from David Fisher, knowing that he has a "lean to the left" which is not caused by a leg imbalance?

I would hope you read posted comments within the Breaking Views domain, especially when they are posted under your articles.

Read the one from JamesA, 4th paragraph: "Unfortunately, fails when Google ...". Even in this news domain (Breaking Views), I have read statements alluding to "censorship of posted comments" on a specific opinion.