#26 What ChatGPT means for trust in data and statistics

ChatGPT and statistics: a problematic pairing?

Hello!

I’m James Janson Young and I convene Ours For The Making. Here I explore the ideas, concepts and innovations that are shaping our future. Read other issues here.

👉 You can subscribe to the accompanying YouTube channel Ours For The Making here


“Are AI-generated statistics the future of how we gather stats and data, or a recipe for misinformation? Join us as we explore the potential implications and the steps we can take to ensure trust in the statistics generated by AI.”

That doesn’t sound quite right… “join us?” Who’s the “us”? There’s only me here. Perhaps the AI counting itself?

OK, some context. The above in quotation marks is what an AI text-generator served up to me when I asked it for a hook for a YouTube video about the implications of the use of AI on our trust in statistics and data.

Like a lot of the answers these supercharged chatbots offer, it looks reasonable on the face of it, but on closer inspection…

I decided to interview (or interrogate) ChatGPT to find out what it thought the issues might be.


Have you tried OpenAI’s ChatGPT? It’s one of these AI text-generators ‘trained’ on huge amounts of data. You type in your prompt and it generates text responses that appear almost instantly in eerily human-like conversational prose.  

It also offers up data and statistics. The trouble is, these stats can be stripped of context and sources, and be out of date, misrepresentative, or just plain wrong. 

So what does this mean for our use of, and trust in, official data and statistics? 

Are we going to see a proliferation in AI-generated statistical ‘facts’ that are easy to come by, look convincing but are quite possibly wrong? And will this mean we start questioning every stat, leading to a collapse in our trust in the numbers we read and hear? And if so, how can we avoid that?

As someone who works with data and statistics and has authored published reports founded on accurate, verified numbers, this matters a lot to me. 

So, who better to ask than, well, ChatGPT itself.

I started out by asking it about the implications of AI text generators for trust in official statistics, before going on to explore how it checks the accuracy of the statistics it generates, what role it sees for regulation in helping to ensure better quality and accuracy, and how the use of statistics by AI text generators might evolve in the future. 

And perhaps most importantly, I asked it how you and I can avoid getting duped by AI-generated statistics.

Implications for trust in statistics

So we started out with: 

ME: “How will the availability of chat GPT and ai text generators in general change the way we will collect and trust official statistics in the future?”

CHAT: “Chat GPT and other AI text generators have the potential to change the way official statistics are collected and reported by automating the process of generating written reports and summaries.”

In other words: you can now generate statistical reports and summaries ridiculously quickly.

But:

CHAT: “they cannot replace the expertise of statisticians and other domain experts in interpreting and contextualizing the data.”

ChatGPT also noted that “it is important that the data used to train the model is credible, unbiased and have a broad representation of different perspective.” 

I think that’s supposed to be “perspectives”, plural – see, we puny humans still have a role to play.

More worryingly though, “AI models can also be used to generate fake statistics or fake news if not used properly, so it’s crucial that the data used to train the models is verified and the output of the model is fact-checked.”

OK, so: efficiency gains, tempered by careful oversight, source verification and fact-checking. Got it. 

Let’s move on…

Implications for public discourse

I asked it about the implications of all these readily available statistics for public discourse…

Aside from some circular gobbledegook, one point did stand out:

CHAT: “people may be less critical of the statistics they encounter if they are generated by AI, assuming that the model was trained with high-quality data and that it’s output will also be of high-quality.”

I can imagine that if these AI text generators are embedded inside a familiar and trusted desktop application, say Word, and you ask it for statistics to supplement the report you’re drafting about changes in, say, employment across different sectors, you might be more willing to accept the validity of whatever it serves up than if you were chatting to an AI on some random website. 

ChatGPT suggests AI-generated statistics should be “clearly labeled as such” and concludes that while there are benefits to public discourse, “It’s important to use AI-generated statistics in conjunction with human analysis, interpretation, and judgement.”

How does ChatGPT check its accuracy?

Next we talked about how it checks its accuracy.

In short: it doesn’t. 

ME: “How does chat GPT check the accuracy of the stats it generates?”

CHAT: “Chat GPT and other AI text generators do not have built-in mechanisms for checking their own answers or statistics for accuracy. These models rely on the data they were trained on to generate their responses, and the accuracy of the output depends on the quality and representativeness of the data used for training.” 

So, the onus is on us to check, cross-check and double-check… the stuff we should all be doing right now anyway…

These models are not trained statisticians. They don’t possess the skills to analyse data and interpret what they’re looking at. They can only “generate text based on patterns and trends that they’ve seen on the data that has been used to train them.” 

They also won’t know if they are missing some key data, something that would hopefully be obvious to someone with knowledge of the area or domain a stat is describing. 

So, the responsibility for checks lies with you and me, the user. We are responsible.

Or should others share that responsibility for accuracy?

I put it to ChatGPT: “Who should be responsible for ai text generators’ statistic accuracy?”

According to ChatGPT, the responsibility for “Improving the accuracy of statistics used by AI text generators” should be shared between:

First, “the data scientists and engineers who develop and maintain the AI text generators”.

Secondly, “providers of data used to train the models have a responsibility to ensure that the data they provide is accurate, unbiased, and representative”. 

They must make sure that data is “cleaned, verified, and labeled appropriately to ensure that the model can make accurate inferences from it.”

Its third point was a raft of jumbled waffle, so we’ll skip to its last point, about end-users who:

“also have a responsibility to verify the statistics and use them critically, seeking out additional sources and expert opinions when necessary.” 

“By working together and taking their respective responsibilities seriously, these different parties can help to improve the accuracy of statistics used by AI text generators and increase the reliability and trustworthiness of official statistics.”  

Absolutely. Well said. Or generated, or whatever.

But I was curious,

How could the accuracy of AI-generated statistics be improved?

ME: “What more could chat GPT do to improve accuracy of stats in the future?”

ChatGPT had some thoughts to share:

  1. “Incorporating domain-specific knowledge:” 

So, training the model on a specific area or topic to help it learn specialized knowledge and terminology from the field.

  2. “Improving data representation” to be more diverse and representative
  3. “Incorporating statistical analysis capabilities into the model, such as the ability to perform statistical tests or make inferences”
  4. “Incorporating Explainable AI techniques” that set out how it generates statistics in more understandable terms
  5. “Adopting responsible AI techniques: Incorporating techniques like fairness, accountability, and transparency (FAT) in the model development could help to ensure that the statistics generated by the model are unbiased, accurate and transparent.”
  6. “Continual monitoring and updating: Continually monitoring the model’s performance on real-world data, as well as regularly updating the model with new data, could help to improve its accuracy over time.”

Again, the emphasis is on having good quality inputs AND for the end-user to do their own fact-checking.

Stopping the spread of fake stats

Then we turned to fake or erroneous statistics and safeguards to prevent their spread.

I asked: “Could the prevalence of a wrong statistic on the web result in that wrong statistic being generated repeatedly and spread by an ai text generator?”

CHAT: “Yes, it is possible that the prevalence of a wrong statistic on the web could result in that wrong statistic being generated repeatedly and spread by an AI text generator.”

OK, so, ME: “what mechanisms could stop that from happening and how could statistical authorities alert ai text generator companies to get it removed or updated?”

CHAT: “There are several mechanisms that could help to stop the spread of incorrect statistics generated by AI text generators:

  1. “One key step is to improve the quality of data used to train the models”

OK fine, but that doesn’t help if there’s an error out there right now widely circulating that needs correcting.

Further ideas were:

  2. “Regular monitoring of the statistics generated by AI text generators can help identify errors early and take steps to correct them.”
  3. Explainable AI, so that AI text generators can be more transparent with users about how their output is generated
  4. Fact-checking
  5. “Collaboration between statistical authorities, AI text generator companies, data providers and researchers, can help in identifying and addressing errors in AI-generated statistics, as well as sharing best practices, guidelines, and standards.”
  6. “a balanced regulation system that promotes innovation while ensuring that the AI-generated statistics are accurate, reliable, and unbiased could also help in preventing the spread of incorrect statistics.”

ChatGPT closes on a cautionary note: “It’s worth noting that even with all these mechanisms in place, errors may still occur.” 

How might AI text-generators be used to provide statistics in the future?

Then we moved on to how AI text generators might be used in the future:

ME: “One issue with ai text generators is that the statistics they generate are not current – the latest models have only seen materials up to around the middle of 2021. However, if future ai text generators were able to be fed statistics and numbers in real time, can you foresee any issues this might cause for directing or misdirecting people’s behaviour or choices that has a material impact?”

ChatGPT saw some benefits of real-time data, such as “providing more up-to-date information and allowing for faster decision-making”. 

But it also foresaw potential issues, including that real-time data could “increase the risk of misinformation or errors” as data continuously changes or is volatile, rendering statistics produced by the model “inaccurate or outdated by the time they are generated.” 

Decisions could then be taken on outdated or incorrect information, or on information that’s not relevant to long-term decisions.

It also sees possible “ethical concerns, as it could be used to target or manipulate people’s behavior or choices.”

Could data owners opt-out?

I then asked if holders of certain data or statistics could choose to opt out of future iterations of AI text generators, in a similar way to how artists are being offered a way to opt out of text-to-image generators to protect their copyright and the integrity of their work.

ChatGPT agreed this is possible. Which is curious, given that this is a very new question that only arose towards the end of 2022.

But nevertheless, it continued…

“This can be achieved by implementing an opt-out system that allows data and statistics holders to have control over how their data is used by AI text generators. This could include mechanisms such as allowing data holders to flag their data as off-limits for certain types of AI models, or by providing the ability for data holders to set specific usage guidelines for their data.”

But, it sees problems…

“implementing such a system may be technically and logistically challenging. For example, it can be hard to track and attribute data, as it may be distributed across multiple sources, also it may be difficult to ensure that the opt-out rules are being followed by all parties involved.”

“Overall, the implementation of an opt-out system for holders of certain data or statistics is a complex matter that involves balancing the rights of data holders with the benefits of using AI text generators.”

Mmm. How insightful.

How can we avoid being duped by fake or erroneous AI-stats?

OK, so lastly: how can we avoid being duped by AI-generated statistics?

ME: “How can users avoid being duped by ai generated statistics?”

CHAT: “There are several ways that users of chat GPT and other AI text generators can avoid being duped by official statistics:

  1. “Verify the data: Before using chat GPT or any other AI text generator, it’s important to make sure that the data used to train the model is credible, unbiased and have a broad representation of different perspectives.
  2. Fact-check the output: Always fact-check the output of the model and cross-reference it with other credible sources before using or sharing it.
  3. Understand the limitations: Keep in mind that AI text generators are not a substitute for human expertise and judgment. They can only make inferences based on the data they were trained on, and may not take into account nuances and context that humans would.
  4. Seek expert opinion: Consult with experts in statistics or data science before using AI-generated statistics in decision-making or other important contexts.
  5. Look for transparency: When dealing with AI generated statistics look for signs of transparency such as the data used to train the model, the methodology and any other relevant information that can help you assess the quality of the statistics.
  6. Be cautious with AI generated statistics: Be cautious when using AI generated statistics and use them in conjunction with human analysis, interpretation, and judgement.”
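Tip 2 can even be made partly mechanical. Here’s a minimal sketch of one way to do it — the function name, the 5% tolerance and the figures below are all purely illustrative, not a real fact-checking service:

```python
def check_stat(generated: float, official: float, tolerance: float = 0.05) -> bool:
    """Return True if a model-quoted figure is within `tolerance`
    (as a fraction of the official value) of the official figure."""
    if official == 0:
        return generated == 0
    return abs(generated - official) / abs(official) <= tolerance

# Illustrative numbers only: an AI-quoted unemployment rate vs an official one.
print(check_stat(generated=4.1, official=4.2))  # within 5% of the official figure
print(check_stat(generated=6.0, official=4.2))  # well outside 5% — flag for review
```

The hard part, of course, is the bit the code can’t do: finding the official figure in the first place, which still means going back to the statistical authority’s own publications.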

ME: “Do you have any final conclusions or thoughts about ai text generators and statistics or their future use?”

CHAT: “AI text generators have the potential to become powerful tools for generating statistics and other data-driven insights, but it’s important to ensure that the data, models, and statistics generated are accurate, reliable, and unbiased. It’s crucial to have a collaborative approach that involves all stakeholders to ensure that the responsible and ethical use of AI in official statistics is promoted.”

Oh, one last thing, “Could you write a blog post that explains how ai text generators use statistics and the potential advantages and challenges, concluding with some practical tips for how users of ai text generators can get better results in the style of Vox Future Perfect?”

Hello? Hello?

“Error”? You’re kidding me.

