7 Comments
Sep 24 · Liked by Erman Misirlisoy, PhD

Very interesting, thanks for sharing. Do you know what happened in the study if the LLM got a fact wrong, or if the LLM actually started to make things up to support its argument? Did the scientists intervene?

author

Great question, Carola. The authors of the paper checked this and reported that "when a professional fact-checker evaluated a sample of 128 claims made by the AI, 99.2% were true, 0.8% were misleading, and none were false."

Since these made up such a small minority of claims, it's difficult to test statistically whether the AI's mistakes had any counterproductive effect. But it's encouraging to know that misleading claims were so rare.

Sep 24 · Liked by Erman Misirlisoy, PhD

That’s great, I was expecting the rate of false or misleading claims to be much higher.

Sep 24 · Liked by Erman Misirlisoy, PhD

Another excellent, thought-provoking piece, well done. The rise of social media "bubbles" or echo chambers has stripped away chances to debate and discuss topics in a reasonable way. We are pitted against each other like never before, with those shouting the loudest seemingly in control. We need to get back to the art of discussion without wanting to annihilate our opponent, and realise that sometimes it's ok to disagree and still be friends.

author

You hit the nail on the head in referring to those "shouting the loudest". My biggest worry has always been that the people shouting loudest tend to be the most extreme/least reasonable AND the ones who have their voices boosted the most. This paints a very distorted picture of reality in people's minds.

Thank you for reading & commenting!

Sep 24 · Liked by Erman Misirlisoy, PhD

What an interesting positive impact of ChatGPT!

author

Totally agree, thanks for reading Vishnu!
