AI News: ChatGPT, Google’s Bard… Can We Trust AI-Based News? An Expert Explains
The more we rely on AI-generated information in our daily work, the more important the question of trust becomes. Recent studies have examined whether people trust AI-generated news or AI-powered medical diagnoses, and most people remain skeptical. A machine can write factually accurate news, yet readers still doubt its veracity. A program may deliver a more accurate medical analysis than a human, yet patients are more likely to follow the advice of their (human) doctor.
The conclusion is that when an AI makes a mistake, people are quicker to distrust it than they would be with a human. When a reporter errs, it does not occur to the audience that all reporters are unreliable; when an AI errs, we are more likely to distrust the whole concept. Humans are forgiven for their mistakes; machines are not. Compounding the problem, AI-generated material is rarely labeled: news organizations seldom indicate that a story was produced by an algorithm. Because AI-generated content can amplify bias or enable abuse, ethicists and policymakers have urged institutions to disclose its use transparently.
If mandatory disclosure is implemented, an AI credit may one day appear in the byline beneath the headline in place of a reporter’s name. Media organizations face the twin challenges of capturing readers’ attention and earning their trust in today’s highly competitive digital market. AI certainly needs monitoring and regulation, but it also has the potential to do a great deal of good. For example, an app that screens for skin cancer risk could give people who cannot afford a dermatologist, or who lack access to one, a way to get tested.
(Chiara Longoni, Boston University, Boston)