
ChatGPT, Google’s Bard… Can We Trust AI-Based News? An Expert Explains

Boston: Artificial intelligence (AI) is now prevalent in nearly every field, yet it is still not fully trusted, and overcoming that barrier remains a challenge. When you scan the headlines on your favorite news app each morning, do you ever wonder who wrote them? You probably assume a human did. But it is entirely possible that an algorithm wrote the news. Today, text, images, and voice can all be created by AI with only minimal human help. For example, the network ‘Generative Pre-trained Transformer 3’ (GPT-3) can produce fiction or fantasy stories, poems, or programming code that is indistinguishable from work created by a person. Major media organizations such as ‘The Washington Post’, ‘The New York Times’, and ‘Forbes’ have automated parts of their news production with AI algorithms. With great advances in machine learning and natural language processing, content produced by modern neural networks such as GPT-3 can no longer be distinguished from human-written content, even in a genre such as poetry, which has long been considered an exclusively human activity.

The more we rely on AI-based information in our daily lives, the more important the question of trust becomes. Recent studies have examined whether people trust AI-generated news or AI-powered medical diagnoses, and it turns out that most people remain skeptical. A machine can write accurate, fact-filled news, yet readers still doubt its veracity. A program may give a more accurate medical diagnosis than a human, yet patients are more likely to follow the advice of their (human) doctor.

The conclusion is that people are quicker to distrust an AI that makes a mistake than a human who makes the same mistake. When a reporter gets something wrong, listeners do not conclude that all reporters are unreliable. When an AI errs, we are more likely to distrust the whole concept: we forgive humans for mistakes, but not machines. Moreover, AI-generated content is generally not labeled as such; it is rare for a news organization to indicate that a story was produced by an algorithm. Because AI-generated content can amplify bias or enable abuse, ethicists and policymakers have urged institutions to disclose its use transparently.

If mandatory disclosure is implemented, an AI’s name may one day appear in the byline under a headline instead of a reporter’s. Media organizations face the twin challenges of capturing readers’ attention and earning their trust in today’s highly competitive digital market. AI is a tool that certainly needs monitoring and regulation, but it also has the potential to do a great deal of good. It could, for example, power an app that screens for skin cancer risk, giving people who cannot afford a dermatologist, or who lack access to one, a way to get tested.

(Chiara Longoni, Boston University, Boston)
