More than 2,000 years ago Socrates thundered against the invention of writing, fearful of the forgetfulness it would cause. While writing has since redeemed itself, ChatGPT and its brethren in what is collectively known as GenAI now trigger similar warnings of linguistic novelty posing a threat to humanity. Geoffrey Hinton, who is sometimes called the “godfather of AI,” issued a stark warning that GenAI might get out of control and “take over” from humans.
The World Economic Forum’s global risk report for 2024, which synthesizes the views of 1,500 experts from academia, business and government, identified misinformation, turbocharged by GenAI, as the top risk worldwide for the next two years. Experts worry that manipulated information will amplify societal divisions, ideologically driven violence and political repression.
Although GenAI is designed to refuse requests to assist in criminal activity or breaches of privacy, scientists who conduct research on disinformation—false information intended to mislead with the goal of swaying public opinion—have raised the alarm that GenAI is going to become “the most powerful tool for spreading misinformation that has ever been on the Internet,” as one executive of a company that monitors online misinformation put it. One team of researchers has argued that through health disinformation, a foreign adversary could use GenAI to increase vulnerability in an entire population during a future pandemic.
Given that GenAI offers the capability to generate and customize messages at an industrial scale and within seconds, there is every reason to be concerned about the potential fallout.
Here’s why we’re worried. Our group at the University of…