Public Perception: Misinformation Outweighs AI-generated Content in the UK

A recent YouGov survey reveals that 81% of UK respondents worry about the reliability of online information, a larger share than the 73% who are concerned about the impact of AI-generated content.

The UK public is more concerned about misinformation online than about the prevalence of AI-generated content.

A recent YouGov survey asked 2,000 UK adults about their views on online content: 81% of respondents said they were worried about how trustworthy online content is, while 73% expressed concern about the volume of AI-generated content.

Additionally, 76% of respondents said they were uneasy about digitally altered content, such as photoshopped images and edited videos.

The YouGov data also shows that 67% of consumers worry about misinformation from AI-generated content, while a larger share, 75%, see digitally altered content, such as deepfakes, as a major source of misinformation.

In the marketing industry, there is optimism about AI's future potential alongside concern about the negative impacts it could have.

Earlier this year, Giffgaff's media strategy director Georgina Bramall told Our Website that generative AI in particular is playing a key role in helping Giffgaff improve agile working practices across its creative and marketing teams.

"It [Gen AI] has been embraced by several of our teams as we aim to enhance our capacity for personalization and to react promptly and efficiently," she mentioned.

Labelling AI-generated content

According to YouGov, opinions are mixed on whether labelling AI-generated content is helpful: 50% of respondents think labels could help stop misinformation from spreading, while 29% think they are not effective.

The figures are similar for digitally altered content: 50% of people find labels helpful, while 29% do not.

Trust in the labels themselves is low: nearly half (48%) of respondents distrust the accuracy of labels on AI-generated content on social media, and only 19% would trust them.

When people come across social media posts labelled as AI-generated, 42% choose not to take any immediate action, suggesting a broadly neutral stance towards AI-generated content.

On the other hand, 27% would block or unfollow the account, which would filter out both the creator's AI-generated and organic content. Only small percentages show interest in engaging with the post (5%), seeing more AI content (2%), or sharing the post (2%), indicating a cautious approach overall.

Editor's P/S:

The article highlights growing concern among the UK public about online misinformation, particularly the prevalence of AI-generated and digitally altered content. The survey findings show that a majority of respondents worry about the trustworthiness of online content and its potential to mislead. While the marketing industry sees promise in AI, there are also concerns about its negative impacts.

The article also examines the labelling of AI-generated content and the mixed opinions on its effectiveness in preventing misinformation. While some believe labels could help, others distrust their accuracy. The survey data suggests that people take a cautious approach to AI-generated content on social media, with a significant share choosing to remain neutral or to block or unfollow the account. These findings underscore the need for further research and dialogue on how to address the challenges and opportunities of AI-generated content while maintaining public trust in online information.