Generative Artificial Intelligence in Media: Rapid Adoption and Lack of Regulatory Guidelines for its Use
The incorporation of generative artificial intelligence tools into newsrooms is significantly changing how the media industry produces, publishes, and distributes information.
Because the phenomenon is so recent, quantitative data remain scarce, which makes early research in this area all the more valuable. In this context, a new survey conducted by WAN-IFRA in collaboration with SCHICKLER Consulting offers relevant insight into how newsrooms currently use this technology and its potential for future expansion.
The survey, which involved 101 media professionals from around the world, highlights several key points. Approximately half (49%) of the surveyed newsrooms already use generative AI tools such as OpenAI’s ChatGPT, while others remain cautious given how rapidly the technology is evolving. Even so, 70% of respondents believe that generative AI tools will be useful for journalists in the near future.
Regarding use cases, generative AI tools are predominantly used for summaries and bullet points, as well as for simplifying research and correcting text, uses typical of an initial experimentation phase. However, these use cases are expected to evolve and expand in the near future to include personalization, translation, and improvements to various newsroom workflows.
Another aspect the survey analyzes is the concerns journalists raise about generative AI. Respondents identified inaccurate information and poor content quality as two significant risks, alongside concerns about plagiarism, copyright infringement, and data protection. Together, these issues underline the importance of establishing clear guidelines and providing adequate training so that journalists use this technology responsibly.
The survey also delves into the next steps for media outlets in relation to this technology, its impact on content quality, and the types of media that have benefited the most from generative AI. Here are some key findings:
- 30% of newsrooms plan to use generative AI tools in the next year: The trend of generative AI growth in newsrooms will continue in the future. As this technology becomes more sophisticated and accessible, it is likely that more newsrooms will adopt it.
- 53% of newsrooms using generative AI tools say it has helped improve the quality of their content: Generative AI can assist newsrooms in creating content faster, analyzing data more easily, and making better decisions.
- Newsrooms are concerned about the ethical implications of generative AI: Ethical concerns include the potential for generative AI to create fake news or manipulate public opinion. Newsrooms are working to develop ethical guidelines for the use of generative AI and educating their audiences about its potential risks.
- The most popular generative AI tools are Narrative Science’s Quill and OpenAI’s GPT-3 (Generative Pre-trained Transformer 3): These tools are popular because they are user-friendly, accessible, and offer a wide range of functions.
- 42% of newsrooms using generative AI tools say it has helped them reach new audiences: Generative AI can assist newsrooms in creating more engaging content for different audiences and reaching more people through social channels and other digital platforms.
- Newsrooms using generative AI tools are more likely to be small or medium-sized: This suggests that generative AI can be a valuable tool for small and local media outlets looking to compete with larger media organizations.
One concerning aspect revealed by the study is that only 20% of respondents indicated having established guidelines for the use of generative AI in their media outlets. Nearly half of editors (49%) allow journalists to use the technology at their discretion, highlighting the lack of specific regulations and policies.
It is essential that this kind of adoption be accompanied by clear guidelines: they provide a solid foundation for ensuring consistency and quality in the content produced, address ethical concerns about data privacy and algorithmic transparency, minimize bias in content, and protect copyright and intellectual property.
If you are interested in this particular topic, colleague Hannes Cools (University of Amsterdam) has published an article titled “Towards Guidelines for Guidelines on the Use of Generative AI in Newsrooms,” which compiles links to guidelines developed by media outlets around the world and provides valuable insights.
In a context of change and uncertainty, such guidelines are fundamental to setting a path for newsrooms to follow. They also promote efficiency and productivity by facilitating the effective integration of generative AI into workflows, enabling journalists to fully leverage the advantages of this technology in line with their outlet’s overall strategy. From this year onward, media style guides should include a new chapter addressing these questions.