The Hour of Biases: How AI Learns (and Amplifies) Human Patterns
The image of a clock set to 10:10 reveals how artificial intelligence models reflect the biases present in their training data. Understanding this phenomenon is essential for ensuring the responsible integration of AI into journalism.
Artificial intelligence (AI) models are designed to process and learn from vast amounts of data, identifying and replicating the patterns they find. This capability enables them to respond to natural language queries, generate images, or make predictions quickly and accurately.
However, this same ability can become problematic when the data they learn from contains biases — prejudices or tendencies that fail to accurately represent reality. In such cases, AI not only reproduces these biases but can also amplify them, leading to inaccurate or unhelpful results. This issue becomes particularly critical when AI is applied to tasks where precision and impartiality are essential.
A few weeks ago, I listened to an interview where philosopher Ned Block shared an example that vividly illustrates this issue. He explained that if you ask an artificial intelligence model, like ChatGPT or DALL-E, to generate an image of a clock showing 12:03, 09:30, or 15:40 — essentially any random time — it is highly likely the system will produce a clock set to 10:10.
Why does this happen? It is neither a technical error nor a limitation of the model’s ability to understand instructions. The reason lies in the data these systems are trained on. Most images of clocks available on the internet, particularly in advertisements, display the hands set to 10:10 for aesthetic reasons: this time frames the brand’s logo, creates symmetry, and conveys a visually pleasing message.
What may seem like a minor curiosity is, in fact, a powerful metaphor for how artificial intelligence models work: they do not “understand” the world as we do but instead learn and replicate dominant patterns found in their training data. This might not be a problem if it were only about clocks. However, in fields like journalism — where AI is increasingly being used to write headlines, analyze data, or create images for articles — the reproduction of biases can have significant consequences.
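To make the pattern concrete, here is a deliberately simplified sketch in Python. It is not how any real image model works, and every number in it is invented: a toy "generator" samples from the distribution of clock times it saw during training, with only a weak pull toward the requested time. Because 10:10 dominates the toy data, it dominates the output, whatever the prompt says.

```python
import random
from collections import Counter

# Toy training set mirroring the web: most clock photos show 10:10.
# (The proportions are made up purely for illustration.)
training_times = ["10:10"] * 92 + ["12:03", "09:30", "15:40", "07:25",
                                   "11:55", "14:20", "16:45", "08:15"]

def toy_clock_generator(requested_time: str) -> str:
    """A caricature of a generative model: it samples from the
    distribution it learned, with only a weak pull toward the prompt."""
    prior = Counter(training_times)
    # Weak prompt conditioning: the requested time gets a small boost,
    # but the learned prior still dominates when the data are skewed.
    weights = {t: count + (5 if t == requested_time else 0)
               for t, count in prior.items()}
    times, w = zip(*weights.items())
    return random.choices(times, weights=w, k=1)[0]

# Ask for 15:40 a thousand times; the dominant training pattern
# ("10:10") still accounts for the vast majority of outputs.
print(Counter(toy_clock_generator("15:40") for _ in range(1000)))
```

The point of the caricature is the ratio, not the mechanism: when one pattern overwhelms the training data, a statistical learner reproduces that pattern far more readily than it follows an instruction that contradicts it.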
Media, Biases, and Algorithms: When AI Shapes the News
Artificial intelligence-powered tools are becoming increasingly prevalent in newsrooms, touching nearly every stage of the editorial process, from automatic headline generation to audience-metrics analysis. Each integration of this kind of technology into a workflow, whether through automation or content generation, introduces new challenges that media organizations must address thoughtfully.
In AI in Journalism, I emphasize a key principle for understanding one of the most pressing challenges in this new landscape: AI models are only as good as the data that feeds them. The quality of these tools depends directly on the datasets they are trained on, and any biases within those datasets can lead to serious distortions in how the world is reported and narrated. If the data reinforces structural inequalities, cultural prejudices, or ideological distortions, AI-powered tools will inevitably amplify these flaws.
For instance, imagine a model trained predominantly on sensationalist headlines being deployed in a newsroom. What kind of output can we expect? Likely, it will generate content that prioritizes alarmist narratives, perpetuates stereotypes, or reinforces polarizing viewpoints. Similarly, an algorithm deciding which news stories to highlight may overrepresent certain topics or dominant perspectives while sidelining the voices of minorities or marginalized groups.
This issue is compounded by another equally troubling factor: within newsrooms, decisions generated by these systems are often perceived as “neutral” or “objective,” when in reality they are deeply shaped by the human biases embedded in the data behind them. This false aura of impartiality can dull the critical scrutiny needed to question and contextualize the results, a reminder that technology, far from being infallible, must remain under constant human oversight.
A Roadmap for AI-Powered Journalism
In the coming years, media organizations will face the challenge of developing strategies for the responsible and effective implementation of artificial intelligence in their operations. This will go beyond simply adopting new tools to improve efficiency; it will require taking an active role in the development, oversight, and ethical use of these technologies.
To achieve this, it will be essential for newsrooms to adopt concrete measures to monitor and refine their use of AI. At least four key approaches should be prioritized in this process:
- Data Audits: Review the data used to train AI models to ensure it is diverse, balanced, and representative (a minimal sketch of such an audit follows this list).
- Human Oversight: AI tools should complement, not replace, human editorial judgment. Human supervision is essential to ensure decisions are ethical and accurate.
- AI Education: Train journalists to understand how these tools work so they can identify biases or errors in the outputs generated.
- Clear Ethical Principles: Newsrooms must establish guidelines for AI use, supporting their journalists while prioritizing transparency, impartiality, and inclusivity.
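As a starting point, a data audit can be as simple as measuring how categories are distributed in the training material. The sketch below is hypothetical: the headlines are made up and the 40 percent threshold is arbitrary, but it shows the basic move of counting shares and flagging whatever dominates a fine-tuning set before a model ever learns from it.

```python
from collections import Counter

# Hypothetical training sample: (headline, topic) pairs a newsroom
# might collect to fine-tune a headline-generation model.
dataset = [
    ("Markets plunge amid panic", "economy"),
    ("Crime wave grips the city", "crime"),
    ("Crime surges again downtown", "crime"),
    ("Local library expands hours", "community"),
    ("Crime spikes in suburbs", "crime"),
    ("New park opens to families", "community"),
]

def audit_distribution(rows, threshold=0.4):
    """Print each topic's share of the dataset and flag any topic
    whose share exceeds `threshold` (an arbitrary cutoff)."""
    counts = Counter(topic for _, topic in rows)
    total = sum(counts.values())
    for topic, n in counts.most_common():
        share = n / total
        flag = "  <-- overrepresented" if share > threshold else ""
        print(f"{topic:<10} {n:>3} ({share:.0%}){flag}")

audit_distribution(dataset)
# Here "crime" accounts for half the sample, so a model trained on it
# would likely skew toward crime-heavy, alarmist headlines.
```

A real audit would of course go further, examining sources, regions, and whose voices are quoted, but even this crude count makes an invisible skew visible before it reaches production.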
Over the past three years, I have worked with various media outlets in the region, training their newsrooms and contributing to the integration of artificial intelligence into their editorial processes. This experience has allowed me to reaffirm a central idea: integrating artificial intelligence into journalism is not merely a matter of technological efficiency but also one of ethical and cultural responsibility.
This strategic approach requires journalists to go beyond being passive users of these tools and become active participants in their development and oversight. By engaging in the process, they can help minimize biases and ensure that emerging technologies serve to uphold and strengthen the mission and values of journalism within their organizations.
In this context, internal training programs within media organizations become essential. To enable a critical and effective use of these technologies, newsroom professionals need to understand their capabilities, limitations, and potential risks. Only by doing so can AI serve as a valuable complement that enriches journalistic practice, rather than becoming a substitute for editorial or ethical judgment.