Welcome to the Tech Journal. My name is Mark van Rijmenam, and I am The Digital Speaker. Today I’m going to be looking at AI and journalism: the possibilities, the limitations, and the outcomes we can expect.
AI journalists are increasingly ‘employed’ by publishers and organisations. The good news for human journalists is that there’s still very much a need for their services when writing articles.
But with machines now capable of doing more tasks than ever (and more complex ones), we face several important questions: what’s the role of AI in journalism, and what are the challenges and considerations when bringing AI into the newsroom?
So, get comfortable, sit back, and let us start speaking digital.
As I touched on in the first Tech Journal video (check it out if you haven’t already), Microsoft made waves earlier this year when it announced that more than 70 journalists and editors across the US and UK, working on Microsoft’s MSN website and its Edge browser, had been let go.
But their positions were not empty for long: Microsoft quickly replaced them with GPT-3, a state-of-the-art language model.
This cutting-edge digital employee quickly and accurately identifies relevant, interesting, and, most importantly, trending articles, performing much the same job as the team it replaced, but more cheaply and far more efficiently.
GPT-3’s talents don’t end with spotting patterns and trends, though. This clever little thing can also write, as it demonstrated in a guest article for the Guardian arguing that robots come in peace.
While AIs have come a long way, experts like Professor Kristian Hammond predicted back in 2012 that by 2027 more than 90 per cent of news articles would be written by an AI. With only seven years to go, that isn’t looking very likely. However, we are actually much closer than you might think, but I’ll touch on that later.
So the good news for us humans here is that AIs still don’t do very well when left to their own devices. When it comes to writing opinion pieces, commenting on a nuanced political situation, or producing articles that explore the human condition and dive deep into moral and ethical decisions, AIs are simply two processors short of a Macintosh.
Although, as we move forward with AIs now capable of doing more tasks with increasing complexity, we face several important questions.
What is the role of AI in journalism?
What challenges will we, as a species, face?
And what do we need to really consider before bringing AIs into the newsroom?
What is the current state of AI journalism?
Well, Machine-written articles are more common than you’d expect.
Over the past decade, “plug and play” articles have become an AI journalist favourite. These articles are simple: they follow a formula and need no independent research. They cover straightforward topics, like business and sports, that don’t require much nuance.
Bloomberg News uses an AI called ‘Cyborg’ that automatically scans companies’ quarterly reports and outputs an article with the most relevant information, while another AI, ‘Wordsmith’, is known for writing about college basketball games.
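These “plug and play” pieces are essentially fixed templates filled in with structured data. Here is a minimal sketch of that approach in Python; the template, company names, and figures are all invented for illustration, not Cyborg’s or Wordsmith’s actual code:

```python
# Minimal sketch of a template-driven "plug and play" article generator.
# All data below is made up.

EARNINGS_TEMPLATE = (
    "{company} reported {direction} earnings for {quarter}, "
    "posting revenue of ${revenue}m against analyst expectations of ${expected}m."
)

def write_earnings_blurb(report: dict) -> str:
    """Fill a fixed formula with figures pulled from a structured report."""
    direction = ("better-than-expected" if report["revenue"] >= report["expected"]
                 else "weaker-than-expected")
    return EARNINGS_TEMPLATE.format(direction=direction, **report)

article = write_earnings_blurb(
    {"company": "Acme Corp", "quarter": "Q3 2020", "revenue": 120, "expected": 110}
)
print(article)
```

The formula never changes; only the numbers do, which is exactly why these articles work for earnings reports and box scores but fall apart anywhere nuance is needed.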
However, as soon as you leave the formulas the cracks begin to appear.
So, for now, most topics are best left to journalists with actual fingers. Otherwise, while GPT-3 can write several paragraphs of super convincing text, there’s no guarantee the content will have any basis in reality.
So, while AIs don’t do so well writing on their own, they do make pretty good assistants.
One area where AIs really excel is transcribing interviews. It’s a basic job, just writing out what someone says, but offloading it onto an AI saves journalists a huge amount of time, and while the results are rarely flawless, the few errors that do occur are easily corrected by a human editor.
We’ll likely see a lot more of this in the future. Only a short while ago, back in April 2019, Trint, an automated speech-to-text service, closed a $4.5 million funding round, with the Associated Press and the Google Digital News Innovation Fund among its investors.
According to Jeff Kofman, Trint’s co-founder and a former TV news reporter, the software can quickly output transcriptions with accuracy rates of 95 to 99 per cent.
As I briefly touched on before, AIs can also assist in identifying trends and developments worthy of investigation.
AI Journalism Examples
Back in 2018, we watched as Forbes rolled out a new AI content management system, “Bertie”. Bertie’s job was to recommend article topics and headlines to contributors, based on their previous work.
In the months after Bertie joined the team, Forbes reported that the number of loyal visitors, meaning visitors who come to the Forbes website more than once a month, had doubled.
This demonstrates that AIs can really tap into what users want to see, which is why they are increasingly filling in as curators, further spreading AI’s influence. And here’s where we can thank Facebook.
It is well-known that Facebook’s algorithm looks at user data and goes on to use this when generating the user’s feed.
Well, most news organisations followed Facebook’s lead on this one and now use similar AIs to keep track of all their subscribers’ data. The AI then analyses their behaviour and uses the data to “personalise” what the user is shown.
This way, the more a news outlet knows about its customers, the more user-specific content it can display on its website or newsletters, delivering a “personalised” experience encouraging interaction.
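That personalisation step can be sketched as a simple ranking rule: score each candidate article by how well its topics overlap a reader’s history. This is a toy Python version, with all names and data invented; real recommender systems are far more sophisticated:

```python
# A toy version of feed personalisation: rank candidate articles by how well
# their topics overlap a reader's recorded interests. All data is hypothetical.
from collections import Counter

def rank_for_reader(articles, reading_history):
    """Score each article by topic overlap with the reader's history."""
    interests = Counter(topic for a in reading_history for topic in a["topics"])

    def score(article):
        return sum(interests[t] for t in article["topics"])

    return sorted(articles, key=score, reverse=True)

history = [{"topics": ["sports", "basketball"]}, {"topics": ["sports"]}]
candidates = [
    {"title": "Election recap", "topics": ["politics"]},
    {"title": "Playoff preview", "topics": ["sports", "basketball"]},
]
ranked = rank_for_reader(candidates, history)
print(ranked[0]["title"])  # the sports story ranks first for this reader
```

The principle is the same whatever the scale: the more behavioural data the outlet holds, the sharper this ranking gets.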
According to a 2019 survey by the digital media company Digiday, 70 per cent of digital publishers, like Microsoft Bing, Associated Press Media, and The New York Times, personalise content for visitors.
Data-Driven Business Decisions
But AIs are not just helping out front; they are also in the boardroom, helping make data-driven business decisions.
By collecting data and crunching the numbers, an AI can help managers and executives make decisions about a whole range of factors, including content decisions, subscriber acquisition, marketing campaigns, and pricing.
The Wall Street Journal’s dynamic paywall is a good example of this.
The paywall gives non-members differing amounts of access based on the likelihood that they will buy a subscription.
So, for example, a reader finding an article through the actual Wall Street Journal’s website will have more access to content before being blocked by the paywall, than someone who found the article through a link on social media.
Basically, an AI decides who gets the free samples based on how likely those people are to actually buy something.
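That decision rule can be sketched in a few lines of Python. The thresholds and scores below are purely illustrative assumptions, not the Journal’s actual model:

```python
# A highly simplified sketch of a dynamic-paywall rule in the spirit of the
# WSJ example: the free-article quota depends on a subscription-propensity
# score and the referral source. All thresholds here are invented.

def free_article_quota(propensity: float, referrer: str) -> int:
    """Return how many free articles a non-member gets before the paywall."""
    base = 1 if referrer == "social" else 3   # direct visitors sample more
    if propensity > 0.7:                      # likely buyers get the most
        return base + 2
    if propensity > 0.3:
        return base + 1
    return base

print(free_article_quota(0.8, "direct"))  # engaged direct visitor: 5 articles
print(free_article_quota(0.1, "social"))  # casual social-media visitor: 1 article
```

In a real system the propensity score would itself come from a model trained on subscriber data; the point here is just that access becomes a function of predicted value, not a flat quota.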
Challenges of AI Journalism
So far so good. AIs seem pretty advanced.
However, as Professor Kristian Hammond’s prediction showed us earlier, AI Journalism isn’t where we thought it would be by now.
According to a 2019 ‘JournalismAI’ report, which surveyed 71 news organisations in 32 countries, there are still significant difficulties limiting AI.
So what’s limiting it, and is there a hard limit?
Well, the top three challenges of bringing AI into the newsroom seem to be financial resources, a lack of knowledge or skills, and cultural resistance.
Beyond these institutional barriers, there are other good reasons to question whether AI in journalism is really the game-changer that advocates claim it will be.
Columbia journalism professor Francesco Marconi estimates that in the future, only 8 to 12 per cent of a reporter’s tasks will be replaceable by a machine. So, things like the interview transcription I talked about earlier, and maybe proofreading.
The state of artificial intelligence for journalism helps illustrate the difference between strong and weak AI.
Strong, or “general”, AI is a machine that approaches human-level intelligence across the board. We still seem decades away from developing something like this, if it is even possible.
Weak AI, on the other hand, is really good at a small number of specific tasks. This is the kind of AI we are seeing pop up here and there, and it is a lot more likely to find employment.
At the end of the day, it is pretty reasonable to be sceptical that newsrooms will ever see an AI reporter that can stand shoulder to shoulder, metaphorically, with its human counterpart. Having said that, we cannot forget that AI is playing, and will continue to play, an increasingly important support role in journalism.
Whether it is creating article summaries, generating ideas and proposals, analysing data and trends, finding interesting stories, or making marketing and subscriber decisions, it is pretty safe to say that weak AI will only become more prevalent with every passing year.
AI Journalism Ethical Concerns
Alright, so strong AI out, weak AI in. Will there be any repercussions to this or is it pretty plain sailing?
Well, there seem to be varying repercussions of AI journalism, but most concerns centre on ethics.
Ethics in journalism, as well as AI, is already a sensitive topic, and so combining journalism with AI makes for an especially contentious combination.
After all, we are living through the era of “fake news”, where only 41 percent of Americans are reported to trust the media, while only 46 per cent of Europeans say they trust their written press.
This lack of trust is exactly why AIs like Mavin, a program designed to score online content based on its reliability, are necessary.
There are two distinct ethical concerns for mixing journalism and AI.
First, using AI to generate content, and second, using AI to curate and display content.
Both issues play into each other too.
Back in 2014, Facebook revealed the results of a controversial experiment with users’ news feeds. People who were exposed to “positive emotional content” on Facebook posted more positive posts of their own, while the inverse effect happened with people who saw more negative content.
Similarly, a news company that promotes articles based only on users’ interactions would likely find itself in a race to the bottom, spitting out clickbait articles and news that induces sadness and outrage.
Meaning an AI will probably decide to show more negative content, since it gets more engagement; users then share more negative content, driving more interactions, which leads back round to the AI surfacing even more negative content.
This creates a self-fulfilling prophecy: a self-perpetuating cycle of negative news.
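A tiny simulation makes that loop concrete. The click rates below are assumed, not measured; the only ingredient is that negative stories get slightly more engagement and the recommender re-weights the feed by yesterday’s clicks:

```python
# A tiny simulation of the engagement feedback loop: if the recommender boosts
# whatever got the most clicks, and negative stories click slightly better,
# the feed drifts negative. All numbers are illustrative assumptions.
import random
from collections import Counter

random.seed(0)
share_negative = 0.5                                # initial mix of the feed
CLICK_RATE = {"negative": 0.12, "positive": 0.08}   # assumed engagement gap

for day in range(30):
    shown = ["negative" if random.random() < share_negative else "positive"
             for _ in range(1000)]
    clicks = Counter(s for s in shown if random.random() < CLICK_RATE[s])
    # The recommender re-weights tomorrow's feed by today's click share.
    share_negative = clicks["negative"] / (clicks["negative"] + clicks["positive"])

print(round(share_negative, 2))  # well above the initial 0.5
```

A modest 12 per cent versus 8 per cent click-rate gap is enough to push the feed almost entirely negative within a month, which is exactly the dystopia a human editor needs to interrupt.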
To avoid this dystopian nightmare, news outlets must maintain a steady human hand on the wheel.
While AIs can churn out hundreds of cut-and-paste clickbait articles per day, these still need human editing and fact-checking before they go to print.
Likewise, humans need to oversee news curation algorithms to ensure that the results are consistently high-quality, and keep recommendation algorithms under control.
Still, AI journalists have proven their ability to take on much of the field’s hard labour. Collecting data, transcribing recordings, writing short interest articles, and so on.
But when it comes to the work that truly makes a news organisation stand out, in-depth reporting, political commentary, and opinion columns, it’s clear that humans will always be an essential part of the journalism industrial complex.
And with that cheery note, I’ll leave this video here.
My name is Mark van Rijmenam, and I am The Digital Speaker.
Don’t forget to like and subscribe to keep up with all the latest digital news, and see you next time for your digital download.
If I managed to retain your attention to this point, please leave a comment or subscribe to my weekly newsletter to receive more of this content:
Dr Mark van Rijmenam is The Digital Speaker and he offers inspirational (virtual) keynotes on the future of work, either in-person, as an avatar or as a hologram, bringing your event to the next level: