Artificial intelligence is revolutionizing technology and the creative industries, and journalism is no exception. So how do we ensure that this technology is used ethically and responsibly? In today’s column, we dive into an ethical debate that illustrates the challenges of AI in journalism.
Adam Singolda, CEO and founder of Taboola, wrote an insightful analysis in his article “Ethical Debate in the Age of AI: OpenAI, the New York Times, and the Path to Responsible Innovation,” which I would like to share with you today. Adam goes beyond assigning blame and explores the nuances of the problem, asking what outcomes, culture, and values are at stake. It is an insightful look at the interaction between technology and journalism, and a contribution to the ongoing reflection on the role of AI in the current media landscape.
…
It is no news that the New York Times is suing OpenAI, expressing its discomfort with the use of journalistic content to train AI. And who wouldn’t be uncomfortable? No payment, no attribution. That’s not cool. Looking back, we can identify a similar scenario almost a decade ago, when Meta (then Facebook) launched Instant Articles in 2015. Despite using valuable articles and hard journalistic work, Facebook paid little (or nothing) to publishers and failed to send them significant traffic. The result? Many publishers, including the New York Times, abandoned Instant Articles. Now, nearly a decade later, the dynamic is repeating itself with OpenAI.
OpenAI argues that it has the right to use publicly accessible news to train its AI, perhaps comparing this to Google’s web crawling. However, in many debates, the question is not who is right, but what outcome we want, and which culture, identity, and values are involved.
Now, looking forward, OpenAI has a chance to become a positive force, supporting journalism, promoting high-quality content, and strengthening the open web. Or it can choose not to and follow another path. If the top 100 websites in the world (Wikipedia, the New York Times, Reddit, etc.) were to block OpenAI, or even demand that it remove their data from its latest crawl, the company would lose a great deal of value. On the other hand, generative AI is a significant revolution that perhaps cannot be ignored.
In my opinion, OpenAI will do the right thing. There are reports that it is considering paying up to $5 million to license content from publishers to train its AI. And from my point of view, OpenAI should pay publishers whatever they ask for, and more. Unlike Facebook, which is a 100% advertising company, OpenAI has the opportunity to put its technology into the hands of hundreds of millions of users through partnerships with enterprise accounts around the world and charge for it. In fact, it can charge a lot: if we compare how much enterprises pay for cloud services, this could easily become $100 billion in revenue per year for OpenAI. Not to mention that OpenAI would be remembered for being on the right side of history, unlike what Facebook did to publishers and the open web.
Given the relationship between Microsoft and OpenAI, I see Microsoft as a more suitable partner for journalism than Facebook ever was. This also makes me think that OpenAI will do the right thing, and where Facebook failed, Microsoft will succeed in strengthening the open web and journalism by paying editorial teams for the important content they produce.
As a parent, I understand the importance of this issue. We need an open web and robust journalism, especially with social media threatening our children’s future with so much hate and fake news – I wrote about this for CNBC. Newsrooms are essential to confronting it. In short, I am optimistic. My opinion is that OpenAI will do the right thing.
* This text does not necessarily reflect the opinion of this outlet.
Follow Adnews on Instagram and LinkedIn. #WhereTransformationHappens