From The Verge:
Microsoft is laying off dozens of journalists and editorial workers at its Microsoft News and MSN organizations. The layoffs are part of a bigger push by Microsoft to rely on artificial intelligence to pick news and content that’s presented on MSN.com, inside Microsoft’s Edge browser, and in the company’s various Microsoft News apps. Many of the affected workers are part of Microsoft’s SANE (search, ads, News, Edge) division, and are contracted as human editors to help pick stories.
What’s interesting here is that these are not editors in the conventional sense — they are curators, picking stories (“content”) to feature on the company’s various online news channels.
On the surface, curation seems like an easy pick for AI. (Harder problems have been solved.) And one might assume, correctly, that its pickings will be more eclectic than what a single human can possibly gather. But would I be inclined to follow a blog curated by an AI program instead of a single person?
No. Here’s why.
AI curators can’t take a moral stance. In recent weeks, some curated blogs I follow have been linking to and writing about the protests in the U.S. following the murder of George Floyd. The events have affected some of these curators deeply, and it shows in their writing. This human connection is something I cannot get from an AI-curated website.
AI curators don’t have skin in the game. Many of the things human curators link to are matters that affect them, directly or indirectly. Or they are things the curator has had a personal experience with. For the reader, knowing this creates a deeper resonance with the topic being shared and discussed. It’s not just the subject that’s interesting and worth thinking about, it is also the relationship the curator has with that subject. With an AI curator this relationship is absent, which strips meaning from the shared “content”, leaving it superficial and flavourless.
AI curators optimise for popularity, not interestingness. This is how engagement-driven recommendation algorithms are designed: over time they “learn” what kinds of stories are read and shared by more people, and these popular themes are given precedence over others. While human curators can also fall prey to this incentive, they aren’t hard-wired the way AI programs are. On a blog curated by a human, I’m far more likely to chance upon weird stuff that few people are interested in.
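The feedback loop is easy to see in miniature. Here is a toy sketch (hypothetical names throughout, not MSN’s actual system): stories are ranked by accumulated clicks, readers mostly click whatever sits on top, and that click feeds back into the ranking.

```python
import random

def rank_by_popularity(stories, clicks):
    """Toy engagement ranker: order stories by accumulated clicks.
    (A hypothetical sketch, not any real recommender's algorithm.)"""
    return sorted(stories, key=lambda s: clicks.get(s, 0), reverse=True)

def simulate_feedback(stories, rounds=1000, seed=0):
    """Whatever sits on top gets clicked most, which keeps it on top:
    a rich-get-richer loop that buries niche stories."""
    rng = random.Random(seed)
    clicks = {s: 1 for s in stories}  # everyone starts equal
    for _ in range(rounds):
        top = rank_by_popularity(stories, clicks)[0]
        # readers mostly click the top slot, occasionally something else
        pick = top if rng.random() < 0.9 else rng.choice(stories)
        clicks[pick] += 1
    return clicks

clicks = simulate_feedback(["weird essay", "celebrity profile", "local zine"])
print(rank_by_popularity(list(clicks), clicks))
```

After a thousand simulated page views, whichever story got an early lead has absorbed the overwhelming majority of clicks, and the ranking never lets the others surface. A human curator has no such built-in loop.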
But whether I like it or not, I cannot completely avoid curation algorithms. My Twitter or Instagram or LinkedIn feed is curated not by humans but by algorithms that decide what I should see from the people in my network. (Which is one of the reasons I prefer RSS feed readers: I can choose whom to follow, and there’s no AI intermediary curating what I see on the feed.)
What about curators in the world of art? Their jobs are also on the line, it seems. The Bucharest Biennale in 2022 is to be curated by an AI named JARVIS. If JARVIS rings a bell, it’s because that’s the name of Tony Stark’s AI assistant in the Iron Man films. Let’s hope his sense of humour has improved by the time JARVIS starts work on the 2022 event.
The Guardian has an update on this MSN story:
Microsoft’s decision to replace human journalists with robots has backfired, after the tech company’s artificial intelligence software illustrated a news story about racism with a photo of the wrong mixed-race member of the band Little Mix.
Perhaps it isn’t a mistake at all: the AI curator may have learned that stories that go wrong generate more publicity, and hence more traffic.
The MSN folks now see how humans and AI can work together in this context:
In advance of the publication of this article, staff at MSN were told to expect a negative article in the Guardian about alleged racist bias in the artificial intelligence software that will soon take their jobs.
Because they are unable to stop the new robot editor selecting stories from external news sites such as the Guardian, the remaining human staff have been told to stay alert and delete a version of this article if the robot decides it is of interest and automatically publishes it on MSN.com. They have also been warned that even if they delete it, the robot editor may overrule them and attempt to publish it again.
Will the “remaining human staff” remain on this duty, or will they too be replaced someday by racially aware robot supervisors that police the robot editors? It could be turtles all the way down.