What’s with social media these days?

It seems not so long ago that we were all discussing how social media captures and uses our personal data, and how it grabs our attention and keeps us constantly distracted. Recent years have produced many good books examining these issues; The Age of Surveillance Capitalism and How to Do Nothing are two that readily come to mind.

But 2020 has altered our calculus about these matters. Online interactions through social media have been a godsend during lockdowns, and one doesn’t hear much noise anymore about all the data these companies are gathering. Instead, the spotlight is back where it was four years ago: on the potential of social media platforms to launch disinformation campaigns that disrupt elections, heighten polarisation, and spread conspiracies. Here’s the New Yorker, writing about social media’s disinformation problem:

In the run-up to this year’s Presidential election, e-mails and videos that most analysts attributed to the Iranian government were sent to voters in Arizona, Florida, and Alaska, purporting to be from the Proud Boys, a neo-Fascist, pro-Trump organization: “Vote for Trump,” they warned, “or we will come after you.” Calls to voters in swing states warned them against voting and text messages pushed a fake video about Joe Biden supporting sex changes for second graders. But a truly ambitious disinformation attack would be cleverly timed and coördinated across multiple platforms. If what appeared to be a governor’s Twitter account reported that thousands of ballots had gone missing on Election Day, and the same message were echoed by multiple Facebook posts—some written by fake users or media outlets, others by real users who had been deceived—many people might assume the story to be true and forward it on. The goal of false information need not be an actual change in events; chaos is often the goal, and sowing doubt about election results is a perfect way to achieve it.

Compared with this problem, the matter of social media influencing our attention seems trivial now. That’s quite a turnaround from the previous years, and it makes me wonder if things we deem important are entirely a function of the dominant media narratives of the times we live in. Covid-19 has grabbed our attention away from other healthcare matters, and the U.S. election (and what it means to democracy) seems to take precedence over things happening elsewhere in the world.

Will history restore objectivity?

The internet is not what you think it is

From a superb essay by the incomparable Justin E. H. Smith:

The internet is not what you think it is. For one thing, it is not nearly as newfangled as you probably imagine. It does not represent a radical rupture with everything that came before, either in human history or in the vastly longer history of nature that precedes the first appearance of our species. It is, rather, only the most recent permutation of a complex of behaviors as deeply rooted in who we are as a species as anything else we do: our storytelling, our fashions, our friendships; our evolution as beings that inhabit a universe dense with symbols.

Among other things, the essay talks about fungal networks: the symbiosis between fungi and the roots of plants reveals an intelligence we do not typically associate with such species. It reminded me of another essay I read recently about these networks. Here’s Ashutosh Jogalekar, writing in 3 Quarks Daily:

The discovery that fungal networks could supply trees with essential nutrients in a symbiotic exchange was only the beginning of the surprises they held. Sheldrake talks in particular about the work of the mycologists Lynne Boddy and Suzanne Simard who have found qualities in the mycorrhizal networks of trees that can only be described as deliberate intelligence. Here are a few examples: fungi seem to “buy low, sell high”, providing trees with important elements when they have fallen on hard times and liberally borrowing from them when they are doing well. Mycorrhizal networks also show electrical activity and can discharge a small burst of electrochemical potential when prodded. They can entrap nematodes in a kind of death grip and extract their nutrients; they can do the same with ants. Perhaps most fascinatingly, fungal mycelia display “intelligence at a distance”; one part of a huge fungal network seems to know what the other is doing.

Blockchain: A force of nature?

From The Correspondent:

His first job was to explain what blockchain is. When I asked him, he said it is “a kind of system that can’t be stopped”, that it’s “actually a force of nature”, or rather, “a decentralised consensus algorithm”. OK, it’s hard to explain, he conceded eventually. “I said to Zuidhorn: ‘I’ll just build you an app, then you’ll understand’.”

When someone talks about a technology as a force of nature, it’s time to grow cautious. The hype around blockchain has been huge, and the results not so promising:

The only thing is that there’s a huge gap between promise and reality. It seems that blockchain sounds best in a PowerPoint slide. Most blockchain projects don’t make it past a press release, an inventory by Bloomberg showed. The Honduran land registry was going to use blockchain. That plan has been shelved. The Nasdaq was also going to do something with blockchain. Not happening. The Dutch Central Bank then? Nope. Out of over 86,000 blockchain projects that had been launched, 92% had been abandoned by the end of 2017, according to consultancy firm Deloitte.

I’ve seen this in IT companies and research institutions. The hype led to some projects using blockchain technology where a centralised database would have done the job better. Apart from the hype, the problematic thing with such initiatives was that people didn’t really understand the technology well. To many, it was — and still is — like “magic”. And as the article says, the market for magic is big: “Whether it’s about blockchain, big data, cloud computing, AI or other buzzwords.”
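Part of the confusion is that the part of blockchain that is easy to demonstrate is not the part that is hard to build. The tamper-evident ledger at its core is just a chain of hashes, simple enough to sketch in a few lines of Python; what the sketch deliberately omits is the decentralised consensus, which is precisely what separates a blockchain from an ordinary centralised database (this is an illustration, not a real implementation):

```python
import hashlib
import json
import time


def make_block(data, prev_hash):
    """Create a block whose hash covers its data and the previous block's hash."""
    block = {"data": data, "prev_hash": prev_hash, "timestamp": time.time()}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block


def verify_chain(chain):
    """A chain is valid if every block's stored hash matches its contents
    and points at the hash of the block before it."""
    for i, block in enumerate(chain):
        payload = json.dumps(
            {k: block[k] for k in ("data", "prev_hash", "timestamp")},
            sort_keys=True,
        ).encode()
        if block["hash"] != hashlib.sha256(payload).hexdigest():
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True


chain = [make_block("genesis", "0" * 64)]
chain.append(make_block("Alice pays Bob 5", chain[-1]["hash"]))
chain.append(make_block("Bob pays Carol 2", chain[-1]["hash"]))

assert verify_chain(chain)
chain[1]["data"] = "Alice pays Bob 500"  # tamper with history
assert not verify_chain(chain)          # every later hash now fails to check out
```

Tamper-evidence alone is cheap; any database with an audit log gets you close. The expensive promise is getting thousands of mutually distrusting parties to agree on the same chain without a central authority, and that is where most of the abandoned projects never needed to go.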

Maybe this is blockchain’s greatest merit: it’s an awareness campaign, albeit an expensive one. “Back-office management” isn’t an item on the agenda in board meetings, but “blockchain” and “innovation” are. 

The Jobs AI is Replacing

From a recent article in Vogue titled I am a Model and I Know That Artificial Intelligence Will Eventually Take My Job:

Digital models and influencers are successfully breaking into the fashion industry from every angle. Some have even been signed to traditional modeling agencies. Take Miquela Sousa, a 19-year-old Brazilian American model, influencer, and now musician, who has amassed a loyal following of more than 2 million people on Instagram.

This is not a segment I expected AI to cover so soon. First, there’s the uncanny valley (and looking at Miquela, it’s clear they haven’t crossed it). There are also the practices of a decades-old industry, with fashion shows and fashion shoots, involving not just models but a larger ecosystem of photographers, fashion designers, and others. And then there’s the question of who the models represent and what they stand for:

There are major issues of transparency and authenticity here because the beliefs and opinions don’t actually belong to the digital models, they belong to the models’ creators. And if the creators can’t actually identify with the experiences and groups that these models claim to belong to (i.e., person of color, LGBTQ, etc.), then do they have the right to actually speak on those issues? Or is this a new form of robot cultural appropriation, one in which digital creators are dressing up in experiences that aren’t theirs?

Miquela

The digital approach has its advantages, and these have been amplified by the environment created by Covid-19:

The COVID-19 pandemic has directly highlighted the need for these types of digital solutions. Anifa Mvuemba, a fashion designer and creative director for Hanifa, a contemporary ready-to-wear apparel line for women, recently made headlines when she launched her collection on Instagram Live using 3D models on a virtual catwalk. 

The AI wave is here, and it will affect more kinds of jobs than we imagine today. To understand and deal with its impact, we need to move away from stereotypes (like truck drivers losing their jobs to self-driving trucks) and statistics and look instead at specific instances like the one above. Each instance will bring its own challenges (and opportunities, perhaps). Each instance will also help us understand what humans can do better and (this is unfortunate but has to be said) help us redefine ourselves to differentiate from (and in some cases co-opt) our AI counterparts. As the author, a model herself, writes:

So what does all of this mean for living and breathing models? It’s safe to say that we will have to prepare for a changing workforce just like everyone else. We will have to exercise skills such as adaptability and creative intelligence to ensure that we too can sustain the shift to digital. Edwards-Morel, the 3D fashion design expert, advised me to look into creating a digital avatar of myself.

This instance also shows how fast we are moving towards replacing real life experiences with their digital twins. Humans have been confusing the representation of a thing with the thing itself for a long time. Writing in 1967 about the rising consumer culture, the French philosopher Guy Debord said: “In societies dominated by modern conditions of production, life is presented as an immense accumulation of spectacles. Everything that was directly lived has receded into a representation.”

The trend is only going to intensify as representations become more life-like and assume intelligent traits. The boundary between our real lives and those second lives is inevitably going to blur.

Languages will change significantly on interstellar flights

In his sci-fi novella ‘Story of Your Life’, Ted Chiang takes up the question of communicating with aliens who’ve appeared on Earth. The story’s protagonist is a linguist hired to understand and communicate with the aliens. (The movie ‘Arrival’ is based on this novella.)

Language is a matter most science fiction works gloss over. There’s always a handy piece of technology, like Arthur Dent’s Babel fish, that smooths out any issue related to understanding the aliens. (Such devices already exist, and they are getting better.) But even if we don’t run into aliens, interstellar travellers — in science fiction, and perhaps in our future — would need to communicate with people back on Earth. According to a recent study by a team of linguistics professors, this would pose a problem:

In this study, McKenzie and Punske discuss how languages evolve over time whenever communities grow isolated from one another. This would certainly be the case in the event of a long interstellar voyage and/or as a result of interplanetary colonization. Eventually, this could mean that the language of the colonists would be unintelligible to the people of Earth, should they meet up again later.

The problem gets worse with a new generation of immigrants arriving at a distant space colony:

Last, but not least, they address what will happen when subsequent ships from Earth reach the colonized planets and meet the locals. Without some means of preparation (like communication with the colony before they reach it), new waves of immigrants will encounter a language barrier and could find themselves subject to discrimination.

The solution? No, it isn’t AI.

Because of this, they recommend that any future interplanetary or interstellar missions include linguists or people who are trained in what to expect—translation software ain’t gonna cut it. They further recommend additional studies of likely language changes aboard interstellar spacecraft so people know what to expect in advance.

Linguists among the crew. I’m sure Ted Chiang would agree.

Your printer too is a surveillance machine

Among the many forensic faculties Sherlock Holmes possessed was one that allowed him to determine the typewriter a typed sheet originated from. Every typewriter left a unique fingerprint, and this is true not only in the realm of detective fiction: US courts have been using typewriter-based evidence for decades.

With the fading away of typewriters, this forensic skill may have become redundant, but printing technology has found a way to help the modern-day variants of Holmes-like characters. From The Generalist:

Since the 1980s most colour printers and photocopiers add a set of secret near-invisible dots to every page they print. The dots uniquely identify the origin and timestamp of that printout.

Every printed page contained yellow dots that included the serial number of the printer / copier and a date stamp. It’s an anti-counterfeit trick, essentially. If someone prints out fake money, all that law enforcement needs to do is find the yellow dots with an ultraviolet lamp and decode them. Once decoded, they’ll know just where it came from (assuming, of course, that they can track the unique serial number) and when it was printed.
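Decoding such a grid is mechanically simple once the layout is known; the hard part is that real layouts are manufacturer-specific and largely undocumented (the EFF published a decoding guide for one Xerox line). The sketch below is purely hypothetical: the column meanings are invented for illustration, and only the bit-reading mechanics resemble what a real forensic decoder would do:

```python
# Hypothetical decoder for a printer tracking-dot grid. Real encodings
# differ per manufacturer; the column layout here is invented purely
# for illustration.

def decode_dot_grid(grid):
    """grid: 8 rows x N columns of 0/1 dots. Each column is read as one
    byte, top row = most significant bit."""
    values = []
    for col in range(len(grid[0])):
        byte = 0
        for row in range(8):
            byte = (byte << 1) | grid[row][col]
        values.append(byte)
    # Invented layout: columns 0-4 are minute, hour, day, month, year;
    # the remaining columns are the printer's serial number digits.
    minute, hour, day, month, year = values[:5]
    serial = "".join(str(v) for v in values[5:])
    return {
        "timestamp": f"20{year:02d}-{month:02d}-{day:02d} {hour:02d}:{minute:02d}",
        "serial": serial,
    }


# A grid encoding (under the invented layout) 14:30 on 9 June 2020,
# printed on serial number 123:
vals = [30, 14, 9, 6, 20, 1, 2, 3]
grid = [[(v >> (7 - r)) & 1 for v in vals] for r in range(8)]
print(decode_dot_grid(grid))  # → timestamp "2020-06-09 14:30", serial "123"
```

The point of the sketch is how little information is needed: a few dozen near-invisible dots are plenty to tie a page to one machine and one moment in time.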

A cool technique perhaps, but it can be used as a surveillance tool too:

…these hidden marks were printed on everything… which is a problem if you are (for example) a whistle-blower in a totalitarian government and don’t want your leaked documents to connect back to you.

Virtual Assistants will own the relationship with the consumer

I’d heard of B2B2C, but I recently stumbled upon B2R2C in an article in Forbes: Why CMOs need to start thinking about Business To Robot To Consumer (B2R2C):

Today, robots are already engaging with customers on a personal level. Bots can qualify leads, personalize user experiences to create custom ads, simplify the purchase process, and even help customers shop and style themselves. For instance, H&M uses a messaging chatbot that helps customers find clothes and outfits using conversation, as if the customer was shopping with a friend. These examples are just a few as to how people already engage with robots. 

As talking to and relying on robots become more mainstream, it only makes sense that robots are marketed to so that they know what to offer and suggest to people based on their personalized preferences. 

It may seem early for this, but the direction is clear.

Millions of users across the globe use digital assistants like Alexa, Google, and Siri on a daily basis. These devices already use Artificial Intelligence to try to predict what we’re going to type, what we may ask, or what we might need. For instance, these voice command services can predict if we’re coming down with a cold from the sound of our voice. These systems will only become stronger and more adept as the technology improves and our confidence in their advice and capabilities becomes more mainstream. Our voice assistant will know when we need something before we need it. That means for CMOs, now’s the time to create a strategy for marketing to robots.

But what happens when the CMO’s decisions are taken over by bots? We will then have a CMO bot deciding which consumer bots to target. In a bot-filled universe, humans will have a place too: they will turn into what capitalism ultimately wants them to be, just consumers.

If you thought WALL-E was merely trying to be funny, think again.

The school for poetic computation

Founded in 2013, the School for Poetic Computation is “an artist run school in New York” where “a small group of students and faculty work closely to explore the intersections of code, design, hardware and theory — focusing especially on artistic intervention.” Their mission:

“…is to promote completely strange, whimsical, and beautiful work – not the sorts of things useful for building a portfolio for finding a job, but the sort of things that will surprise and delight people and help you to keep creating without a job. However, employers tell us they appreciate this kind of work as well.

This is not a program to get a degree, there are large programs for that. This is not a program to go for vocational skills, there are programs for that. This is a program for self-initiated learners who want to explore new possibilities. This is a program for thinkers in search of a community to realize greater dreams.”

We need more schools like these.

Coding is seen as an art form by many of its practitioners. I’m reminded of the writings of Paul Ford on Ftrain. In Processing Processing, he confesses his passion “for languages like Processing—computer languages which compile not to executable code, but to aesthetic objects, whether pictures, songs, demos, or web sites.” I’m reminded of Paul Graham’s essay, Hackers and Painters. And I’m reminded of Vikram Chandra’s Mirrored Mind: My Life in Letters and Code, a gift from a friend and a book I’m yet to read, whose blurb says “Chandra delves into the writings of Abhinavagupta, the tenth-and-eleventh-century Kashmiri thinker, and creates an idiosyncratic history of coding.”

This is a topic that merits a longer essay. Someday.

When AI becomes the curator

From The Verge:

Microsoft is laying off dozens of journalists and editorial workers at its Microsoft News and MSN organizations. The layoffs are part of a bigger push by Microsoft to rely on artificial intelligence to pick news and content that’s presented on MSN.com, inside Microsoft’s Edge browser, and in the company’s various Microsoft News apps. Many of the affected workers are part of Microsoft’s SANE (search, ads, News, Edge) division, and are contracted as human editors to help pick stories.

What’s interesting here is that these are not editors in the conventional sense — they are curators, picking stories (“content”) to feature on the company’s various online news channels.

Curation is what I do here on this blog. And this idiosyncratic picking of stories is what draws me to blogs curated by humans: Kottke, Thought Shrapnel, Daring Fireball, and so on.

On the surface, curation seems like an easy pick for AI. (Harder problems have been solved.) And one might assume, correctly, that its pickings will be more eclectic than what a single human can possibly gather. But would I be inclined to follow a blog curated by an AI program instead of a single person?

No. Here’s why.

AI curators can’t take a moral stance. In recent weeks, some curated blogs I follow have been linking to and writing about the protests in the U.S. following the murder of George Floyd. It has affected some of them deeply. This human connection is something I cannot get from reading an AI-curated website.

AI curators don’t have skin in the game. Many of the things human curators link to are matters that affect them, directly or indirectly. Or they are things the curator has had a personal experience with. For the reader, knowing this creates a deeper resonance with the topic being shared and discussed. It’s not just the subject that’s interesting and worth thinking about, it is also the relationship the curator has with that subject. With an AI curator this relationship is absent, which strips meaning from the shared “content”, leaving it superficial and flavourless.

AI curators optimise for popularity, not interestingness. This is how machine learning algorithms are designed: over time they “learn” what kinds of stories are read and shared by more people, and these popular themes are given precedence over others. While human curators can also fall prey to this incentive, they aren’t hard-wired the way AI programs are. I’m far more likely to chance upon weird stuff not many are interested in on a blog curated by a human.
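That feedback loop is easy to caricature in code. The toy sketch below (not any platform’s actual ranking system; the topics, clicks, and learning rule are all made up) shows how engagement-trained weights push popular topics up and bury niche ones:

```python
# Toy model of an engagement-trained story ranker. Illustrative only:
# real recommender systems are far more elaborate, but share this loop.

def rank_by_engagement(stories, weights):
    """Order stories by the learned engagement weight of their topic;
    topics that have performed well float to the top."""
    return sorted(stories, key=lambda s: weights.get(s["topic"], 0.0), reverse=True)


def update_weights(weights, story, clicks, lr=0.1):
    """Nudge a topic's weight toward its observed clicks: the feedback
    loop that makes already-popular topics ever more prominent."""
    w = weights.get(story["topic"], 0.0)
    weights[story["topic"]] = w + lr * (clicks - w)
    return weights


weights = {}
update_weights(weights, {"topic": "election"}, clicks=100)  # hot topic
update_weights(weights, {"topic": "fungi"}, clicks=2)       # niche gem
stories = [
    {"title": "Mycorrhizal networks", "topic": "fungi"},
    {"title": "Poll roundup", "topic": "election"},
]
print(rank_by_engagement(stories, weights)[0]["title"])  # → "Poll roundup"
```

Nothing in the loop rewards the weird, low-traffic find; a human curator can simply decide it matters.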

But whether I like it or not, I cannot completely avoid curation algorithms. My Twitter or Instagram or LinkedIn feed is curated not by humans but by algorithms that decide what I should see from the people in my network. (Which is one of the reasons I prefer RSS feed readers: I can choose whom to follow, and there’s no AI intermediary curating what I see on the feed.)

What about curators in the world of art? Their jobs are also on the line, it seems. The Bucharest Biennale in 2022 is to be curated by an AI named JARVIS. If JARVIS rings a bell, it’s because he appears in the movie Iron Man. Let’s hope his sense of humour has improved by the time JARVIS starts work on the 2022 event.

Update:

The Guardian has an update on this MSN story:

Microsoft’s decision to replace human journalists with robots has backfired, after the tech company’s artificial intelligence software illustrated a news story about racism with a photo of the wrong mixed-race member of the band Little Mix.

Perhaps it isn’t a mistake at all: the AI curator may have learned that stories that go wrong generate more publicity, and hence more traffic.

The MSN folks now see how humans and AI can work together in this context:

In advance of the publication of this article, staff at MSN were told to expect a negative article in the Guardian about alleged racist bias in the artificial intelligence software that will soon take their jobs.

Because they are unable to stop the new robot editor selecting stories from external news sites such as the Guardian, the remaining human staff have been told to stay alert and delete a version of this article if the robot decides it is of interest and automatically publishes it on MSN.com. They have also been warned that even if they delete it, the robot editor may overrule them and attempt to publish it again.

Will the “remaining human staff” remain in this duty, or will they too be replaced sometime by racially aware robot supervisors that police the robot editors? It could be turtles all the way down.

Disaster Capitalism in Covid times

Naomi Klein, author of The Shock Doctrine, writes in The Intercept about how the tech industry is rushing to capitalise on the pandemic-induced crisis:

This is a future in which, for the privileged, almost everything is home delivered, either virtually via streaming and cloud technology, or physically via driverless vehicle or drone, then screen “shared” on a mediated platform.

It’s a future that claims to be run on “artificial intelligence” but is actually held together by tens of millions of anonymous workers tucked away in warehouses, data centers, content moderation mills, electronic sweatshops, lithium mines, industrial farms, meat-processing plants, and prisons, where they are left unprotected from disease and hyperexploitation. It’s a future in which our every move, our every word, our every relationship is trackable, traceable, and data-mineable by unprecedented collaborations between government and tech giants.

If all of this sounds familiar it’s because, pre-Covid, this precise app-driven, gig-fueled future was being sold to us in the name of convenience, frictionlessness, and personalization. But many of us had concerns.

Today, a great many of those well-founded concerns are being swept away by a tidal wave of panic, and this warmed-over dystopia is going through a rush-job rebranding. Now, against a harrowing backdrop of mass death, it is being sold to us on the dubious promise that these technologies are the only possible way to pandemic-proof our lives, the indispensable keys to keeping ourselves and our loved ones safe.

This narrative will be familiar to those who’ve read her work on ‘Disaster Capitalism’, which, as she explains in a VICE interview, “describes the way private industries spring up to directly profit from large-scale crises.”

So “the ‘shock doctrine’ is the political strategy of using large-scale crises to push through policies that systematically deepen inequality, enrich elites, and undercut everyone else. In moments of crisis, people tend to focus on the daily emergencies of surviving that crisis, whatever it is, and tend to put too much trust in those in power. We take our eyes off the ball a little bit in moments of crisis.”

Which, according to Naomi, is exactly what’s happening with the Covid-19 crisis today.

She singles out Eric Schmidt, ex-CEO of Google, for orchestrating this move towards an AI-driven, surveillance-based economy. Before Covid, his lobbying strategy was based on instilling the fear of the U.S. being overtaken by China. Now the same intent is being advanced under the guise of fighting the virus. And he is faring much better at reaching his goals.

Until recently, “democracy — inconvenient public engagement in the designing of critical institutions and public spaces — was turning out to be the single greatest obstacle to the vision Schmidt was advancing”, but now:

…in the midst of the carnage of this ongoing pandemic, and the fear and uncertainty about the future it has brought, these companies clearly see their moment to sweep out all that democratic engagement. To have the same kind of power as their Chinese competitors, who have the luxury of functioning without being hampered by intrusions of either labor or civil rights.

The problems here are many, but the fundamental issue is that the primary beneficiaries of this technology-based approach are the tech companies (and their investors or shareholders). Not the children who are being taught remotely, not the nurses whose jobs are being affected by telemedicine, and so on.

In each case, we face real and hard choices between investing in humans and investing in technology. Because the brutal truth is that, as it stands, we are very unlikely to do both.