Disaster Capitalism in Covid times

Naomi Klein, author of The Shock Doctrine, writes in The Intercept about how the tech industry is rushing to capitalise on the pandemic-induced crisis:

This is a future in which, for the privileged, almost everything is home delivered, either virtually via streaming and cloud technology, or physically via driverless vehicle or drone, then screen “shared” on a mediated platform.

It’s a future that claims to be run on “artificial intelligence” but is actually held together by tens of millions of anonymous workers tucked away in warehouses, data centers, content moderation mills, electronic sweatshops, lithium mines, industrial farms, meat-processing plants, and prisons, where they are left unprotected from disease and hyperexploitation. It’s a future in which our every move, our every word, our every relationship is trackable, traceable, and data-mineable by unprecedented collaborations between government and tech giants.

If all of this sounds familiar it’s because, pre-Covid, this precise app-driven, gig-fueled future was being sold to us in the name of convenience, frictionlessness, and personalization. But many of us had concerns.

Today, a great many of those well-founded concerns are being swept away by a tidal wave of panic, and this warmed-over dystopia is going through a rush-job rebranding. Now, against a harrowing backdrop of mass death, it is being sold to us on the dubious promise that these technologies are the only possible way to pandemic-proof our lives, the indispensable keys to keeping ourselves and our loved ones safe.

This narrative will be familiar to those who’ve read her work on ‘Disaster Capitalism’, which, as she explains in a VICE interview, “describes the way private industries spring up to directly profit from large-scale crises.”

So “the ‘shock doctrine’ is the political strategy of using large-scale crises to push through policies that systematically deepen inequality, enrich elites, and undercut everyone else. In moments of crisis, people tend to focus on the daily emergencies of surviving that crisis, whatever it is, and tend to put too much trust in those in power. We take our eyes off the ball a little bit in moments of crisis.”

Which, according to Klein, is exactly what’s happening with the Covid-19 crisis today.

She singles out Eric Schmidt, ex-CEO of Google, for orchestrating this move towards an AI-driven, surveillance-based economy. Before Covid, his lobbying strategy was based on instilling the fear of the U.S. being overtaken by China. Now the same agenda is being advanced under the guise of fighting the virus, and he is faring much better at reaching his goals.

Until recently, “democracy — inconvenient public engagement in the designing of critical institutions and public spaces — was turning out to be the single greatest obstacle to the vision Schmidt was advancing”, but now:

…in the midst of the carnage of this ongoing pandemic, and the fear and uncertainty about the future it has brought, these companies clearly see their moment to sweep out all that democratic engagement. To have the same kind of power as their Chinese competitors, who have the luxury of functioning without being hampered by intrusions of either labor or civil rights.

The problems here are many, but the fundamental issue is that the primary beneficiaries of this technology-based approach are the tech companies (and their investors or shareholders). Not the children who are being taught remotely, not the nurses whose jobs are being affected by telemedicine, and so on.

In each case, we face real and hard choices between investing in humans and investing in technology. Because the brutal truth is that, as it stands, we are very unlikely to do both.


Ransomware-as-a-Service

In an era where software is increasingly offered “as-a-service”, what prevents cyber gangs from peddling their wares to other extortionists? Apparently nothing.

LockBit is a newcomer to the ransomware scene, but it has some differentiators. Here’s what an Ars Technica article has to say:

Many LockBit competitors like Ryuk rely on live human hackers who, once gaining unauthorized access, spend large amounts of time surveying and surveilling a target’s network and then unleash the code that will encrypt it. LockBit worked differently.

“The interesting part about this piece of ransomware is that it is completely self-spreading,” said Patrick van Looy, a cybersecurity specialist at Northwave.

LockBit is also selective about whom it targets: the ransomware aborts if it detects that the machine being attacked is in Russia or any of the CIS member states.

But most intriguing of all, it is offered as a service. And the LockBit owners seem to have a code of business ethics too:

LockBit is sold in underground broker forums that often require sellers to put up a deposit that customers can recover in the event the wares don’t perform as advertised. In a testament to their confidence and determination, the LockBit sellers have forked out almost $75,000.

You may be wondering why they don’t simply disappear once the money arrives, instead of handing over the decryption keys, or, in the case of Ransomware-as-a-Service, even refunding the deposit when their product fails. The answer is simple: it makes better business sense. Their goal is not just one target, but many. In the words of one user who commented on the article, “If one target pays and the files aren’t decrypted, other future targets will hear about it and not pay. The scheme only works long-term if people are able to get their files back.”

A report from Sophos, a security firm, shows how business-savvy these ransomware vendors are:

As with most ransomware, LockBit maintains a forum topic on a well-known underground web board to promote their product. Ransomware operators maintain a forum presence mainly to advertise the ransomware, discuss customer inquiries and bugs, and to advertise an affiliate program through which other criminals can lease components of the ransomware code to build their own ransomware and infrastructure.

The legacy of Y2K

The other day I came across a New Yorker article about the time Russia and America Coöperated to Avert a Y2K Apocalypse. The piece recalls how dark some of those Y2K scenarios were:

When January 1, 2000, rolled around, computers all over the world, from stock-market systems and A.T.M.s to nuclear power stations and gas pumps, would jump back to “00,” a catastrophic resetting of machine time that, it was feared, could trip them into failure or malfunction. Airplanes could turn off midair and fall from the sky; financial systems might freeze; municipal water-filtration plants could fail, polluting drinking supplies for millions; and electrical plants might shut down, plunging civilization into darkness. What’s more, the effects of Y2K would persist. Computers wouldn’t simply start working again when the clock read “01.” Experts feared that the breakdown of digital infrastructure could push the modern world into a new dark age.

It makes Covid-19 related restrictions seem minor in comparison.
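The bug at the heart of it all is easy to sketch. Here is a minimal illustration, hypothetical code rather than anything from a real system, of what happens when dates are stored as two digits:

```python
# Legacy systems often stored years as two digits to save space,
# and did arithmetic directly on those digits.
def years_elapsed(start_yy, end_yy):
    return end_yy - start_yy

# A loan opened in 1995, checked in 1999: fine.
print(years_elapsed(95, 99))  # 4

# The same check on 1 January 2000, when the year rolls over to "00":
print(years_elapsed(95, 0))   # -95
```

Every interest calculation, expiry check, or scheduling routine built on arithmetic like this would suddenly see negative durations.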

In 1998, when I entered the software industry, companies in India were in the middle of solving this supposedly monumental problem. Teams in Infosys, TCS, and Wipro were engaged in a number of Y2K projects around the world. I avoided these (software services) companies during campus recruitment. I wanted to work on products, not projects — this is what I told myself.

My assumption was naive: product companies were not spared work on the Y2K problem. While I was lucky to be assigned to a specific product, my wife — who had joined the same product software firm — found herself assigned part-time to a central team tasked with scanning all product codebases and database schemas for vulnerabilities. Her team was on call on the night of 31st December 1999, and I joined her on the office campus after dinner. (This was 1999 — ‘work from home’ was a concept reserved for vision presentations on the future of work.) The team watched a couple of movies in a conference room. Around midnight I drifted off to sleep in the ‘bunker room’. The next morning I saw sleepy but smiling faces: there had been zero incidents.

The New Yorker article considers the legacy of Y2K:

“The thing about Y2K,” she told me recently, “was that it illustrated, in very direct terms, how software gets deployed. Pieces of software are black boxes to one another. Different manufacturers. Different companies. Those interfaces are often muddied, misunderstood, and badly documented.” In her view, though the specific risk of Y2K might be over, the broader, systemic risk presented by computer-system interconnection is still very real. “I feel like the lesson has just passed us by, which is the sad thing,” Ullman said. “There’s now a generation or two that really doesn’t know about Y2K, that doesn’t even have any memory of it.”

That fast-fading memory is unfortunate, because the Y2K era holds important lessons for the increasingly interconnected pieces of software in the cloud today:

“They all had a sense that things were pretty well in hand,” she said. “But then they slid off into: ‘What about these other people? Can I trust the banks? Can I trust the suppliers?’ The fear of this systemic failure infected people, one after another.” Y2K wasn’t just a technological crisis; it was a social one. It exposed widespread fears of dependence and interconnection that bordered on paranoia. Even if you took precautions, someone else could trigger the end of the world. 

“Fears of dependence and interconnection that bordered on paranoia” — you could say the same about the current political sentiment across the world.

And you could wonder what would happen if a computer virus wreaks havoc in our highly connected network of software in the cloud. Would that fear of dependence and interconnection return? Would it mean islands on the internet, mirroring the isolation nation states are marching towards?

It is now hard to imagine the sort of cooperation between the U.S. and Russia the article describes. Which is another reason to keep alive the memory of those Y2K years: there are times when the world has to fight a common adversary, and we are living in one of those times.

Update: Not everyone agrees with this narrative of Y2K. Bob Cringely is one of those with a different view on how interconnected systems were back then:

…when I jumped into the research in 1999 I found that Y2K remediation, as it was called, seemed to be going well. I also found that systems weren’t as inter-connected or dependent as many of us had thought — that the world simply wasn’t as much at risk as we feared. I had to fight for this position, but ultimately that was the more conservative story we told two months before the actual event. And we were right.

Retrofuturism: How Yesterday viewed Tomorrow

Starting at a Hacker News thread on Retrofuturism, I spent a good part of the morning following the crumbs on this seemingly endless internet trail. To begin with, here’s Wikipedia:

Retrofuturism (adjective retrofuturistic or retrofuture) is a movement in the creative arts showing the influence of depictions of the future produced in an earlier era. If futurism is sometimes called a “science” bent on anticipating what will come, retrofuturism is the remembering of that anticipation. Characterized by a blend of old-fashioned “retro styles” with futuristic technology, retrofuturism explores the themes of tension between past and future, and between the alienating and empowering effects of technology.

Bruce McCall, whose work you may have seen on New Yorker covers, gave this charming talk in 2008:

There’s an entire issue of The Appendix about “how past generations have reckoned their collective futures”:

Several pieces in this issue suggest that we should study the futures of the past not simply because they’re quaint or charming, but because they show us that even our most confident predictions can go wrong.

There’s this delightful gallery on retrofuturism, with illustrations featuring things “from family flying saucer rides to domestic living on the lunar surface”.

And there’s the Paleofuture blog, “where we explore past visions of the future. From flying cars and jetpacks to utopias and dystopias.”

I could go on, and so can you. Before you begin, think again: the future of your past will rob your present.

Slack vs Teams

The Slack vs Teams debate appeared in the news again recently when Stewart Butterfield, co-founder and CEO of Slack, said in a CNBC interview that Microsoft Teams is not a competitor to Slack:

“I think there’s this perpetual question, which at this point is a little puzzling for us, that at some point Microsoft is going to kill us,” says Butterfield in the CNBC interview. “In another sense, they’ve got to be a little frustrated at this point. They have 250 million-ish Office 365 users, they just announced this massive growth in Teams to a little under 30 percent. So after three years of bundling it, preinstalling it on people’s machines, insisting that administrators turn it on, forcing users from Skype for Business to switch to Teams, they still only have 29 percent which means 71 percent of their users have said no thank you.”

The comments thread on The Verge’s coverage of the interview has people weighing in on both sides of the debate. One viewpoint is that the Teams user experience may not be as good as what Slack offers, but it is good enough — which is probably why Teams will end up winning this battle.

I’ve used both Slack and Teams. I started with Slack, using it for communication in small groups, and I loved it. But when Microsoft Teams was broadly rolled out in the company, it made sense to switch over for some use cases. I continued to use Slack for communication, but preferred Teams for storing documents or creating short wikis. While Slack did not eliminate the need for email, it made short exchanges within well-knit groups really efficient. And it allowed us to easily imagine and try out new use cases, like getting notifications from our continuous delivery pipelines.

Slack is clearly cooler to work with, and it has a bigger range of integrations. But what if Teams is “good enough” from an enterprise perspective, if not from an end-user standpoint? Since end users are not the ones buying software in enterprises, will Teams end up upstaging Slack in organisations already invested in Microsoft platforms and solutions?

There’s also another segment: next-gen enterprises that have grown up with cloud-native software and consumer-grade experiences. If keeping employees happy is key to their success, such enterprises may end up choosing the slickness of Slack over the baggage of Teams. (Such segments could also exist as niches within large enterprises that predominantly use Microsoft.)

I’ll be watching this space.

(Butterfield is an interesting character. He’s known as the person behind Flickr and Slack, both of which emerged out of his ventures to create something else altogether: a massively multiplayer online game. Wired profiled him in 2014, and Ezra Klein had an engaging conversation with him in 2016. And then there’s the resignation letter he wrote when he left Yahoo!)

Agile issues

“The most successful thing about Agile is the word Agile,” says Robert C. Martin (aka Uncle Bob) in a podcast from October 2019. “The original concepts almost instantly started to get muddied, and lost, and twisted and turned…”

Uncle Bob was one of the group of seventeen men who put together — at a Utah ski resort, early in 2001 — the Agile Manifesto. And he’s among the early proponents of Agile trying to set the record straight. In the podcast he talks about how project managers, not engineers, took over the Agile movement:

The Agile conferences went from being technical conferences to being management conferences, almost overnight. And literally, that didn’t change. That has been the theme ever since. Agile has become a “soft skills, people management” kind of thing… 

In a recent interview Mary and Tom Poppendieck bring this up and question Agile’s relevance today:

“Way too much of agile has been not about technology, but about people and about managing things and about getting stuff done — not necessarily getting the right stuff done, but getting stuff done — rather than what engineering is about,” she said. “Agile has come to mean anything but the fundamental, underlying technical capability necessary to do really good software engineering.”

One of the less discussed consequences of this shift away from engineering is its impact on the role of women in the industry. Here’s Mary, in the same article:

“If you look at agile, where do women end up? They end up being scrum masters and that kind of thing. That’s not an engineering job. That’s putting women ‘in a woman’s place,’ rather than putting women in an engineering job. And I think that’s really bad,” she said.

Tomorrow has no use for our monuments. It needs our data—and warnings.

Paul Ford, CEO of Postlight, in Wired:

We came to believe that our recent history is the range of what is possible, and now we are watching charts where the y axis can’t keep up with events. For its part, the future is not awaiting our wise counsel. That is the wealthy man’s folly, to believe that people want your wisdom. The future is concerned with itself. The people in that time will abide your wisdom in exchange for safety. They will be amused by our clocks and space cars, but what they will want to know is, how high did the water get, please?

What’s powering that smartphone of yours

While the cloud brings an impressive array of capabilities literally to our fingertips, its presence is barely felt, let alone understood, by most smartphone users.

If you think working from home is resulting in a huge reduction in energy consumption, think again. Writing in TechCrunch, Mark Mills argues that our love of the cloud is making a green energy future impossible:

An epic number of citizens are video-conferencing to work in these lockdown times. But as they trade in a gas-burning commute for digital connectivity, their personal energy use for each two hours of video is greater than the share of fuel they would have consumed on a four-mile train ride. Add to this, millions of students ‘driving’ to class on the internet instead of walking.

Meanwhile in other corners of the digital universe, scientists furiously deploy algorithms to accelerate research. Yet, the pattern-learning phase for a single artificial intelligence application can consume more compute energy than 10,000 cars do in a day.

Put in individual terms: this means the pro rata, average electricity used by each smartphone is greater than the annual energy used by a typical home refrigerator.
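Mills’ pro-rata claim can be sanity-checked with a back-of-envelope calculation. All figures below are my own illustrative assumptions, round numbers rather than data from the article:

```python
# Rough annual electricity figures, in kWh (assumed round numbers):
fridge_kwh_per_year = 350       # a typical modern home refrigerator

# Per-smartphone share of the digital infrastructure behind it:
charging_kwh = 5                # direct charging of the handset itself
network_and_cloud_kwh = 350     # apportioned share of networks and data centers

smartphone_total_kwh = charging_kwh + network_and_cloud_kwh
print(smartphone_total_kwh)                        # 355
print(smartphone_total_kwh > fridge_kwh_per_year)  # True, under these assumptions
```

The point of the comparison is that the handset’s own charging is a rounding error; almost all of the energy sits invisibly in the networks and data centers behind the screen.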