Categories: Politics, Technology

The Robot is the Bad Guy

A technological revolution hits us while we’re down.

In 1952, Russian scientist Dmitry Belyayev and his graduate student, Lyudmila Trut, drew up a laboratory experiment attempting to replicate what humans had only previously accomplished semi-purposefully and over hundreds of years: the domestication of wild animals.

Out of the millions of animal species on Earth, only around forty of them have been domesticated by humans. A domesticated animal, for clarity, is one that, as a species, has been bred over time in such a way that it is innately comfortable with and more useful to humans. Many wild animals can be tamed, or trained to be more comfortable around humans, but these behavioral changes are limited to the individual animal that has been tamed and do not persist across generations.

The Russian scientists, thus, set out to begin a study they themselves would not be able to finish: purposeful microevolution accomplished by breeding for particular traits over potentially hundreds of generations. 

In selecting a species to subject to this experiment and novel attempt at domestication, Belyayev and Trut opted for a close relative of mankind’s most well-known (and arguably most successful) domestication subject: the dog. Domesticated dogs diverged from gray wolves more than 20,000 (and potentially as many as 40,000) years ago. Over the next one or two score millennia, the once fairly homogeneous wolves underwent an explosion in phenotypic diversity, with such disparate creatures as the tiny Chihuahua and curly-haired poodle, massive Saint Bernard, agile Greyhound, and genetic abomination that is the pug all somehow representing not only the same species, but the same subspecies (Canis lupus familiaris).

Many of these dogs were bred purposefully, with humans selecting traits that were favorable for hunting, protection, or showing up the other ladies at court with how irresistibly adorable your docile handbag gremlin of a pet is.

For centuries, the conventional wisdom was that it was this sort of purposeful trait selection that created the domesticated dog in the first place; humans took wild wolves and purposefully bred them to be nicer and more useful.

In recent years, researchers have challenged this assumption. Belyayev in particular decided to test an alternative idea: rather than try to replicate the myriad differences between dogs and wolves, he resolved to selectively breed his foxes with one feature in mind: friendliness.

With each generation, Belyayev and his team would identify the foxes most amenable to human interaction and breed them together, creating a new generation mothered and fathered only by the friendliest foxes.

Over the more than forty generations since the uninterrupted experiment began over seventy years ago, Belyayev’s team has had remarkable success: the foxes being domesticated are much friendlier than their kin in the wild. But that’s far from the most notable change they’ve seen.

Early on, some of the tamer foxes started to display tail wagging, a behavior common in domesticated dogs but not in wild silver foxes. By the tenth generation, some of the foxes had developed floppy ears, and some had started to show fur discoloration, again of the sort common in dogs but not in foxes. In other words, in setting out to breed foxes that were as friendly as dogs and nothing else, Belyayev’s team, effectively by accident, bred foxes that were dog-like across a variety of physical and behavioral characteristics.

The implication of Belyayev’s study challenges the long-standing assumption that human domestication of dogs was purposeful. It might be, instead, that it was simply those wolves most amenable to human contact that were rewarded with the food left behind by human camps. Over time, the friendlier of the friendly wolves would have come so close as to interact directly with humans who would feed, shelter, and care for them. In other words, dogs may have domesticated us as much as we domesticated them.

Bait and switch

I want to pivot now and talk about social media. It feels a little insidious, admittedly, to lure you in with a story about cute dogs and comparably cute foxes and to transition to the problem of our time. But I promise, by the end of this article, you’ll get it. You might even forgive me.

Social media today is in a rough spot. Depending on who you ask, sites (or, more commonly, apps) like Facebook, Instagram, and the website formerly known as Twitter command a wide scope of utility ranging in uses from “casual method of keeping in contact with distant relatives” and “conduit for secondhand furniture purchases” to “microchip heroin” and “communications nexus for the members of my ‘destruction of democracy’ militia”.

Whatever the intended use, it’s taken for granted that social media outlets put a premium on attention and do everything they can to maintain it once secured. More modern entrants to the field like TikTok demonstrate absolute mastery of the art, being able to cultivate an infinitely-scrollable feed personalized to the user in minutes with minimal purposeful effort on the consumer end.

More traditional sites like Facebook and Twitter lag behind in this respect, but not for lack of trying. Over the past few years, each of these titans has slowly eliminated the expectation of a timeline in the traditional sense (a line of content organized by time, with new content at the top and old at the bottom) and replaced it with a content buffet cultivated to the perceived personal tastes of the user. 

A personalized feed of content designed to fit each user’s wishes may sound lovely, the sort of development that would satisfy the clients of any company that could offer it.

But with today’s social media giants, the “clients” aren’t the people we think they are. It’s grown trite and overplayed, but the modern-day adage “if you’re not paying, you’re the product” is at its truest here. Facebook makes money from users navigating their personalized feeds, but only in a roundabout way. For every few pictures of new babies, foreign vacations, and AI-generated photographs of African children building temples out of soda cans or triple-thumbed heteroromantic soldiers rearranged to look like Jesus, you’ll see an advertisement or sponsored post. By selling this valuable ad space, Facebook rakes in billions each year.

But that ad space is only valuable so long as it’s attended to. A person logging on, scrolling through three posts, and logging off isn’t generating nearly as much ad income for Meta, Facebook’s parent company, as someone scrolling for hours on end.

This is what forms the basis of Facebook’s modern corporate philosophy as well as the philosophies of Twitter, YouTube, TikTok, and whichever other social media platform can break through the long-calcified mycelial kevlar crust that once fostered an ecologically-inspired series of competing startups but for the last decade or so has grown content hosting a few titanic sequoias instead. Engagement is king. Implementing asked-for features might satisfy users, but maximizing engagement means maximizing profit. The longer you scroll, the more money they make.

This is where we come back to the Russian fox study. In Belyayev’s experiment, he selected for one variable: friendliness. For the past decade plus, social networks have been doing the same type of experiment, themselves selecting for a single variable: engagement.

This is the algorithm. While sites and apps like Facebook, Twitter, TikTok, Reddit, and Instagram may each serve content to users that makes them happier, entertained, or better educated, these are accidents of the format in the same way that the dog-like traits in Belyayev’s experiment are unintended consequences of single-trait-focused breeding.

The sole goal of this modern experiment is to increase the average time spent scrolling through the machine-generated feed. Sometimes, and for some users, the algorithm can accomplish this through routine delivery of dog videos, harmless memes, and family photos.

Often, the system is more insidious. We enjoy content that makes us smile, but the time we spend on it is short. We tap ‘like’, maybe we comment, but before long we move on. Content that makes us angry, though, does something deeper to our animal brains, and tends to encourage us to stick around. We might engage more directly, commenting not just once, but several times as we delve into arguments with friends, family members, strangers who hold stances opposite ours. And when we want the reactions of our friends or their reinforcement or a simple reminder that we’re not alone in our rage, we share.

A “like” is engagement in the barest sense. It may encourage the algorithm to share a post more widely, but a comment encourages repeat engagement, inviting those who have already viewed a post to view it anew. But this, too, pales against the power of sharing, which directly invites new eyes to the page; new eyes to witness, new eyes to scroll, new eyes to be advertised to.
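The hierarchy described above (a like counts, a comment counts more, a share counts most) can be sketched as a toy ranking function. This is an assumed illustration, not any platform’s actual code; the post titles and the weights are hypothetical, chosen only to show how engagement-only scoring naturally floats outrage to the top.

```python
# Toy illustration of engagement-only feed ranking (assumed weights,
# not any real platform's algorithm): shares invite new viewers, so
# they count most; comments invite repeat visits; likes count least.
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    likes: int
    comments: int
    shares: int

# Hypothetical weights for the three tiers of engagement.
WEIGHTS = {"likes": 1, "comments": 5, "shares": 20}

def engagement_score(post: Post) -> int:
    """Score a post purely by weighted engagement, nothing else."""
    return (post.likes * WEIGHTS["likes"]
            + post.comments * WEIGHTS["comments"]
            + post.shares * WEIGHTS["shares"])

def rank_feed(posts: list[Post]) -> list[Post]:
    # The feed surfaces whatever maximizes engagement.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("Baby photos", likes=300, comments=10, shares=2),          # 390
    Post("Wholesome dog video", likes=500, comments=20, shares=5),  # 700
    Post("Outrage bait", likes=100, comments=80, shares=40),        # 1300
])
print([p.title for p in feed])  # outrage bait ranks first
```

Note that the outrage post “wins” despite having the fewest likes: under these weights, comments and shares dominate, which is exactly the dynamic the paragraph above describes.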

The algorithm is incentivized to enrage us, but it does so completely without malice. It doesn’t know that it’s dividing us as people, only that it’s bringing more of us together on the feed. Like Belyayev selecting foxes for friendliness and accidentally encouraging the rise of dog-like traits, social media companies build algorithms that select for attention and engagement and accidentally inspire rage and hate.

Refining the experiment

Today, Belyayev’s Silicon Valley heirs design their experiments to work more efficiently than ever before. A new silver fox takes at least the better part of a year to reach reproductive maturity, but TikTok can run dozens of microexperiments per minute on each of its billion users. The old-school like-and-share metrics of a social media dinosaur like Facebook have nothing on TikTok and its capacity for data aggregation. Each time a user watches the entirety of a TikTok video is a data point, and each time they prematurely scroll past one before it has a chance to loop is an equally-valuable data point.
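The watch-or-skip signal described above can be sketched in a few lines. This is an assumed model, not TikTok’s real telemetry: the idea is simply that a full watch and a premature scroll-past are equally informative data points about a viewer’s interest.

```python
# Sketch of an implicit watch-completion signal (assumed model, not
# TikTok's actual telemetry): both a full watch and an early skip
# become data points about the viewer's interest.
def completion_signal(video_length_s: float, watched_s: float) -> float:
    """Return a 0..1 interest signal; watch time past one loop caps at 1."""
    if video_length_s <= 0:
        raise ValueError("video length must be positive")
    return min(watched_s / video_length_s, 1.0)

signals = [
    completion_signal(8.0, 8.0),   # watched to the end
    completion_signal(8.0, 1.2),   # scrolled past early
    completion_signal(8.0, 16.0),  # looped twice; capped
]
print(signals)
```

The point is that no button press is required: merely watching (or not watching) generates the data.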

Elder platforms like Facebook and Twitter support video, but TikTok’s focus on a fast-paced, one-after-another infinite feed of (usually) short, curated videos elevates the experiment, representing an endless stream of A/B tests in pursuit of a single goal: engagement.

TikTok, to me at least, represents a sort of bittersweet departure from the rage-based formula inadvertently favored by its peers. But this comes as a result of a built-in advantage: with TikTok, attention is guaranteed. Nobody opens (or, at least, very few people open) TikTok with the express intent to interact with a specific piece of content. Theoretically, you can choose to browse content posted only by accounts you follow, but for most of us, the meat of TikTok is baked into the For You Page, an infinite collection of content packaged by the algorithm quicker than we can take all of it in.

Launched as a Vine clone that sprouted both from the decaying husk of that platform and the syphilitic body of its newer host organism, the middle schooler-and-divorcée karaoke app Musical.ly, TikTok’s short-form content didn’t really lend itself to the same sort of rage bait that prospers on traditional social media. With time, the maximum length of a TikTok video has ballooned, and while political content is now as at home there as on any other platform, TikTok’s association with the incumbent era of political collapse is more related to concerns that the Chinese government will use it to datamine our teenagers before our corporations have had a swing at it.

But I might be wrong to downplay the potential usefulness of an app like TikTok as a weapon, because the potency of the For You Page comes from its comprehensiveness, a comprehensiveness that other social media apps and sites like Twitter and Reddit have desperately been trying to imitate.

For years, traditional sites like these, Instagram, and Facebook operated on a business model of splicing ads into a news feed of content produced by a user’s friends. Before long, these feeds came to also play host to celebrities and corporations, but their inclusion came as the result of the same user choice that put their friends there.

Today, a Reddit user accustomed to a curated feed of subreddits tailored to their unique interests won’t be able to scroll far before they begin seeing posts and content from subreddits they’re not subscribed to, often alongside blurbs explaining that this content is similar to content they’ve been interested in before. If they open a post, even one from a subreddit they’re subscribed to, they won’t be able to view all of the comments without first shooing away a list of recommended posts from this and other subreddits.

Not long ago, Twitter tossed its default chronological news feed in favor of a hodgepodge of algorithmically-curated content, following in the footsteps of Facebook before it.

Not all of these decisions were made in an attempt to ape TikTok, but it’d be hard for executives at any of these companies to deny comparing their own metrics to those of the clock app and inevitably coming up short.

TikTok’s content is so rich, eye-catching, and reliable that it’s no surprise that it’s eclipsed all of its competitors in user engagement time despite its relative youth.

I’m waxing anecdotal here, but I think these factors (richness, attraction, and reliability) are what make TikTok so addictive. The app is engineered to operate as a traditional Skinner box: any rat given access for just a short while will soon learn that enough button presses, or, in this case, swipes, will eventually result in the sort of reward it’s looking for. In peacetime, we can call it a hobby. Raise the stakes? Neurochemical addiction.

It will continue to be funny to watch Republican senators and representatives bumble their way around trying to understand this new behemoth of social media, especially as they inevitably enter their ninth and tenth decades on this earth, and especially so long as video of their lopsided duels continues to be uploaded to TikTok. But we’d be giving too much leeway to suggest their arguments are entirely without merit. Is the Chinese government mining TikTok for influence? Maybe not, but maybe. It’s starting to look like any nefarious security agency worth its salt would be stupid not to weaponize our dwindling supply of social recreation outlets. But the real magic is free for the taking for anyone able to make their way into the control suite. The real nightmare here isn’t turning TikTok overnight into an in-your-face propaganda machine, but gently molding it to, in turn, gently mold those who attend to it, those accustomed to feeds built for them, slowly souring their content in a way that invites their worldviews to follow.

When Elon Musk bamboozled himself into buying his favorite emotional slot machine and rebranding it “X” in a ceremonious attempt to be crowned arch-edgelord, his sweeping array of changes made Twitter into a website that was, for the first time in a decade, easy for me to leave. Musk’s mandates flooded the platform with the Nazis and bigots who had previously been subject to cursory bans, eliminated the news feed that sat at the core of the site’s identity, and overnight dismantled the system of vetting (blue check-marks) that verified the identities of prominent people and organizations, replacing it with a subscription system that promoted the tweets of the weird, incel-adjacent sort of sycophants willing to pour money into a South African billionaire’s dopamine factory.

Some users fled Twitter for fledgling clones like Mastodon, BlueSky, or Meta’s Threads. So far, none of these apps have proven to be the Twitter killer that each undoubtedly set out to be. But between their nibbles and the more substantial attrition of users like me (though I suspect we are still in the minority), Twitter has languished as its peers continue to rise. Musk successfully molded Twitter to his specifications in the way every gamer imagines they’d fix the flaws in their favorite games, but the way he did it was conspicuous and discomforting.

Imagine now a situation where the person in power is an intelligent, goal-directed individual motivated more by metric success than the approval of incels and the core audience of Reddit circa 2010. Instead of Musk’s ugly exodus, we might imagine instead the old parable of a frog being placed in a pot of water and having the temperature rise so slowly that he doesn’t realize he’s being boiled alive. If we can ignore the obvious wisdom that frogs have temperature-sensitive neurons just like humans do and will not actually consent to being boiled to death just because the stove administrator is chill about it, we can appreciate the aptness of the metaphor. 

If it feels like I’m far off base here, recall that the key element at play in this conversation is TikTok’s algorithm, one so powerful that even its closest competitor in the engagement war, grizzled dark horse YouTube, has fundamentally modified the look and approach of its mobile app to steal TikTok’s whole flow at every turn. Twitch has done the same.

This effectiveness, the ability to keep users engaged without necessarily enraging them, isn’t the harmless, filtered reimagining of Facebook and Instagram’s old-school cigarette it might appear to be. In perfecting its algorithm, TikTok cast aside the online forever war to become the only thing humans love more: hard drugs.

Internal data produced by TikTok suggests that viewing videos in the app becomes a habit after just 260 videos. 260 sounds like a lot, but it’s important to remember the brevity of the average TikTok video. The information leaked to the public after being provided as part of a lawsuit filed by the attorneys general of thirteen states and the District of Columbia against the new social media colossus. In filing the suit, Kentucky’s Attorney General used a figure of 8 seconds per video to allege that habits could be formed in just 35 minutes.

…of course, what kind of professional journalist would I be if I didn’t point out that in the year 2024, most TikTok videos (in my house and yours, “TikToks” suffices) are considerably longer than 8 seconds. All the same, even if we assume videos average a minute or so, we’re still looking at under four and a half hours. Imagine going to the movies for a double feature and leaving the theater chemically addicted to cinema.
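The arithmetic above is easy to check. Both figures follow from the single leaked number (260 videos to habit formation); only the per-video length is an assumption.

```python
# Back-of-the-envelope check of the habit-formation figures: 260 videos
# at the 8-seconds-per-video rate cited by Kentucky's Attorney General,
# versus an assumed one-minute average video length.
VIDEOS_TO_HABIT = 260  # figure from TikTok's leaked internal data

minutes_at_8s = VIDEOS_TO_HABIT * 8 / 60    # seconds -> minutes
hours_at_60s = VIDEOS_TO_HABIT * 60 / 3600  # seconds -> hours

print(round(minutes_at_8s, 1))  # roughly the "35 minutes" in the suit
print(round(hours_at_60s, 2))   # "under four and a half hours"
```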

With each step down the rabbit hole of social media development, the game becomes less and less about competing to provide the better social space and more and more about vying to produce the most potent drug. The concept isn’t new. Humans have been doing the same thing for centuries with what we call “actual drugs”. Most of today’s psychoactive substances, both licit and illicit, originate from some sort of natural ancestor that produces a product far less potent. Millennia of human ingenuity and an unyielding desire to get blitzed out of our own fucking minds have led us to cultivate these crops until they can give us the sort of highs we’re willing to die for.

We’ve been making dangerous drugs for as long as we’ve been partaking in agriculture, wresting the natural state of the world to our benefit. But for hundreds of thousands of years, we’ve been doing the manipulating. And for most of social media’s history, we’ve held the keys of responsibility. But now, for the first time, we’ve afforded robots the right to determine the content that is delivered to us.

The elephant in the room

Those same robots, the ones that dictate our social media feeds, are complex silicon organisms that each took years to build, but they’re comparatively dumb. The circuit of finding out what we engage with and feeding us more of it is nothing compared to the abilities of those byzantine pathways of simulated neurons that make up today’s leading artificial intelligence platforms.

Over the past couple of years, we’ve been asked to embrace (or brace for) AI in all corners of our lives. The long-foretold robot revolution that would usher humans from the burden of work and into creative pursuits instead threatens to take away those creative pursuits without any promise of easing labor quotas or, at the very least, fulfilling the simple task of furthering the human condition. Today’s major AI engines instead allow us to cheat on our homework, minimize social interaction, and turn art theft into a white collar money laundering operation with ease. 

To be sure, there are positive implications for the furthering of AI technology as well. Artificial intelligence has been enlisted in the detection of cancer, identification of fake news and fraud, and furthering of medical research. Boiled down, AI offers to let us offload busywork entirely and turn our attention to tasks that would previously have required us to significantly scale up manpower.

But giving tasks previously reserved for humans to robots comes with obvious tradeoffs, even in simple machines. When we enlist industrial crushers and laser cutters over people, we not only free humans from the burden of this specific kind of labor, we also free the labor from humans. Sometimes the result is beneficial to the task as well as the person—machines are better able to do the same rote task ad nauseam without the intrusion of unrelated thoughts, emotions, or neural fatigue.

But some of these human traits, though theoretically unrelated to the task, serve to keep us grounded. A human who gets in the way of a working human is an annoyance. A human who gets in the way of an industrial crusher is dead.

Of course, machines can be made safer. They can be built to stop when accidents happen and when human lives are put in danger. The moral of this piece is not one in lockstep with the neoluddites.

We can adapt. But in order to do so, to prioritize safety, we have to be clued into it. We have to care. And for the last couple of decades, if social media companies have taught us anything, it’s that they don’t. When the goal is something as nebulous as engagement, it’s easy for concepts like informational accuracy and mental health to take the back seat.

AI isn’t social media, but as these models get larger and take on more responsibility, and as the name of the game increasingly becomes building a bigger, faster, and stronger model than the next guy, we have to ask ourselves how much trust we want to put in systems built with these nebulous goals.

When I sat down to write this piece several months ago, social media was in most ways the same as it is now, with largely the same few platforms dominating our daily conversations. Besides a slightly more potent pull away from Twitter (now “X” (it was also “X” then, but who’s counting)) by certain (liberal) groups, the platforms in play then are the same ones on the field now. The TikTok ban referred to earlier in this piece came and went over the span of mere hours, during which most of those users who would have been affected by it were asleep.

Users (read: addicts) who returned compulsively to the supposedly-dead app the following morning were met by a message proclaiming that the apparent ban had been staved off through the intercession of Donald Trump, a man who was not yet President at the time the message was sent. Individuals politically savvy enough to tell bullshit from steak tartare will have cast the message aside, recognizing it immediately for what it was: an attempt to curry favor with America’s new executive through his native language of abject flattery. There’s no evidence Trump or his team did anything meaningful to stymie the ban, and there’s similarly no evidence that TikTok’s development team did anything to make the app more amenable to the incoming administration.

What matters, and why this is relevant, is that one of these social behemoths was threatened not only by government intervention, but by effective redistribution of the keys of power to a more acceptable overlord. The imagined scenario above, the one where Elon Musk’s takeover of Twitter is repeated by a smarter person usurping control over a better-populated app, came one step closer to happening, this time with the force of law.

Let’s return one last time to the story of Belyayev and Trut’s fox experiment, or rather, to a series of experiments they ran in parallel with their stars, those friendly foxes. In their spare time between riveting nice guy fox breeding sessions, Belyayev, Trut, and their team also selected for at least one other trait in both foxes and rats: aggression. Unsurprisingly, just as the friendlier foxes over time produced friendlier offspring, aggressive foxes purposefully bred together over time produced more aggressive offspring.

Foxes can only become so dangerous before they begin to transcend the boundaries of modern experimental evolutionary biology and drift into the neighboring but separate field of science fiction. But these social media networks, with the command they have on our minds and the minds of our children, have no such upper bound. These are aggressive foxes bred to wield firearms and encourage casual meth use in children, held back from destruction by leashes held by money-motivated men who can be convinced or compelled to give them up for the right price or a poorly-reasoned excuse.

The same is true of AI. My worry when I use Google or any other platform that’s shoehorned AI-generated content into its delivery system isn’t that the AI will be bad. Bad AI results are, of course, at least funny and make for good content on TikTok. What worries me, and should worry you, is the prospect of these AI results becoming good enough. For some of us, they’re already there. And for many more, maybe most, there will come a day when you type or speak a query to Google and it parrots back an AI-generated answer that you accept without questioning.

We accept pretty readily that our TikTok, Facebook, and YouTube feeds are generated non-maliciously. And we might as easily accept that our search results are digested using a similar, pro-human, or, at least, not anti-consumer algorithm and returned to us faithfully. But these systems, despite all of the Asimovian restraints their better-minded designers may attempt to put in place, are not inherently benevolent. They’re mercenaries, contractors who do their work for the benefit not necessarily of even the highest bidder, but for whomever happens to hold the keys.

America’s wealthiest deadbeat dad took control of one of the nation’s largest social outlets just before the singularity started to feel like less of a science fiction concept and more of a promise. The next power transition we arrive at may not occur under circumstances so auspicious.

To accomplish even its most milquetoast goal (driving engagement to the meme parlors and after-hours divorce courts of the internet), the algorithm almost immediately identifies a shortcut: making us angry. With the exception of occasional micro-corrections when things grow too spooky, the keyholders have largely allowed this shortcut to remain open. For the robot, the ends do largely justify the means.

If, then, the robot cannot be trusted to curb itself, and neither can its owners be trusted to curb it themselves, we cannot trust in it as a force for good. For our health and our continued survival, we have to operate under the assumption that the robot is the bad guy.
