AI is killing the old web, and the new web struggles to be born (2023)

In recent months, the signs and portents have been accumulating with increasing speed. Google is trying to kill the 10 blue links. Twitter is being abandoned to bots and blue ticks. There’s the junkification of Amazon and the enshittification of TikTok. Layoffs are gutting online media. A job posting looking for an “AI editor” expects “output of 200 to 250 articles per week.” ChatGPT is being used to generate whole spam sites. Etsy is flooded with “AI-generated junk.” Chatbots cite one another in a misinformation ouroboros. LinkedIn is using AI to stimulate tired users. Snapchat and Instagram hope bots will talk to you when your friends don’t. Redditors are staging blackouts. Stack Overflow mods are on strike. The Internet Archive is fighting off data scrapers, and “AI is tearing Wikipedia apart.” The old web is dying, and the new web struggles to be born.

The web is always dying, of course; it’s been dying for years, killed by apps that divert traffic from websites or algorithms that reward supposedly shortening attention spans. But in 2023, it’s dying again — and, as the litany above suggests, there’s a new catalyst at play: AI.

AI is overwhelming the internet’s capacity for scale

The problem, in extremely broad strokes, is this. Years ago, the web used to be a place where individuals made things. They made homepages, forums, and mailing lists, and made a small bit of money with it. Then companies decided they could do things better. They created slick and feature-rich platforms and threw their doors open for anyone to join. They put boxes in front of us, and we filled those boxes with text and images, and people came to see the content of those boxes. The companies chased scale, because once enough people gather anywhere, there’s usually a way to make money off them. But AI changes these assumptions.

Given money and compute, AI systems — particularly the generative models currently in vogue — scale effortlessly. They produce text and images in abundance, and soon, music and video, too. Their output can potentially overrun or outcompete the platforms we rely on for news, information, and entertainment. But the quality of these systems is often poor, and they’re built in a way that is parasitical on the web today. These models are trained on strata of data laid down during the last web age, which they recreate imperfectly. Companies scrape information from the open web and refine it into machine-generated content that’s cheap to generate but less reliable. This output then competes for attention with the platforms and people that came before it. Sites and users are reckoning with these changes, trying to decide how to adapt and if they even can.

Screenshot by Jay Peters / The Verge

In recent months, discussions and experiments at some of the web’s most popular and useful destinations — sites like Reddit, Wikipedia, Stack Overflow, and Google itself — have revealed the strain created by the appearance of AI systems.

Reddit’s moderators are staging blackouts after the company said it would steeply increase charges to access its API, with the company’s execs saying the changes are (in part) a response to AI firms scraping its data. “The Reddit corpus of data is really valuable,” Reddit co-founder and CEO Steve Huffman told The New York Times. “But we don’t need to give all of that value to some of the largest companies in the world for free.” This is not the only factor — Reddit is trying to squeeze more revenue from the platform before a planned IPO later this year — but it shows how such scraping is both a threat and an opportunity for the current web, something that is making companies rethink the openness of their platforms.

Wikipedia is familiar with being scraped in this way. The site’s information has long been repurposed by Google to furnish “knowledge panels,” and in recent years, the search giant has started paying for this information. But Wikipedia’s moderators are debating how to use newly capable AI language models to write articles for the site itself. They’re acutely aware of the problems associated with these systems, which fabricate facts and sources with misleading fluency, but know the models offer clear advantages in terms of speed and scope. “The risk for Wikipedia is people could be lowering the quality by throwing in stuff that they haven’t checked,” Amy Bruckman, a professor of online communities and author of Should You Believe Wikipedia?, told Motherboard recently. “I don’t think there’s anything wrong with using it as a first draft, but every point has to be verified.”

“The primary problem is that while the answers which ChatGPT produces have a high rate of being incorrect, they typically look like they might be good.”

Stack Overflow offers a similar but perhaps more extreme case. Like Reddit, its mods are also on strike, and like Wikipedia’s editors, they’re worried about the quality of machine-generated content. When ChatGPT launched last year, Stack Overflow was the first major platform to ban its output. As the mods wrote at the time: “The primary problem is that while the answers which ChatGPT produces have a high rate of being incorrect, they typically look like they might be good and the answers are very easy to produce.” It takes too much time to sort the results, and so mods decided to ban it outright.

The site’s management, though, had other plans. The company has since essentially reversed the ban by increasing the burden of evidence needed to stop users from posting AI content, and it announced it wants to instead take advantage of this technology. Like Reddit, Stack Overflow plans to charge firms that scrape its data while building its own AI tools — presumably to compete with them. The fight with its moderators is about the site’s standards and who gets to enforce them. The mods say AI output can’t be trusted, but execs say it’s worth the risk.

All these difficulties, though, pale in significance to changes taking place at Google. Google Search underwrites the economy of the modern web, distributing attention and revenue to much of the internet. Google has been spurred into action by the popularity of Bing AI and ChatGPT as alternative search engines, and it’s experimenting with replacing its traditional 10 blue links with AI-generated summaries. But if the company goes ahead with this plan, then the changes would be seismic.

A writeup of Google’s AI search beta from Avram Piltch, editor-in-chief of tech site Tom’s Hardware, highlights some of the problems. Piltch says Google’s new system is essentially a “plagiarism engine.” Its AI-generated summaries often copy text from websites word-for-word but place this content above source links, starving them of traffic. It’s a change that Google has been pushing for a long time, but look at the screenshots in Piltch’s piece and you can see how the balance has shifted firmly in favor of excerpted content. If this new model of search becomes the norm, it could damage the entire web, writes Piltch. Revenue-strapped sites would likely be pushed out of business, and Google itself would run out of human-generated content to repackage.

Again, it’s the dynamics of AI — producing cheap content based on others’ work — that is underwriting this change, and if Google goes ahead with its current AI search experience, the effects would be difficult to predict. Potentially, it would damage whole swathes of the web that most of us find useful — from product reviews to recipe blogs, hobbyist homepages, news outlets, and wikis. Sites could protect themselves by locking down entry and charging for access, but this would also be a huge reordering of the web’s economy. In the end, Google might kill the ecosystem that created its value, or change it so irrevocably that its own existence is threatened.

Illustration by Alex Castro / The Verge

But what happens if we let AI take the wheel here, and start feeding information to the masses? What difference does it make?

Well, the evidence so far suggests it’ll degrade the quality of the web in general. As Piltch notes in his review, for all AI’s vaunted ability to recombine text, it’s people who ultimately create the underlying data — whether that’s journalists picking up the phone and checking facts or Reddit users who have had exactly that battery issue with the new DeWalt cordless ratchet and are happy to tell you how they fixed it. By contrast, the information produced by AI language models and chatbots is often incorrect. The tricky thing is that when it’s wrong, it’s wrong in ways that are difficult to spot.

Here’s an example. Earlier this year, I was researching AI agents — systems that use language models like ChatGPT to connect with web services and act on behalf of the user, ordering groceries or booking flights. In one of the many viral Twitter threads extolling the potential of this tech, the author imagines a scenario in which a waterproof shoe company wants to commission some market research and turns to AutoGPT (a system built on top of OpenAI’s language models) to generate a report on potential competitors. The resulting write-up is basic and predictable. (You can read it here.) It lists five companies, including Columbia, Salomon, and Merrell, along with bullet points that supposedly outline the pros and cons of their products. “Columbia is a well-known and reputable brand for outdoor gear and footwear,” we’re told. “Their waterproof shoes come in various styles” and “their prices are competitive in the market.” You might look at this and think it’s so trite as to be basically useless (and you’d be right), but the information is also subtly wrong.

AI-generated content is often subtly wrong

To check the contents of the report, I ran it by someone I thought would be a reliable source on the topic: a moderator for the r/hiking subreddit named Chris. Chris told me that the report was essentially filler. “There are a bunch of words, but no real value in what’s written,” he said. It doesn’t mention important factors like the difference between men’s and women’s shoes or the types of fabric used. It gets facts wrong and ranks brands with a bigger web presence as more worthy. Overall, says Chris, there’s just no expertise in the information — only guesswork. “If I were asked this same question I would give a completely different answer,” he said. “Taking advice from AI will most likely result in hurt feet on the trail.”

This is the same complaint identified by Stack Overflow’s mods: that AI-generated misinformation is insidious because it’s often invisible. It’s fluent but not grounded in real-world experience, and so it takes time and expertise to unpick. If machine-generated content supplants human authorship, it would be hard — impossible, even — to fully map the damage. And yes, people are plentiful sources of misinformation, too, but if AI systems also choke out the platforms where human expertise currently thrives, then there will be less opportunity to remedy our collective errors.

Image: NewsGuard

The effects of AI on the web are not simple to summarize. Even in the handful of examples cited above, there are many different mechanisms at play. In some cases, it seems like the perceived threat of AI is being used to justify changes desired for other reasons (as with Reddit), while in others, AI is a weapon in a struggle between workers who create a site’s value and the people who run it (Stack Overflow). There are also other domains where AI’s capacity to fill boxes is having different effects — from social networks experimenting with AI engagement to shopping sites where AI-generated junk is competing with other wares.

In each case, there’s something about AI’s ability to scale — the simple fact of its raw abundance — that changes a platform. Many of the web’s most successful sites are those that leverage scale to their advantage, either by multiplying social connections or product choice, or by sorting the huge conglomeration of information that constitutes the internet itself. But this scale relies on masses of humans to create the underlying value, and humans can’t beat AI when it comes to mass production. (Even if there is a lot of human work behind the scenes necessary to create AI.) There’s a famous essay in the field of machine learning known as “The Bitter Lesson,” which notes that decades of research prove that the best way to improve AI systems is not by trying to engineer intelligence but by simply throwing more computer power and data at the problem. The lesson is bitter because it shows that machine scale beats human curation. And the same might be true of the web.

Does this have to be a bad thing, though? If the web as we know it changes in the face of artificial abundance? Some will say it’s just the way of the world, noting that the web itself killed what came before it, and often for the better. Printed encyclopedias are all but extinct, for example, but I prefer the breadth and accessibility of Wikipedia to the heft and reassurance of Encyclopedia Britannica. And for all the problems associated with AI-generated writing, there are plenty of ways to improve it, too — from improved citation functions to more human oversight. Plus, even if the web is flooded with AI junk, it could prove to be beneficial, spurring the development of better-funded platforms. If Google consistently gives you garbage results in search, for example, you might be more inclined to pay for sources you trust and visit them directly.

Really, the changes AI is currently causing are just the latest in a long struggle in the web’s history. Essentially, this is a battle over information — over who makes it, how you access it, and who gets paid. But just because the fight is familiar doesn’t mean it doesn’t matter, nor does it guarantee the system that follows will be better than what we have now. The new web is struggling to be born, and the decisions we make now will shape how it grows.
