Will AI Finally Deliver on the Promise of the Internet?

And tear down its destructive advertising-based business model

Is This Really Why We Can’t Have Nice Things?

In the 1990s, the Internet promised to unite the world and put all of the accumulated knowledge of humanity at anyone’s fingertips, instantly.

So why is the Internet today a cesspool of misinformation, scams, vitriol, “sponsored results,” and time-wasting, deceptive “dark patterns”?

That’s easy, you may be thinking. “Because people don’t want to pay for anything.” As the saying goes, if you’re not paying for the product, you are the product. Since people wouldn’t pay, advertising became the only viable business model to fund the services they wanted. As a result, web services are designed not to provide the highest-quality service and experience, but to keep users on the service as long as possible so they see as many ads as possible. They are designed to trick users into clicking on sponsored links so that the hosting site can earn advertising dollars. They are designed not to give us the best results, but the results that make the most money for the operator of the service.

The inevitable outcome has been search engines whose top results are all sponsored, shopping sites whose search feature returns paid product placements rather than the best product for your needs, and social media sites that stoke anger and outrage because that keeps people on the site the longest. The wasting of our time, the degradation of our political discourse, and the potential destruction of our democratic societies are just collateral damage.

It Didn’t Have to Be This Way

During the 1990s, several serious efforts were made to develop “microtransaction” systems to facilitate the payment of fractions of a cent over the Internet. The goal was for users to be able to pay a fair price on a “pay as you go” basis for content and services. Major players at the time, including IBM, Digital Equipment Corporation, and Carnegie Mellon University, developed standards and systems to enable micropayments. The World Wide Web Consortium (W3C) even deliberated embedding micropayment request and response messages into HTTP itself.[1]

So, What Happened?

These efforts ultimately failed largely because of the antiquated banking system in the United States. Before cryptocurrency, the only way to move money electronically was through the banking system, and in the 1990s United States that meant either wire transfers (which typically cost $25 to $50 per transfer) or ACH transfers, which are executed in bulk as batch jobs and take one to three days to complete. Neither method was viable for the near-instantaneous transfer of tiny amounts of money.

The Internet that Could Have Been

You may be thinking, “if people don’t want to pay for anything, how would micropayments have helped?” And you would be right, if that premise were true. But is it actually true that people don’t want to pay for anything?

The MP3 “crisis” of the late 1990s is an excellent case in point. With Napster and, later, other peer-to-peer file-sharing networks like BitTorrent becoming popular ways to download “free” music, Hilary Rosen, the CEO of the Recording Industry Association of America (RIAA), famously said (perhaps apocryphally), “You can’t compete with free.” The RIAA then embarked on a pointless, unwinnable legal battle, suing over 35,000 individual Internet users for illegally downloading songs.[2]

Then, on April 28, 2003, Apple proved that you can compete with free by launching the iTunes Store. It turns out that people do want to pay for content. But they want to pay a fair price for content delivered in a convenient format. While the RIAA was busy suing its customers to defend an antiquated business model that forced people to spend $15 to $20 on an entire CD when they might want only one or two tracks, music fans on Apple’s iTunes Store could buy the tracks they wanted individually for $0.99 each and have them instantly downloaded to their devices.

Without a micropayment standard, Apple had to rely on customers using credit cards to pay for music. At Apple’s price of $0.99 per track, this was barely viable, given that the prevailing credit card processing fees for a transaction that small would have been $0.15 to $0.30. Of course, Apple was likely able to negotiate lower fees given its transaction volume, and many people purchased more than one track at a time. Even so, while iTunes was a lesson in customers’ willingness to pay a fair price for content on the Internet, it was also a lesson in the limitations of e-commerce built on credit cards.
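To put numbers on that squeeze, here is a back-of-the-envelope sketch using the fee range cited above; the comparison with a one-cent micropayment is my own illustration of why card rails were a non-starter for true micropayments.

```python
# Back-of-the-envelope: how much of a $0.99 track is consumed by a
# per-transaction card fee in the $0.15-$0.30 range cited above?
TRACK_PRICE = 0.99

for fee in (0.15, 0.30):
    print(f"fee ${fee:.2f} -> {fee / TRACK_PRICE:.0%} of the sale")
# fee $0.15 -> 15% of the sale
# fee $0.30 -> 30% of the sale

# The same fees applied to a hypothetical one-cent micropayment would
# exceed the payment itself many times over.
for fee in (0.15, 0.30):
    print(f"fee ${fee:.2f} on a $0.01 payment -> {fee / 0.01:.0f}x the payment")
# fee $0.15 on a $0.01 payment -> 15x the payment
# fee $0.30 on a $0.01 payment -> 30x the payment
```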

The Rest is History…

Which brings us back to the present. According to Gizmodo, Google’s search algorithm has essentially been broken by “SEO hackers” who create content to manipulate the algorithm and achieve high search ranking in order to drive users to low-quality “review sites” that earn the owners affiliate revenues when users click on the product links.[3] [4] [5]

I find it hard to believe that users are making a conscious decision to trust review sites whose content is mostly AI-generated and inaccurate, and which exist just to skim a commission off of each sale. I’m sure they would rather pay for an experience that quickly surfaced the most highly rated product meeting their needs at the lowest price. I’m also sure that retailers would prefer not to pay affiliate revenues to sites that add no value. But there’s no business model to support that.

So why not go directly to an e-commerce site to look for products, you might ask? Unfortunately, you’ll face the same situation. Most e-commerce sites are “marketplaces” of tens of thousands of independent sellers, which means their search results are up for sale to the highest bidder, just like any other search engine. In 2023, for example, Amazon made nearly $50B from advertising alone. This is money that sellers, authors, and others pay Amazon to promote their products on Amazon, and it represents 10% of Amazon’s overall e-commerce-related revenue.[6] Ever wonder why you see so many “Sponsored,” “Top Picks,” “Customers Also Viewed,” and “Similar Products” results on e-commerce sites? Even though they make it harder for users to find what they’re looking for, that kind of revenue is just too tempting for these sites to pass up.

When I searched recently for “8 liter warm and cool mist ultrasonic humidifier” on the mobile app of a major e-commerce site, here’s what I got:

Search Rank | Sponsored?
1  | YES
2  | YES
3  | NO
4  | NO
5  | YES
6  | YES
7  | (see note)
8  | YES
9  | NO
10 | NO

Note: Position 7 was not a single product but the “4 stars and above” section, a horizontal scroll of five sponsored results. None of them met the search criteria.

The first result to match all three criteria came in at search rank #12. If you include the five products shown in the horizontal scroll at position 7, the first accurate result was the 16th product in the list, and an incredible 10 of those 16, or 62.5%, were sponsored results. It’s possible that there aren’t many products that meet all three criteria, but there was definitely at least one. Why was it the 16th product I was shown?
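For what it’s worth, the arithmetic behind those figures is easy to check. A minimal sketch: the Sponsored? flags for ranks 1 through 10 come from the table above, and treating ranks 11 and 12 as unsponsored is my assumption, consistent with the 10-of-16 count.

```python
# Sponsored? flags by search rank, from the table above. Position 7 was
# the "4 stars and above" horizontal scroll, handled separately below.
# Ranks 11 and 12 are assumed unsponsored (not shown in the table).
ranked = {1: True, 2: True, 3: False, 4: False, 5: True, 6: True,
          8: True, 9: False, 10: False, 11: False, 12: False}
scroll_sponsored = 5  # five sponsored products in the scroll at position 7

total_products = len(ranked) + scroll_sponsored            # 16 products shown
total_sponsored = sum(ranked.values()) + scroll_sponsored  # 10 of them sponsored

print(f"{total_sponsored} of {total_products} "
      f"({total_sponsored / total_products:.1%}) were sponsored")
# 10 of 16 (62.5%) were sponsored
```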

Imagine if there were a search engine that you could pay a reasonable price to use and that always gave you the most accurate results, with no advertising or sponsored placements.

Goodbye Bard, Hello Gemini (and who is Astra)?

At last November’s MIT EmTech conference in Cambridge, I had the opportunity to ask Google’s General Counsel, Halimah DeLaine Prado, how Google plans to monetize the responses generated by AI assistants like Bard without compromising their accuracy with sponsored content.

She dodged the question, instead answering that Google does not “currently” monetize the responses that Bard gives.

OK, not “currently”. But what about the future? It got me thinking.

Despite projecting the image of a technology company, Google is an advertising company. Advertising accounts for 78% of Google’s revenue; Google made $238 billion in ad revenue in 2023.[7],[8] Google also owns a whopping 91% of the global search engine market.[9] It’s a brilliantly simple model: since Google is the place where anyone can search for anything for “free,” sprinkling ads and “sponsored results” into those searches has been an easy way to monetize without directly charging for the service. But what happens when everyone starts searching for things using AIs like ChatGPT, Perplexity, and You.com?

“So what,” you may be thinking. Google is an AI superpower. They acquired DeepMind. They have Gemini. They’ll make money even if AI displaces traditional search.

But don’t forget, it’s that nearly $240 billion a year in advertising revenue that funds all of that AI research and those pricey acquisitions. And while we’ve seen some amazing AI demonstrations from Google, including the Bard “Experiment” (now rebranded as “Gemini”), Google is not monetizing these services yet, at least according to its General Counsel. I decided to ask Bard directly, and here’s how it answered:

“Yes, Google may earn advertising revenue when users click on links to purchase products mentioned in my responses. However, I strive to remain objective and unbiased in my recommendations, focusing on providing users with the most helpful and informative content possible.”

So, Google is monetizing Bard’s results? Just to be sure, I clarified with a more specific question: “Is there currently any way for advertisers to pay Google directly to have their content included in Bard search results, similarly to Google AdWords for Google Search?” Bard was pretty clear in its response.

“Currently, there is no direct way for advertisers to pay Google to have their content included in Bard search results.”

I just recently asked Gemini the same question, and got a more nuanced response:

No, Google doesn't directly make money from the specific answers I provide to your questions.

Google's main source of income is advertising. They sell ad space on their products like Search, YouTube, and Maps, and on partner websites https://www.cnbc.com/2021/05/18/how-does-google-make-money-advertising-business-breakdown-.html.

There are a couple of ways this connects to large language models like me.  First, the more users find Google Search helpful, the more likely they are to use it again, which creates more opportunities to show ads.  Second, research and development of large language models is expensive, and the revenue from ads helps fund this.

So, while my answers themselves don't generate ad revenue, I do play a role in Google's overall business strategy.

This brings us back to the question: What happens to traditional search engines and the ad money they bring in when everyone starts searching for everything using AI? First, consider how AI “search” differs from current search engines.

When a user searches for information on ChatGPT, Gemini, Pi, or any number of LLM-based AIs, the interaction is fundamentally different from that of a search engine. Although the interaction often starts similarly, for example, with a question like “What are the top ten humidifiers for a large bedroom?” the results are vastly different.

Interacting with AIs is iterative, like a conversation. You can ask follow-up questions. If the first answer doesn’t get you what you want, you can say, “These are great, but can you select only the ultrasonic humidifiers with at least 6-liter capacity and a warm mist option, and sort them in order of highest customer rating?”
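To make concrete what that follow-up request amounts to, here is a minimal sketch of the filter-and-sort the AI performs on the user’s behalf; the product names, specs, and ratings below are invented purely for illustration.

```python
# Hypothetical product data; in practice the AI would draw on retailer
# listings or its own index. All values here are invented for illustration.
products = [
    {"name": "Humidifier A", "ultrasonic": True,  "capacity_l": 8, "warm_mist": True,  "rating": 4.6},
    {"name": "Humidifier B", "ultrasonic": True,  "capacity_l": 4, "warm_mist": True,  "rating": 4.8},
    {"name": "Humidifier C", "ultrasonic": False, "capacity_l": 6, "warm_mist": True,  "rating": 4.7},
    {"name": "Humidifier D", "ultrasonic": True,  "capacity_l": 6, "warm_mist": False, "rating": 4.5},
    {"name": "Humidifier E", "ultrasonic": True,  "capacity_l": 7, "warm_mist": True,  "rating": 4.4},
]

# "Only the ultrasonic humidifiers with at least 6-liter capacity and a
# warm mist option, sorted by highest customer rating."
matches = sorted(
    (p for p in products
     if p["ultrasonic"] and p["capacity_l"] >= 6 and p["warm_mist"]),
    key=lambda p: p["rating"],
    reverse=True,
)

for p in matches:
    print(f'{p["name"]}: {p["capacity_l"]} L, rated {p["rating"]}')
# Humidifier A: 8 L, rated 4.6
# Humidifier E: 7 L, rated 4.4
```

The point is not the code but who does the work: a conventional search engine leaves this filtering and sorting to the user, click by click, while the AI applies the stated criteria and returns only the matching results.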

Today, AIs execute this search exactly how you want it and give accurate results in a clear, easy-to-understand format. Without my asking, Bard actually created a Google Sheet for me with data on the top ten humidifiers so I could easily compare different products based on the specified parameters. Most importantly, all of the results met my stated criteria.

Compare that to a standard search engine, where I got eleven results above the fold, ten of which were sponsored and only one of which met my stated criteria. I had to wade through a hodgepodge of ads, videos (which all show ads before playing), “reviews” and “top ten” articles (which are also basically ads, supported by “affiliate revenue”), and “Shopping” product listings.

I had to click into and back out of countless articles and product listings to confirm whether the listings matched my clearly stated search criteria. Most didn’t—they were too small, weren’t ultrasonic, or didn’t have warm mist—but I had to click into the listing to learn that (generating cash for the search engine with every click). And I had to keep a spreadsheet with all the data so that I could compare them all. It was just a frustrating, time-consuming, and inefficient process. And this experience is the same no matter what kind of product, service, or company I’m searching for on the internet. It was just as bad when searching for a product on an e-commerce site.

The perverse dynamic of the advertising-supported model of internet search is that users pay for these “free” services by having their time wasted, and having to be constantly on the lookout for inaccurate results that are bought and paid for.

Users for the most part understand and accept this from a search engine, but will they accept it from an AI bot?

HAL, Are You Lying to Me?

When you interact with an AI agent, you feel like you’re talking to a person. On the other hand, when you look at a traditional search engine results page, you’re clearly looking at data output by a computer process. It’s annoying that many of the results are sponsored, which makes them less accurate. You may get frustrated, but you don’t typically feel “deceived” or “scammed.”

But when a friendly AI gives you a conversational answer in plain, human language, you will trust it more. A strong body of scientific evidence shows that humans feel more trust and emotional connection toward inanimate objects and technologies when those objects express human qualities (anthropomorphism). For example, researchers at Cornell found that placing a human-like doll in the driver’s seat of an autonomous guided vehicle made human workers trust it more, even though the doll had no control over the vehicle.[10] [11] In another study, carried out at Munich University, researchers found that simply giving a robot a name and introducing study subjects to the robot by its name led people to rate the robot significantly higher in trustworthiness, warmth, and competence. The same was found when robots were programmed to mimic human gestures or had human faces painted onto them.[12]

So, it’s likely that people will trust AI agents more than search engines, and consider them more competent and accurate. This effect will only grow stronger as people begin interacting with AI agents by voice (which is already happening: ChatGPT’s mobile app can interact fully verbally) and, later, by two-way video with interactive AI avatars (which will become a reality by 2025).

Suppose the providers of these AI “assistant” services, most of which are search engine companies or are funded by search engine businesses, try to monetize the responses their agents provide by inserting sponsored content or product placements, or by withholding product results unless the vendor pays a placement fee. Users will feel that they have been betrayed or, worse, scammed by their trusted “AI friend.”

I really can’t see a viable path to transposing the current advertising-based model of internet search onto AI agents. If you’re not fully transparent about it, you risk provoking a very visceral response that could destroy the reputation of your AI agent. On the other hand, I don’t see a way to be transparent about it that isn’t clunky and doesn’t destroy the user experience, especially as we move away from text and towards audio and video interfaces. Do you just periodically play an ad before the results, like on YouTube? Does the AI just tell you which content is sponsored?

The Scorpion and the Frog

We stand on the cusp of a new world with far more powerful information discovery tools than we’ve ever known, fueled by AI. Currently, these AI bots are mostly free and carry no sponsored content, but that is undoubtedly a temporary situation until the bugs are worked out and they are ready for prime time. The new world that comes after that could take very different forms. We could have AI assistants that give us quick, concise, and accurate answers to our questions, or AI assistants designed to maximize advertising revenues for their real customers, the advertisers. We could have a two-tier system for users who are willing to pay and those who are not.

I currently pay $20/month to use ChatGPT, and I consider it an incredible value to get accurate results without having to waste time filtering out sponsored and irrelevant content. I estimate that finding the information I’m looking for takes a third to a fifth of the time it takes with a traditional search engine. How many others are willing to pay for this? As of now, at least 10 million, in ChatGPT’s case, from which OpenAI is projected to earn a cool $1B in revenue in 2024. But that pales in comparison to Google’s $240 billion in ad revenue. It’s still early days, but at $20/month, OpenAI would need a billion subscribers to equal Google’s ad revenue.
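The subscriber math behind that last sentence is worth spelling out; a quick sketch using the round figures above:

```python
# How many $20/month subscribers would it take to match roughly
# $240 billion per year in search advertising revenue?
ANNUAL_AD_REVENUE = 240e9        # ~$240B/year (round figure from above)
ANNUAL_SUBSCRIPTION = 20 * 12    # $20/month = $240 per subscriber per year

subscribers_needed = ANNUAL_AD_REVENUE / ANNUAL_SUBSCRIPTION
print(f"{subscribers_needed:,.0f} subscribers")  # 1,000,000,000
```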

In an encouraging sign, Google just announced a premium version of its Gemini AI assistant, called Gemini Advanced, as a paid product for $20/month. A world where users pay for AI assistants that are optimized to provide the most accurate and relevant answers is within reach. But perils lie ahead. The tale of the scorpion and the frog seems relevant here.

As the story goes, the scorpion asks the frog to carry him across the river, promising not to sting him since that would drown them both. Halfway across, the scorpion stings the frog anyway, telling the incredulous frog as they both sink, “I can’t help it, it’s in my nature.”

It’s in the nature of search engines to provide irrelevant and inaccurate results because advertisers pay them to do so, and most AI research today is funded by search engine businesses. Will we end up in a two-tier world of unpaid AI agents that are sponsored and paid AI agents that are unsponsored? Or will the scorpion sting the frog, leaving us with unpaid AI agents that are more sponsored and paid AI agents that are less sponsored, with no option at all for an AI agent that just tells you the truth? We can only hope that “this time will be different” for AI.

Copyright Notice

Copyright ©2024 by Journal of Business and Artificial Intelligence

This article was published in the Journal of Business and Artificial Intelligence under the "gold" open access model, where authors retain the copyright of their articles. The author grants us a license to publish the article under a Creative Commons (CC) license, which allows the work to be freely accessed, shared, and used under certain conditions. This model encourages wider dissemination and use of the work while allowing the author to maintain control over their intellectual property.

About the Journal

The Journal of Business and Artificial Intelligence (ISSN: 2995-5971) is the leading publication at the nexus of artificial intelligence (AI) and business practices. Our primary goal is to serve as a premier forum for the dissemination of practical, case-study-based insights into how AI can be effectively applied to various business problems. The journal focuses on a wide array of topics, including product development, market research, discovery, sales & marketing, compliance, and manufacturing & supply chain. By providing in-depth analyses and showcasing innovative applications of AI, we seek to guide businesses in harnessing AI's potential to optimize their operations and strategies.

In addition to these areas, the journal places a significant emphasis on how AI can aid in scaling organizations, enhancing revenue growth, financial forecasting, and all facets of sales, sales operations, and business operations. We cater to a diverse readership that ranges from AI professionals and business executives to academic researchers and policymakers. By presenting well-researched case studies and empirical data, The Journal of Business and Artificial Intelligence is an invaluable resource that not only informs but also inspires new, transformative approaches in the rapidly evolving landscape of business and technology. Our overarching aim is to bridge the gap between theoretical AI advancements and their practical, profitable applications in the business world.

References



[1] “Micropayment,” Wikipedia.
[2] “RIAA Gets a Fresh Start in 2009,” American Songwriter.
[3] “You’re Not Imagining It: Google Search Results Are Getting Worse, Study Finds,” Gizmodo.
[4] “Did SEO experts ruin the internet or did Google?,” The Verge.
[5] “Google Ads and the War on Free Clicks,” WordStream.
[6] “Amazon.com Announces Fourth Quarter Results,” Amazon.com, Inc. (aboutamazon.com).
[7] “Alphabet Q4 2023 Earnings Release,” Alphabet (abc.xyz).
[8] “111 Google Ads Statistics For 2023 (Revenue, Data & Trends),” DemandSage.
[9] “Search Engine Market Share Worldwide,” Statcounter Global Stats.
[10] “The Effect of Anthropomorphism on Trust in an Industrial Human-Robot Interaction,” arXiv:2208.14637.
[11] “Can Robots Earn Our Trust the Same Way Humans Do? A Systematic Exploration of Competence, Warmth, and Anthropomorphism as Determinants of Trust Development in HRI,” Frontiers (frontiersin.org).
[12] “Trust in AI Agent: A Systematic Review of Facial Anthropomorphic Trustworthiness for Social Robot Design,” PubMed (nih.gov).