The AI bubble today is bigger than the IT bubble in the 1990s (apolloacademy.com)
103 points by akyuu 5 hours ago | 146 comments





For those around for the .com bust it does feel very similar. In both cases the tech is amazing and isn’t going away, but the business models of many/most companies “innovating” with the tech are simply unsustainable. A lot of “AI” currently looks like a dry forest waiting for lightning to strike and burn it to the ground. The latest round of PR puff from CEOs saying they’re doing layoffs because of AI (vs their poor performance or prior bad business decisions) is fueling the perception that the hype is a mile wide and a millimeter thick, just waiting for the moment when it all comes crashing down.

This is a longstanding, predictable pattern in tech. Most of these “AI companies” will go bust or become shells of their former selves and be sold off for parts. The tech will be commoditized and become pretty ubiquitous across the board, but not a profit center in its own right.


In most implementations of generative AI, as of today, the use case is as a feature. People don't buy features, they buy products. If your company is built solely on this hot button, you had better be sure you either have some IP backing it up, are building and own it (the models), or are the best mousetrap for your target market. Because I'm watching an entire industry segment show up with slight iterations of the exact same thing, and those things are not great. They're unreliable and mostly mediocre.

"Thank god <insert every service> added an AI chatbot to their site! It makes it much faster and easier to use!" - things no living soul has ever said

Actually, compared to the non-AI chatbots they are incredibly good.

Considering that the only purpose of chatbots is to bullshit customers and filter out as much as they can before redirecting to human support, then yeah - LLM chatbots are good. Just not good for the customers. I recently had to call my telco and they had the temerity to demand that I have(!) to speak to the fucking bot. And then the bot declared it didn't understand me and hung up the call. I have no idea if that bot was neural-network powered, but the only improvement an LLM bot may have achieved is to keep me in the hamster loop longer and delay transferring to support even more.

Yes. But those feature companies are not IPO’d and listed on the exchanges. That is the key distinction with the markets this time. Most public companies in this space are actually growing their bottom lines. I’m not talking about just Nvidia here. You can see this effect in the entire food-chain. Everyone is benefitting.

CEOs are doing layoffs because Elon dumped 80% of Twitter staff and it didn't collapse.

Those layoffs were a make or break moment for tech.


> Elon dumped 80% of Twitter staff and it didn't collapse.

Have you looked at a graph of Twitter revenue and profit recently? (Hint: they are very low.) Nobody has ever claimed that you cannot fire all the employees and keep Twitter online with a skeleton crew; the claim has always been that you can't do that _and remain a viable business_.


Twitter is now a private company and doesn't share financial numbers. The only insight we have is that the banks that were stuck with the debt from the purchase (the banks who loaned Elon money) were recently able to find buyers for that debt. That is not a sign of bad financial health or diminishing value.

Have you considered why investors were bidding something like 60 cents on the dollar right up until Trump's election? (Hint: because Elon has been using the US government to press companies into buying his ad product, despite the fact that it is materially worse than Google or Facebook's.)

> The only insight we have

This is untrue. There's fairly credible reporting (e.g. in the FT) that Twitter's revenue numbers have bottomed and they are not making money. Of course, a lot of these problems are being disguised by the fact that Elon Musk has merged in xAI (and investors are very happy to pile money into AI without even the slightest due diligence).


Credible reporting based on what, exactly?

> Nobody has ever claimed that you cannot fire all the employees and keep Twitter online with a skeleton crew

Lots of people claimed that; it was a common claim, even on HN, which was odd to me because for 10 years it had been common on HN to wonder what the heck these companies were doing with 80% of their employees.


It didn't "collapse" but it lost a ton of users and stopped being a public service where anyone could freely read most tweets. The load on Twitter's servers and the scope of its services have gone down dramatically, and it's childish to conclude "Twitter was overstaffed." Musk made Twitter into a much smaller service.

I don’t think most (non-social) companies were as overstaffed as Twitter, because most companies have valuations based on revenue/profit rather than “it’s internet/social so 1000x”. Many “traditional” companies have been running leaner since the 90s and can’t function at all with even a 25% headcount reduction.

It didn't collapse in the sense of ceasing to exist. It did collapse as a place where decent human beings cared to gather, and it lost over 75% of its market value.

> it lost over 75% of its market value.

No longer true. In March 2025, X Corp. was acquired by xAI, Musk's artificial intelligence company. The deal, an all-stock transaction, valued X at $33 billion, with a full valuation of $45 billion when factoring in $12 billion in debt.

Investor sentiment through secondary share trades still prices the company at $44 billion.


Well sure, but that was a related-party transaction, so the valuation is basically irrelevant (unless they get sued, which probably won't happen unless the AI stuff collapses).

To be fair, the banks did offload the debt at par (two years later) so some people think X/Twitter is worth some money.


> it lost over 75% of its market value

I'm not sure what you mean by this. Twitter went private. It's no longer "on the market." Its market value is $0.

> It did collapse as a place where decent human beings cared to gather

That's your personal opinion, and though you're welcome to it, I think it's heavily colored by your personal politics, and doesn't reflect reality.

Before its purchase by Musk, Twitter was absolutely full of speech that I personally consider bad ...but apparently you think is okay.

I consider every tweet in this image[1] to be pretty bad. Most of them are by verified accounts. None of them were banned. And apparently you thought this was okay, and that these were """decent human beings"""

[1] https://i.imgur.com/802gCWz.jpeg


You are the one bringing politics into this, not the comment that you are falsely accusing of being politically motivated, with some sort of "free speech" nonsense and an image that has been filled with so many tweets that it's not even legible. Even collecting that image is odd and strange and shows highly politically motivated behavior on your part!

Twitter's valuation with investors was based on the idea of heavy growth. It wasn't even profitable when purchased, and when it did make a profit, it was on the order of $1B/year, nothing that would validate its market cap.

Musk squashed all potential for growth. Twitter advertisers, users, and the hope of a brighter future are all gone now. A well-established brand has been abandoned for an amorphous X, on which it's pretty much impossible to establish a new brand.

Musk has destroyed so much of Twitter's value with bone-headed moves that if he had to answer to investors, he'd be out in a second.


>Even collecting that image is odd and strange and shows highly politically motivated behavior on your part!

"If you care about what I say you are weird, and you care because you are an evil person". The first comment was obviously political, pretending otherwise is only because you are disingenuous, or because you are so polarized that you are unable to understand that your beliefs are politics and not universally accepted facts. You can zoom on the image and see the tweets. These are tweets from verified people before Musk, which were people that Twitter employees personally verified and thought deserved a recognition. Tell me, do you think "Kill all the White People!" and "white genocide is good", are things that good people say?


> You can zoom on the image and see the tweets.

No, I literally can not. The interface of imgur does not allow zooming in to see what is in the Tweets! Collecting that many tweets into such a massive image is the behavior of a crazy person, likely motivated by crazy politics. I do not know what's in the Tweets that's supposed to be objectionable, but the obsessive collection of stuff like that, and the time spent presenting it in such an unreadable way, speaks for itself.

> "If you care about what I say you are weird, and you care because you are an evil person"

Who are you quoting? Who brought evil into this? Very weird thing to start your comment with, what is your point?


>It did collapse as a place where decent human beings cared to gather

If it's no longer fit for decent humans, the meaning is that only evil people stay there. You understand this, you only pretend not to.

>the behavior of a crazy person, likely motivated by crazy politics. >Very weird thing

You don't even bother to refute the argument, you only insult.

>No, I literally can not. The interface of imgur does not allow zooming in to see what is in the Tweets!

I zoomed in on PC (Firefox), and just did it on my smartphone (Firefox) too. Your argument now depends on outright lies.

But I also quoted two tweets for your benefit, both allowed and enabled by Old Twitter.

Does a decent human being say "Kill all the White People!" and "genocide white people"? Yes or no.


In my opinion, if I had said anything that you could demonstrate to be false, you would quote what I said and present an explanation for why it's false. I believe you're not doing that because you cannot do it.

> You are the one bringing politics into this

No, that's not true. The person I responded to characterized the old, pre-musk twitter as "a place where decent human beings cared to gather" and contrasted that with the current twitter.

What possible non-political interpretation of that viewpoint is there? He is clearly saying that when twitter was run by people on the political-left, and (for example) Trump was banned, that was "decent human beings."

My rebuttal is to show a large list of verified accounts making hateful statements. My point is, those statements were allowed under pre-musk twitter. Hate was allowed before, and it's allowed now.

The only difference is that the person I responded to finds the former brand of it acceptable.

Which part of this analysis is incorrect?

> Even collecting that image is odd and strange

It's odd and strange to pretend that you believe I personally collected those examples. You're well aware of how the internet works. You know that someone else created that image.


Of course pre-Musk twitter had horrible people posting to it, just like Hacker News has horrible people like you posting to it. But dang doesn't fancy himself MechaHitler and post antisemitic white supremacist hate speech, or donate hundreds of millions of dollars to elect somebody he knows is a pedophile.

Private companies have valuations. You can be pedantic about whether it’s appropriate to use the word “market”, but the fact is pretty well known.

Musk paid $44B and it is generally considered to be worth less than $10B today[1], though his open participation in political corruption has resulted in valuations all over the map, including $44B[2].

It’s kind of Schrödinger’s valuation: if he and Trump are having a spat, it’s worth less because there’s a risk Trump orders it shut down or seized or whatever; if he and Trump make up, the company is a part of the ruling party’s government apparatus.

But as far as actual product value, outside of a clubhouse for red hats, it’s pretty close to zero. They have to literally sue companies to force them to advertise[3], which would be laughable if creeping fascism wasn’t so bad that it’s actually a successful strategy. Any attempt to claim free speech high ground while suing companies for declining to advertise is incredibly disingenuous.

1. https://www.theverge.com/2024/9/30/24258129/musks-44-billion...

2. https://www.theguardian.com/technology/2025/mar/19/value-elo...

3. https://www.npr.org/2025/02/01/nx-s1-5283271/elon-musk-lawsu...


> Private companies have valuations

and they are usually massively off.


If you think someone's personal politics of not liking fascists and white supremacists and companies run by and platforming them don't reflect reality, I'd hate to know what your personal politics are.

You created your account just to post that, so you obviously don't want your real name associated with your own personal politics, which is understandable, but cowardly, and undermines the nakedly partisan political points you're trying to score by carrying the water for fascists. How about finding some personal politics to believe that you're not embarrassed to put your real name by?


> you think someone's personal politics of not liking fascists and white supremacists and companies run by and platforming them don't reflect reality

No, I'm sorry. That's not true. You have misunderstood the comment you replied to.

Should I explain it or would you prefer to figure it out for yourself?

Hint: the guy I replied to said, in essence, "it used to be a place for decent people, but now it's not" ...and I showed a large list of verified users being bad. Conclusion: his position doesn't reflect reality.


So what's your real name if you're not ashamed to have it associated with your pro fascist white supremacist politics, water carrying, and apologetics, Mr. Single Use Throwaway Worthless Opinion Green Account Who Will Never Post Again Because You Just Wanted To Bring Politics Into Hacker News Then Run Away?

I think you'll find Twitter to be completely dead to many more people than it was in 2021.

> Elon dumped 80% of Twitter staff and it didn't collapse.

That was because Twitter was massively overstaffed. Are all these other businesses also overstaffed? The result of cheap money?


They are all overstaffed because they were financially incentivized to overstaff, both for valuation and defensive talent hoarding reasons. This was a positive feedback loop which unwound in tech and will unwind in AI too.

It definitely feels like, since AI, there's been a shift from total headcount as a measure of success to revenue per employee. The fewer employees, the better.

Probably because they need to burn all their investment money on compute.


The thing is, you need those staff to implement expansion plans. Twitter, famously, didn’t make the kind of money people expected a site with that many loyal users to make. Give up on your ambitions and you can lose people. Meta did the same thing.

>That was because Twitter was massively overstaffed.

What's the proof it was the only tech company being overstaffed?

For example Meta doubled its headcount during the pandemic without any increase in market share or new products. How do you explain that not being overstaffed?


> For example Meta doubled its headcount during the pandemic without any increase in market share or new products.

Meta probably overhired, but if the overall market grows immensely due to even more commerce moving online because of the pandemic, it would make sense to hire a lot more people even without a change in market share or new products.


I think it very likely a lot of big tech companies are overstaffed. However a lot of companies in many industries are cutting back.

That's what I said.

Exactly this. Everyone else is doing what everyone else does. There's no direct relationship between AI adoption and layoffs, except for inflated board/CEO expectations.

He definitely started a trend, but the takeaway was a mess. The company makes less money, and the technology/UX is objectively getting worse. Yes, some parts of the audience have proven sticky, but I don’t think that was ever a huge revelation. It’s always been possible to take some seeds off the bun.

What were all those people doing all day long?

Twitter was a whole media company. They had staff in every country writing news summaries for trending hash tags, etc.

Search on youtube "A day in the life of a Twitter/$FAANG employee".

Did so. According to Josh and Katie most of the day was spent eating.

Are you still using twitter?

Twitter produced mechahitler and didn't collapse. I don't like this timeline.

"For those around for the .com bust it does feel very similar."

I was around for the .com boom and it feels very different. I experienced the boom as exuberance without limits; the current situation is much more nuanced.


I was around as well, and while tech financing is more sophisticated and mainstream, it feels like a similar cliff in regard to valuations - what are some of the nuances you see that separate these two time periods?

My grandmother wasn't using altavista in 2000. She is using chatgpt in 2025.

I was also around, and I concur.

NVDA has a P/E of 55, which is definitely elevated, but nowhere near the 230+ that CSCO had at that time. To say nothing of SUNW.

The big AI labs are definitely losing money, but they're doing it on the back of tens of (rapidly growing) billions of dollars in ARR, versus the dot com e-commerce and portal flameouts who would go public on (maybe) a million in revenue, at best.

We also have large AI teams at FAANG who are being funded directly by the fat margins of these companies, whose funding is not dependent on the whims of VC, PE or public markets.

These times are not really comparable.


Tesla's P/E is at 177.04, a lot of other AI companies are private so we can't really say.

Tesla's P/E imbalance long predates the AI cycle, so it's not relevant.

I don't think Mark Zuckerberg salivating about data centers bigger than Manhattan is "nuanced." People gleefully predicting a 30% increase in national energy consumption strikes me as pretty darn exuberant.

I did as well, and then we had a few layoff rounds after having "positive" results when the VC money dried up, and those who stayed, like myself, had several months of delayed salaries.

Maybe you were in the handbasket for the .com boom, and more of an outsider this time around?

I am certainly much older now;-)

So my question to the youngsters in the handbasket:

Do you feel pure and completely untroubled for being part of something big that is certainly not going away anymore? Do you look into your future and see bright skies without the slightest hint of a cloud?


The .com bubble casualties were companies acting like "the internet means we don't need a viable business model". I see echoes of that in today's "AI means we don't need (m)any human experts anymore, now everyone is a 10x engineer".

Unless it's leveraged by skilled experts, AI-generated code is the payday loan / high-interest credit card of tech debt.


I was around for the dot com boom and bust, and this does not feel similar. The issue with the internet was that much of the value came from network effects that were not there in the late 90s and early aughts when personal computing was desktop boxes with 56k dial up connections. Very much, “if you build it, they will come.” It was the mass rollout of cable modems and then smart phones that changed the math.

There is no cart before the horse here. AI is coming for you, not the other way around. The pessimistic takes are underestimating the impact by at least a couple orders of magnitude. Think smart phones as a lower bound.

I have no idea what capacity people here work with AI, but given my view and experience the pessimistic takes I commonly see on here do not seem realistic.


Smartphones as a lower bound is crazy

Karpathy’s talk about computing 3.0 was spot on. Look at what is going on with pydantic and langchain. “LLM programming” is about to be a thing.

Wait, there was computing 2.0? Damn, I missed whole revolution again...

I would say exactly the opposite, frankly.

With the internet, there was a clear value proposition for the vast majority of use cases. Even if some of the specific businesses were poorly-conceived or overly optimistic, the underlying technology was very obviously a) growing organically, b) going to be something everyone used & wanted, and c) a commodity.

All three of those parts are vital for a massive boom like that.

Generative AI is growing some, yes, but a lot of the growth is being pushed by the companies creating or otherwise massively invested in gen-AI. And yes, many people try out ChatGPT's webapp, but that's mostly a gimmick—and frankly, many of the cases where people are attempting to use it for more are fairly awful cautionary tales (eg, the people trying to use it as a therapist, and instead getting a cheerleader that confirms their worst impulses).

Gen-AI may be useful to some people, but it's not going to be a central feature of most people's lives—at least not in the forms it exists in today, or what can be clearly extrapolated from them. Yes, it can help some with coding—with mixed results—but not everyone's a programmer. Not everyone's even an office worker. The internet has obvious useful applications for a plumber or a lawyer; if I hired one of those and they said they were using generative AI to help them in their work, I'd fire them instantly. There are already a bunch of (both amusing and harrowing) stories of lawyers getting reamed out in court for using gen-AI to help them write legal filings.

OpenAI may or may not have a robust moat—I've seen people arguing both ways; personally I lean slightly toward the "not" side—but generative AI as a whole is not something that's an interchangeable commodity the way internet access, or even hosting, is. First of all, in order to use the models that are touted as being advanced enough to actually look like more than spicy autocorrect, you need a serious GPU farm. Second of all, AFAIK, those models are being kept private by the big players like Google and OpenAI. That means that if you build your business on generative AI, unless you're able to both fork out for a massive hardware investment and do your own training to match what the big boys are already doing, you're going to be 100% dependent on another specific for-profit company for your entire business model. That's not a sound business decision, especially during this time when both the technology and the legal aspect of generative AI are still so much in flux.

Generative AI may be here to stay, but it's not going to take over the world the way the internet did.


Hindsight is 20/20. The company I worked at went under because people questioned whether enough people would ever buy stuff over the internet to make the business viable. It was very much not obvious then.

> Gen-AI may be useful to some people, but it's not going to be a central feature of most people's lives—at least not in the forms it exists in today, or what can be clearly extrapolated from them…

The problem is that you are going to have to compete with people who are using AI. There is a learning curve, and some people are better at using it than others. Some people know how to use it really well.


- internet & ecommerce & online changed the way we shopped.

- smartphones in every pocket changed behaviors around entertainment, communication, and commerce

What behaviors of people will gen-AI change? Perhaps the way we learn (instead of Google, we head over to a chatbot), perhaps coding... all up in the air, and unclear at the moment.


>For those around for the .com bust it does feel very similar.

No, it doesn't. I was around and we didn't have an entire GenX gang warning about the .com crash everywhere, every time. Compared to nonsense metrics like eyeballs, this time we have real revenue and the biggest companies are tech. It might end in some crash, but nothing like the .com one.


I keep coming back to this thought that the availability of computation itself sets the stage for speculation... or rather, widely available and cheap computation.

It's interesting how both of those periods have their tech-stock flagship. Dotcom: Cisco. AI: Nvidia.

On the other hand, it's easier to "copy" telecommunications equipment than state-of-the-art chips. Not saying there won't be competition to Nvidia's dominance, but so far, not a peep from anyone (realistically, that is).

> Not saying there won't be competition to Nvidia's dominance, but so far, not a peep from anyone (realistically, that is).

It's so surprising. So much money at stake and there is zero competition for hardware purchases. Google's TPUs are excellent, but can only be rented.


I think Cisco imploded because people moved to the cloud. But even cloud providers are stuck with nvidia; they have a software moat.

Curiously, nvidia's P/E ratio is lower than it was two years ago!


Cloud is a 2010+ thing.

Do you think FAANG companies inflate AI on purpose in order to create a scenario where a bust happens? They can survive it, given their vast war chests of cash.

It’s one of those scenarios where the high level value prop is obvious and compelling, just like the dotcom bubble.

80% of the hype is about 20% of the bullshit. And the bullshit attracts 80% of the dollars. The current cohort of leaders are Jedi at separating sovereign wealth and markets from their treasure.


Disagree, because the very few very big AI players are (in contrast to the 90s) very solid. Yes, there is a breadth of absolute bullshit built on top of current AI, but even if it were only ChatGPT-alikes and LLMs for coding from here on out, that in itself is enough real value. It requires very little imagination and a lot of implementation, and there's more demand than can easily be satisfied right now for both.

You can’t be serious, surely?

Most of the LLM applications are either entirely useless or trivially reproduced with much simpler free models (or even entirely non-“AI” methods).


I’m an AI enthusiast, and it’s not clear to me that selling inference on a proprietary model is a winning business model, which is what Anthropic and OpenAI are doing. The open models are good enough today for many things, and are likely to only get better. Feels like inference is a commodity, and it's not clear how much money there is in it.

I would love to know how much of the inference I pay for is being paid for by VC cash: I suspect a lot of it.


To be fair though, the "open" models are fairly sketchy as well, from a business standpoint, because somebody is paying for a lot of GPUs and expensive talent. It's probably the least obviously sustainable open source product of all time, and it's not at all clear to me why that would change going forward.

Right now there seem to be roughly two paths, when it comes to frontier-level LLMs: Meta just not giving a fuck, spending instagram money and pretending it's a business, and whatever Deepseek is doing that might make it both good and also super cost-effective (and there it's even less clear, how much of it is real and what the actual costs are).


> I would love to know how much of the inference I pay for is being paid for by VC cash: I suspect a lot of it.

Definitely a lot of VC subsidies for OpenAI and Anthropic, none for Google.


Sure, that doesn’t mean we’re not getting subsidised tokens from Google though

yeah, most of it

If I’m not mistaken they are using high valuations of top companies to conclude AI is overhyped?

Sorry, but weren’t these valuations escalated because of low interest rates and quantitative easing? Perhaps combined with increased concentration in Top 10 by investors navigating uncertainty?

Typical BS coming from a mega fund supported only by management fees. Not saying AI isn’t hyped, but this is laughable.


Exactly. Their chart even shows that these companies were more overvalued in 2020, before the AI "bubble" even started.

Good rule of thumb: When everybody talks about bubbles while rates are going down, it's a good time to invest. When everybody's talking about investing and rates are going up, it's a good time to drop out. Right now we are in the former timeframe. As long as cash remains cheap, there is no good reason from a financial market perspective for this to not go on. Is it sustainable indefinitely? No. But almost nothing in our current economy is. AI nowadays just generates easy clicks for opinion pieces like this looking at a single data point. That doesn't mean there is any reason to act on it or even just to read too much into it.

Better rule of thumb: have an automated investment strategy that takes a set percentage of your income every paycheck and invest it, regardless of current rates or what anyone's saying.

Note that this applies to vanilla investing, like index funds. You can easily automate that if you want. If you're really just looking for modest, stable yields, you may as well invest in bonds right now. The 12-month US Treasury is at >4%. With inflation significantly below that, it's like free money (if you've been an adult in the 2010s you'll know what I mean). But don't expect to make a lot of money in less than a generational timeframe either way.

To make A LOT of money you probably need to start a successful business.

Looking to take on more risk in equity investments is just as likely to end up with you going broke as it is to get outsize returns.


>Looking to take on more risk in equity investments is just as likely to end up with you going broke as it is to get outsize returns.

You should look at the chances of your business becoming that successful. They are equally slim. And you have a lot more personal exposure if your business fails vs. if one fails that you only invested your money in and not your time and health.


this can (will, given enough years) get you rich but it won't get you wealthy :)

I think that at the moment the One Big Beautiful Bill ensures that the spending spree will continue and the world will stay afloat with cheap money so I would assume that we are about to see the last part of the bubble. But, I wouldn't bet on my assumptions.

I find AI useful, I use it most days to write snippets of code or to rubber duck with. It hasn't changed my workflows that much, just replaced Stackoverflow with ChatGPT. Feels like the sweet spot for me, everything else is noise.

Chat is the obvious application, but the real value imho is using LLMs to bridge gaps non-deterministically that you couldn't bridge deterministically before. Entity extraction, for example, allows us to connect two workflows that often required a human in the loop. Not anymore. I see this everywhere in our SaaS product.
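
A minimal sketch of what that kind of LLM entity extraction can look like, assuming an OpenAI-style chat API in Python; the model name, field names, and ticket example are illustrative, not taken from the comment above:

    # Minimal sketch: extract structured entities from free text with an LLM
    # so a downstream workflow can consume them without a human in the loop.
    # Assumes the `openai` Python client; schema and model are illustrative.
    import json
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def extract_entities(ticket_text: str) -> dict:
        """Turn a free-text support ticket into structured fields."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            response_format={"type": "json_object"},
            messages=[
                {"role": "system",
                 "content": "Extract customer_name, product, and issue_type "
                            "from the ticket. Reply with a single JSON object."},
                {"role": "user", "content": ticket_text},
            ],
        )
        return json.loads(response.choices[0].message.content)

    if __name__ == "__main__":
        ticket = "Hi, this is Dana. My Acme router keeps dropping Wi-Fi at night."
        print(extract_entities(ticket))
        # e.g. {"customer_name": "Dana", "product": "Acme router",
        #       "issue_type": "connectivity"}

The resulting dict can then be handed straight to the downstream workflow (routing, CRM update, etc.) instead of having a person re-key the fields.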

Yep, replaces Google, Stackoverflow and autocomplete for coding with a much superior experience.

But anyone taking a vibe coded project with no human understanding of the code produced and puts it straight into production is going to have a bad time.


same,

- a better summarizing google for some queries

- snippet generator

but has not changed any workflows.


The LLM/AI tech has clear use cases and benefits. However, no, I do not need a shoehorned, dedicated AI in every single product and service I use. That is where the bubble is, in my opinion: everywhere AI is built or applied in cases where it does not work or does not make sense.

A single chart can be found to support just about any conclusion

Also, the chart doesn't take into account that the biggest companies have more power and are bigger right now, and that this isn't inherent to their use of AI. If not AI, it would be something else. Shares and revenue are growing, and people are getting fired. They will not collapse.

Reminds me of one of my favorite Simpsons lines: "Aw, you can come up with statistics to prove anything, Kent. Forty percent of all people know that"

I’m partial to Homer’s “Facts are meaningless. You can use facts to prove anything that's even remotely true!”

“They say sixty-five percent of all statistics are made up right there on the spot”

https://www.youtube.com/watch?v=IUK6zjtUj00


The conclusion I drew was "The level of value inequality in the S&P 500 is higher than before".

From that, any number of conclusions are possible, including perhaps:

* The level of innovation at those companies is high. Certainly the 90s tech boom was actually very innovative and profitable.


It's a monetary phenomenon. The economy as a whole is very bubbly and frothy.

We've been in a bubble ever since people started believing "data is the new oil".

Data has only driven advertising, and it's done it in such a botched way that it's tearing down the whole discipline of advertising. These companies know all the little tidbits of information about all of us that they need to put the right products directly in front of our eyes multiple times per day, and they still get it wrong.

Advert engagement goes down, people who use advertising realise their budgets are being wasted on the wrong audience and the whole thing will pop. It was naïve to ever believe that data really means anything. At a certain scale it just becomes loads of noise.


> Data has only driven advertising

This is not remotely true. I mean it's so incredibly not true I wonder how you came to believe this.

Haven't you ever heard of how hedge funds pay for cellular data to understand retail store traffic, or how satellite photos help them estimate the fullness of gas tanks at ports to predict pricing?

Or how data about predicted electrical pricing based on usage helps factories schedule energy-intensive production during times of low pricing?

Or how aircraft maintenance companies like AAR rely on "big data" to position replacement parts in a globally distributed system of warehouses to reduce the time it takes to repair aircraft (their contracts are based on airline uptime), thereby reducing passenger delays due to mechanical issues?

Or how farms use weather and satellite data to deal with droughts, identify areas to spray, and estimate competitor yields for the purposes of planning?

Or how governments now conduct surveillance of pathogens and drug use through sewer water data?

Or how semiconductor companies use massive amounts of data collected from production line sensors to massively increase yields and reduce chip prices, despite the complexity of chip production having increased massively?

You benefit directly or indirectly from companies using data all the time.


Those are good points.

I got a bit carried away in my original statement and undersold data a little bit. I think the point of the statement at the time ("data is the new oil" as an article in The Economist) was mostly hinging on data for use in digital advertising, but I didn't provide any of that context in my original post, and I was mostly considering user data.


Similar to dot-com, part of the reason is the multiplier effect of all the AI investments. If these investments prove to be uneconomic, which I strongly suspect, the backend of this investment cycle is going to be brutal.

Yes, we need the economic equivalent of anti-foaming agents.

Not even close. Dotcom bubble was massive. Mom and pops were leveraging into tech stocks. I don’t see anything like that today. Is your 75-yo aunt bragging about how she bought Nvidia options? People who lost everything in dotcom and lost it all again during the financial crisis have become PERMANENTLY risk-averse. These are a majority of retired boomers which makes them even more risk-averse because they’re now retired. Dotcom equivalent would be if S&P more than doubles from here.

I think it's unlikely the next bubble will involve the stock market. I mean the last bubble (real estate) didn't either. It can still be a bubble even if it's mostly VC money going into it, because more companies, endowments, pension funds, and ETFs than ever are exposed through VC. I don't know what the "total VC money invested" graph looks like right now, but even if investment stays constant, the lack of exits would still cumulatively result in a bubble-like inflation over time.

Bubbles happen because they haven't happened before; people know their history and don't repeat the same bubble. So just because there hasn't been a catastrophic stock market bubble in the USA before doesn't mean it can't happen; it has happened in other countries, and those stocks didn't recover.

Bubbles looks very impressive until they pop, most fall for them, that is why they are bubbles.


I do wonder if investors have a game-plan of what happens (to their investments, not society), based on future AI trajectories - where it becomes superintelligent, where it tapers off at the level where it's useful but still needs to be babied, or where it can genuinely replace some people, but it's clearly not superhuman.

In all these cases, it's very likely no AI shop is going to have a monopoly on the tech, and cartels are not very likely, considering China (and maybe Europe) is in the game as well.

In a gold rush, sell shovels, and the company with a monopoly on shovels is Nvidia.


Investors are just people. They don’t know what superintelligence would mean any more than you or I do. Some will guess right, some will guess wrong.

I wish more people would understand this. VC is just professional guessing. But you're not guessing impacts, you're guessing future value of company stock compared to its value at time of offer. Some people are really, really, really good at it. That doesn't make them any more qualified to assess the future impact of a new technology than a university PR office.

A large part of the ecosystem around it is certainly going to implode in a pets.com fashion. But the underlying tech seems valid to me, so I think a handful will come out of this stronger than before.

It's the usual HypeCycle[1] and most of the players playing know this.

[1]https://commons.wikimedia.org/wiki/File:Gartner_Hype_Cycle.s...


I don't think there's just one bubble. There's a meta-bubble and the normie-bubble.

If you're a CEO of a giant AI corp you're currently racing for superintelligence (meta-bubble).

The rest of us apes are flinging AI slop at each other until we've saturated each other in AI slop.

I don't really know what will happen, just offering my observation (ape noises)


CEOs know (or don't care) that they won't reach superintelligence. The reason they are where they are is that they are good at saying what they need to get the next round of funding.

Yes. As well as other hype-men and useful idiots. People are extremely naive when it comes to vested interest talking points. There’s a very natural reason why these guys want to keep talking about AGI and ASI: ”soon” is the magic word that makes investors feel fomo and make rash decisions.

During peak crypto madness vagueposting was an extremely effective market manipulation tool. I know people who made a lot of money on unconfirmed rumors in hours but of course it was just zero-sum gambling - the ”early adopters” made their money at the expense of the latecomers. No value was generated.

People don’t even need to be convinced that AGI/ASI is near, just ”but what if there’s a chance?”. It’s similar psychological tricks as selling lottery tickets.


There's also the monetary "everything bubble".

I don’t think AI is overhyped—am I missing something? I remember being skeptical of Dropbox and SpaceX but LLMs seem genuinely revolutionary. Yeah, it’s not “AI” as we understand it from the movies. But it can write papers better than a college freshman. That’s amazing.

It's overhyped insofar as you're looking at the current valuations of the AI companies versus the value they actually produce at the end of the day.

AI is here to stay, and long term, it's likely going to revolutionize almost all parts of the job market. But to get there... I'm really not sure what a reasonable time estimate would be. I can see it taking something like 3 years, which would make current valuations plausible, but I wouldn't bet on it.

I'd bet on it taking a tad longer, but I strongly suspect that within 10-20 years we'll get there.

Under that time horizon, it feels overvalued and hyped, because the winners of this revolution might not even have been founded yet.


> I remember being skeptical of Dropbox and SpaceX but LLMs seem genuinely revolutionary

Dropbox or SpaceX wasn't valued at many trillions of dollars though. Just because its very useful doesn't mean it lives up to the biggest hype ever in human history in terms of monetary investment.


In this very thread people are discussing “superintelligence” being around the corner. So yes, it is overhyped. Like if I took the invention of a steam engine and said teleportation is coming tomorrow.

Of course the steam engine was revolutionary. That doesn’t excuse or legitimize the nonsense.


Well, most college freshmen aren't plagiarizing and inserting random falsehoods, while consuming an excess of electricity all at an artificially low cost.

I have no problem with the amount of money that is dumped into AI, but I'm annoyed by the false promises. People keep telling me that Claude Code has no problem implementing clearly defined little feature requests, but when I let it tackle this one here https://github.com/JaneySprings/DotRush/issues/89 (add inlay hints for a C# VSCode extension), it kept on failing and never made it work. Even with me guiding it as well as I can. And I tried for a good 4 hours. So yeah, there's still a way to go for AI. Right now, it's not as good as the amount of money dumped in would make you believe, but I'm willing to believe that this can change.

Cue the "you're doing it wrong" crowd.

Yup. The frustrating thing is that I already read tons of material on how to "hold it right", for example [Agentic Engineering in Action with Mitchell Hashimoto](https://www.youtube.com/watch?v=XyQ4ZTS5dGw) and other stuff, but in my personal experience it just does not work. Maybe the things I want to work on are too niche? But to be fair, that example from Mitchell Hashimoto is working with zig, which is, by LLM standards, very niche, so I dunno man.

Really, someone, just show me how you vibecode that seemingly simple feature https://github.com/JaneySprings/DotRush/issues/89 without having some deep knowledge of the codebase. As of now, I don't believe this works.


I think it really, really depends on the language. I haven't been able to make it work at all for Haskell (it's more likely to generate bullshit tests or remove features than actually solve a problem), but for Python I've been able to have it make a whole (working!) graph database backup program just by giving it an api spec and some instructions like, "only use built in python libraries".

The weirdest part about that is Haskell should be way easier due to the compiler feedback and strong static typing.

What I fear most is that it will have a chilling effect on language diversity: instead of choosing the best language for the job, companies might mandate languages that are known to work well with LLMs. That might mean typescript and python become even more dominant :(.


(user name checks out, nice)

I share similar feelings. I don't want to shit on Python and JS/TS. Those are languages that get stuff done, but they are a local optimum at best. I don't want the whole field to get stuck with what we have today. There surely is a place for a new programming language that will be so much better that we will scratch our heads wondering why we ever stuck with what we have today. But when LLMs work "good enough", why even invent a new programming language? And even if that awesome language exists today, why adopt it then? It's frustrating to think about. Even language tooling like static analyzers and linters might get less love now. Although I'm cautiously optimistic, as these tools can feed into LLMs and thus improve how they work. So at least there is an incentive.


>that example from Mitchell Hashimoto is working with zig

While Ghostty is mostly in Zig, the example Mitchell Hashimoto is using there is the Swift code in Ghostty. He has said on Twitter that he's had good success with Swift for LLMs but it's not as good with Zig.

I think it doesn't work as well with Zig because there are more recent breaking changes not in the training dataset; it still sort of works, but you need to clean up after it.


Thanks for pointing that out. And yeah, with how Zig is evolving over time, it's a tough task for LLMs. But one would imagine that it should be no problem giving the LLM access to the Zig docs and it will figure out things on its own. But I'm not seeing such stories, maybe I have to keep looking.

> Cue the "you're doing it wrong" crowd.

or the "humans make mistakes too" crowd

or the "just wait, we are at an inflection point in the sigmoid curve" crowd


Even conceding the "doing it wrong" point, it demonstrates that these tools will require a massive amount of training or retraining to get the desired results. Which means don't lay off your current coders anytime soon.

With the amount of money being tossed around I am convinced this is going to be 10x worse than the dotcom bubble when it pops. And it will pop. You simply can't have pre-product companies valued at 10s of billions of dollars and expect a good outcome.

Everyone knows it's mostly bullshit, but that _someone_ is going to end up coming out as the Amazon-level winner.

Every single one of these "fake it till you make it" AI CEO/Founders are betting they are the Amazon.com and not the Pets.com... but if they are the Pets.com then what is the downside?

The CEO of pets.com certainly didn't end up out on the streets for being the biggest disaster of one of the largest bubbles and effectively burning billions of investor dollars (including institutions investing pensions and retirement funds).


Looks like she ended up with a net worth over $20 million so I guess you're right:

https://www.quiverquant.com/insiders/1780458/Julie%20Wainwri...


Even the sock puppet sold for $125,000.

You can buy a hell of a lot of dogs and socks for that much.

https://en.wikipedia.org/wiki/Pets.com#Sock_puppet


The “talent wars” and VCs with a really nasty case of FOMO clouding investment fundamentals are throwing more dry kindling on the pile. For anyone who’s been around a while, we’ve seen this movie before.

First mover advantage then meant access to a tier one ISP.

Today is it just a matter of cash and DC capacity?


A lot of the comments here are talking about startups. But the chart in the article is the forward P/E of the top 10 companies in the S&P.

For reference, those 10 companies are: Nvidia, Microsoft, Apple, Amazon, Meta, Broadcom, Alphabet (Class A), Alphabet again (Class C), Tesla, Berkshire.

This isn't a pets.com situation.

These companies are ENORMOUS cash engines with incredibly well-proven moats operating in an extremely monopoly-friendly political climate. Nothing like this existed in the 90s. Microsoft came closest, but antitrust still had some teeth back then.

The author makes a comparison between these companies and the rest of corporate America, arguing (implicitly) that the forward P/E of these ten symbols is too high relative to the rest of the S&P 500 index.

So let's look at the flip side. Many of the other companies in the S&P are vulnerable to these exact players' moats and pricing power. It's a zero-sum game and the winner is clear, so of course the winner's P/E looks really high compared to the expected loser.

Every single one of them has an AWS bill. Every single one of them has a big Windows/Office install base. Every single one of them probably has a huge Apple install base. Every single one of them needs to pay to play in the App Store.

And many of them are also in the unenviable position of being on the losing side of an unfair competition in their actual core business. Walmart/HD/Coca-Cola vs Amazon. IBM/Oracle vs AWS. Or other complicated market dynamics that pose only upside to the big guys and potential downside to the rest (Biotechs vs Amazon Pharmacy).

The remainder are competing margins away from one another and are vulnerable to disruption by mid-market non-S&P players (or similarly sized companies that just aren't on the public markets -- see the huge size of private capital relative to the 90s). Some also face significant tariff risk. Think banks, consumer goods.

What percent of the difference in P/Es between the best and the rest is justifiable on the thesis that we are entering a multi-decade period of (1) tech feudalism and (2) unpredictable populist fits that wreak havoc on everyone except the tippy top of the echelon, who can blow enough cash to control the narrative?


I am having a hard time drawing the same conclusions. Half of the companies in '99 were not tech-related, compared to 90% today.

AI is like a rocket engine, it just keeps on exploding

This is why many of the companies are trying to get sold to big tech. "Windsurf" is an example here. They want to exit, get paid, pay off investors, and let big tech hold the bag.

Another example is "Devin" or whatever the parent company is. Recently acquired some unknown company and they are cooking their books for the next acquisition


You're thinking of Cognition (makers of Devin), who acquired a _known_ company, Windsurf, right after Google "acquired" hand-picked staff, including the CEO, for a total of $2.4B.

I think the term "bubble" is far too presumptuous. You can only know if something is a bubble with hindsight.

There have been examples where things look like a bubble to some market participants, but turn out to be more or less a good reflection of that thing's future value.

AI is uniquely hard to value, too, because there are so many exponentials that may or may not occur, with those exponentials having the potential to make products either exponentially more valuable or redundant.

There are also different parts of the AI stack, and again it's really hard to see which part holds secure value, perhaps with the exception of the hardware providers.

Anyway, I suspect in a few years those calling AI a bubble will mostly be proven wrong, but that's just my sense of things.


ITT: “it’s not possible to tell…” Also ITT: “it’s not a bubble”

Which is it? :)



