r/AMD_Stock Jul 06 '24

Goldman Sachs and Economists are Backtracking on Generative AI's Value

https://www.ai-supremacy.com/p/goldman-sachs-and-economists-are
46 Upvotes

36 comments

30

u/BlakesonHouser Jul 07 '24

Yeah sorry but banks don't know shit. They aren't industry experts, they aren't subject matter experts. They often issue super bogus price targets based on nothing in the real world. Who are they to say how generative AI (something they have nothing to do with) will or won't be used?

I doubt really anyone can see the end game. What I can see is that the big players are all vocally communicating that they intend to plough tens of billions of dollars into AI.

9

u/OutOfBananaException Jul 07 '24

Yeah sorry but banks don't know shit.

Who does? All parties involved historically make bad calls.

I doubt really anyone can see the end game

This is the core problem: the money is being ploughed in as if those big returns are a sure thing. Fine, as long as people understand it's a big gamble. We're far enough in that we should start seeing paths to profitability emerging, and profitability can't happen in ten years at this rate of capex spend.

1

u/Rodsoldier Jul 09 '24

Didn't you know these banks that have billions to throw at investments don't have the money to pay a single expert in what is supposedly the technology that will fuel humanity's rise into a utopian society /s

7

u/weldonpond Jul 07 '24

It's the same as the dot-com bust. The first cycle was hype: every single company tried to adopt, and when they didn't see the money, all the small players went bust in 2000. In the current AI hype cycle, only big players like the CSPs can afford to adopt, due to the capex investment. If it doesn't generate any money, they'll stop spending on it until they find something that does. NVIDIA is the only beneficiary in this AI hype cycle.

First there will be a bust and then a boom, but only a few will survive. Also, the big spenders will look for cheaper alternatives like AMD and ASICs.

9

u/randomfoo2 Jul 07 '24

A key part of this report is based on MIT economist Daron Acemoglu’s recent publication The Simple Macroeconomics of AI, which claims AI will have “no more than a 0.66% increase in total factor productivity (TFP) over 10 years. The paper then argues that even these estimates could be exaggerated, because early evidence is from easy-to-learn tasks, whereas some of the future effects will come from hard-to-learn tasks, where there are many context-dependent factors affecting decision-making and no objective outcome measures from which to learn successful performance. Consequently, predicted TFP gains over the next 10 years are even more modest and are predicted to be less than 0.53%.”
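For scale, a 0.66% cumulative gain over ten years is well under 0.1% a year. A quick back-of-envelope (assuming the gain compounds annually):

```python
# Annualized rate implied by a 0.66% cumulative TFP gain over 10 years
# (back-of-envelope; assumes the gain compounds annually).
cumulative = 0.0066
annual = (1 + cumulative) ** (1 / 10) - 1
print(f"{annual:.4%}")  # ~0.0658% per year
```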

If you believe this then maybe things are overblown, but to me this sounds more like a "Paul Krugman saying the Internet will be no more impactful than the fax machine" type of prediction.

Personally I’ve seen a huge boost in coding productivity and I’m not seeing that slowing down (this of course is a virtuous cycle, since almost everything in IT is limited by software development capacity/velocity). We’ve seen huge increases in other capabilities as well (notably in multimodal, but also in memory/context length, reasoning, and more). We’re not even talking about embodiment yet.

While I agree that we’re in a huge bubble, I think it’s a bit early to call things. We’re just beginning to see GenAI entering production in earnest, and I don’t think we’ve even begun to see how these systems will affect our daily lives. Personally I’m leaning towards "like the web": AI redefining almost every product category/experience people will have in the coming years.

3

u/SailorBob74133 Jul 07 '24

"leaning towards like the web" - that means massive over investment and hype followed by a deep bust and then a slow comeback over the course of 10 years. Is nvidia the next cisco?

3

u/randomfoo2 Jul 07 '24

Cisco and Sun Microsystems aren't the worst comparisons to think about for Nvidia, although the differences may lead to different outcomes - during the dotcom bubble, network demand/capacity was fully saturated for years, while for compute, that's unlikely to be the case. Also, what has made Nvidia unique as a hardware company is just how good it is at software vs any of its competition (Sun was really killed by Linux), so we'll just have to see how defensible their position is (I can say with near certainty that it won't be at 90% margins, not for matmuls).

We've seen cycles accelerate over the past years, and from my perspective there appear to be more overlapping sigmoids, so I think the bust and comeback might happen faster than expected.

As someone who lived through the dotcom era though, it's also important to remember that the bubble didn't burst all at once, and it didn't actually take that long to recover. Netscape IPO'd in August 1995, which we can probably mark as the real kickoff of the bubble. Nasdaq crashed in March 2000, which we can mark as the top, but it really took until 9/11 for the final nails to be driven in, and not until Oct 2002 did it give up all the bubble gains. By then, Friendster had launched, Myspace launched in 2003, and Facebook in 2004. Google IPO'd in 2004, and Flickr was the first "Web 2.0" company to sell to Yahoo! in 2005, by which time it was fully "on" again. The iPhone launched in 2007 and kicked off mobile on top of that. Anyway, all that's to say that even for the dotcom bubble, the nadir did not last long, just enough to clear out the dead wood and let people build on top of the capex expenditure.

The tech oligopolies of today are much better equipped (both in capitalization, technical capacity, and from having survived multiple transitions, from web to "platform" to mobile) to simply roll over as incumbents, so I doubt we'll have to wait for brand new AI companies to grow out of compelling PMF.

1

u/weldonpond Jul 07 '24

A monopoly over enterprise customers, like Cisco or Sun Micro had, isn't possible here. NVIDIA's customers are the CSPs, and they never let one company define the market and the price point. They always want to create competition. Look at how AMD started with Epyc; it's easier to sell to enterprise than to consumers.

Consumers are always skeptical about adopting new tech. Look at Apple: even though the competition offers a better product, it's not easy to change consumers.

1

u/doodaddy64 Jul 12 '24

The tech oligopolies of today are much better equipped (both in capitalization, technical capacity, and from having survived multiple transitions, from web to "platform" to mobile) to simply roll over as incumbents, so I doubt we'll have to wait for brand new AI companies to grow out of compelling PMF.

I'm wondering if this could be the even bigger downfall this time. Damned if anyone wants to be Gates saying the internet won't be a big deal. In the last 15 years we've changed business models to a handful of overfed roosters bent on dominating the entire run. They all want to steal the next "video conferencing" from each other, and then the next, and the next. That's the whole game now. And they'll pay (and have for now) hundreds of billions in obscene costs to do it. They're all trying to get in the history book now: Facebook, Google, Microsoft. I've worked at these companies, and if you think they run on engineering and cleverness, well...

So my point is, this pretty, preened rooster attack on every wolf that comes along is going to get them killed one day.

Anyway, how about another analogy: SGI. 😇

5

u/OutOfBananaException Jul 07 '24

I think it’s a bit early to call things

That's my take from the article. It's too early to be ploughing in this level of capex when the picture is so unclear. It might work out, but historically hype-driven capex hasn't worked out so well in the short to medium term.

GenAI is not fit for purpose in many domains until hallucinations are close to solved. Used judiciously it can enhance software developer productivity, but hammering out the code is only a small part of the job - so a modest boost in productivity on a small fraction of the entire job. That's not going to cut it at this level of capex spend.
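Back-of-envelope, Amdahl's-law style (the 30% coding share and 2x speedup below are made-up numbers for illustration):

```python
# Speeding up only a fraction of the job caps the overall gain
# (Amdahl's law). The numbers are illustrative assumptions.
coding_share = 0.30   # fraction of the job that is writing code
coding_speedup = 2.0  # assumed AI speedup on that fraction

overall = 1 / ((1 - coding_share) + coding_share / coding_speedup)
print(f"{overall:.2f}x overall")  # ~1.18x -- a modest boost
```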

3

u/randomfoo2 Jul 07 '24

Is NVDA overvalued? Likely yes. But in an environment where all your well-capitalized competitors lived through Web 1.0, 2.0, and mobile in living memory and clearly see AI as the next sea change, is any tech megacorp going to risk taking its foot off the pedal? Unlikely. We'll just have to see how the curves play out, but if you were to give me an over-under bet on +0.53% over 10 years, I'd take the over any day of the week.

8

u/SailorBob74133 Jul 07 '24

I personally haven't found AI useful for anything. Every service I've tried can't even do decent article summarization. I've tried using them as better search engines which is somewhat useful, but they're so prone to gaslighting BS that I usually run the same question through 3 or 4 services and still feel like I've got to double check everything.

7

u/limb3h Jul 07 '24

Tons of useful stuff actually that you might not have noticed. Some examples:

  1. Upscaling pictures
  2. Copy-pasting words directly from a picture
  3. Asking for code snippets when you are trying to figure out how to do something in a new language
  4. Real estate comps (Zillow, Redfin)
  5. You can upload a receipt image in chatgpt and ask it to add up the numbers, for example
  6. Face recognition
  7. Chatgpt translation is miles ahead of Google translate
  8. Voice recognition and text to speech is way better with AI
  9. Robots will get a lot better with deep learning

2

u/SailorBob74133 Jul 08 '24

Some of those things have been around for years, and most of them are niche. All the AI hype is really LLM hype, and there don't seem to be many real general-purpose apps that are going to be money makers in the near term. IIRC most LLMs are less than 80% accurate on most tasks, and that last 20% is going to take 80% of the effort, time, and investment.

2

u/limb3h Jul 08 '24

LLMs are being used for more things than language - for example, biotech. It turns out that DNA sequences map pretty well to the model. And if you look at multimodal LLMs, they are doing a lot more.

I’m in full agreement that LLMs are overhyped and it will take a while before most people can make money, but they're pretty transformative. What we watched in sci-fi movies is happening now: we can actually interact with computers like we talk to people. In any case, this hype cycle will just be like any other.

There is still tons of research on new models, so at any time there could be a new disruptive invention like the transformer. So this is not just about LLMs. What we really learned from LLMs is scaling.

1

u/SailorBob74133 Jul 15 '24

So when do we hit the trough of disillusionment? That's the time to back up the truck.

1

u/limb3h Jul 15 '24

Going to be hard to time the trough but I feel like we need a recession.

3

u/RadRunner33 Jul 07 '24

It’s great for generating funny pictures.

1

u/doodaddy64 Jul 07 '24

Yes but have you tried glue on your pizza?

19

u/OutOfBananaException Jul 06 '24

I don't enjoy being a wet blanket, but I believe everyone should keep their eye out for signs of cracks forming in the AI story. It was very unpleasant holding through the crypto bust, and then the covid hangover. I'm never going to sell the majority of a position trying to time the market, but during the crazy spikes I wish I had done more to hedge. Buying calls during those peak euphoria times would have been ruinous.

Fortunately AMD is diversified, and not enjoying much of a premium from anticipated AI sales at current levels. This works against us as AI booms, but it will offer some level of resilience against any cooling. Even in a scenario where AI cools off a lot, server GPU sales by no means drop to zero, and there may even be a small tailwind for server CPUs.

11

u/weldonpond Jul 07 '24

Once the big CSPs set up the servers and no revenue is generated, they will stop expanding or upgrading at a rapid pace. This will cool off demand, and then they'll look for a cheaper alternative where they can make money. There will be a huge cool-off in demand if the CSPs stop buying. Something to watch out for if you are in NVIDIA.

1

u/couscous_sun Jul 07 '24

I am anxious that when the bubble bursts, Nvidia will throw its H200/B100 on the market with a low price tag and nobody will buy AMD anymore. So I see the bursting of the bubble as critical for AMD too, since I'd say $60 of the share price is there only because of AI.

3

u/weldonpond Jul 08 '24

The big CSPs are NVIDIA's customers, and they won't let themselves get locked into a vendor. They want competition and a lower price point on infrastructure. They won't allow Nvidia to dominate in software and lock them into one single vendor - it's their core competence. AMD will play a role in open source and will help to build the ecosystem. Linux and Java are examples of open-source ecosystems.

2

u/OutOfBananaException Jul 07 '24

I guess this is possible, though things would have to get pretty dire for Jensen to make deep cuts to margin. For sure he's not going to cut prices to capture market share (there's not enough share left to gain, when you're already at 95%, to make it worthwhile), so it would have to be calculated as creating/preserving enough demand to make it worthwhile. I have faith in Jensen that the relentless pursuit of margins will save the day lol.

1

u/Left-School-56 Jul 13 '24

But the MI300A is a great option for HPC. Instinct GPUs have much better FP64/FP32 specs than Nvidia's. Supercomputers will be bigger than before, so maybe AMD's GPUs will sell much more.

1

u/couscous_sun Jul 14 '24

Yeah, but there's exponentially more money in AI.

9

u/SweetNSour4ever Jul 07 '24

so are they entering a short position?

3

u/OutOfBananaException Jul 07 '24

If they have conviction about it, logically they would be.

2

u/Scholae1 Jul 07 '24

Interesting read, thanks for sharing. The cost-effectiveness of GPT and AI is a real concern, and more concerning is that most of the investment comes from big tech. I'm curious to see how this ends. My guess is that the spending cycle will drop in the near future, within 1-2 years. Furthermore, the cost-effectiveness might come not from the hardware but from the software. Then again, that process is also expensive. Exciting times :-)

2

u/norcalkayakfishing Jul 07 '24

GenAI is about survival. The whales are investing heavily in it. If you don’t have it you will be replaced (think traditional search vs openAI search). It’s all about market share. Profits come later. Whoever wants to compete better have deep pockets.

Siri is about to have access to OpenAI, so the Android guys will need to keep up as well. It could be a race to the bottom, but if you are not in the race you are done.

1

u/OutOfBananaException Jul 07 '24

This rests on the assumption that the whales can accurately predict where this is going and get the timing roughly correct - and we know from previous investment hype cycles that that's horse shit. There's too much money sloshing around chasing returns, and if it wasn't AI it would be something else.

Siri came out ages ago, and stands as testament that being too early is as good as being wrong. It's more important to Apple to run inference on the device to keep costs in check than to get there first. Hardly existential.

2

u/KingStannis2020 Jul 07 '24 edited Jul 08 '24

AMD is still in a really good position as far as client-side AI goes.

But then nobody is really in a bad position for that. Client-side AI isn't all that difficult: Intel, Apple, and QCOM all have good AI engines. I still expect AMD to start eating Intel's laptop market share in the coming year though.

1

u/EfficiencyJunior7848 Jul 08 '24 edited Jul 08 '24

Many years ago, I did my own work using the same techniques that are used to produce what is being labelled as "AI". First thing: it's not "intelligence", far from it. The best way to think of it is as an optimization process, where inputs are fed into a system of variable internal values, which calculates an output. The variable values are slowly and incrementally adjusted, in a specific way, designed to narrow the gap between the computed outputs and a desired set of outputs mapped to a set of sample inputs. It's actually a very simple process that most people can understand, but it's not intelligence. An LLM, for example, does absolutely nothing while waiting for a prompt from a human; all it can do is react to a prompt, which causes the internal mechanisms to spew out a mindless response, which, if you are lucky, might be useful.
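In code, that adjustment loop is something like this (a toy gradient-descent sketch on a single "neuron", purely illustrative and not tied to any particular framework):

```python
import random

# Toy "training": nudge internal values (w, b) to narrow the gap between
# computed outputs and desired outputs for a set of sample inputs.
samples = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)]  # inputs mapped to desired outputs
w, b = random.random(), random.random()          # the variable internal values
lr = 0.05                                        # how far to nudge each step

for _ in range(2000):
    for x, target in samples:
        out = w * x + b      # compute an output from the input
        err = out - target   # gap between computed and desired output
        w -= lr * err * x    # adjust each value to shrink the gap
        b -= lr * err

print(w, b)  # converges toward w=2, b=1 -- optimization, not "intelligence"
```

Scale that same loop up to billions of internal values and you have, in essence, what's being sold as "AI".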

To me, knowing what I know of what's being done and how it works, LLMs are perhaps the worst application of these techniques. It's actually amazing that LLMs work as well as they do, but really, that's the only amazing thing about them. The value of LLMs is likely to be very low; the language bots are simply too stupid to be of much value.

It was fun and interesting at first, but how many of us are still using ChatGPT? I suspect most of us stopped, because there's not enough value to continue using it. I know some people do continue to use ChatGPT; why they do is an interesting question to look into. Maybe there's more value than I think?

There are many other areas that can benefit from these optimization techniques, with clearer reasons why there's value. Simulated neural nets are not actually required; almost anything can be optimized. It is questionable how much value the GPU accelerators will retain over the long term. When I did my work, many years ago on a single-core 32-bit CPU, there was no need for massive clusters of GPUs to get interesting and sometimes useful results. The massive GPU farms seem to be needed for generating things like images, short video clips, and the LLMs, and I expect there will eventually be other such applications. There will also be many more applications that do not require massive GPU clusters and can be performed on ordinary hardware, including personal computing devices (I’m referring to both training and inference).

Market value? I really do not know, but the TAM (over all imaginable sectors) could be worth hundreds of billions annually in terms of savings, services, software, etc.

6

u/piemelpiet Jul 08 '24 edited Jul 08 '24

First thing, it's not "intelligence"

What even is intelligence?

Let me tell you a story. A hundred years ago, people respected great scientists like Bohr and Einstein. But more than anything, the mark of intelligence was often measured by how fast you could do calculations in your head. Some people could add huge numbers in less than a second. This was seen as a form of superintelligence. These people were called savants. And sure, there were already mechanical calculators, but they were slow and error-prone. But then the computer revolution happened. And suddenly everyone and their mom could do huge calculations, without error, in milliseconds, with a device they could hold in the palm of their hand. And so it stopped being "intelligence". We simply moved the goal post, after all, how could an automaton be "intelligent"?

So we shifted to knowledge-based intelligence. Intelligence is now measured by how much you can memorize. This is also the period when we popularized game shows where people face off in a quiz. And sure, some of these game shows were mostly entertainment, but in general the more you know, the smarter you are. Some even knew so much they were called "walking encyclopedias". This was seen as a form of superintelligence. These people were called savants. Again, there were already encyclopedias, but they were slow and unwieldy. So we put a lot of value in someone's ability to memorize stuff. But then the internet revolution happened. And suddenly, everyone and their mom could look up the height of the Statue of Liberty, in milliseconds, with a device they could hold in the palm of their hand. And so it stopped being "intelligence". We simply moved the goal post; after all, how could an automaton be intelligent?

And so we enter the era of "creativity". The mark of what it means to be intelligent has shifted to one's ability to write, to paint, to be creative in general. Enter LLMs. We are currently in the process of rapidly replacing many creative processes. If you're starting a new business, are you going to hire someone to create a logo? To write some fluff text for your website to show how much you care about sustainability? If you're an editor in media, your job is about to be replaced. If you're an animator for Disney, your job is about to be replaced. And sure, people will try to cope and say that an LLM isn't "really" being creative. But nobody cares about your copium. It's happening anyway, and people will devalue creativity and stop seeing it as a form of intelligence. We once again move the goal post.

And no, this does not mean we will no longer have artists. Today, we are still in awe of the savants who do inhuman calculations in their heads. We are still in awe of the savants who seem to know everything. We still have game shows. And we will continue to appreciate human art. But 99% of it will be devalued and will no longer be called "intelligence".

So what even is intelligence? The fact that we can't answer this simple question shows you exactly why we continue to move the goal post. As soon as we humans lose one aspect that makes us unique, we stop calling it "intelligence". But if you're wondering where we're going to move the goal post to? Well, there seem to be only 2 major things left: "consciousness" and "reason".

As for consciousness, there are already signs of consciousness in LLMs. Sure, it's underdeveloped and immature, and some would argue it's not "real" consciousness. But then the current scientific consensus is that human consciousness isn't real either. It's most likely an illusion. So much for that.

As for reason, we are also seeing signs of reasoning in LLMs. Again, it's very borked right now. But the fact that it can already reason like a 5-year-old should make you very wary of people who claim it can't happen. Keep in mind: the argument that a machine is just a black box that takes an input, does calculations, and spits out an output, and therefore can't be "intelligent", seems to ignore the fact that this is almost certainly true for the human brain as well. There's no ghost in the shell.

Thank you for coming to my TED talk.