Make NVidia Rise Again?
Ironically, I was exactly that unlucky, four-eyed chowderhead who wrote a cream-puff piece praising the incredible growth of NVidia stock right before the weekend, only to watch an epic price plunge on Monday. Well, let me laugh at myself for a while to feel better, and then draw the first conclusions from what is happening, with the assistance of plain common sense. The AI darling NVidia tumbled about 17% in the blink of an eye, shedding some $590 billion in market cap, the largest single-day loss in Wall Street history. The record drop was quickly replicated by other AI businesses from NVidia's close circle, such as Broadcom (AVGO), Oracle (ORCL) and Micron Technology (MU), as well as Japan's SoftBank Group, listed on the Tokyo Stock Exchange, which is another major investor in the Stargate project to build advanced data centres in the U.S.

All that craziness happened because a Chinese guy and his friends had eaten a bat or a diseased pangolin. Oh no, by the saints, I must have got it wrong; that one is the legend about the origin of the coronavirus. Today's story is about a small group of smart Chinese guys who reportedly spent some $6 million on their successful start-up DeepSeek and cast doubt over the many trillions already poured into global AI infrastructure, and over all the research done before these great thinkers. It was widely reported that DeepSeek delivered the best-performing open-source model, competitive even against frontier closed-source models such as OpenAI's ChatGPT and other generative pre-trained transformers (GPTs) from Google, Meta, Microsoft and the rest. I have no doubt that the DeepSeek guys are all givers and great people. But forgive me if I question the rest of the story.

Let's start with the fact that the experimentalists from DeepSeek used only NVidia chips for their work, not components of their own production or from some other chip manufacturer. Training the model allegedly required 2,048 NVidia H800 GPUs, costing around $50 million, whereas comparable OpenAI chatbot models may cost hundreds of millions of dollars to build and test. The H800 is a legal export version that NVidia created by throttling its faster H100 chip after strict U.S. regulations barred it from selling its best technology to China. And the H100 is no longer the latest advance either, as the Blackwell chip offers up to 4x faster training and 30x faster inference than its predecessor.
If so, then it turns out that a few dozen Chinese eggheads matched Western achievements while using lower-performance chips and spending a hundred times less money on an AI-based feature. From this, the mass media concluded that fewer and less advanced chips will be needed in principle for advanced AI tasks, so revenues of NVidia and other AI-related firms will not be as huge as everyone in the market had thought. It may be a small thing, but the DeepSeek start-up is reportedly co-financed by the Chinese hedge fund High-Flyer, which has access to 50,000 of NVidia's original H100 GPUs. It is hard to tell whether those H100s were involved in the work, and any version of events can be broadcast later. Another possibility is that they cannot disclose the truth about circumventing U.S. export controls on AI chips. There are also rumours of hidden funding from the Chinese government. But let's not speculate and assume they really did use older, slower NVidia chips, and far fewer of them.

This means there will soon be a lot of little tricksters wanting to repeat DeepSeek's achievement in their home garages, trying to solve a similar kind of programming task on a shoestring, just as every college student was trying to become a Bitcoin miner not so long ago. Here you have a sharp increase in demand for, say, previous-generation NVidia chips, coming from smaller customers. That may even be good for NVidia: the largest corporations like Microsoft and Meta are only interested in the newest Blackwell chips, and now there will be excellent demand for the older stock as well. Mid-range customers may want their own supercomputers built on fewer expensive chips, to solve more common problems at high quality rather than the exclusive tasks supercomputers handle right now. A broader learning base will become available to them, adding to the popularity of chip producers. If I were in NVidia's place, I would be nothing but happy, as competitive threats to NVidia from Chinese-rooted and other chip producers still look very distant in time.
Furthermore, even if performance is roughly the same, AI models built on cutting-edge chips will likely be able to draw on larger volumes of text and visuals while searching for and generating their answers, so the cooler and more expensive chips would probably continue to produce more digestible, better-quality texts, pictures and videos at the output. There will be fewer hallucination effects from chatbots, and likely fewer erroneous or unacceptable judgments, thanks to the processing of a larger array of human-created information. If mega-corporations adopt DeepSeek's simplifications of program code or hardware methods while still throwing far more resources at the problem, they will soon create AI generators that are much closer to ideal, and consumers will appreciate that. If some produce quality texts, accurate and responsible in their answers, while others periodically slip into childish babble, then the choice is obvious. We will see what happens. In a simple problem like "Alice has N brothers and X sisters; how many sisters does any brother of Alice have?" (the worked answer is just below), cheap models may be very good, but in more complex tasks I think they will be inferior even to o1 by OpenAI, not to mention something that has not been created yet. The time needed to scale successive models properly should also be considered: already on Monday afternoon, DeepSeek could not cope with new registrations of free users on its network.
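For the record, the puzzle has a one-line answer (a quick worked sketch, assuming the usual reading that all of Alice's siblings share the same parents): each brother counts Alice's X sisters plus Alice herself, so

$$\text{sisters of any brother of Alice} = X + 1,$$

regardless of the number of brothers N. It is exactly this small extra step, remembering to add Alice back in, that weaker models tend to stumble over.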
It's like comfort or business-class taxis: many people order comfort class even though a cheaper economy class exists. BMW or Audi seem to be better cars than Renault or most Chinese brands, IMHO, and not everyone settles for the Renault; many would rather take out a loan to buy the more expensive car. Although there are low-cost airlines, most people continue to fly regular carriers that offer more services, and extra-class airlines like Emirates or Turkish Airlines also have plenty of customers. High-quality chatbots are a far more budget-friendly purchase than any of those, so many will choose quality if the difference is noticeable, don't you think? AMD makes cheaper but less capable chips, yet NVidia has not lost market share so far; on the contrary, it has kept expanding. In the same way, there will simply be budget AI products at low prices, to promote the services of a small company or just for fun, and premium products at a somewhat higher price for those who strive for more.
What I also like as an NVidia investor is the cool hospitality and the readiness to welcome the Chinese project that showed in NVidia's initial reaction; there was no condescending tone. "DeepSeek is an excellent AI advancement and a perfect example of Test Time Scaling," an NVidia spokesperson said on January 27. "DeepSeek's work illustrates how new models can be created using that technique, leveraging widely-available models and compute that is fully export control compliant. Inference requires significant numbers of NVidia GPUs and high-performance networking. We now have three scaling laws: pre-training and post-training, which continue, and new test-time scaling." OpenAI CEO Sam Altman also spoke out on DeepSeek's model, calling it "impressive", "particularly around what they're able to deliver for the price", but said OpenAI plans to deliver better models, as it is "more important now than ever before to succeed at our mission". His firm continues to follow its "research roadmap", while demand for AI is "likely to remain strong", he added in a post on X. It looks more like genuine cheerfulness than putting on a good face while things go badly. My conclusion: big players can do more with existing computing power than we thought before the DeepSeek case, while smaller players will buy chips to try their luck as well.
As for the further market dynamics of NVidia and other "lost" AI shares, the inertia of the fall and some medium-term profit-taking to stay out of harm's way may, I suppose, drag NVidia's price down to the $90+ area, where it already traded once under an unfavourable combination of general market factors in early August 2024. But the price is by no means obliged to return to double-digit figures; that is only a possibility, negative for shareholders but positive for potential buyers. In pre-market trading on January 28, NVidia shares had already added more than 5% to the previous day's close, nearly hitting $125, which I consider the bottom of the current technical resistance zone extending from $124 to $128 per share (see the chart). If they ever climb further up, which is unlikely to happen right away, then markets will make NVidia rise higher again. This is my basic scenario for now. In the meantime, uncertainty and the need for investors to digest the new information carefully will keep some pressure on the stock, and it may even offset the effect of quarterly reports from major techs like Microsoft, Meta and Tesla, expected on the night from Wednesday, January 29, to Thursday, January 30.
I can only add here that OpenAI's market value could have fallen too, but OpenAI, fortunately for it, is not a public company. Other interested parties, Meta for example, wisely and in good time trimmed spending on their own chatbot development, choosing for now to use third-party products to build more complex services; they only win if those semi-finished products turn out to cost less than expected. Likewise, companies that have not built much of their own data-centre and chip infrastructure but have used others' work in this field to improve their AI features, like Adobe in design software or Walmart in smart shopping suggestions and customer-preference analysis, stand to win even more.