It was previously reported that, due to surging demand for artificial intelligence (AI), the market needs more powerful solutions; Nvidia has therefore moved the release forward from the fourth quarter of 2024 to the end of the second quarter of 2024, continuing to pressure its competitors in the data center market. At the same time, Nvidia has reached an agreement with SK Hynix to use the latter's ultra-high-performance new DRAM product, HBM3E, on the new-generation B100 compute card.
According to Seeking Alpha, although lead times for the H100 compute card currently used in AI and HPC have shortened considerably, the supply outlook for the next-generation products based on the Blackwell architecture is less optimistic. Nvidia Chief Financial Officer Colette Kress said on the earnings call with financial analysts and investors that supply of next-generation products is expected to be constrained because demand far exceeds supply.
It is rumored that some Nvidia customers have already placed small pre-orders for the B100 compute card, and that yields are ramping quickly. If market demand proves huge, the large-scale delays seen in the early stages of H100 shipments could well be repeated.
The GB100 GPU based on the Blackwell architecture uses a chiplet design with MCM packaging, which makes it easier to improve the chips, but the multi-chip packaging approach may also make the later packaging work more complicated. In addition to the B100, Nvidia has also prepared the B40 for enterprise and training applications, the GB200, which combines the B100 with a Grace CPU, and the GB200 NVL for large-scale language model training.