Posted by Support KAAYXOL, 9 hours ago. Filed in Technology.
NVIDIA is gobbling up as many HBM (high-bandwidth memory) chips as it can get its hands on as it tries to meet insatiable demand for its AI accelerators. SK Hynix has done very well by emerging as NVIDIA's most important HBM supplier, while Samsung is still waiting to pass the company's quality tests.
Since HBM orders are placed far in advance, Samsung has missed out on the bulk of the HBM3E market, which SK Hynix has cornered. It's betting on recovering with HBM4, and also on customizing its solutions for ASICs, which may prove instrumental in the AI arms race.
ASICs differ from GPUs in that they are chips optimized for a very specific function. They're more cost-efficient than conventional AI accelerators, and they reduce power consumption and investment costs for companies.
Many major tech companies view ASICs as a viable alternative to NVIDIA's AI chips. Google recently unveiled its seventh-generation TPU (Tensor Processing Unit). It's purpose-built for inference, with 192GB of HBM capacity, considerably more than NVIDIA's GPU-based AI accelerators offer.
Google and others developing ASICs for AI will need high-bandwidth memory, and Samsung can look to win orders from them to regain its lost footing in this market. Samsung mentioned during its recent earnings call that it is in talks with customers about custom HBM chips, potentially hinting at a greater focus on meeting demand from ASIC designers.
Analysts also predict that a demand boom from ASICs will be a bigger opportunity for Samsung than for SK Hynix: the latter has likely sold out its capacity in advance to customers like NVIDIA, whereas Samsung will have more flexibility to supply high-bandwidth memory chips to a wider range of future customers.
The post As NVIDIA’s HBM approval lags, Samsung views ASICs as the next big opportunity appeared first on SamMobile.