
Broadcom Extends Leadership in Custom Accelerators and Merchant Networking Solutions for AI Infrastructure


Latest Offerings Advance Broadcom’s Portfolio of Open, Scalable and Power-Efficient Technologies for AI solutions

PALO ALTO, Calif., March 20, 2024 (GLOBE NEWSWIRE) -- Cloud and data center providers are building AI systems at a pace that requires a new level of performance, scale and efficiency. Consumer AI use cases are increasingly driving the need for lowest power custom AI accelerators, while open, standards-based merchant networking solutions scale large AI clusters. Broadcom Inc. (NASDAQ: AVGO) is evolving a broad portfolio of technologies to extend its leadership in enabling next-generation AI infrastructure. This includes foundational technologies and advanced packaging capabilities aimed at building the highest performance, lowest power custom AI accelerators. In addition, the complete set of end-to-end merchant silicon connectivity solutions ranging from best-in-class Ethernet and PCIe to optical interconnects with co-packaging capabilities drives the scale-up, scale-out and front-end networks of AI clusters.

“For providers contending with the ever-increasing demand for generative AI clusters, the key to success will be a network-centric platform, based on open solutions, that scales at the lowest power,” said Charlie Kawwas, Ph.D., president of Broadcom’s Semiconductor Solutions Group. “The innovations we’ve introduced extend our leadership across our custom AI accelerator, Ethernet, PCI Express and optical interconnect portfolios. Built on our world-class foundational technologies like SerDes and DSP, they provide the best custom XPUs and merchant networking solutions enabling AI infrastructure.”

Broadcom’s latest AI infrastructure innovations include:

  • Delivery of Bailly, its co-packaged optics (CPO) switch. Bailly delivers unprecedented bandwidth density and economic efficiency, addressing connectivity challenges in data center switching and computing.
  • An expanded portfolio of proven optical interconnect solutions supporting 200G/lane for AI and ML applications. These solutions enable high-speed interconnects for the front-end and back-end networks of large-scale generative AI compute clusters.
  • The industry’s first end-to-end PCIe connectivity portfolio. These devices, together with the PEX series switches, offer the lowest-power solutions and unparalleled efficiency for interconnecting CPUs, accelerators, NICs and storage devices.
  • A switch chip that integrates NetGNT technology, a pioneering advancement in switching silicon that identifies traffic patterns typical of AI/ML workloads and averts congestion.
  • A vision for AI acceleration and democratization, spanning a combination of ubiquitous AI connectivity, innovative silicon, and open standards.
  • An 800G PAM-4 DSP PHY for AI workloads at scale. The BCM85822 features 200G/lane serial optical interfaces, enabling the lowest-power, highest-performance 800G and 1.6T optical transceiver modules to address the growing bandwidth demands of hyperscale data centers and cloud networks.
  • A high-performance fabric for AI networks. Networks based on Jericho3-AI will help handle the ever-expanding workloads that AI will present.
  • A family of Ethernet switch/router chips that delivers a major performance boost for AI/ML infrastructure and is available for production deployments.
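As a back-of-the-envelope check of the lane math in the list above: the BCM85822's 200G/lane serial interfaces imply that an 800G module aggregates four lanes and a 1.6T module eight. The helper below is an illustrative sketch (not a Broadcom API); the assumption that module bandwidth is simply lanes times per-lane rate is ours.

```python
# Back-of-the-envelope lane math for the 200G/lane optical interfaces above.
# Assumption (ours, not from the release): module bandwidth = lanes * per-lane rate.

PER_LANE_GBPS = 200  # 200G/lane serial optical interface, per the release


def module_bandwidth_gbps(lanes: int, per_lane_gbps: int = PER_LANE_GBPS) -> int:
    """Aggregate bandwidth of an optical module with the given lane count."""
    return lanes * per_lane_gbps


print(module_bandwidth_gbps(4))  # 800  -> an 800G module needs 4 x 200G lanes
print(module_bandwidth_gbps(8))  # 1600 -> a 1.6T module needs 8 x 200G lanes
```

The same arithmetic explains why moving from 100G/lane to 200G/lane halves the lane count (and associated SerDes power) for a module of a given capacity.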


About Broadcom

Broadcom Inc. (NASDAQ: AVGO) is a global technology leader that designs, develops, and supplies a broad range of semiconductor, enterprise software and security solutions. Broadcom's category-leading product portfolio serves critical markets including cloud, data center, networking, broadband, wireless, storage, industrial, and enterprise software. Our solutions include service provider and enterprise networking and storage, mobile device and broadband connectivity, mainframe, cybersecurity, and private and hybrid cloud infrastructure. Broadcom is a Delaware corporation headquartered in Palo Alto, CA. For more information, visit Broadcom's website.

Broadcom, the pulse logo, and Connecting everything are among the trademarks of Broadcom. The term "Broadcom" refers to Broadcom Inc., and/or its subsidiaries. Other trademarks are the property of their respective owners.

Press Contacts:

Khanh Lam

Global Communications



Jon Piazza

Global Communications




