
Cisco Silicon One G300: 102.4 Tbps ASIC Powering the Next Generation of AI Data Centers

Full technical deep-dive into Cisco's most advanced switching silicon — announced at Cisco Live EMEA 2026, Amsterdam.


Cisco Silicon One G300 — 102.4 Tbps full-duplex switching ASIC for AI data center networking. Announced February 10, 2026 at Cisco Live EMEA, Amsterdam.

102.4 Tbps Switching Capacity
512 × 200G On-Chip SerDes
64 × 1.6T Max Port Config
128K GPUs Per Fabric
28% Faster Job Completion
H2 2026 Availability

1. Overview: What Is the Cisco Silicon One G300?

On February 10, 2026, at Cisco Live EMEA in Amsterdam, Cisco unveiled the Silicon One G300 — its most powerful custom networking ASIC to date and the latest generation in its Silicon One family. The G300 is a 102.4 Tbps full-duplex standalone switching processor purpose-built for the demands of modern AI data centers: massive GPU clusters, low-latency all-to-all communication, nanosecond-granularity congestion control, and the sustained throughput required to prevent the network from becoming the bottleneck in AI training and inference workloads.

The announcement marks a significant acceleration of Cisco's transformation from a legacy routing and switching hardware vendor into a company positioning itself as the end-to-end AI infrastructure stack. As Jeetu Patel, Cisco's President and Chief Product Officer, declared at the event: the company is "spearheading performance, manageability, and security in AI networking by innovating across the full stack — from silicon to systems and software."

The G300 is not merely a faster chip. It introduces a new architectural philosophy called Intelligent Collective Networking (ICN), which coordinates behaviour across every G300 ASIC in a network fabric — sharing congestion state, telemetry, and load information at hardware speed. Combined with a fully unified packet buffer, P4 programmability, and a new generation of liquid-cooled switch platforms, the G300 represents Cisco's most ambitious push into AI infrastructure to date.

📌 The Big Picture The G300 directly targets the $600+ billion AI infrastructure spending cycle. Cisco is competing head-to-head with Broadcom's Tomahawk 6 and Nvidia's Spectrum-X Ethernet ecosystem for the backend networking fabric in hyperscale AI training clusters, sovereign clouds, neoclouds, and large enterprise AI deployments.

2. G300 Architecture and Technical Specifications

The Cisco Silicon One G300 builds directly on the foundation established by its predecessor, the G200, while delivering a generational leap in raw bandwidth and architectural sophistication. At its core, the G300 is a single monolithic die designed for deterministic, low-latency switching at unprecedented scale.

SerDes and Port Configuration

The G300 integrates 512 × 200 Gbps SerDes developed entirely in-house at Cisco. These serializer/deserializers support both PAM4 (four-level pulse amplitude modulation) and NRZ (non-return to zero) signaling, providing both the ultra-high-speed performance demanded by AI scale-out networks and backwards compatibility with existing 100G and 400G infrastructure.

Those 512 SerDes lanes can be configured in multiple ways. In their maximum configuration, they support 64 × 1.6 Tbps ports — directly enabling connections to the latest generation of AI compute NICs. They can also be broken out into 128 × 800G, 256 × 400G, or 512 × 200G configurations depending on the deployment topology.
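The lane arithmetic behind those breakout options can be sketched in a few lines. This is illustrative maths only, derived from the figures in Cisco's announcement; the lanes-per-port groupings are the standard mapping implied by the port speeds, not a Cisco-published table.

```python
# Sketch: how 512 x 200G SerDes lanes map onto the G300's published
# port configurations. Every breakout totals the same front-panel
# bandwidth; only the lane grouping changes.

TOTAL_LANES = 512
LANE_SPEED_GBPS = 200

breakouts = {
    "64 x 1.6T":  8,   # 8 lanes bonded per port
    "128 x 800G": 4,
    "256 x 400G": 2,
    "512 x 200G": 1,
}

for name, lanes_per_port in breakouts.items():
    ports = TOTAL_LANES // lanes_per_port
    port_speed_g = lanes_per_port * LANE_SPEED_GBPS
    total_tbps = ports * port_speed_g / 1000
    print(f"{name}: {ports} ports x {port_speed_g}G = {total_tbps:.1f} Tbps")
# Each configuration works out to 102.4 Tbps of front-panel bandwidth.
```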

GPU Cluster Scale

A G300 fabric can connect up to 128,000 GPUs using just 750 switches. Achieving the same GPU count with the prior generation of silicon would have required approximately 2,500 switches — a 3.3× reduction in switch count that translates directly into lower capital cost, lower power consumption, less physical space, and a flatter, simpler network topology.
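A quick back-of-the-envelope check of the switch-count claim, using only the figures quoted above:

```python
# Verifying the stated reduction: 128,000 GPUs on 750 G300 switches
# versus ~2,500 prior-generation switches.
gpus = 128_000
g300_switches = 750
prev_gen_switches = 2_500

reduction = prev_gen_switches / g300_switches
print(f"Switch-count reduction: {reduction:.1f}x")          # 3.3x
print(f"GPUs served per G300 switch: {gpus / g300_switches:.0f}")
```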

Full Specifications Table

Cisco Silicon One G300 — Verified Technical Specifications
| Parameter | Specification | Notes |
| --- | --- | --- |
| Switching Capacity | 102.4 Tbps full-duplex | Single-chip, standalone |
| On-Chip SerDes | 512 × 200 Gbps (PAM4 / NRZ) | In-house Cisco SerDes |
| Max Port Speed | 64 × 1.6 Tbps Ethernet | 128 × 800G / 256 × 400G also supported |
| Packet Buffer | Fully shared (unified) | Any port can consume the full buffer |
| Load Balancing | ICN fabric-wide, path-based | Nanosecond granularity |
| Telemetry | In-band, hardware-speed | Shared across all G300s in fabric |
| P4 Programmable | Yes | Field-reprogrammable packet processing |
| OS Support | NX-OS, IOS-XR, ACI, SONiC | Full disaggregation supported |
| Max GPUs per Fabric | 128,000 GPUs | Using 750 switches (vs 2,500 prev. gen) |
| Job Completion Time | 28% reduction | vs non-optimized path selection |
| Network Utilization | 33% increase via ICN | vs baseline |
| Announced | February 10, 2026 | Cisco Live EMEA, Amsterdam |
| Availability | H2 2026 | Pricing not yet disclosed |

⚠️ Performance Caveat The 28% job completion time improvement and 33% network utilization gain are compared against what Cisco describes as "simulated non-optimized path selection." Real-world gains will depend heavily on existing network architecture, workload type, and baseline configuration. These figures should be treated as directional indicators, not guaranteed outcomes in every deployment.

3. Intelligent Collective Networking (ICN) — The Key Differentiator

The single most important architectural innovation in the G300 is not the raw bandwidth — 102.4 Tbps is now table stakes at this tier of silicon. What Cisco is betting on is a fundamentally different approach to how a network fabric behaves as a system, not just as a collection of independent forwarding devices. This approach is called Intelligent Collective Networking (ICN).

ICN is composed of three interlocking technologies that work together at hardware speed across the entire G300 fabric.


Cisco's Intelligent Collective Networking (ICN) — three integrated pillars that differentiate the G300. Together they deliver 33% higher network utilization and 28% faster AI job completion.

Pillar 1: Fully Shared (Unified) Packet Buffer

In conventional switch ASICs — including Broadcom's Tomahawk series — packet memory is partitioned between ingress and egress queues. Each port is allocated a fixed portion of the total buffer. When one port is idle and another is experiencing a burst, the idle port's buffer allocation sits unused while the busy port drops packets.

The G300 eliminates this constraint entirely. Its packet buffer is fully shared and undivided — every port has equal access to the entire buffer pool. This matters enormously for AI workloads, which are characterised by violent, highly synchronised traffic bursts — the so-called "incast" problem where hundreds of GPUs simultaneously transmit to a single destination at the completion of a computation step.
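The difference between the two buffer models can be seen with a toy incast model. All numbers here are illustrative assumptions for the sketch, not G300 specifications:

```python
# Toy model of the incast problem: a synchronised burst lands on one
# egress port while the other ports sit idle. A statically partitioned
# buffer limits that port to its own slice; a fully shared buffer lets
# it absorb the whole burst.

TOTAL_BUFFER_KB = 1024   # assumed total packet memory (illustrative)
NUM_PORTS = 64

def drops(burst_kb: float, shared: bool) -> float:
    """Return KB dropped when `burst_kb` arrives at a single port."""
    available = TOTAL_BUFFER_KB if shared else TOTAL_BUFFER_KB / NUM_PORTS
    return max(0.0, burst_kb - available)

burst = 500.0  # KB of incast traffic aimed at one destination port
print(f"Partitioned buffer drops: {drops(burst, shared=False):.0f} KB")  # 484 KB
print(f"Shared buffer drops:      {drops(burst, shared=True):.0f} KB")   # 0 KB
```

Under partitioning, the busy port sees only 1024/64 = 16 KB of buffer and drops the rest; with a shared pool the same burst fits entirely.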

Pillar 2: Path-Based Load Balancing with Fabric-Wide Coordination

Standard Equal-Cost Multi-Path (ECMP) routing assigns traffic flows to paths using a hash of the five-tuple. This approach is stateless and fast — but blind to actual congestion. Two large elephant flows can hash to the same path, creating a hot spot, while an alternative path sits idle.

The G300's load balancing agents operate at a completely different level. Each G300 in the fabric continuously shares telemetry with every other G300, building a real-time map of queue depths, link utilisation, and flow states across the entire network. Path selection is based on actual current congestion across all available paths — driving the 33% improvement in network utilisation.
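The contrast between the two approaches can be sketched as follows. The `fabric_load` table stands in for the telemetry ICN shares between switches; this is a conceptual illustration, not Cisco's actual algorithm, and the path names and load values are invented.

```python
# Stateless five-tuple ECMP hashing vs congestion-aware path selection.
import hashlib

PATHS = ["spine-0", "spine-1", "spine-2", "spine-3"]

def ecmp_path(five_tuple: tuple) -> str:
    """Stateless: hash the flow's five-tuple; congestion is invisible."""
    digest = hashlib.sha256(repr(five_tuple).encode()).hexdigest()
    return PATHS[int(digest, 16) % len(PATHS)]

def congestion_aware_path(fabric_load: dict) -> str:
    """Stateful: pick the least-loaded path from fabric-wide telemetry."""
    return min(PATHS, key=lambda p: fabric_load[p])

flow = ("10.0.0.1", "10.0.1.1", 49152, 4791, "UDP")  # RoCEv2-style flow
load = {"spine-0": 0.91, "spine-1": 0.15, "spine-2": 0.88, "spine-3": 0.40}

print("ECMP picks:", ecmp_path(flow))                    # congestion-blind
print("ICN-style picks:", congestion_aware_path(load))   # spine-1 (0.15)
```

Two elephant flows that happen to hash to the same spine will collide under ECMP no matter how idle the other paths are; the congestion-aware selector avoids this by construction.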

Pillar 3: Proactive Network Telemetry

Every G300 in the fabric acts as both a telemetry consumer and a telemetry producer. In-band telemetry is collected and shared at hardware speed — not at software polling intervals — giving the load balancing system a continuously updated, fabric-wide view of congestion, link health, and flow distribution. When congestion is building on a particular path, traffic is shifted proactively rather than reactively.

4. New Switches: Cisco N9000 and 8000 Series

The G300 ASIC powers a new generation of purpose-built switches spanning both the Cisco Nexus 9000 and Cisco 8000 product families. Cisco offers both 100% liquid-cooled and air-cooled variants, targeting different deployment contexts from hyperscale AI training clusters to enterprise data center fabrics.


Cisco G300 switch platform ecosystem — liquid-cooled and air-cooled options across the N9000 and 8000 series, plus 1.6T OSFP and 800G LPO optics.

Cisco N9364-SG3 — Liquid-Cooled AI Fabric Leaf

The N9364-SG3 is a Nexus 9000 series switch built around the G300 with a 100% liquid-cooled thermal design. It offers 64 ports at up to 1.6T line rate and supports NX-OS, ACI, and SONiC. Liquid cooling enables the N9364-SG3 to achieve approximately 70% better energy efficiency per bit compared to the prior generation of air-cooled switches. A single N9364-SG3 delivers the same total bandwidth as six prior-generation systems — an exceptional density gain.

Cisco 8132 — Liquid-Cooled Spine for Service Providers and AI Fabric

The Cisco 8132 is a Cisco 8000 series platform running IOS-XR, designed for service provider edge deployments and AI fabric spine roles. It offers 64 × 1.6T ports in a 100% liquid-cooled chassis and brings Cisco's carrier-grade IOS-XR feature set to the AI data center spine role — particularly relevant for operators needing to bridge WAN and AI fabric environments under a single OS.

Air-Cooled N9000 3RU Model

For deployments where liquid cooling infrastructure is not available or practical, Cisco offers a 3RU air-cooled Nexus 9000 model powered by the G300. This platform provides the same 64 × 1.6T port density in a conventional air-cooled chassis and supports all four OS options: NX-OS, IOS-XR, ACI, and SONiC — making it the most flexible deployment choice for organisations mid-transition on liquid cooling.

Cisco G300 Switch Platform Comparison
| Platform | Cooling | Ports | OS Support | Target Role |
| --- | --- | --- | --- | --- |
| N9364-SG3 | 100% liquid | 64 × 1.6T | NX-OS, ACI, SONiC | AI fabric leaf, enterprise data center |
| Cisco 8132 | 100% liquid | 64 × 1.6T | IOS-XR, SONiC | SP edge, AI fabric spine |
| N9000 3RU (air) | Air-cooled | 64 × 1.6T | NX-OS, IOS-XR, ACI, SONiC | General data center, enterprise AI |

5. New Optics: 1.6T OSFP and 800G LPO

Alongside the G300 silicon and switch platforms, Cisco announced two new optics products that complete the AI networking stack. Both address the same core problem: moving data between GPUs, NICs, and switches at the highest possible bandwidth with the lowest possible power consumption.

1.6T OSFP — Ultra-High-Bandwidth AI Scale-Out Optics

OSFP (Octal Small Form-Factor Pluggable) is the industry's current highest-density optical form factor, supporting up to 8 × 200G lanes in a single module. Cisco's new 1.6T OSFP modules deliver 1.6 Tbps of raw optical bandwidth per port — directly matching the G300's maximum per-port speed. These modules target switch-to-NIC links in AI scale-out deployments and support multiple operational modes: 1.6T, 800G, 400G, and 200G, providing a smooth migration path for organisations upgrading GPU cluster interconnects incrementally.

800G LPO — Linear Pluggable Optics for 50% Power Reduction

LPO (Linear Pluggable Optics) eliminates the DSP retimer chip found in traditional optical modules. Conventional retimed modules use a DSP to re-time and re-amplify the electrical signal before converting it to optical — this provides cleaner signal integrity but consumes significant power. LPO removes the DSP entirely, converting the raw electrical signal directly to optical — consuming approximately 50% less power than retimed equivalents.

💡 Why LPO Matters at Scale In a large AI cluster with thousands of optical connections, a 50% reduction in per-module power consumption translates into a massive reduction in total system power. LPO requires a cleaner electrical signal from the host ASIC — which is why Cisco's in-house 200G SerDes development is directly tied to enabling LPO support on the G300.
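The fleet-level effect is easy to estimate from the ~50% per-module figure. The module count and per-module wattage below are illustrative assumptions, not Cisco numbers:

```python
# Rough fleet-level estimate of LPO power savings.
modules = 10_000          # optical links in a large AI cluster (assumed)
retimed_watts = 16.0      # assumed draw of a retimed 800G module
lpo_watts = retimed_watts * 0.5   # ~50% reduction per the announcement

saved_kw = modules * (retimed_watts - lpo_watts) / 1000
print(f"Estimated saving: {saved_kw:.0f} kW across {modules} modules")  # 80 kW
```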

6. P4 Programmability — What It Means and Why It Matters

One of the G300's most strategically significant capabilities is its support for P4 (Programming Protocol-Independent Packet Processors) — an open-source domain-specific language that allows the packet processing pipeline of a network device to be described, modified, and reprogrammed entirely in software, without changing any hardware.

In a traditional fixed-function ASIC, the packet processing pipeline is hardwired at fabrication time. If a new forwarding feature is needed, it requires a new chip revision. P4 changes this fundamentally: the G300's match-action pipeline is programmable in the field, meaning new forwarding behaviours, new encapsulations, new telemetry collection methods, and new load balancing algorithms can all be deployed via P4 code updates without hardware replacement.
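The match-action idea can be modelled in ordinary Python. Real P4 programs compile down to the ASIC pipeline and look nothing like this; the sketch below only illustrates the core point that forwarding behaviour becomes installable data rather than fixed hardware logic. All names here are invented for illustration.

```python
# Conceptual model of a P4-style match-action table.
from typing import Callable

class MatchActionTable:
    def __init__(self) -> None:
        self.entries: list[tuple[dict, Callable[[dict], None]]] = []

    def add_entry(self, match: dict, action: Callable[[dict], None]) -> None:
        """Install a rule: if all match fields equal, run the action."""
        self.entries.append((match, action))

    def apply(self, packet: dict) -> None:
        for match, action in self.entries:
            if all(packet.get(k) == v for k, v in match.items()):
                action(packet)
                return
        packet["egress_port"] = None  # default action: drop

def set_egress(port: int) -> Callable[[dict], None]:
    return lambda pkt: pkt.update(egress_port=port)

# "Reprogramming" the switch is just installing different table entries;
# the pipeline itself never changes.
ipv4_lpm = MatchActionTable()
ipv4_lpm.add_entry({"dst_prefix": "10.1.0.0/16"}, set_egress(7))

pkt = {"dst_prefix": "10.1.0.0/16", "ttl": 64}
ipv4_lpm.apply(pkt)
print(pkt["egress_port"])  # 7
```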

For the G300, this has direct practical implications. A single G300 platform can serve as a leaf switch, a spine switch, or a front-end internet edge device — the role is determined not by the hardware specification but by the P4 program loaded onto the chip. This dramatically simplifies hardware lifecycle management and sparing.

✅ P4 Business Value For hyperscalers, P4 access means the ability to implement proprietary forwarding and telemetry behaviours that no off-the-shelf ASIC could match — genuine competitive differentiation in AI cluster network performance.

7. SONiC and Disaggregation: Cisco Plays the Open Networking Card

The G300 fully embraces SONiC (Software for Open Networking in the Cloud) — the Linux-based, open-source network operating system originally developed by Microsoft and now maintained under the Linux Foundation. SONiC is the OS of choice for hyperscalers and neoclouds that want full control over their networking stack without vendor lock-in.

Cisco is offering two distinct disaggregation options for the G300:

  • Cisco Silicon + Cisco Switch + SONiC: Customers get the G300's performance advantages in a Cisco-built chassis but replace NX-OS/ACI with SONiC. This is the simplest path for a hyperscaler wanting G300 performance without changing their SONiC-based automation toolchain.
  • Cisco Silicon + Whitebox Hardware + SONiC: The most disaggregated option — Cisco sells the G300 ASIC to an ODM or whitebox hardware partner, which builds a custom chassis, and the customer runs SONiC on top. Cisco benefits from silicon revenue without requiring the customer to adopt any other Cisco hardware or software.

This two-tier disaggregation strategy is a pragmatic acknowledgement that the networking market has changed. Hyperscalers are not going to buy into a fully closed Cisco stack. By offering the G300 silicon for use in SONiC environments and even whitebox hardware, Cisco maximises the addressable market for its custom silicon.

8. Nexus One and AgenticOps: AI-Driven Network Management

The G300 silicon announcement was accompanied by a significant software update to Nexus One — Cisco's unified management platform for data center networking. Nexus One now delivers a unified management plane that brings together silicon, systems, optics, software, and programmable intelligence as a single integrated solution.

The headline feature of the updated Nexus One is AgenticOps — an AI-driven operations capability built around AI Canvas, a natural language interface that allows network operators to troubleshoot data center fabric issues through guided, human-in-the-loop conversations. Instead of manually navigating CLI outputs, syslog streams, and vendor documentation, engineers can describe a problem in natural language and AI Canvas will guide them through the diagnostic and resolution workflow, pulling together telemetry from across the G300 fabric in real time.

📌 AgenticOps Target Market Nexus One with AgenticOps specifically targets enterprises, which have historically been at a disadvantage compared to hyperscalers in terms of available automation tooling and operational expertise for large-scale fabric management.

9. Competitive Landscape: G300 vs Broadcom TH6 vs Nvidia Spectrum-4

Cisco is entering a competitive silicon market where both Broadcom and Nvidia have established, shipping products. Understanding how the G300 fits into this landscape requires an honest assessment of what it offers, what it lacks, and the timing implications.


AI networking silicon comparison — Cisco G300 vs Broadcom TH6 vs Nvidia Spectrum-4. Both G300 and TH6 share 102.4 Tbps capacity; key differentiators lie in buffer architecture, load balancing and programmability.

Broadcom Tomahawk 6

Broadcom's Tomahawk 6 was announced in June 2025 and has been shipping since then, giving it roughly a one-year head start over the G300. It matches the G300's 102.4 Tbps capacity and 512 × 200G SerDes specification. The key architectural difference is the packet buffer: Broadcom partitions memory between ingress and egress, while Cisco claims a fully shared buffer. TH6 powers switches from Arista, HPE Juniper Networking, and others in the Broadcom ecosystem, all shipping today.

Nvidia Spectrum-4 (Spectrum-X)

Nvidia's Spectrum-4 chip operates at a lower switching capacity — 51.2 Tbps — placing it in a different market tier than the G300 and TH6. However, Nvidia's differentiation is its deep integration with its own GPU ecosystem: Spectrum-X is designed in tight conjunction with ConnectX-8 NICs, SHARP in-network computing, and the CUDA/NCCL software stack. For pure Nvidia GPU clusters, Spectrum-X can deliver AI workload optimisations that neither Cisco nor Broadcom can match through silicon integration alone.

The Timing Risk

Cisco's biggest competitive vulnerability with the G300 is timing. With Broadcom TH6 already shipping and Nvidia Spectrum-X well-established, the G300's H2 2026 availability means Cisco will be entering a market where customers have already made purchasing decisions for their current generation of AI cluster buildouts.

🚫 Timing Consideration The AI infrastructure spending boom is happening now, not in late 2026. Cisco's bet is that the G300's architectural advantages — particularly ICN and the fully shared buffer — will be compelling enough to be designed into the next wave of AI cluster expansion rather than the current one.

AI Networking Silicon — Detailed Competitive Comparison (2026)

| Specification | Cisco G300 | Broadcom TH6 | Nvidia Spectrum-4 |
| --- | --- | --- | --- |
| Switching Capacity | 102.4 Tbps | 102.4 Tbps | 51.2 Tbps |
| SerDes | 512 × 200G | 512 × 200G | 256 × 200G |
| Max Port Speed | 1.6 Tbps | 1.6 Tbps | 800 Gbps |
| Packet Buffer | Fully shared | Fragmented (ingress/egress) | Fully shared |
| Load Balancing | ICN fabric-wide | Standard ECMP + flowlet | RoCE / DCQCN |
| P4 Programmable | Full P4 | Limited (BF-RT) | Yes (SONiC P4) |
| SONiC Support | Yes (native) | Yes (native) | Partial |
| Liquid Cooling | Yes (N9364-SG3, 8132) | Vendor-dependent | Yes (QM9700) |
| GPU Ecosystem | All vendors | All vendors | Optimised for Nvidia |
| Available | H2 2026 | Shipping now (2025) | Shipping now (2023) |
| Key Differentiator | ICN + unified buffer | Ecosystem breadth | Nvidia GPU integration |

10. Who Should Consider the G300? Use Cases and Target Markets

Cisco is explicitly targeting four customer segments with the G300, each with different requirements and different reasons to evaluate the platform.

Hyperscalers and Neoclouds

These are customers building the largest GPU clusters in the world. For them, the G300's SONiC support and whitebox disaggregation options are the entry point. The ICN fabric-wide load balancing and unified buffer are the performance differentiators. If the G300 genuinely delivers 33% better link utilisation and 28% faster job completion, the economics of a 128,000-GPU cluster make the switch cost largely irrelevant relative to GPU utilisation gains.

Sovereign Clouds and National AI Infrastructure

Governments and national cloud providers building AI infrastructure under sovereignty requirements need a trusted supply chain, multi-OS flexibility, and deep integration with existing Cisco infrastructure. The G300's NX-OS, IOS-XR, and ACI support — alongside SONiC — makes it arguably the most OS-flexible AI networking silicon on the market.

Large Enterprises Building Private AI Clusters

Enterprise customers deploying on-premises AI clusters at the 1,000–10,000 GPU scale need enterprise-grade management (Nexus One / ACI), operational support, and integration with existing data center infrastructure. The G300's integration with Nexus One and AgenticOps, and its ability to operate within an existing ACI fabric, makes it particularly well-suited for this segment. The air-cooled 3RU model is specifically designed for enterprises that have not yet invested in liquid cooling infrastructure.

Service Providers and Carrier AI Infrastructure

The Cisco 8132 — the IOS-XR variant of the G300 switch — targets service providers building AI-as-a-service and inference cloud infrastructure. For SP operators, IOS-XR compatibility with existing WAN and peering infrastructure is critical. The 8132 positions the G300 at the intersection of the SP data center edge and the AI cluster spine, enabling a single platform to serve both roles.

11. Availability, Pricing and Ecosystem

Cisco confirmed that G300-powered systems are expected to be commercially available in the second half of 2026. The announcement was made at Cisco Live EMEA on February 10, 2026, giving the industry approximately six months of lead time before general availability. Cisco declined to share specific pricing figures at the Cisco Live briefing — a common approach for pre-GA announcements where pricing is still being finalised based on customer engagement and competitive positioning.

The G300 is supported by Cisco's full partner and reseller ecosystem. Cisco offers complete validated designs, professional services, TAC support, and lifecycle management across the entire G300 stack — a significant consideration for enterprise and sovereign cloud customers who lack the in-house expertise of a hyperscaler.

📌 Related Articles on The Network DNA For context on the broader Cisco data center portfolio, see our deep-dive guides on Cisco ACI architecture and our comparison of data center networking platforms.

12. Frequently Asked Questions

Q: What is the Cisco Silicon One G300?

The G300 is a 102.4 Tbps full-duplex standalone switching ASIC from Cisco's Silicon One family, announced February 10, 2026 at Cisco Live EMEA. It integrates 512 × 200G in-house SerDes, supports 64 × 1.6T port configurations, and introduces Intelligent Collective Networking (ICN) — a combination of fully shared packet buffer, path-based fabric-wide load balancing, and proactive in-band telemetry.

Q: How does the G300 compare to Broadcom Tomahawk 6?

Both offer 102.4 Tbps with 512 × 200G SerDes and 1.6T port support. The key differences: the G300 uses a fully shared unified packet buffer (TH6 uses fragmented ingress/egress buffers), implements ICN fabric-wide load balancing (TH6 uses standard ECMP), and provides full P4 programmability (TH6 offers more limited BF-RT programmability). However, TH6 has been shipping since 2025 — the G300 is not available until H2 2026.

Q: What operating systems does the G300 support?

The G300 supports NX-OS (Nexus 9000 series), IOS-XR (Cisco 8000 series), ACI (Cisco Application Centric Infrastructure), and SONiC (disaggregated deployments). It also supports whitebox hardware running SONiC for fully disaggregated deployments targeting hyperscalers and neoclouds.

Q: How many GPUs can a G300 fabric support?

A G300 fabric can support up to 128,000 GPUs using 750 switches — compared to 2,500 switches needed with the prior generation of silicon. The high-radix design enables a flatter, simpler network topology that directly reduces latency and switch count at scale.

Q: What is P4 programmability and why does it matter for the G300?

P4 is an open-source language for defining packet processing pipelines. On the G300, P4 support means the forwarding behaviour of the chip — parsing, matching, encapsulation, load balancing algorithms, telemetry collection — can all be reprogrammed in the field without hardware replacement. This allows a single G300 platform to serve as leaf, spine, or edge switch depending on the P4 program loaded.

Q: What is the 800G LPO and why is it significant?

LPO (Linear Pluggable Optics) removes the DSP retimer chip from optical transceiver modules, converting the electrical signal directly to optical without digital re-timing. This eliminates approximately 50% of the power consumption of conventional retimed modules. At the scale of a large AI cluster with thousands of optical connections, this represents a very significant reduction in total power consumption.

Q: When is the Cisco G300 available and what does it cost?

The G300 is expected to be commercially available in the second half of 2026. Cisco has not publicly disclosed pricing. The announcement was made on February 10, 2026 at Cisco Live EMEA in Amsterdam.


Sources: Cisco Newsroom press release (Feb 10, 2026), Cisco Silicon One G300 Data Sheet, Cisco Live EMEA 2026 technical briefing. All performance figures as stated by Cisco. The 28% job completion time and 33% network utilization improvements are measured against simulated non-optimized path selection as defined by Cisco.