Friday, November 7, 2025

JENSEN HUANG'S BIG BEAUTIFUL KEYNOTE ADDRESS and HUANG'S LAW

HUANG'S LAW and A SUMMARY OF HIS KEYNOTE ADDRESS

Here are some detailed notes and conceptual analysis of Jensen Huang's keynote address at the NVIDIA GTC in Washington, D.C., on October 28, 2025.

The central business announcement, which subsequently propelled NVIDIA to a $5 trillion market capitalization, was Huang's statement that the company now has "visibility into half a trillion dollars" ($500 billion) in cumulative orders for its Blackwell and upcoming Rubin platforms through 2026.

Oh, wait… GTC stands for GPU Technology Conference, NVIDIA's stage for the next Industrial Revolution. Jensen Huang is the High Priest of this conference. The conference is a major platform for unveiling new NVIDIA technologies and setting the direction for future advancements in AI and computing.

Detailed Summary of GTC Keynote (October 28, 2025)

The keynote was themed around "America's AI Infrastructure" and the "Next Industrial Revolution." It was a strategic presentation focused on NVIDIA's role in building national AI capabilities.

Here are the key announcements by category:

1. Next-Generation Platform Roadmap (The "One-Year Rhythm")

Huang reaffirmed NVIDIA's aggressive one-year release rhythm, moving beyond the chip to full-platform co-design.

  • Blackwell (Current Generation): The Grace Blackwell platform (e.g., GB200 NVL72) is in full production at facilities in the U.S. (Arizona), reinforcing the "Made in America" theme. The Blackwell Ultra chip will be released later in 2025. (Not much time left.)
  • Vera Rubin (Next Generation): The next major platform, named after the astrophysicist Dr. Vera Rubin, is scheduled for 2026. This is not just a GPU; it is an entire system architecture.

(Vera Rubin was an American astronomer whose work provided convincing evidence for the existence of dark matter.)

2. U.S. National AI Infrastructure & Supercomputing

This is a core theme, positioning NVIDIA as a national strategic asset.

  • U.S. Department of Energy (DOE) Partnership: NVIDIA announced it is powering seven new AI supercomputers for the DOE.
  • "Solstice" Supercomputer: The largest of these, built in partnership with Oracle, will be one of the world's most powerful AI systems, featuring 100,000 NVIDIA Blackwell GPUs to support national security, energy, and science applications.
  • Los Alamos National Lab (LANL): LANL's next-generation systems will be among the first built on the upcoming Vera Rubin platform.

3. New Frontiers: 6G and Quantum Computing

Huang detailed NVIDIA's expansion into two new, highly complex compute domains.

  • 6G Telecommunications:
    • Hello! - Nokia Partnership: A $1 billion strategic partnership with Nokia to develop an "AI-native 6G" platform.
    • NVIDIA Arc Aerial RAN Computer: A new, 6G-ready computing platform designed to infuse AI services directly into the mobile network. (For me and you…)
    • "All-American AI-RAN Stack": A collaboration with T-Mobile, Cisco, and MITRE to build a U.S.-based 6G development stack.
  • Quantum Computing:
    • NVIDIA NVQLink: A new interconnect architecture designed to bridge the gap between classical and quantum computing. It allows NVIDIA GPUs to be directly linked to Quantum Processing Units (QPUs), enabling a hybrid quantum-classical system for complex simulations.
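
To make the hybrid idea concrete, here is a minimal sketch of the control flow NVQLink is meant to serve: a classical optimization loop on the CPU, heavy numerics offloaded to the GPU, and a quantum circuit evaluated on a QPU. The functions runOnGpu() and runOnQpu() are hypothetical stand-ins, not the NVQLink or CUDA-Q API; only the division of labor is the point.

```cpp
#include <cstdio>
#include <vector>

// Hypothetical stand-in for work accelerated on the GPU (e.g., updating the
// parameters of a variational quantum circuit). Not a real CUDA-Q / NVQLink call.
std::vector<float> runOnGpu(const std::vector<float>& params) {
    std::vector<float> out = params;
    for (float& p : out) p *= 0.9f;          // pretend "optimizer step"
    return out;
}

// Hypothetical stand-in for dispatching the circuit to a QPU over the
// low-latency interconnect and reading back an expectation value.
float runOnQpu(const std::vector<float>& circuitParams) {
    float energy = 0.0f;
    for (float p : circuitParams) energy += p * p;
    return energy;
}

int main() {
    std::vector<float> params = {0.8f, -0.3f, 0.5f};
    float best = 1e9f;
    for (int iter = 0; iter < 10; ++iter) {  // classical outer loop on the CPU
        params = runOnGpu(params);           // classical heavy lifting -> GPU
        float energy = runOnQpu(params);     // quantum part of the algorithm -> QPU
        if (energy < best) best = energy;
        printf("iter=%d energy=%.4f\n", iter, energy);
    }
    printf("best energy=%.4f\n", best);
    return 0;
}
```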

4. Physical AI (Robotics & Autonomous Systems)

Huang declared "Physical AI" as the next major wave, where AI agents perceive and interact with the physical world.

  • NVIDIA "Groot N1" Foundation Model: A new, general-purpose foundation model for humanoid robots.
  • Newton Simulation Platform: A new high-fidelity physics engine (an evolution of Omniverse) designed to simulate robots and their environments for training.
  • "Project Blue": A collaborative humanoid robot project demonstrated with partners Google DeepMind and Disney Research.
  • Autonomous Vehicles:
    • Uber Partnership: A major partnership to deploy 100,000 autonomous robotaxis, powered by NVIDIA DRIVE, starting in 2027.
    • DRIVE Hyperion Platform: Expanded adoption by automakers including GM, Stellantis, Lucid, and Mercedes-Benz.

5. Geopolitical and Market Context

Hey, there is money to be made here…

  • $5 Trillion Valuation: The keynote's $500B order visibility statement was the primary catalyst for NVIDIA's stock surge, making it the first company to achieve this valuation.
  • China Market: NVIDIA's market share in China has fallen from 95% to "effectively zero" due to U.S. export controls and Beijing's policies.
  • "America First" Alignment! Huang explicitly praised the Trump administration's "America First" policies for incentivizing and revitalizing U.S. manufacturing, which he stated enabled NVIDIA's new U.S.-based production.

Here Are PhD-Level Lecture Notes and a Conceptual Analysis of All That Was Said

Below are the core theoretical theses presented by Huang, abstracted from the product announcements.

Thesis 1: The End of Classical Scaling Paradigms

  • Concept: The death of Moore's Law and, more importantly, Dennard Scaling (which stated power density remains constant as transistors get smaller) is now an accepted industry fact.
  • Argument: Sequential processing (CPU-centric) can no longer deliver the performance gains required. The only path forward is Accelerated Computing, a hybrid model where parallel processors (GPUs) work in tandem with sequential processors (CPUs).
  • Evidence: The entire keynote was a demonstration of this thesis. The core software foundation, CUDA-X, is the "operating system" for this new computing model, and every new hardware platform is designed to accelerate this specific paradigm. (You need to follow the link if you want to get an idea of what CUDA is.)
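
As a minimal, self-contained illustration of the accelerated-computing split (a toy, not anything from CUDA-X itself): the sequential setup stays on the CPU, while the data-parallel arithmetic is handed to the GPU as a CUDA kernel. Compile with nvcc.

```cpp
// Minimal sketch of the accelerated-computing split: sequential work stays on the
// CPU, the data-parallel arithmetic moves to the GPU. Toy vector addition only.
#include <cstdio>
#include <cuda_runtime.h>

// GPU kernel: each thread adds one pair of elements, all in parallel.
__global__ void vecAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);
    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);      // unified memory keeps the sketch short
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }   // sequential setup (CPU)

    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(a, b, c, n);                    // parallel work (GPU)
    cudaDeviceSynchronize();

    printf("c[0] = %.1f (expect 3.0)\n", c[0]);
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```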

Thesis 2: The New Computing Model: "Generative" vs. "Retrieval"

  • Concept: Huang articulated a fundamental shift in the purpose of computing.
    • Retrieval Computing (The Past): The old internet and all prior computing were based on retrieval. A user requests information, and the system fetches a pre-written, pre-stored piece of data (a webpage, a document, a video).
    • Generative Computing (The Future): Modern AI models do not retrieve. They receive a prompt, understand the context and meaning, and generate a novel, never-before-seen answer (a "token"). (Hello, Agentic…)
  • Financial Implication: This new model is computationally far more expensive and requires a new infrastructure. The basic unit of this new infrastructure is the "AI Factory."
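
A toy sketch of the difference in computational shape, with a stand-in "model" rather than a real LLM: retrieval is a single lookup into pre-stored content, while generation is a loop in which every new token is computed from everything produced so far, which is why the cost (and the infrastructure bill) scales so differently.

```cpp
#include <iostream>
#include <string>
#include <unordered_map>
#include <vector>

// Retrieval computing: fetch a pre-stored answer.
std::string retrieve(const std::unordered_map<std::string, std::string>& index,
                     const std::string& query) {
    auto it = index.find(query);
    return it != index.end() ? it->second : "404";
}

// Generative computing: produce the answer one token at a time, each step
// conditioned on everything generated so far. The "model" here is a toy rule.
std::vector<int> generate(std::vector<int> context, int maxNewTokens) {
    for (int step = 0; step < maxNewTokens; ++step) {
        // A real model would run a full forward pass here; this toy just mixes
        // the running context into a pseudo "next token".
        int next = 0;
        for (int t : context) next = (next * 31 + t) % 50257;
        context.push_back(next);   // every new token feeds the next step
    }
    return context;
}

int main() {
    std::unordered_map<std::string, std::string> web = {{"weather", "stored page about weather"}};
    std::cout << retrieve(web, "weather") << "\n";   // one lookup, fixed content

    auto tokens = generate({101, 2054, 2003}, 8);    // cost grows with every token
    for (int t : tokens) std::cout << t << " ";
    std::cout << "\n";
    return 0;
}
```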

Thesis 3: The New Scaling Law: "Extreme Co-Design"

  • A new concept is born: if single-chip performance (Moore's Law) is no longer the primary driver, gains must come from a new scaling law.

Huang's Law is "Extreme Co-Design."

  • Argument: Performance "X-factors" (multiplicative gains) are now achieved by co-designing the entire stack as a single product. This includes:

1.    Silicon: The GPU and CPU (Grace Blackwell).

2.    Interconnects: High-speed chip-to-chip links (NVLink).

(NVLink provides high-speed, direct GPU-to-GPU connectivity to increase performance; a minimal peer-access sketch follows at the end of this thesis.)

3.    Networking: The data center fabric (Spectrum-X Ethernet).

4.    Power & Cooling: Liquid-cooling and power delivery systems.

5.    Software: Optimized libraries (CUDA) and inference engines (NVIDIA Dynamo).

  • Evidence: The Grace Blackwell NVL72 is the canonical example. It's not sold as 72 separate GPUs but as a single, liquid-cooled "thinking machine," a single computational unit. The Vera Rubin platform continues this by integrating the BlueField-4 DPU directly into the system design.
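
A minimal sketch of what the interconnect layer of that co-designed stack looks like from the programmer's side, using the standard CUDA peer-access calls. The NVL72-scale NVLink switch fabric itself is managed below this level, so this is only the two-GPU view.

```cpp
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    if (count < 2) { printf("this sketch needs at least 2 GPUs\n"); return 0; }

    int canAccess = 0;
    cudaDeviceCanAccessPeer(&canAccess, 0, 1);   // can GPU 0 address GPU 1's memory?
    printf("peer access 0 -> 1: %s\n", canAccess ? "yes" : "no");

    const size_t bytes = 1 << 26;                // 64 MiB test buffer
    float *buf0 = nullptr, *buf1 = nullptr;
    cudaSetDevice(0); cudaMalloc((void**)&buf0, bytes);
    cudaSetDevice(1); cudaMalloc((void**)&buf1, bytes);

    if (canAccess) {
        cudaSetDevice(0);
        cudaDeviceEnablePeerAccess(1, 0);        // map GPU 1's memory into GPU 0's view
    }
    // Device-to-device copy: with peer access enabled this travels over the
    // GPU-to-GPU link (NVLink where present) instead of staging through host RAM.
    cudaMemcpyPeer(buf1, 1, buf0, 0, bytes);
    cudaDeviceSynchronize();

    cudaSetDevice(0); cudaFree(buf0);
    cudaSetDevice(1); cudaFree(buf1);
    return 0;
}
```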

Thesis 4: The Next Wave: From Digital AI to "Agentic & Physical AI"

  • Concept: Huang defined the next evolution of AI.
    • Agentic AI: Ha… It is AI that possesses agency. It can perceive its environment, understand the context, reason, create a plan, and act to accomplish a goal. You can find some more on Agentic AI here.
    • Physical AI: The application of Agentic AI to the physical world, i.e., robotics.
  • Argument: To create Physical AI, models must be trained to understand physics, 3D space, and cause-and-effect.
  • Evidence: This thesis justifies the new product stack:
    • Groot N1: The "brain" or foundation model for the robot.
    • Newton: The "gym" or virtual world where the brain is trained (via simulation and reinforcement learning) before being deployed in the physical world.
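
To pin down what "perceive, reason, plan, act" means as a program, here is a toy control loop. Every type and function is a hypothetical stand-in: in the real stack the plan() role is played by a learned foundation model (the Groot N1 role) and the environment role by a physics simulator (the Newton role).

```cpp
#include <array>
#include <cstdio>

struct Observation { std::array<float, 3> jointAngles; float goalDistance; };
struct Action      { std::array<float, 3> jointTorques; };

// Stand-in environment: one simulation step of a trivial "reach the goal" task.
Observation stepEnvironment(const Observation& obs, const Action& act) {
    Observation next = obs;
    next.goalDistance = obs.goalDistance - 0.1f * act.jointTorques[0];
    return next;
}

// Stand-in policy: in the real system this is a trained model, not a hand rule.
Action plan(const Observation& obs) {
    Action a{};
    a.jointTorques[0] = obs.goalDistance > 0.0f ? 1.0f : 0.0f;  // push toward the goal
    return a;
}

int main() {
    Observation obs{{0.0f, 0.0f, 0.0f}, 1.0f};
    for (int t = 0; t < 20 && obs.goalDistance > 0.0f; ++t) {   // the agent loop
        Action act = plan(obs);              // reason + plan from the current state
        obs = stepEnvironment(obs, act);     // act, then perceive the new state
        printf("t=%d distance=%.2f\n", t, obs.goalDistance);
    }
    return 0;
}
```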

Thesis 5: The Software-Defined Physical Stack (6G & Quantum)

  • Concept: NVIDIA's strategy is to turn specialized, hardware-defined industries into software-defined ones running on NVIDIA GPUs.
  • Argument (6G): A 5G/6G base station is currently a complex box of fixed-function hardware (ASICs, FPGAs). The NVIDIA Arc platform turns it into a software-defined radio (SDR) running on a GPU. This allows telcos to push AI services (like AI-RAN) to the network edge as a software update. (See the kernel sketch after this list.)
  • Argument (Quantum): Quantum computers (QPUs) are brilliant at certain problems but useless at others. The NVQLink interconnect treats the QPU as a "quantum accelerator" in the same way a GPU is a "parallel accelerator," allowing developers to write hybrid algorithms (within CUDA) that pass workloads between the CPU, GPU, and QPU.
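
To make the 6G argument concrete, here is a minimal sketch using nothing but stock CUDA: one toy baseband step (per-subcarrier complex equalization) written as an ordinary GPU kernel. A production AI-RAN pipeline would use NVIDIA's Aerial libraries rather than hand-rolled kernels like this; the point is only that the radio signal chain becomes software you can update and co-locate with AI workloads on the same GPU.

```cpp
#include <cstdio>
#include <cuda_runtime.h>

// Multiply each received sample by a per-subcarrier equalizer coefficient (complex).
__global__ void equalize(const float2* rx, const float2* coeff, float2* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        float2 a = rx[i], b = coeff[i];
        out[i] = make_float2(a.x * b.x - a.y * b.y,   // real part
                             a.x * b.y + a.y * b.x);  // imaginary part
    }
}

int main() {
    const int n = 4096;                                // subcarriers in this toy slot
    const size_t bytes = n * sizeof(float2);
    float2 *rx, *coeff, *out;
    cudaMallocManaged(&rx, bytes);
    cudaMallocManaged(&coeff, bytes);
    cudaMallocManaged(&out, bytes);
    for (int i = 0; i < n; ++i) { rx[i] = make_float2(1.0f, 0.5f); coeff[i] = make_float2(0.0f, 1.0f); }

    equalize<<<(n + 255) / 256, 256>>>(rx, coeff, out, n);
    cudaDeviceSynchronize();
    printf("out[0] = (%.2f, %.2f)  [expect (-0.50, 1.00)]\n", out[0].x, out[0].y);

    cudaFree(rx); cudaFree(coeff); cudaFree(out);
    return 0;
}
```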

Subscribe to This Blog. It’s Free.

A video on NVIDIA's YouTube channel provides the full keynote address from GTC in Washington, D.C., from which these notes were made.

My Take:

If you understand Huang's Law of "Extreme Co-Design," and you understand where Agentic AI is going, you get a sense of how close we are to AGI.

www.mandylender.net  www.attractome.com  www.lendercombinations.com   www.mandylender.com

 Tags: #AI #GTC #NVIDIA #HuangLaw #extremecodesign #agentic   #JensenHuang  #Blackwell #VeraRubin  #CUDA
