NVIDIA Jetson T5000: This is the compact 'brain' for physical AI.

  • Blackwell GPU with 2,560 CUDA cores and 96 Tensor Cores: up to 2,070 TFLOPS FP4 (sparse) and MIG to isolate workloads.
  • 14-core Arm Neoverse‑V3AE CPU, 128 GB LPDDR5X, and integrated NVMe storage; industrial I/O with 4 × 25 GbE and camera input via QSFP/MIPI.
  • Multimedia engine for multiple 4K/8K streams, generative framework support (Llama, Gemini, Qwen), and Isaac GR00T N1.5.
  • T5000/T4000 options for different power budgets; AGX Thor kit from $3,499 and modules from $2,999.

NVIDIA Jetson T5000

NVIDIA's Jetson family adds a new protagonist designed for modern edge AI and robotics: the Jetson T5000, the linchpin of the AGX Thor development kit. Simply put, it's a system-on-module with workstation-class performance that fits into a surprisingly compact form factor and is designed to run generative models, multimodal perception, and real-time control without relying on the cloud.

Beyond the marketing, what makes the difference is the combination of a Blackwell-architecture GPU, a latest-generation Arm Neoverse CPU, and generous LPDDR5X memory. This base delivers up to 2,070 TFLOPS (FP4, with sparsity), figures that set the bar very high for humanoid robots, industrial manipulators, drones, or autonomous vehicles that require millisecond latencies.

What is the Jetson T5000, and why does it matter for physical AI?

The Jetson T5000 is the computing module that NVIDIA integrates into the Jetson AGX Thor Developer Kit, a platform aimed at developers working with robots and AI systems that interact with the real world. The kit's format and dimensions resemble a mini-PC, but its raison d'être is edge AI and robotics: processing sensor data, making decisions, and driving motors without going through external servers.

The company refers to this new wave of machines that perceive, understand, and interact in unstructured environments as "physical AI." The T5000 addresses this need with high-performance local computing that can run multimodal models (language, vision, action) while combining cameras, lidar, and microphones with precision control.

The launch arrives with the backing of a broad ecosystem: Amazon Robotics, Meta, Caterpillar, Agility Robotics, Figure AI, Hexagon, Medtronic, and Boston Dynamics, as well as laboratories at Stanford, Carnegie Mellon, and the University of Zurich, are already testing or adopting Jetson Thor for prototypes and deployments.

For those wondering about barriers to entry, NVIDIA already markets both the development kit and production modules. The former ships "ready to go," with connectivity, storage, and cooling integrated, designed to accelerate laboratory testing and the subsequent transfer to production.

Jetson AGX Thor Developer Kit

Architecture and performance: Blackwell at the service of robotics

The graphics heart of the T5000 is a GPU based on the Blackwell architecture with 2,560 CUDA cores and 96 fifth-generation Tensor Cores. This combination enables advanced inference, lightweight training at the edge, and, very importantly, Multi-Instance GPU (MIG) with up to 10 texture processing clusters to isolate workloads and allocate resources precisely.

In numbers, the module reaches up to 2,070 TFLOPS (FP4, sparse) and 1,035 TFLOPS in FP8 in theoretical scenarios. Translated into practice: the ability to run state-of-the-art generative AI and visual reasoning models with millisecond latencies, coordinating multiple sensors and actuators in parallel.
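The two quoted peaks fit the usual convention on recent NVIDIA GPUs, where FP4 runs at twice the FP8 rate. A quick sketch (assuming both figures are measured with the same sparsity setting, which the spec sheet doesn't spell out):

```python
# Relationship between the quoted peaks, assuming the standard 2x step
# between precisions: halving the FP4 figure recovers the FP8 one.
fp4_tflops = 2070          # quoted FP4 (sparse) peak
fp8_tflops = fp4_tflops / 2

print(fp8_tflops)  # 1035.0, matching the quoted FP8 figure
```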

The CPU keeps pace: a 64-bit, 14-core Arm Neoverse‑V3AE with 2 MB of L2 cache per core, a shared 16 MB L3, and a maximum frequency of 2.6 GHz. Compared with Jetson AGX Orin, NVIDIA estimates the jump at 7.5× more AI compute, 3.1× more CPU performance, and 3.5× more energy efficiency.

To sustain this data flow, the T5000 pairs 128 GB of LPDDR5X on a 256-bit bus, with up to 273 GB/s of bandwidth. This lets large models and batches of frames from multiple cameras reside in RAM without throttling the vision pipeline.
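To see why that bandwidth matters for generative models, one can estimate a latency floor: a memory-bound LLM must stream every weight from RAM once per generated token. A rough sketch, where the model size and quantization are hypothetical:

```python
def min_decode_latency_ms(model_params_b: float, bytes_per_param: float,
                          bandwidth_gbs: float = 273.0) -> float:
    """Lower bound on per-token latency for a memory-bound LLM: every
    parameter is read from RAM once per generated token."""
    model_bytes = model_params_b * 1e9 * bytes_per_param
    return model_bytes / (bandwidth_gbs * 1e9) * 1e3

# Hypothetical 8B-parameter model quantized to FP4 (0.5 bytes per parameter):
# 4 GB of weights streamed at 273 GB/s gives a floor of ~14.7 ms per token,
# i.e. a ceiling of roughly 68 tokens/s, before any compute cost.
print(round(min_decode_latency_ms(8, 0.5), 1))  # 14.7
```

The same arithmetic explains why FP4 quantization matters so much at the edge: halving the bytes per parameter doubles the achievable token rate on the same memory bus.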

Jetson T5000 Technical Features

Memory, storage and PCIe expansion

The development kit includes a 2 TB NVMe M.2 SSD as standard, perfect for local datasets, logs, maps, and models. For expansion, it offers an M.2 Key M (NVMe) slot with PCIe Gen5 x4 and an M.2 Key E slot (e.g., WLAN/Bluetooth) with one PCIe Gen4 lane, plus exposed USB 2.0, UART, and I2C.

At the level of deeper PCIe topologies, the module supports up to Gen5 (8 lanes), and its controllers can operate as root port or endpoint in various combinations (1 C2, 8 C4, 4 C5), with root-port-only options as well (1 C1 and 2 C3). This flexibility is ideal for adding PCIe accelerators, sensors, or communication cards.

The set of USB ports covers both high-performance and maintenance peripherals: 2 USB‑A 3.2 Gen2, 2 USB‑C 3.1, and an xHCI host controller with up to 3 USB 3.2 and 4 USB 2.0 ports: ideal for USB cameras, debugging interfaces, or hubs.

For embedded systems, there is a wide assortment of I/O: I2C, SPI, UART, PWM, and CAN (two 13-pin headers), plus JTAG connectors, audio headers, a fan connector (12 V, PWM, and tachometer), and a Micro-Fit power connector with RTC battery backup.

Video, cameras and screen outputs

One of the strong points of the AGX Thor kit with the T5000 is its multimedia engine. In encoding, it achieves 6 × 4Kp60 (H.265), 12 × 4Kp30 (H.265), 24 × 1080p60 (H.265), 50 × 1080p30 (H.265), 48 × 1080p30 (H.264), and 6 × 4Kp60 (H.264). In decoding, it reaches 4 × 8Kp30 (H.265), 10 × 4Kp60 (H.265), 22 × 4Kp30 (H.265), 46 × 1080p60 (H.265), 92 × 1080p30 (H.265), 82 × 1080p30 (H.264), and 4 × 4Kp60 (H.264). These figures are more than enough for multi-camera 4K/8K capture and complex vision pipelines.
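These stream counts track total pixel rate, which makes it easy to trade resolutions off against each other when planning a pipeline. For example, 4K (3840×2160) carries exactly four times the pixels of 1080p (1920×1080), so two of the quoted H.265 figures describe the same aggregate budget:

```python
# Quick check that 6 x 4Kp60 and 24 x 1080p60 are the same pixel budget.
def pixel_rate(width: int, height: int, fps: int, streams: int) -> int:
    """Aggregate pixels per second for a set of identical streams."""
    return width * height * fps * streams

assert pixel_rate(3840, 2160, 60, 6) == pixel_rate(1920, 1080, 60, 24)
print(pixel_rate(3840, 2160, 60, 6))  # 2985984000 pixels/s in both cases
```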

On the camera side, the ecosystem is broad: Holoscan Sensor Bridge (HSB) camera input via QSFP, USB cameras, up to 20 cameras via HSB, up to 6 cameras via 16 MIPI CSI‑2 lanes, and up to 32 logical cameras via virtual channels. It supports C‑PHY 2.1 (10.25 Gbps) and D‑PHY 2.1 (40 Gbps), which fits robot designs loaded with sensors and IMUs.

For direct display, the kit offers one HDMI 2.0b port and one DisplayPort 1.4a. In T4000 module configurations, additional outputs are listed (up to 4 shared HDMI 2.1 and VESA DP 1.4a HBR2 with MST), aimed at other product and signage profiles and very useful for HMI displays.

The result is a platform capable of capturing, preprocessing, inferring, and visualizing locally, with a robust video pipeline and tight timing for critical applications.

Networks and connectivity for sensor swarms

The T5000 is geared toward scenarios with heavy input/output. The kit's networking includes a QSFP28 interface with four 25 GbE channels (4 × 25 GbE), plus a 5 GbE RJ45 connector for conventional networks. This combination enables both high-throughput backbones and direct links to existing infrastructure.
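A back-of-envelope estimate illustrates what that backbone buys for sensor swarms. The raw format here (1080p60 at 12 bits per pixel) is an illustrative assumption, and protocol overhead is ignored:

```python
# How many uncompressed camera feeds fit on the 4 x 25 GbE backbone?
# Assumes 1080p60 at 12 bits/pixel raw; real links lose some capacity
# to Ethernet/transport overhead.
bits_per_frame = 1920 * 1080 * 12
stream_gbps = bits_per_frame * 60 / 1e9   # ~1.49 Gb/s per camera
backbone_gbps = 4 * 25                    # 100 Gb/s aggregate

print(int(backbone_gbps // stream_gbps))  # 66 raw streams fit
```

In practice most pipelines compress at the source, so the quoted "up to 20 cameras via HSB" is comfortably inside this envelope.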

The package is completed with Built-in Wi-Fi 6E on the reference carrier board and support for NVMe storage via PCIe. This simplifies the deployment of mobile robots, collaborative arms, or inspection platforms that combine wired and wireless networking and local storage.

By including Multi-Instance GPU (MIG), it is feasible to separate services: perception, path planning and language models can run in distinct “slices” of the GPU, each with its own QoS, even when bursts of sensor traffic arrive over 25 GbE.
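That service split can be sketched as a simple partition of the clusters the article quotes. The 10-cluster total comes from the text; the per-service allocation is purely hypothetical, not an NVIDIA recommendation:

```python
# Illustrative partition of the GPU across robot services, in the spirit
# of the MIG isolation described above.
TOTAL_CLUSTERS = 10  # MIG texture processing clusters quoted for the T5000

services = {
    "perception": 4,      # multi-camera vision pipeline
    "planning": 2,        # path planning and control
    "language_model": 4,  # on-device generative model
}

# A valid partition must not oversubscribe the GPU.
assert sum(services.values()) <= TOTAL_CLUSTERS, "partition exceeds the GPU"
for name, clusters in services.items():
    print(f"{name}: {clusters}/{TOTAL_CLUSTERS} clusters")
```

The point of such a static partition is exactly the QoS argument above: a burst in one service (say, a flood of camera frames) cannot starve the slice running the planner.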

This squarely industrial approach makes its niche clear: it is not a typical home mini-PC, but a robotics and edge-AI platform with the network throughput and I/O for demanding environments.

Consumption, thermals and physical format

In power terms, the Jetson T5000 operates in a nominal range of 40 to 130 W, spanning laboratory tests with restrained profiles to deployments that need everything the module can deliver. Its sibling, the T4000, targets tighter power budgets, moving between 40 and 70 W (some listings show 75 W, depending on configuration and thermal limits).

The AGX Thor kit measures 243.19 × 112.40 × 56.88 mm, compact for its class, and includes a thermal transfer plate (TTP) and active fan cooling, with the option of alternative heatsinks. The T4000 module, meanwhile, shrinks to 100 × 87 mm with a 699-pin board-to-board connector and a thermal plate with an integrated heat pipe.

In real-world installations, this thermal and power headroom translates into more freedom: from 24/7 warehouse robots with sustained loads to mobile platforms that prioritize hours of autonomy with scalable consumption profiles.
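For mobile platforms, that 40-130 W envelope maps directly onto battery life. A rough estimate, where the pack size and the non-compute overhead are hypothetical values chosen for illustration:

```python
def runtime_hours(battery_wh: float, module_w: float, overhead_w: float) -> float:
    """Rough battery-life estimate: pack capacity divided by the compute
    module's draw plus everything else (motors, sensors, comms)."""
    return battery_wh / (module_w + overhead_w)

# Hypothetical 500 Wh pack with 100 W of non-compute overhead:
print(round(runtime_hours(500, 130, 100), 2))  # 2.17 h at the 130 W peak
print(round(runtime_hours(500, 40, 100), 2))   # 3.57 h in a 40 W profile
```

The spread between the two profiles is why configurable power budgets matter: the same robot can trade inference throughput for more than an extra hour of autonomy.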

Fan headers with PWM control and a tachometer, along with thermal profiling support, help maintain stability when the inference engine runs at full capacity.

Software, GR00T and compatible models

NVIDIA accompanies the hardware with its Jetson stack optimized for low latency and high performance in inference. The ecosystem supports popular generative AI and reasoning frameworks and models, including Cosmos Reason, DeepSeek, Llama, Gemini, and Qwen, as well as robotics-specific components such as Isaac GR00T N1.5.

GR00T (Generalist Robot 00 Technology) adds a key ingredient: it lets robots learn by observing. An operator executes a task, the robot "sees" it, and the model translates that sequence into instructions the machine can replicate and adapt. This reduces the need for step-by-step programming and accelerates the learning of complex skills.

Jensen Huang sums it up with a powerful idea: we are on the threshold of physical artificial intelligence. Running multiple models in real time on the robot itself, with shared resources and task prioritization, marks a turning point for humanoids and autonomous agents.

For R&D teams, support for multiple frameworks and the ability to run inference locally, without the cloud, isn't just a privacy bonus; it also removes the dependence on connectivity and reduces latency, which is critical for human-robot collaboration.

Use cases and industry adoption

The list of applications covers virtually the entire spectrum of advanced robotics: humanoids that collaborate in factories, smart tractors, surgical assistants, delivery robots, industrial manipulators, visual agents in unstructured environments, and inspection drones with onboard processing.

Companies like Figure AI, Boston Dynamics, Sanctuary AI, Agility Robotics, and even Tesla (each with its own hardware strategy) are in the race for the perfect mechanical “body”; NVIDIA, with the Thor T5000, is putting the high-performance “brain” that many need to close the circle.

There is also interest from giants such as Amazon Robotics, Meta, Caterpillar, and John Deere, and from academia (Stanford, Carnegie Mellon, and the University of Zurich). For some, Jetson Thor may represent the fast track from prototype to pilot plant.

The cost factor is not minor: the development kit starts at $3,499, and production T5000 modules are priced at around $2,999 for orders of 1,000 units. For bulk purchases, NVIDIA has mentioned prices of around $3,000 per kit, which clarifies the initial investment for robot fleets.

Jetson T5000 vs. T4000: Which One to Choose?

Alongside the T5000, NVIDIA offers a more modest option: the Jetson T4000. The first specifications cite 1,200 TFLOPS (FP4, sparse), a Blackwell GPU with 1,536 CUDA cores and 64 Tensor Cores, plus 64 GB of LPDDR5X on a 256-bit bus (273 GB/s). In other words, half the memory and significantly lower inference throughput, in exchange for lower consumption.

For MIG, the T4000 offers 6 texture processing clusters, and at the network level up to 3 × 25 GbE is quoted, versus 4 × 25 GbE on the T5000. It is designed for robots and systems where the thermal and power budget is tight but big-league connectivity and latency are still required.

If the project demands maximum throughput for multimodal perception and generative models, the T5000 is the natural candidate. If, instead, efficiency and cost take priority under a moderate workload, the T4000 fits better.
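That trade-off can be condensed into a small decision helper built only from the figures quoted in this article; the function name and thresholds are illustrative, not NVIDIA guidance:

```python
# Illustrative selector between the two modules, using the article's specs:
# T5000: 2,070 TFLOPS FP4 sparse, 40-130 W, 128 GB; T4000: 1,200 TFLOPS,
# 40-70 W (some listings 75 W), 64 GB.
def pick_jetson_module(power_budget_w: float, needed_fp4_tflops: float) -> str:
    if needed_fp4_tflops > 1200 or power_budget_w > 75:
        return "T5000"  # beyond what the T4000 can compute or dissipate
    if power_budget_w >= 40:
        return "T4000"  # fits the tighter envelope at lower cost
    return "neither: both modules need at least 40 W"

print(pick_jetson_module(120, 1800))  # T5000
print(pick_jetson_module(70, 900))    # T4000
```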

At the display output level, some T4000 configurations list up to 4 shared HDMI 2.1 outputs in addition to DP 1.4a, a clue to its vocation for signage or multi-display environments when the full firepower of the T5000 is not needed.

Prices, availability and related formats

The Jetson AGX Thor Developer Kit, which includes the T5000 module, a carrier board with extensive connectivity, a 1TB NVMe SSD, Wi-Fi 6E, USB ports, and video outputs, is available for pre-order at $3,499. Shipments are expected from November 20, 2025, at selected retailers.

Production Jetson T5000 modules start at $2,999 per unit for orders of 1,000 pieces. For the T4000, prices starting at $1,999 have been indicated, aimed at deployments with tighter power and cost budgets.

In parallel, NVIDIA is marketing an automotive development kit (Drive AGX Thor) based on the same technology for autonomous vehicles, with tiered availability by region. This reinforces the idea of a cross-industry platform capable of adapting to multiple sectors.

With this pricing and options scheme, teams can quickly validate with the kit and migrate to production modules, maintaining software and I/O compatibility to reduce industrialization times.

What the Jetson T5000 offers is a solid, scalable foundation for bringing next-gen AI to the physical world: plenty of power (2,070 TFLOPS FP4 sparse), a 14-core Arm Neoverse CPU, 128 GB of LPDDR5X, massive video encode/decode, QSFP/MIPI camera input, 4 × 25 GbE networking, modern USB ports, and a low-latency-oriented software stack. Add GR00T and support for leading-edge models, and you have a "brain" ready for humanoids, industrial robots, and autonomous agents that need to work in real time without being tied to the cloud.
