NVIDIA DGX H100 with 8x NVIDIA H100 Tensor Core GPUs, Dual Intel® Xeon® Platinum 8480C Processors, 2TB Memory, 2x 1.92TB NVMe M.2 & 8x 3.84TB NVMe U.2.
Buy with confidence knowing all Broadberry CyberServe rack servers are backed by our 3-year warranty, with further warranty upgrade options available.
The Broadberry range of CyberServe® rackmount servers is trusted as the server of choice by thousands of companies, from SMBs to enterprises, across the globe.
GPU SuperComputing servers offer massive processing power and HPC performance, considerably accelerating applications.
This system is configured with NVMe drives, which deliver significantly higher throughput and lower latency than SATA and SAS drives.
The NVIDIA DGX H100 boasts high-density storage and excellent energy efficiency, with 8x high-performance 3.84TB NVMe U.2 SSDs alongside 2x 1.92TB NVMe M.2 OS drives.
Ready for the most demanding enterprise applications, the Broadberry NVIDIA DGX H100 ships with 2TB of high-performance system memory.
The NVIDIA DGX H100 is an 8U rackmount server built around dual Intel® Xeon® Platinum 8480C processors (4th Gen Intel Xeon Scalable, 112 cores in total). Built using the latest enterprise-class server technology, the NVIDIA DGX H100 combines 2x NVMe M.2 and 8x NVMe U.2 fixed drives with 2TB of high-performance server memory, making it ideal for those requiring a combination of high performance and density. High-throughput networking, built-in IPMI 2.0 remote management and PCI-Express Gen5 I/O are just a few of its core features.
The NVIDIA DGX H100 rackmount server uses fixed drive bays rather than hot-swap caddies, offering a cheaper alternative to servers with hot-swappable drives and bringing the total system cost down when hot-swappable drives are not required.
Storage Technology
Originally designed for mobile applications such as laptops and notebooks, 2.5" drives are increasingly used in servers thanks to their lower power consumption and space-saving characteristics.
Configure for your Network
The NVIDIA DGX H100 is ready to be deployed into your network environment with a wide range of high-throughput connectivity options.
Configure the NVIDIA DGX H100 to match your application I/O demands and network infrastructure: its 8x single-port and 2x dual-port NVIDIA ConnectX-7 adaptors each support up to 400Gb/s InfiniBand or Ethernet.
Whether your application demands the highest networking throughput, or you're looking for a more modest enterprise-grade option such as Gigabit Ethernet, the NVIDIA DGX H100 offers a wide range of robust I/O connectivity options.
IPMI 2.0 & KVM over IP
Whereas most server manufacturers charge ongoing licence fees for IPMI, powerful, industry-standard server management is built into the NVIDIA DGX H100 at no extra cost.
Using the dedicated management LAN port or the on-board Ethernet ports, you can take full control of the server through your web browser from anywhere in the world, as though you were standing in front of it, via a pre-defined IP address and password.
Check the health of the NVIDIA DGX H100's components, fan speeds and temperatures, update firmware, check logs, set SMTP alerts for your pre-configured thresholds, and list the health of all your servers on one simple screen.
The NVIDIA DGX H100's IPMI feature also allows complete Keyboard, Video and Mouse (KVM) control, and lets you send CD/DVD images over IP to install software and operating systems remotely.
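Exactly how you script against the BMC will vary by toolchain; as a rough illustration only (not Broadberry- or NVIDIA-supplied tooling), the sketch below polls the sensors and event log described above over the network using the open-source ipmitool utility from Python. The BMC address and credentials are placeholders you would replace with your pre-defined IP address and password.

```python
# Minimal sketch: polling a BMC's health over IPMI with ipmitool from Python.
# Assumes ipmitool is installed and the BMC is reachable; BMC_HOST, BMC_USER
# and BMC_PASS are placeholders, not values from this page.
import subprocess

BMC_HOST = "192.0.2.10"   # pre-defined IPMI address (example)
BMC_USER = "admin"        # placeholder credential
BMC_PASS = "changeme"     # placeholder credential

def ipmi(*args: str) -> str:
    """Run an ipmitool command against the BMC over the lanplus interface."""
    cmd = ["ipmitool", "-I", "lanplus", "-H", BMC_HOST,
           "-U", BMC_USER, "-P", BMC_PASS, *args]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

if __name__ == "__main__":
    print(ipmi("chassis", "power", "status"))   # power state
    print(ipmi("sdr", "type", "Temperature"))   # temperature sensor readings
    print(ipmi("sdr", "type", "Fan"))           # fan speeds
    print(ipmi("sel", "list", "last", "10"))    # last 10 system event log entries
```

The same checks can be wrapped in a loop or scheduler to watch a whole fleet of servers from one place.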
Expand with PCI-Express Expansion
The NVIDIA DGX H100 exposes its high-speed I/O over PCI-Express Gen5, including its NVIDIA ConnectX-7 network adaptors. With enterprise-class connectivity built in, the NVIDIA DGX H100 is a flexible platform for a wide range of applications.
We're so confident in our great-value solutions that if you find a cheaper genuine price for the same specification elsewhere, we'll match it.
You can order either by calling us and speaking to one of our experienced US technical sales team, or by pressing "Email Quote".
Leading Security
We take security seriously. Broadberry servers offer the latest technologies including TPM, self-encryption and military-grade FIPS certification.
Best Price Guarantee
Broadberry offer a Best Price Guarantee on all server and storage solutions. You won't find the same spec cheaper anywhere else.
Ease of Management
Feature-rich IPMI with out-of-band management and a Redfish API allows easy management and monitoring of your Broadberry systems.
Customer Service
Unlike other leading manufacturers, Broadberry assign all customers a dedicated account manager with direct access. There are no call centres!
Trouble-Free Integration
Our trained engineers will pre-configure your server to your requirements, and are here to help over the phone when your server arrives.
3-Year Warranty
All Broadberry solutions come with a comprehensive 3-5 year warranty, with a range of upgraded warranty options available.
Easy Installation
From clip-in slide rails to pre-configured RAID and a pre-installed OS, our engineers make installation fast and simple!
Standardisation
Our open-platform approach offers high compatibility with your existing infrastructure and no vendor lock-in.
Flexibility
Every business is different. Configure every element of your solution to your requirements using our powerful online configurator.
Scalability
Our servers are built to grow with your organisation, whether that be by purchasing additional drives or additional scale-out nodes.
Innovation
Since the introduction of the first Broadberry PC in 1989, we’ve continually innovated and responded to emerging technologies and markets.
GSA Schedule Holder
We offer rapid GSA scheduling for custom configurations. For specific hardware requirements we can have your configuration posted on the GSA Schedule within 2-4 weeks.
Artificial intelligence has evolved into the preferred method for tackling complex business obstacles.
Whether it's enhancing customer support, fine-tuning supply chains, extracting valuable business insights, or creating cutting-edge products and services using generative AI and other transformer models, AI equips organizations across a wide range of industries with the means to achieve innovation. As a pioneer in AI infrastructure, NVIDIA DGX™ offers the most powerful and complete AI platform to turn these pivotal concepts into reality.
The NVIDIA DGX H100 serves as a catalyst for business innovation and optimisation. As a vital component of the DGX platform and the latest evolution of NVIDIA's legendary DGX systems, the DGX H100 stands as the AI powerhouse at the core of NVIDIA DGX SuperPOD™, empowered by the revolutionary performance of the NVIDIA H100 Tensor Core GPU. This system is meticulously engineered to maximise AI throughput, providing enterprises with a meticulously designed, standardised, and scalable foundation to drive advancements in natural language processing, recommender systems, data analytics, and more. Available for on-site deployment and accessible through a diverse range of access and deployment options, the DGX H100 offers the performance essential for enterprises to conquer their most significant AI challenges.
The DGX H100 serves as a comprehensive hardware and software platform to establish your AI Center of Excellence. It incorporates NVIDIA Base Command™ along with the complete NVIDIA AI Enterprise software suite, complemented by guidance from NVIDIA DGXperts.
The NVIDIA DGX H100 delivers 6X more performance, 2X faster networking, and exceptional high-speed scalability compared with its predecessor. Its architecture is optimised to excel at the most demanding workloads, including generative AI, natural language processing, and deep learning recommendation models.
Experience the capabilities of the DGX H100 through various flexible options that align with your business needs, whether it's on-site, co-located, rented through managed service providers, and more. Plus, with DGX-Ready Lifecycle Management, organisations can benefit from a reliable financial framework to maintain their deployments at the forefront of technology.
Unprecedented Performance, Scalability, and Security for Every Data Center
Highest AI and HPC Performance
4PF FP8 (6X) | 2PF FP16 (3X) | 1PF TF32 (3X) | 60TF FP64 (3X) | 3TB/s (1.5X), 80GB HBM3 memory
Transformer Model Optimisations
6X faster on largest transformer models
Highest Utilisation Efficiency and Security
7 fully isolated & secured instances with guaranteed QoS (2nd Gen MIG) | Confidential Computing
Fastest, Scalable Interconnect
900GB/s GPU-to-GPU connectivity (1.5X), up to 256 GPUs with NVLink Switch | 128GB/s PCIe Gen5
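As a quick, hedged illustration of how these figures can be verified on a running system (assuming the NVIDIA driver and nvidia-smi are installed; this is not vendor-supplied tooling), the sketch below lists the installed GPUs, prints the NVLink/NVSwitch topology matrix, and reports per-GPU memory and MIG mode.

```python
# Sketch: sanity-checking GPU count, NVLink topology and MIG mode with nvidia-smi.
# Assumes the NVIDIA driver is installed; output formats may vary by driver version.
import subprocess

def run(cmd):
    """Run a command and return its stdout as text."""
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

print(run(["nvidia-smi", "-L"]))          # a DGX H100 should list 8x H100 GPUs
print(run(["nvidia-smi", "topo", "-m"]))  # NVLink/NVSwitch GPU-to-GPU connectivity matrix
print(run(["nvidia-smi",                  # per-GPU memory (80GB HBM3) and MIG mode
           "--query-gpu=name,memory.total,mig.mode.current",
           "--format=csv"]))
```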
The gold standard for AI infrastructure
Specifications | Details |
---|---|
GPUs | 8x NVIDIA H100 GPUs With 640 Gigabytes of Total GPU Memory |
NVLink Connections | 18x NVIDIA NVLink connections per GPU, 900 gigabytes per second of bidirectional GPU-to-GPU bandwidth |
Memory Bandwidth | 24 TB/s memory bandwidth |
NVSwitches | 4x NVIDIA NVSwitches, 7.2 terabytes per second of bidirectional GPU-to-GPU bandwidth, 1.5X more than the previous generation |
Network Interface | 10x NVIDIA ConnectX-7 400 Gigabits-Per-Second Network Interface, 1 terabyte per second of peak bidirectional network bandwidth |
Processors and Memory | Dual Intel® Xeon® Platinum 8480C Processors (112 Cores Total) and 2 TB System Memory, powerful CPUs and massive system memory for the most intensive AI jobs |
Storage | 30 Terabytes NVMe SSD, high-speed storage for maximum performance |
AI Performance | 32 petaFLOPS AI performance |
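For context, the system totals in the table above follow from the per-GPU figures quoted earlier on this page; the quick back-of-envelope check below uses those per-GPU values as assumptions (80GB HBM3, roughly 4 petaFLOPS FP8 with sparsity, and 900GB/s of NVLink bandwidth per H100).

```python
# Back-of-envelope check: DGX H100 system totals as 8x the per-GPU headline figures.
# Per-GPU values are assumptions taken from the figures quoted above.
gpus = 8
hbm3_per_gpu_gb = 80          # GB of HBM3 per H100
fp8_per_gpu_pflops = 4        # approx. peak FP8 petaFLOPS per H100 (with sparsity)
nvlink_per_gpu_gb_s = 900     # GB/s bidirectional NVLink bandwidth per GPU

print(gpus * hbm3_per_gpu_gb, "GB total GPU memory")                    # 640 GB
print(gpus * fp8_per_gpu_pflops, "petaFLOPS FP8 AI performance")        # 32 petaFLOPS
print(gpus * nvlink_per_gpu_gb_s / 1000, "TB/s GPU-to-GPU bandwidth")   # 7.2 TB/s
```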
DGX H100 and DGX A100 | Alternatives |
---|---|
NVES and NVIDIA DGXperts | Multiple vendors, finger-pointing, DIY support |
Optimised software to streamline AI development | Open-source tools and frameworks without enterprise-grade support |
Fully integrated, field-proven | Untested platforms requiring integration effort |
Enterprise-grade job scheduling and resource orchestration | Separate tools for job scheduling and orchestration |
Comprehensive cluster management | Homebrew management software |
Accelerated infrastructure libraries | Generic, unoptimised infrastructure software |
Secure, fully optimised operating system | General-purpose OS, not optimised for AI |
Performance optimised for AI | Commodity hardware |
GPU | 8x NVIDIA H100 Tensor Core GPUs |
GPU memory | 640GB Total |
Performance | 32 petaFLOPS FP8 |
NVIDIA NVSwitch | 4x |
System power usage | 10.2kW max |
CPU | Dual Intel Xeon Platinum 8480C Processors, 112 Cores Total, 2.00GHz (Base), 3.80GHz (Max Boost) |
System memory | 2TB |
Networking | 4x OSFP ports serving 8x single-port NVIDIA ConnectX-7 VPI, up to 400Gb/s InfiniBand/Ethernet; 2x dual-port QSFP112 NVIDIA ConnectX-7 VPI, up to 400Gb/s InfiniBand/Ethernet |
Management network | 10Gb/s onboard NIC with RJ45; 100Gb/s Ethernet NIC; host baseboard management controller (BMC) with RJ45 |
Storage | OS: 2x 1.92TB NVMe M.2 |
Internal storage | 8x 3.84TB NVMe U.2 |
Software | NVIDIA AI Enterprise (optimised AI software); NVIDIA Base Command (orchestration, scheduling and cluster management); DGX OS / Ubuntu / Red Hat Enterprise Linux / Rocky (operating system) |
Support | Comes with 3-year business-standard hardware and software support |
System weight | 130.45kg / 287.6lb |
Packaged system weight | 170.45kg / 376lb |
System dimensions | Height: 356mm / 14.0in; Width: 482.2mm / 19.0in; Length: 897.1mm / 35.3in |
Operating temperature range | 5-30°C / 41-86°F |
Platform Manageability | Intel Server Platform Services (Intel SPS) and Intel Resource Director Technology (Intel RDT) |
Storage Manageability | Intel Volume Management Device (Intel VMD) |
GPU | NVIDIA L4 | NVIDIA RTX6000 ADA | NVIDIA L40S | NVIDIA H100 | NVIDIA GH200 |
---|---|---|---|---|---|
Application | Virtualised Desktop, Graphical and Edge Applications | High-end Design, Real-time Rendering, High-performance Compute Workflows | Multi-modal Generative AI, and Graphics and Video Workflows | LLM Inference, AI and Data Analytics | Generative AI, LLM Inference, and Memory-intensive Applications |
Architecture | Ada Lovelace | Ada Lovelace | Ada Lovelace | Hopper | Grace Hopper |
SMs | 60 | 142 | 142 | 114 | 144 |
CUDA Cores | 7,424 | 18,176 | 18,176 | 18,432 | 18,432 |
Tensor Cores | 240 | 568 | 568 | 640 | 576 |
Frequency | 795 MHz | 915 MHz | 1,110 MHz | 1,590 MHz | 1,830 MHz |
FP32 TFLOPs | 30.3 | - | 91.6 | 51 | 67 |
FP16 TFLOPs | 242 | 91.1 | 733 | 1,513 | 1,979 |
FP8 TFLOPs | 485 | - | 1,466 | 3,026 | 3,958 |
Cache | 48 MB | 96 MB | 48 MB | 50 MB | 60 MB |
Max. Memory | 24 GB | 48 GB | 48 GB | 80 GB | 512 GB |
Memory B/W | 300 GB/s | 960 GB/s | 864 GB/s | 2,000 GB/s | 546 GB/s |