
Nvidia has officially launched the DGX Spark, a compact AI supercomputer powered by the Grace Blackwell GB10 Superchip, with sales beginning October 15, 2025. Priced at $3,999 (approximately ₹3.5 lakh), the DGX Spark is designed to bring high-performance AI computing to developers, researchers, and students in a desktop-friendly form factor.
⚙️ Key Specifications
- Processor: Nvidia Grace Blackwell GB10 Superchip, featuring a 20-core Arm-based CPU integrated with a Blackwell GPU.
- Memory: 128GB of unified LPDDR5x system memory shared between CPU and GPU (see the sizing sketch after this list).
- Storage: Up to 4TB of NVMe SSD storage.
- Performance: Up to 1 petaflop (1,000 trillion operations per second) of AI compute at FP4 precision.
- Connectivity: Includes Wi-Fi 7, 10 GbE, and Nvidia ConnectX-7 networking.
- Form Factor: Compact desktop design measuring 150mm x 150mm x 50.5mm, weighing just 2.6 pounds.
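To put the 128GB of unified memory in perspective, here is a rough back-of-envelope sketch in Python, not an official Nvidia figure, estimating how much memory a model's weights occupy at different precisions. It ignores activations, KV cache, and framework overhead, all of which add to the real total.

```python
# Rough estimate of model weight footprint vs. the DGX Spark's 128 GB of
# unified memory. Real memory use is higher: activations, KV cache, and
# framework overhead are not counted here.

BYTES_PER_PARAM = {"fp16": 2.0, "int8": 1.0, "fp4": 0.5}  # bytes per weight
UNIFIED_MEMORY_GB = 128

def weight_footprint_gb(num_params: float, precision: str) -> float:
    """Approximate size of the model weights alone, in gigabytes."""
    return num_params * BYTES_PER_PARAM[precision] / 1e9

for params in (70e9, 200e9):
    for precision in ("fp16", "int8", "fp4"):
        size = weight_footprint_gb(params, precision)
        fits = "fits within" if size < UNIFIED_MEMORY_GB else "exceeds"
        print(f"{params / 1e9:.0f}B params @ {precision}: ~{size:.0f} GB "
              f"({fits} {UNIFIED_MEMORY_GB} GB)")
```

By this estimate, a 200-billion-parameter model needs roughly 100 GB at 4-bit precision, which is consistent with the inference claims in the next section, while the same model at FP16 (around 400 GB) would not fit.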
🧠 AI Capabilities
The DGX Spark is engineered to run advanced AI workloads locally, without relying on cloud-based resources. It supports inference on models of up to 200 billion parameters and fine-tuning of models with up to 70 billion parameters. These capabilities matter for tasks such as building AI agents, running multimodal applications, and conducting research in fields like healthcare and robotics.
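As an illustration of what local inference on a machine like this could look like, here is a minimal sketch using the Hugging Face transformers and bitsandbytes libraries to load a 4-bit-quantized open-weight model onto the local GPU. The model ID, quantization settings, and prompt are placeholder assumptions for the example, not an Nvidia-documented workflow.

```python
# Minimal local-inference sketch (assumes transformers, bitsandbytes, and a
# CUDA-capable device are available; the model ID below is illustrative).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

MODEL_ID = "meta-llama/Llama-3.1-70B-Instruct"  # placeholder open-weight model

# 4-bit quantization keeps the weight footprint well under 128 GB.
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    quantization_config=quant_config,
    device_map="auto",  # place layers on the local GPU automatically
)

prompt = "Summarize the key specs of a compact AI workstation."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Everything in this sketch runs against local hardware; no cloud endpoint is involved, which is exactly the workflow the DGX Spark is positioned for.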
🌍 Availability and Partners
The DGX Spark is available for order directly from Nvidia’s website and through authorized partners, including Acer, ASUS, Dell Technologies, GIGABYTE, HP, Lenovo, and MSI. These partners will offer their own customized variants of the DGX Spark for different user needs.
🔮 Looking Ahead
Nvidia has also teased the upcoming DGX Station, a more powerful AI supercomputer designed for larger-scale AI model training and research. While details are limited, the DGX Station is expected to feature the GB300 Grace Blackwell Ultra Superchip, offering significantly higher performance than the DGX Spark.
🧾 Final Thoughts
The DGX Spark represents a significant step forward in making high-performance AI computing accessible to a broader audience. Its compact design, powerful specifications, and local AI capabilities make it an attractive option for developers and researchers looking to advance their AI projects without relying on cloud infrastructure.