I/ONX Introduces Symphony SixtyFour: The End of the Host Tax. Save 30-50% on Your AI Infrastructure Costs.
I/ONX High Performance Compute (HPC), a leading provider of heterogeneous AI systems, today announced the global launch of Symphony SixtyFour, a high-density platform designed to collapse the physical and economic footprint of AI inference and fine-tuning infrastructure. By supporting up to 64 accelerators on a single node, I/ONX eliminates the Host Tax—the redundant overhead in power, hardware, and licensing that erodes ROI in enterprise AI.
This press release features multimedia. View the full release here: https://www.businesswire.com/news/home/20260422485327/en/
While inference now accounts for 90% of enterprise AI workloads, most enterprises remain limited to deploying inference on hardware platforms designed for training. Symphony SixtyFour delivers significant CapEx and OpEx reductions for inference and fine-tuning workloads. Compared with traditional training-class deployments, the I/ONX system recovers the 30kW Host Tax typically wasted on redundant CPUs, memory, and support hardware in multi-node clusters, while simplifying ongoing support tasks. For production-scale inference on alternative accelerators, the platform is even more transformative, drawing one-fourth the power of a traditional 64-device cluster and eliminating the need for liquid cooling in inference-only deployments.
“Enterprise AI infrastructure is entering a new phase of maturity,” said I/ONX CEO Justyn Hornor. “The training-centric designs of the past served us well during the experimental phase, but they weren’t optimized for the power-constrained, production-heavy world we live in today. With Symphony SixtyFour, we’ve reimagined the stack to be more fluid and fit for purpose, allowing organizations to master massive-scale inference while finally eliminating the unnecessary infrastructure waste that has hindered ROI.”
The Symphony SixtyFour Advantage: Fit-for-Purpose Silicon. The platform is engineered to maximize every watt and dollar for enterprise AI.
- Eliminating the Training Host Tax: For large-scale inference and fine-tuning, Symphony SixtyFour collapses the infrastructure stack from eight nodes into one. This consolidation removes up to 30kW of wasted support power, allowing for higher compute density within existing power envelopes.
- Zero-Hop, Near-Deterministic Performance: By housing 64 accelerators within a single OS instance, Symphony SixtyFour eliminates East-West network latency between accelerators.
- Heterogeneous Flexibility: Symphony SixtyFour is fully vendor-neutral and built for mixed-mode operations. Enterprises can seamlessly pair high-end GPUs (including AMD and NVIDIA) with purpose-built, low-power co-processors and layer in specialized inference silicon (Axelera, FuriosaAI, Tenstorrent), future-proofing infrastructure against shifting market dynamics.
- Collapsing OpEx by Eliminating the Software Tax: Beyond hardware and power, Symphony SixtyFour provides massive operational relief. By presenting a 64-device fleet through a single management environment, I/ONX collapses the Software Tax, saving enterprises up to $500,000 annually per cluster in enterprise operating system and orchestration licensing.
I/ONX accelerates the enterprise shift toward systems designed specifically for inference and fine-tuning at scale. Symphony SixtyFour is available now, enabling organizations to reclaim critical power capacity and reduce costs. I/ONX is committed to delivering the high-density infrastructure required to unlock the maximum economic and operational potential of production AI.
