Get Started with Horovod
Distributed deep learning training for TensorFlow, Keras, PyTorch, and MXNet.
Getting Started
1
Read the official documentation
The Horovod team maintains comprehensive docs that cover installation, configuration, and common patterns.
Open Horovod Docs↗
2
Install Horovod and explore the examples
Horovod is free and open source (hosted by the LF AI & Data Foundation); install it with pip and browse the example training scripts on the project site and GitHub repository.
Visit Horovod↗
3
Review strengths, tradeoffs, and alternatives
Our full tool profile covers Horovod's strengths, weaknesses, and how it compares to alternatives.
View full profile→
Best For
Teams that need to scale up their deep learning training across multiple GPUs or machines without significant code changes.
Developers working with TensorFlow, Keras, PyTorch, and Apache MXNet who want to leverage distributed computing for faster model training.
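The "without significant code changes" claim above refers to Horovod's usual integration pattern: initialize Horovod, pin each process to one GPU, scale the learning rate by the number of workers, wrap the optimizer so gradients are averaged with allreduce, and broadcast the initial state from rank 0. A minimal sketch for PyTorch, assuming Horovod is installed and the script is launched with `horovodrun` (the model here is a placeholder):

```python
# Sketch of Horovod's typical PyTorch integration. Assumes horovod is
# installed and the script is launched as, e.g.:
#   horovodrun -np 4 python train.py
import torch
import horovod.torch as hvd

hvd.init()  # start Horovod; one process per GPU/worker
if torch.cuda.is_available():
    torch.cuda.set_device(hvd.local_rank())  # pin this process to its GPU

model = torch.nn.Linear(10, 1)  # placeholder for your real model
# Scale the learning rate by the number of workers.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01 * hvd.size())

# Wrap the optimizer: gradients are averaged across workers via allreduce.
optimizer = hvd.DistributedOptimizer(
    optimizer, named_parameters=model.named_parameters())

# Ensure every worker starts from identical weights and optimizer state.
hvd.broadcast_parameters(model.state_dict(), root_rank=0)
hvd.broadcast_optimizer_state(optimizer, root_rank=0)

# ...the training loop itself is unchanged single-process PyTorch code.
```

The training loop stays ordinary PyTorch; Horovod hooks gradient averaging into `optimizer.step()`, which is why existing scripts usually need only these few added lines.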