Hetu is a high-performance distributed deep learning system developed by the DAIR Lab at Peking University, targeting the training of DL models with trillions of parameters. It balances industrial availability with academic innovation, and has a number of advanced characteristics:
Applicability. DL model definition with a standard dataflow graph; many basic CPU and GPU operators; efficient implementations of plenty of DL models and at least 10 popular ML algorithms.
Efficiency. Achieves at least a 30% speedup compared to TensorFlow on DNN, CNN, and RNN benchmarks.
Flexibility. Supports various parallel training protocols and distributed communication architectures, such as data/model/pipeline parallelism and parameter server & AllReduce.
Scalability. Deploys on more than 100 computation nodes; trains giant models with trillions of parameters, e.g., on Criteo Kaggle and the Open Graph Benchmark.
Agility. Automatic ML pipeline: feature engineering, model selection, hyperparameter search.
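For intuition, the hyperparameter-search part of such a pipeline can be illustrated with a minimal random-search sketch. This is plain Python and not Hetu's actual API; the objective function and search space below are hypothetical stand-ins:

```python
import random

def random_search(objective, space, n_trials=20, seed=0):
    """Minimal random hyperparameter search: sample configs, keep the best."""
    rng = random.Random(seed)
    best_cfg, best_score = None, float("inf")
    for _ in range(n_trials):
        cfg = {name: rng.choice(choices) for name, choices in space.items()}
        score = objective(cfg)  # e.g., validation loss after a short training run
        if score < best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

# Stand-in objective: lower learning rate and larger batch give a lower "loss".
space = {"lr": [0.1, 0.01, 0.001], "batch_size": [32, 64, 128]}
best, loss = random_search(lambda c: c["lr"] / c["batch_size"], space)
```

Real AutoML systems replace the stand-in objective with an actual short training run and use smarter samplers, but the sample/evaluate/keep-best loop is the same.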
We welcome everyone interested in machine learning or graph computing to contribute code, create issues, or open pull requests. Please refer to the Contribution Guide for more details.
Clone the repository.
Prepare the environment. We use Anaconda to manage packages. The following command creates the conda environment to be used: `conda env create -f environment.yml`. Please prepare the CUDA toolkit and cuDNN in advance.
We use CMake to compile Hetu. Please copy the example configuration for compilation by `cp cmake/config.example.cmake cmake/config.cmake`. Users can modify the configuration file to enable/disable the compilation of each module. For advanced users (those not using the provided conda environment), the prerequisites for the different modules in Hetu are listed in the appendix.
# modify paths and configurations in cmake/config.cmake
# generate Makefile
mkdir build && cd build && cmake ..
# compile all modules
make -j 8
# make hetu, version is specified in cmake/config.cmake
make hetu -j 8
# make allreduce module
make allreduce -j 8
# make ps module
make ps -j 8
# make geometric module
make geometric -j 8
# make hetu-cache module
make hetu_cache -j 8
source hetu.exp
ResNet training on a single GPU:
bash examples/cnn/scripts/hetu_1gpu.sh resnet18 CIFAR10
ResNet training with AllReduce on 8 GPUs:
bash examples/cnn/scripts/hetu_8gpu.sh resnet18 CIFAR10
BERT-Base training on a single GPU:
cd examples/nlp/bert && bash scripts/create_datasets_from_start.sh # prepare the dataset
bash scripts/train_hetu_bert_base.sh
BERT-Base training with AllReduce on 4 GPUs:
cd examples/nlp/bert && bash scripts/create_datasets_from_start.sh # prepare the dataset
bash scripts/train_hetu_bert_base_dp.sh
Wide & Deep training on a single GPU:
bash examples/ctr/tests/local_wdl_adult.sh
Wide & Deep training on 4 GPUs using HET:
bash examples/ctr/tests/hybrid_wdl_adult.sh
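HET's core idea is to cache hot embedding rows on each worker so that lookups for frequent features avoid a round trip to the parameter server. A toy, single-process sketch of such a cache (plain Python; not Hetu's actual implementation, and the in-memory dict stands in for the remote parameter store):

```python
from collections import OrderedDict

class EmbeddingCache:
    """Toy LRU cache for embedding rows in front of a remote parameter store."""
    def __init__(self, remote_store, capacity):
        self.remote = remote_store          # maps row id -> embedding vector
        self.capacity = capacity
        self.cache = OrderedDict()          # row id -> vector, in LRU order
        self.misses = 0

    def lookup(self, row_id):
        if row_id in self.cache:
            self.cache.move_to_end(row_id)  # mark as recently used
            return self.cache[row_id]
        self.misses += 1
        vec = self.remote[row_id]           # "fetch" from the parameter server
        self.cache[row_id] = vec
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict the least recently used row
        return vec

store = {i: [float(i)] * 4 for i in range(100)}
cache = EmbeddingCache(store, capacity=2)
for rid in [7, 7, 7, 3, 7]:                 # skewed access: row 7 is "hot"
    cache.lookup(rid)
```

Because CTR feature frequencies are highly skewed, even a small cache absorbs most lookups; the real system additionally bounds the staleness of cached rows, which this sketch omits.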
Please refer to the examples directory, which contains CNN, NLP, CTR, MoE, and GNN training scripts. If you want to know more about the communication architectures (parameter server, collective communication) and automatic parallelism (e.g., data parallelism, tensor parallelism, pipeline parallelism, sharded data parallelism, expert parallelism) provided by Hetu, please join our community and contact us!
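For intuition on the simplest of these protocols: data parallelism keeps a model replica per worker and averages their gradients each step, which is what an AllReduce computes collectively. A single-process sketch (plain Python with hypothetical gradient values; real training runs this over NCCL/MPI across devices):

```python
def allreduce_mean(worker_grads):
    """Average per-worker gradients elementwise, as AllReduce(sum)/N would."""
    n = len(worker_grads)
    return [sum(vals) / n for vals in zip(*worker_grads)]

# Each worker computes gradients on its own data shard (hypothetical values).
grads = [
    [0.2, -0.4, 1.0],   # worker 0
    [0.4, -0.2, 1.0],   # worker 1
]
avg = allreduce_mean(grads)

# Every replica applies the same averaged update, keeping weights in sync.
weights = [1.0, 1.0, 1.0]
lr = 0.1
weights = [w - lr * g for w, g in zip(weights, avg)]
```

Since every replica receives the identical averaged gradient, all copies of the weights stay bit-for-bit synchronized without a central coordinator, which is the key difference from the parameter-server architecture.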
If you are an enterprise user and find Hetu useful in your work, please let us know, and we will be glad to add your company logo here.
The entire codebase is under license
We have proposed numerous innovative optimization techniques around the Hetu system and published several papers covering a variety of model workloads and hardware environments.
If you use Hetu in a scientific publication, we would appreciate citations to the following papers:
@article{DBLP:journals/chinaf/MiaoXP22,
author = {Miao, Xupeng and Nie, Xiaonan and Zhang, Hailin and Zhao, Tong and Cui, Bin},
title = {Hetu: A highly efficient automatic parallel distributed deep learning system},
journal = {Sci. China Inf. Sci.},
url = {http://engine.scichina.com/doi/10.1007/s11432-022-3581-9},
doi = {10.1007/s11432-022-3581-9},
year = {2022},
}
@article{miao2021het,
title={HET: Scaling out Huge Embedding Model Training via Cache-enabled Distributed Framework},
author={Miao, Xupeng and Zhang, Hailin and Shi, Yining and Nie, Xiaonan and Yang, Zhi and Tao, Yangyu and Cui, Bin},
journal = {Proc. {VLDB} Endow.},
volume = {15},
number = {2},
pages = {312--320},
year = {2022},
publisher = {VLDB Endowment}
}
We learned and borrowed insights from several open-source projects, including TinyFlow, autodist, tf.distribute, FlexFlow, and Angel.
The prerequisites for the different modules in Hetu are listed as follows:
"*" means you should prepare it yourself, while the others support auto-download
Hetu: OpenMP(*), CMake(*)
Hetu (version mkl): MKL 1.6.1
Hetu (version gpu): CUDA 10.1(*), CUDNN 7.5(*), CUB 1.12.1(*), Thrust 1.16.0(*)
Hetu (version all): both
Hetu-AllReduce: MPI 3.1, NCCL 2.8(*); this module requires the GPU version
Hetu-PS: Protobuf(*), ZeroMQ 4.3.2
Hetu-Geometric: Pybind11(*), Metis(*)
Hetu-Cache: Pybind11(*); this module requires the PS module
##################################################################
Tips for preparing the prerequisites
Preparing CUDA, CUDNN, CUB, NCCL (NCCL is already in the conda environment):
1. download from https://developer.nvidia.com
2. download CUB from https://github.com/NVIDIA/cub/releases/tag/1.12.1
3. install
4. modify paths in cmake/config.cmake if necessary
Preparing OpenMP:
You just need to ensure that your compiler supports OpenMP.
Preparing CMake, Protobuf, Pybind11, Metis:
Install by anaconda:
conda install cmake=3.18 libprotobuf pybind11=2.6.0 metis
Preparing OpenMPI (not necessary):
install by anaconda: `conda install -c conda-forge openmpi=4.0.3`
or
1. download from https://download.open-mpi.org/release/open-mpi/v4.0/openmpi-4.0.3.tar.gz
2. build openmpi by `./configure --prefix=/path/to/build && make -j8 && make install`
3. modify MPI_HOME to /path/to/build in cmake/config.cmake
Preparing MKL (not necessary):
install by anaconda: `conda install -c conda-forge onednn`
or
1. download from https://github.com/intel/mkl-dnn/archive/v1.6.1.tar.gz
2. build mkl by `mkdir /path/to/build && cd /path/to/build && cmake /path/to/root && make -j8`
3. modify MKL_ROOT to /path/to/root and MKL_BUILD to /path/to/build in cmake/config.cmake
Preparing ZeroMQ (not necessary):
install by anaconda: `conda install -c anaconda zeromq=4.3.2`
or
1. download from https://github.com/zeromq/libzmq/releases/download/v4.3.2/zeromq-4.3.2.zip
2. build zeromq by `mkdir /path/to/build && cd /path/to/build && cmake /path/to/root && make -j8`
3. modify ZMQ_ROOT to /path/to/build in cmake/config.cmake