SHADHO: Optimization Optimized

SHADHO stands for Scalable Hardware-Aware Distributed Hyperparameter Optimizer. Not all hardware is created equal, and not all machine learning models are created equal: in most cases, a GPU is faster than a CPU, and a decision tree is faster to train than a neural network. Different models will also (usually) perform differently on the same dataset. SHADHO helps you make the best use of your time and hardware by figuring out which hardware should test which models based on their size and performance. SHADHO is optimizing optimization.

Installation

Install with pip

pip install shadho
python -m shadho.installers.install_workqueue

Note: The post-install step may look like it hangs, but it is just compiling Work Queue behind the scenes and may take a few minutes.

Install manually

git clone https://github.com/jeffkinnison/shadho
cd shadho
pip install .
python -m shadho.installers.install_workqueue

Note: The post-install step may look like it hangs, but it is just compiling Work Queue behind the scenes and may take a few minutes.

Dependencies

SHADHO uses parts of the scientific Python stack to manage the search (defining search spaces, learning from the searches, and more!). It also uses Work Queue from the CCTools suite to coordinate your distributed hyperparameter search across any number of compute nodes.
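
For example, once a search is running, any machine with Work Queue installed can join it by starting a worker pointed at the SHADHO master. The hostname, port, and project name below are placeholders; substitute the values used by your own search.

# Connect directly to a running master (hypothetical host; 9123 is the default Work Queue port)
work_queue_worker master.example.com 9123

# Or connect by project name through the Work Queue catalog server,
# exiting after an hour of idleness ("my-shadho-search" is a placeholder)
work_queue_worker -M my-shadho-search -t 3600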

Using SHADHO is Easy