HDL - Introduction to Hyperparameter Tuning
This tutorial describes the structure and features of the template GitHub repository for performing large-scale hyperparameter tuning on a SLURM-based cluster using a combination of PyTorch Lightning, Hydra, Ax, MLFlow and Submitit.
The template is not meant to be the definitive way hyperparameter tuning should be performed. Instead, it is meant to be a solid example from which to pick the elements and the structure that make the most sense for your own future projects. For example, MLFlow is not very good at comparing images from multiple runs, so if a qualitative evaluation is necessary, it would be a good idea to include Tensorboard as an additional logging library. The use of SLURM was dictated by the popularity of the system and the fact that it is in use on the SURFsara cluster, but the same template would work for other systems. Many more considerations and adaptations can be made at the discretion of the researcher.
.
├── hyperparameter_searcher
│   ├── __init__.py
│   ├── config
│   │   ├── data
│   │   │   └── mnist_config.py
│   │   ├── launcher
│   │   │   └── launcher_config.py
│   │   ├── logging
│   │   │   └── logging_config.py
│   │   ├── model
│   │   │   └── mnist_module_config.py
│   │   ├── sweeper
│   │   │   └── sweeper_config.py
│   │   ├── trainer
│   │   │   └── trainer_config.py
│   │   ├── train_bayesian_config.py
│   │   └── train_grid_config.py
│   ├── data
│   │   ├── __init__.py
│   │   ├── dataloaders.py
│   │   └── mnist_datamodule.py
│   ├── loggers
│   │   ├── __init__.py
│   │   ├── loggers.py
│   │   └── mlflow_utils.py
│   ├── networks
│   │   ├── __init__.py
│   │   ├── components
│   │   │   ├── __init__.py
│   │   │   └── simple_dense_net.py
│   │   └── mnist_lightning_module.py
│   ├── utils
│   │   ├── __init__.py
│   │   └── io_utils.py
│   └── training_pipeline.py
├── scripts
│   ├── README.md
│   ├── hyperparameter_blueprint_bayesian.sh
│   └── hyperparameter_blueprint_grid.sh
├── tests
│   ├── __init__.py
│   └── tests_utils
│       ├── __init__.py
│       └── test_io_utils.py
├── environment.yml
├── .env
└── train.py
The main file of the whole repository is train.py in the root folder. From here the training procedure can be run, and this is also the primary point that will be used for debugging the code. The tests folder is where tests for the code will be placed, the scripts folder contains the scripts that we will be using for the hyperparameter search, and the hyperparameter_searcher folder is where the source code for all our experiments resides.
In this template, the source code for the experiments is divided into Python modules, which can be thought of as the building blocks used to run the entire pipeline. Besides the modules, there is a file called training_pipeline.py, where the training is defined and which is the only entry point that uses the various modules. In this template, only this file is present; however, we will often want to perform some additional analysis of our models after training, such as creating visualizations with samples if we have a generative model, or evaluating a downstream task using the learned representations. When this is the case, we just need to add a new file, e.g. evaluation.py, that benefits from the already-defined modules and is used to load the model and run the evaluation.
The modules. In the experiment source code we have several modules. We can think of them as (reasonably) independent packages, each performing a specific, complex task that can be re-used more than once or that is logically separate from the other modules. This modular approach, which separates code by function rather than by experiment, forces us to write code that can be used immediately by all the experiments we will be running, and that is easier to maintain in the future.
Next, we will discuss the different components that we use for config management, logging and hyperparameter tuning.
Hydra configurations are usually defined through .yaml files. However, we can also define them in Python. Python-based config files give us more freedom in defining the configurations and in re-using code. The trade-off is more complexity in managing the code, as all the configuration needs to be defined manually.
Hydra interfaces with your scripts through a decorator on the main function. The decorator defines where to get the configs from, and which primary config file should be used to parse the arguments. In the case of this repository, the configuration is done through Python, so we don't need a directory of .yaml files; the configurations are registered programmatically instead.
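As a sketch of what a Python-based configuration can look like, a structured config is just a dataclass. The class and field names below are illustrative, not taken from the repository; with Hydra, each dataclass would additionally be registered in the ConfigStore and selected by name:

```python
from dataclasses import dataclass, field

# Illustrative structured configs. With Hydra these would be registered via
# ConfigStore.instance().store(group="model", name="mnist_module", node=ModelConfig)
@dataclass
class OptimizerConfig:
    lr: float = 1e-3
    weight_decay: float = 0.0

@dataclass
class ModelConfig:
    hidden_size: int = 64
    # mutable defaults need a factory, so each config gets a fresh instance
    optimizer: OptimizerConfig = field(default_factory=OptimizerConfig)

cfg = ModelConfig()
print(cfg.optimizer.lr)  # nested defaults behave like plain Python attributes: 0.001
```

Because the configs are ordinary dataclasses, they can be type-checked, composed, and re-used like any other Python code, which is the main advantage over .yaml files.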
The primary config file is the entry point of your configuration. Here, you define the command-line parameters that you support and which default configurations should be used. A default configuration, as the name suggests, is a file that contains some default arguments. In our case, these are handy for defining default configurations for the various datasets and models that we support.
From the command line, all already-defined parameters can be changed, and new ones can be added. By default, you need to explicitly ask for an argument to be added if it is not already defined in the config.
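For example, using Hydra's override syntax (the parameter names here are illustrative; the `+` prefix is how Hydra marks a key that is not yet present in the config):

```shell
# override an existing parameter
python train.py trainer.max_epochs=3
# add a new parameter that is not defined in the config (note the leading +)
python train.py +experiment_tag=baseline
```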
Overall, using Hydra is a very straightforward way to neatly organize your configurations and get closer to reproducible results.
Hydra is used to manage the configuration of your experiments. All command-line arguments and their processing can be handled through it. There are several advantages over using the traditional Python argument parser. The first is that we can more easily store and restore argument configurations. Another is that target classes can be defined directly in the configuration; a target class can then be initialized with the arguments given in the configuration. Think of what would happen if you wanted to switch between using model_A and model_B, which are defined with the classes ModelA and ModelB. From the config, you would say model=model_A, and then in the code you would need a long chain of if statements to select which class to launch with the given configuration, in this case ModelA. Then, there would be several default parameters for this class that we would want to use, but they would be hidden in the code. Instead, with Hydra you can simply define the target class directly in the config file, and it will be automatically selected when model=model_A is passed on the command line. This comes automatically with all the parameters defined explicitly in the config file.
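To make the mechanism concrete, here is a minimal, simplified sketch of what an instantiate-style helper does with a `_target_` key. Hydra's real hydra.utils.instantiate is much more featureful (recursion, partial instantiation, etc.), and the configs below are illustrative:

```python
from importlib import import_module

def instantiate(config: dict):
    """Import the class named by _target_ and call it with the remaining keys."""
    module_path, _, class_name = config["_target_"].rpartition(".")
    cls = getattr(import_module(module_path), class_name)
    kwargs = {k: v for k, v in config.items() if k != "_target_"}
    return cls(**kwargs)

# A config that selects a concrete class, the way model=model_A would via Hydra;
# here we use a stdlib class so the sketch is self-contained.
config = {"_target_": "fractions.Fraction", "numerator": 1, "denominator": 3}
obj = instantiate(config)
print(obj)  # 1/3
```

Switching the selected class then amounts to changing the `_target_` string in the config, with no if-chain in the code.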
This modular approach to config management allows simple parameter switching when testing different models or datasets, or when running different experiments entirely. Having modular configuration management simplifies the entire file structure as well, removing the need to separate different experiments into different folders, which can quickly become difficult to maintain as time goes on. Instead, each part of your project can be seen as a different package: one dedicated to data handling, one to model definition, one to logging, another to visualization, and maybe also one for all the metrics that you want to test your models with. Such modularity would be very difficult without also having modular configurations, which Hydra handles very easily.
One thing that is important to highlight is that Hydra is not a magic wand that solves all our problems. Rather, Hydra builds on the elegant configuration management that OmegaConf already provides: it is a handy extension of OmegaConf, with some features tailored for machine learning. When more complicated things need to be done, do not hesitate to look into what Hydra is doing and add your own code to make your workflow faster. Often, trying to work around an issue using only the features available in the library slows you down more than you think.
MLFlow is a logging library characterized by centralized logging, which eases the process when multiple nodes are being used. Additionally, it provides a simple way to compare the parameters that changed between runs, giving a quick overview of which change had the most decisive impact on the performance of the model.
MLFlow interfaces with the code through PyTorch Lightning. When the Trainer is instantiated, MLFlow is among the loggers passed to it. In the code, this may seem a bit opaque, as the initialization of the loggers is done through Hydra's instantiate:
mlflow_logger = hydra.utils.instantiate(logger)
loggers.append(mlflow_logger)
Another important aspect is checkpointing, which allows storing useful information and the model's weights through MLFlow as well. Using MLFlow in the checkpointing process helps centralize all the information:
callbacks.append(
    hydra.utils.instantiate(callback, mlflow_logger=mlflow_logger)
)
MLFlow is an excellent logging tool for keeping track of your experiments. Where MLFlow shines is in its ability to quickly compare multiple runs of a hyperparameter search. It also centralizes everything, putting all the things you need in a single location, which is extremely handy when running large-scale experiments.
There are a few downsides to MLFlow: it is not excellent at image logging and is overall lacking in the ability to compare different metrics and the qualitative performance of different models. When this is necessary for your models (which is often the case for computer vision tasks), MLFlow needs to be supported by additional loggers, such as Tensorboard. This is easily done with PyTorch Lightning by simply passing multiple loggers to the Trainer.
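A sketch of how multiple loggers can be attached, assuming pytorch_lightning is installed and an MLFlow tracking server is configured (the experiment name and directory are illustrative):

```python
from pytorch_lightning import Trainer
from pytorch_lightning.loggers import MLFlowLogger, TensorBoardLogger

# Both loggers receive the same metrics: MLFlow centralizes the runs,
# while TensorBoard handles image and other qualitative comparisons.
trainer = Trainer(
    logger=[
        MLFlowLogger(experiment_name="my_experiment"),
        TensorBoardLogger(save_dir="tb_logs"),
    ]
)
```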
Here, we will be using Hydra (with the Ax plugin) and Submitit to generate the jobs and submit them to the SLURM cluster.
These are the most fundamental parameters to configure based on how many resources you have available to perform the search; they are defined from the .sh script:
@dataclass
class SlurmConfig(SlurmQueueConf):
    partition: str = "gpu_titanrtx_shared"
    gpus_per_node: int = 1
    tasks_per_node: int = 1
    cpus_per_task: int = 6
    mem_gb: int = 60
    nodes: int = 1
    timeout_min: int = 1200  # how long the job can run
    array_parallelism: int = 2  # how many jobs can run simultaneously
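A sweep is then submitted as a Hydra multirun. The sketch below uses the plugin names provided by hydra-submitit-launcher and the Ax sweeper; the swept parameter name is illustrative, and the actual invocations live in the scripts folder:

```shell
# grid search: every combination of the listed values, one SLURM job each
python train.py --multirun hydra/launcher=submitit_slurm model.lr=0.1,0.01,0.001
# Bayesian search: let the Ax sweeper pick trials within the interval
python train.py --multirun hydra/launcher=submitit_slurm hydra/sweeper=ax "model.lr=interval(0.001,0.1)"
```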
The parameter search itself can be done in grid or Bayesian form. In the grid form, all combinations of the parameters are searched, while in the Bayesian form, the Ax sweeper looks for the best parameters to optimize a given metric, defined by the optimized_metric parameter. The syntax for defining which parameters should be swept can be found in the Hydra documentation. The most fundamental constructs are the following:
Choice. Select one between different options (here the example is different activation functions):
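For instance, following Hydra's override sweep grammar (the parameter name is illustrative):

activation=choice(relu,tanh,sigmoid) # select one of the listed options per trial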
Range. Sweeps the defined range based on the given step (here the example is a different lambda weight for regularization):
lambda=range(start=0,stop=10,step=2) # 0,2,4,6,8
Hyperparameter tuning is a task that requires an incredible amount of resources. Always consider the computing time and available resources before starting large computational studies. Start with smaller tests to understand whether the model is working and what the rough range of good parameters is before continuing.
We have seen how to use a combination of Hydra, MLFlow, Submitit, and Hydra's Ax plugin to perform Bayesian or grid hyperparameter searches on a SLURM-based cluster. We have seen how this setup interfaces with a simple project and have observed some of the strengths and pitfalls of the methods.
It is crucial to remember that this setup is meant as a guide, to introduce the useful tools that you may want to use and how they interface together. Ultimately, the best fit for any case will be determined by the specific circumstances that you are facing.
Reference Repository. https://github.com/NKI-AI/hyperparameter-search-template
Pytorch Lightning. https://www.pytorchlightning.ai/
Ax Sweeper. https://hydra.cc/docs/next/plugins/ax_sweeper/