Description
Hi,
I was wondering whether I could integrate the Ray Tune (hyper-parameter search) library, either as a callback or directly in the base trainer class, to search for good hyper-parameters for a model and stop unpromising runs early.
Could you please suggest how to integrate it so that training can be stopped midway when a particular hyper-parameter configuration is not performing well?
Even a way to just return the logged metrics at every epoch when a training pipeline instance is called would be enough for my purposes.
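To illustrate what I have in mind, here is a minimal sketch of a per-epoch reporting hook that a tuner such as Ray Tune could plug into. The `Trainer` and `EarlyStopCallback` names and signatures below are purely illustrative assumptions, not from this library or from Ray Tune; in a real integration the callback would forward the metrics to the tuner (e.g. via Ray Tune's reporting API) instead of deciding to stop locally.

```python
# Hypothetical sketch: a trainer that hands per-epoch metrics to callbacks,
# so a hyper-parameter tuner (or a simple early-stop rule) can halt training.
# All class and method names here are illustrative, not from any library.

class EarlyStopCallback:
    """Stops training when the monitored metric fails to improve."""

    def __init__(self, patience=2):
        self.patience = patience
        self.best = float("inf")
        self.bad_epochs = 0

    def on_epoch_end(self, epoch, metrics):
        # A Ray Tune integration would report `metrics` to the tuner here.
        if metrics["val_loss"] < self.best:
            self.best = metrics["val_loss"]
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience  # True -> stop training


class Trainer:
    """Minimal trainer that exposes per-epoch metrics to callbacks."""

    def __init__(self, callbacks=None):
        self.callbacks = callbacks or []

    def fit(self, val_losses):
        # `val_losses` stands in for validation losses computed each epoch.
        for epoch, val_loss in enumerate(val_losses):
            metrics = {"val_loss": val_loss}
            if any(cb.on_epoch_end(epoch, metrics) for cb in self.callbacks):
                return epoch + 1  # number of epochs actually run
        return len(val_losses)


trainer = Trainer(callbacks=[EarlyStopCallback(patience=2)])
# Loss stops improving after epoch 2, so training halts after epoch 4.
epochs_run = trainer.fit([0.9, 0.8, 0.85, 0.9, 0.7])
```

A hook like `on_epoch_end` receiving the epoch's metrics is all the tuner would need in order to prune bad configurations midway.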
This library has been extremely useful in my research. Thank you very much!