Any tips on optimizing performance/training time for the Bayesian Gaussian mixture training phase? Could we consider exposing its hyperparameters, and perhaps support subsampling the training set? This piece doesn't seem to scale well to larger datasets.
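For context, the expensive step is the per-column `BayesianGaussianMixture` fit from scikit-learn, whose EM iterations scale linearly with the number of rows. A minimal sketch of that step, assuming sklearn's API; the argument values shown are illustrative, not CTGAN's actual hard-coded ones:

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# Stand-in for one continuous column of the training data.
column = np.random.randn(100_000, 1)

bgm = BayesianGaussianMixture(
    n_components=10,                         # more components -> slower EM
    weight_concentration_prior_type='dirichlet_process',
    weight_concentration_prior=0.001,
    max_iter=100,                            # each EM iteration is O(rows)
    n_init=1,                                # restarts multiply total cost
)
bgm.fit(column)  # this fit is repeated for every continuous column
```

Since the fit is repeated per continuous column, wide datasets compound the cost.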
@kevinykuo I totally agree on exposing as many hyperparameters as possible, but I have some doubts about the subsampling, since this is something that could be easily done by the user outside of CTGAN.
Would you mind editing the issue title and description to make this one focus only on the GM Hyperparams, so we can start working on it right away, and opening another one to discuss the subsampling separately?
I can also do the edits myself, if you prefer. Let me know!
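As a strawman for what exposing them could look like (the class and method names just mirror CTGAN's internal transformer for orientation; the chosen knobs and their defaults are assumptions, not the real implementation):

```python
from sklearn.mixture import BayesianGaussianMixture


class DataTransformer:
    """Sketch: forward user-supplied GM hyperparameters instead of
    hard-coding them. The four knobs below are assumptions about
    which ones matter most for training time."""

    def __init__(self, max_clusters=10, weight_concentration_prior=1e-3,
                 max_iter=100, n_init=1):
        self._bgm_kwargs = dict(
            n_components=max_clusters,
            weight_concentration_prior_type='dirichlet_process',
            weight_concentration_prior=weight_concentration_prior,
            max_iter=max_iter,
            n_init=n_init,
        )

    def _fit_continuous(self, column):
        # column: 1-D numpy array for a single continuous feature
        bgm = BayesianGaussianMixture(**self._bgm_kwargs)
        bgm.fit(column.reshape(-1, 1))
        return bgm
```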
To clarify, the non-scalable piece is the BayesianGaussianMixture, so I'm proposing to subsample the data only when fitting the data transformation; when we train the actual GAN, we'd still use the full training data (roughly as sketched below).
These hyperparameters seem related enough to be tracked together, but feel free to organize as you see fit!
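To make the split concrete, a self-contained sketch using stand-in data; the 50,000 sample size is an arbitrary illustration, and the GAN step is elided since its API would be unchanged:

```python
import numpy as np
import pandas as pd
from sklearn.mixture import BayesianGaussianMixture

# Full training set (stand-in data); only `amount` is continuous here.
train_data = pd.DataFrame({'amount': np.random.lognormal(size=1_000_000)})

# 1. Fit the expensive mixture model on a subsample only.
subsample = train_data['amount'].sample(n=50_000, random_state=0)
bgm = BayesianGaussianMixture(n_components=10, n_init=1)
bgm.fit(subsample.to_numpy().reshape(-1, 1))

# 2. Transform the FULL dataset with the fitted model
#    (cheap relative to the fit).
probs = bgm.predict_proba(train_data[['amount']].to_numpy())

# 3. The GAN then trains on the fully transformed data (not shown),
#    so no rows are dropped from GAN training itself.
```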