lasdi.gplasdi
Classes
- BayesianGLaSDI: This class runs a full GPLaSDI training. It takes as input the autoencoder defined as a PyTorch object and the dictionary containing all the training parameters.
Functions
- average_rom
- sample_roms: Collect n_samples of ROM trajectories on param_grid.
- get_fom_max_std: Computes the maximum standard deviation across the parameter space grid and finds the corresponding parameter location.
- optimizer_to
Module Contents
- lasdi.gplasdi.average_rom(autoencoder, physics, latent_dynamics, gp_dictionary, param_grid)
- lasdi.gplasdi.sample_roms(autoencoder, physics, latent_dynamics, gp_dictionary, param_grid, n_samples)
Collect n_samples of ROM trajectories on param_grid.
gp_dictionary: list of Gaussian process regressors (one per test parameter, i.e. length n_test)
param_grid: 2D numpy array of test parameter points
n_samples: integer number of samples to draw
assert(len(gp_dictionary) == param_grid.shape[0])
Output: np.array of shape [n_test, n_samples, physics.nt, autoencoder.n_z]
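A hedged usage sketch for sample_roms and average_rom; the autoencoder, physics, latent_dynamics, and fitted GP regressors are assumed to come from an earlier training stage and are not constructed here, and the parameter values are illustrative only:

```python
import numpy as np
from lasdi.gplasdi import average_rom, sample_roms

# param_grid: one row per test parameter point (illustrative values).
param_grid = np.array([[0.7, 0.1],
                       [0.8, 0.2]])   # shape [n_test, n_param]
n_samples = 20

# autoencoder, physics, latent_dynamics, and gp_dictionary are assumed to exist;
# per the docstring, len(gp_dictionary) must equal param_grid.shape[0].
Zis = sample_roms(autoencoder, physics, latent_dynamics,
                  gp_dictionary, param_grid, n_samples)
assert Zis.shape == (param_grid.shape[0], n_samples, physics.nt, autoencoder.n_z)

# average_rom takes the same arguments minus n_samples and returns one
# trajectory per parameter (an assumption based on its signature).
Z_mean = average_rom(autoencoder, physics, latent_dynamics,
                     gp_dictionary, param_grid)
```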
- lasdi.gplasdi.get_fom_max_std(autoencoder, Zis)
Computes the maximum standard deviation across the parameter space grid and finds the corresponding parameter location.
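A rough, self-contained sketch of that selection criterion on synthetic sampled trajectories Zis; the exact reduction over time steps and latent dimensions is an assumption, not taken from the library:

```python
import numpy as np

# Synthetic stand-in for Zis: [n_test, n_samples, nt, n_z] sampled ROM trajectories.
rng = np.random.default_rng(0)
Zis = rng.normal(size=(5, 20, 100, 4))

# Standard deviation over the n_samples axis measures predictive uncertainty
# for each test parameter, time step, and latent dimension.
std = Zis.std(axis=1)                      # [n_test, nt, n_z]

# Reduce to one scalar per test parameter, then pick the most uncertain one.
max_std_per_param = std.max(axis=(1, 2))   # [n_test]
m_index = int(np.argmax(max_std_per_param))
print("parameter index with largest uncertainty:", m_index)
```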
- lasdi.gplasdi.optimizer_to(optim, device)
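No docstring is given for optimizer_to; the body below is a generic PyTorch pattern for a helper with this signature (moving optimizer state to a device, e.g. after restarting from a checkpoint), offered as an assumption rather than the library's actual implementation:

```python
import torch

def optimizer_to_sketch(optim, device):
    """Hypothetical stand-in: move all tensors in the optimizer state to `device`."""
    for state in optim.state.values():
        for key, value in state.items():
            if torch.is_tensor(value):
                state[key] = value.to(device)
```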
- class lasdi.gplasdi.BayesianGLaSDI(physics, autoencoder, latent_dynamics, param_space, config)
This class runs a full GPLaSDI training. It takes as input the autoencoder defined as a PyTorch object and the dictionary containing all the training parameters. The train() method runs the active-learning training loop, computes the reconstruction and SINDy losses, trains the GPs, and samples a new FOM data point (see the usage sketch after the member list below).
- X_train
- X_test
- autoencoder
- latent_dynamics
- physics
- param_space
- timer
- n_samples
- lr
- n_iter
- max_iter
- max_greedy_iter
- ld_weight
- coef_weight
- optimizer
- MSE
- path_checkpoint
- path_results
- best_loss
- best_coefs = None
- restart_iter = 0
- training_loss = []
- ae_loss = []
- ld_loss = []
- coef_loss = []
- train()
- get_new_sample_point()
- export()
- load(dict_)
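A hedged end-to-end usage sketch for BayesianGLaSDI; building physics, autoencoder, latent_dynamics, param_space, and the config dictionary is assumed to happen elsewhere in the library (e.g. from the input configuration) and is not shown:

```python
from lasdi.gplasdi import BayesianGLaSDI

# The five constructor arguments are assumed to have been created by the rest
# of the library; only the calls on the trainer itself are illustrated here.
trainer = BayesianGLaSDI(physics, autoencoder, latent_dynamics, param_space, config)

# Active-learning loop: optimize the reconstruction/SINDy losses, train the GPs,
# and sample new FOM data points while greedy iterations remain.
trainer.train()

# Query the parameter location proposed for the next FOM simulation.
new_point = trainer.get_new_sample_point()

# export() returns a state dictionary; load(dict_) restores a trainer from it.
state = trainer.export()
```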