lasdi.latent_space
Attributes

| act_dict |  |

Classes

| MultiLayerPerceptron | A standard multi-layer perceptron (MLP) module. |
| Autoencoder | A standard autoencoder using MLP. |
| MLPWithMask | Multi-layer perceptron with additional mask output. |
| AutoEncoderWithMask | Autoencoder class with additional mask output. |

Functions

| initial_condition_latent(param_grid, physics, autoencoder) | Outputs the initial condition in the latent space: Z0 = encoder(U0) |
Module Contents
- lasdi.latent_space.act_dict
- lasdi.latent_space.initial_condition_latent(param_grid, physics, autoencoder)

  Outputs the initial condition in the latent space: Z0 = encoder(U0)

  - Parameters:
    - param_grid (numpy.array) – A 2d array of shape (n_param, param_dim) of parameter points at which to obtain the initial condition.
    - physics (lasdi.physics.Physics) – Physics class used to generate the initial condition.
    - autoencoder (lasdi.latent_space.Autoencoder) – Autoencoder class used to encode initial conditions into latent variables.

  - Returns:
    Z0 – a torch tensor of size (n_param, n_z), where n_z is the latent variable dimension defined by autoencoder.

  - Return type:
    torch.Tensor
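A minimal usage sketch (hedged: the physics and autoencoder objects are assumed to be built elsewhere, e.g. by the surrounding LaSDI workflow, and the parameter values are purely illustrative):

```python
import numpy as np

# Illustrative parameter grid of shape (n_param, param_dim) = (2, 2).
param_grid = np.array([[0.7, 0.1],
                       [0.9, 0.2]])

# `physics` and `autoencoder` are assumed to already exist (see the classes below).
Z0 = initial_condition_latent(param_grid, physics, autoencoder)
# Z0 collects encoder(U0) for each parameter point; its latent dimension is autoencoder.n_z.
```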
 
- class lasdi.latent_space.MultiLayerPerceptron(layer_sizes, act_type='sigmoid', reshape_index=None, reshape_shape=None, threshold=0.1, value=0.0, num_heads=1)
  - Bases: torch.nn.Module

  A standard multi-layer perceptron (MLP) module.

  - n_layers
    Depth of MLP including input, hidden, and output layers.
    - Type: int
 
  - layer_sizes
    Widths of each MLP layer, including input, hidden, and output layers.
    - Type: list(int)
 
  - fcs = []
    Torch module list of \((self.n\_layers-1)\) linear layers, connecting from input to output layers.
    - Type: torch.nn.ModuleList
 
  - reshape_index = None
    Index of the layer to reshape.
    - 0: Input data is n-dimensional and will be squeezed into a 1d tensor for MLP input.
    - -1: Output data should be n-dimensional and the MLP output will be reshaped as such.
    - Type: int
 
  - reshape_shape = None
    Shape of the layer to be reshaped.
    - \((self.reshape\_index=0)\): Shape of the input data that will be squeezed into a 1d tensor for MLP input.
    - \((self.reshape\_index=-1)\): Shape of the output data into which the MLP output shall be reshaped.
    - Type: list(int)
 
  - act_type = 'sigmoid'
    Type of activation function.
    - Type: str
 
  - use_multihead = False
    Switch to use multihead attention.
    - Warning: this attribute is obsolete and will be removed in the future.
    - Type: bool
 
  - act = None
    Activation function.
    - Type: torch.nn.Module
 
  - forward(x)
    Pass the input through the MLP layers.
    - Args:
      - x (torch.Tensor): n-dimensional torch.Tensor of input data.
    - Note:
      - If self.reshape_index == 0, then the last n dimensions of x must match self.reshape_shape. In other words, list(x.shape[-len(self.reshape_shape):]) == self.reshape_shape.
      - If self.reshape_index == -1, then the last layer output z is reshaped into self.reshape_shape. In other words, list(z.shape[-len(self.reshape_shape):]) == self.reshape_shape.
      - A usage sketch illustrating this reshape behaviour follows this class entry.
    - Returns:
      n-dimensional torch.Tensor of output data.
 
 - apply_attention(x, act_idx)
  - init_weight()
    Initialize the weights and biases of the linear layers.
    - Returns: Does not return a value.
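A small usage sketch for MultiLayerPerceptron, illustrating the reshape behaviour described above (the layer widths and the 2d shape are illustrative choices, not defaults):

```python
import torch

# With reshape_index=0 and reshape_shape=[8, 8], the trailing (8, 8) dimensions of
# the input are squeezed into a single dimension of size 64 = layer_sizes[0].
mlp = MultiLayerPerceptron(layer_sizes=[64, 32, 5],
                           act_type='sigmoid',
                           reshape_index=0,
                           reshape_shape=[8, 8])

x = torch.rand(10, 8, 8)   # batch of 10 two-dimensional inputs
y = mlp(x)                 # output of shape (10, 5)
```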
 
 
- class lasdi.latent_space.Autoencoder(physics, config)
  - Bases: torch.nn.Module

  A standard autoencoder using MLP.

  - Args:
    - physics (lasdi.physics.Physics): Physics class that specifies the full-order model solution dimensions.
    - config (dict): options for the autoencoder. It must include the following keys and values (see the configuration sketch after this class entry):
      - 'hidden_units': a list of integers for the widths of the hidden layers.
      - 'latent_dimension': integer for the latent space dimension.
      - 'activation': string for the type of activation function.
 
 - qgrid_size
 - space_dim
 - n_z
 - encoder
 - decoder
 - forward(x)
 - export()
 - load(dict_)
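A hedged configuration sketch for Autoencoder; the key names follow the Args list above, while the physics object and the specific values are illustrative assumptions:

```python
# Options dictionary with the keys documented above (values are illustrative).
config = {
    'hidden_units': [100, 20],   # widths of the hidden layers
    'latent_dimension': 5,       # latent space dimension n_z
    'activation': 'sigmoid',     # activation function name
}

# `physics` is assumed to be a lasdi.physics.Physics instance built elsewhere;
# it supplies the full-order solution dimensions used for the encoder input.
autoencoder = Autoencoder(physics, config)
```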
 
- class lasdi.latent_space.MLPWithMask(mlp)
  - Bases: MultiLayerPerceptron

  Multi-layer perceptron with additional mask output.

  - Args:
    - mlp (lasdi.latent_space.MultiLayerPerceptron): MultiLayerPerceptron instance to copy. The same architecture, activation function, and reshaping will be used.
  - n_layers
    Depth of MLP including input, hidden, and output layers.
    - Type: int
 
  - layer_sizes
    Widths of each MLP layer, including input, hidden, and output layers.
    - Type: list(int)
 
  - fcs
    Torch module list of \((self.n\_layers-1)\) linear layers, connecting from input to output layers.
    - Type: torch.nn.ModuleList
 
  - reshape_index
    Index of the layer to reshape.
    - 0: Input data is n-dimensional and will be squeezed into a 1d tensor for MLP input.
    - -1: Output data should be n-dimensional and the MLP output will be reshaped as such.
    - Type: int
 
  - reshape_shape
    Shape of the layer to be reshaped.
    - \((self.reshape\_index=0)\): Shape of the input data that will be squeezed into a 1d tensor for MLP input.
    - \((self.reshape\_index=-1)\): Shape of the output data into which the MLP output shall be reshaped.
    - Type: list(int)
 
  - act_type
    Type of activation function.
    - Type: str
 
  - use_multihead
    Switch to use multihead attention.
    - Warning: this attribute is obsolete and will be removed in the future.
    - Type: bool
 
  - act
    Activation function.
    - Type: torch.nn.Module
 
  - bool_d
    Additional linear layer to output a mask variable.
    - Type: torch.nn.Linear
 
  - sigmoid
    The mask output passes through a sigmoid activation function to ensure values in \([0, 1]\).
    - Type: torch.nn.Sigmoid
 
  - forward(x)
    Pass the input through the MLP layers.
    - Args:
      - x (torch.Tensor): n-dimensional torch.Tensor of input data.
    - Note:
      - If self.reshape_index == 0, then the last n dimensions of x must match self.reshape_shape. In other words, list(x.shape[-len(self.reshape_shape):]) == self.reshape_shape.
      - If self.reshape_index == -1, then the last layer outputs xval and xbool are reshaped into self.reshape_shape. In other words, list(xval.shape[-len(self.reshape_shape):]) == self.reshape_shape.
      - A usage sketch follows this class entry.
    - Returns:
      - xval (torch.Tensor): n-dimensional torch.Tensor of output data.
      - xbool (torch.Tensor): n-dimensional torch.Tensor of the output mask.
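A brief sketch of wrapping an existing MLP with MLPWithMask (the layer sizes are illustrative):

```python
import torch

# Copy the architecture of an existing MLP and add a mask head on the output layer.
base = MultiLayerPerceptron(layer_sizes=[5, 16, 32], act_type='sigmoid')
masked = MLPWithMask(base)

z = torch.rand(4, 5)
xval, xbool = masked(z)   # xval: output values, xbool: mask in [0, 1] via sigmoid
# Both outputs share the trailing shape of base(z).
```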
 
 
- class lasdi.latent_space.AutoEncoderWithMask(physics, config)
  - Bases: Autoencoder

  Autoencoder class with additional mask output. Its decoder is lasdi.latent_space.MLPWithMask, which has an additional mask output.

  - Note:
    Unlike the standard autoencoder, the decoder produces two outputs (each with the same shape as the encoder input).

  - decoder
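A hedged sketch of the two-output decoding path; `physics` and `config` are assumed to be available as in the Autoencoder configuration sketch above:

```python
import torch

# The decoder of AutoEncoderWithMask is an MLPWithMask, so decoding a latent state
# returns both a reconstruction and a mask of the same full-order shape.
ae = AutoEncoderWithMask(physics, config)
z = torch.rand(1, ae.n_z)
u_hat, mask = ae.decoder(z)
```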