lasdi.latent_space

Classes

MultiLayerPerceptron

Vanilla multi-layer perceptron neural networks module.

CNN2D

Two-dimensional convolutional neural networks.

LatentSpace

Autoencoder

Conv2DAutoencoder

Functions

initial_condition_latent(param_grid, physics, autoencoder)

Outputs the initial condition in the latent space: Z0 = encoder(U0)

Package Contents

class lasdi.latent_space.MultiLayerPerceptron(layer_sizes, act_type='sigmoid', reshape_index=None, reshape_shape=None, threshold=0.1, value=0.0)

Bases: torch.nn.Module

Vanilla multi-layer perceptron neural networks module.

Parameters:
  • layer_sizes (list(int)) – List of vector dimensions of layers.

  • act_type (str, optional) – Type of activation functions. By default 'sigmoid' is used. See act_dict for available types.

  • reshape_index (int, optional) – Index of layer to reshape input/output data. Either 0 or -1 is allowed.

    • 0 : the first (input) layer

    • -1 : the last (output) layer

    By default the index is None, and reshaping is not executed.

  • reshape_shape (list(int), optional) – Target shape from/to which input/output data is reshaped. The reshaping behavior depends on reshape_index. By default the shape is None, and reshaping is not executed. For details on the reshaping action, see the reshape_shape attribute.

Note

numpy.prod(reshape_shape) == layer_sizes[reshape_index]
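
For instance, a construction sketch satisfying this constraint (the layer sizes are illustrative, not values from the source):

    import numpy

    from lasdi.latent_space import MultiLayerPerceptron

    # Flattened input of size 64 = 8 x 8, one hidden layer, latent dimension 8.
    mlp = MultiLayerPerceptron(layer_sizes=[64, 32, 8],
                               act_type='sigmoid',
                               reshape_index=0,       # reshape at the input layer
                               reshape_shape=[8, 8])  # numpy.prod([8, 8]) == layer_sizes[0]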

n_layers

Depth of the network, counting input, hidden, and output layers.

Type:

int

layer_sizes

Vector dimensions corresponding to each layer.

Type:

list(int)

fcs = []

Linear layers connecting consecutive layers.

Type:

torch.nn.ModuleList

reshape_index

Index of layer to reshape input/output data.

  • 0 : the first (input) layer

  • -1 : the last (output) layer

  • None : no reshaping

Type:

int

reshape_shape

Target shape from/to which input/output data is reshaped. For a reshape_shape \([R_1, R_2, \ldots, R_n]\):

  • reshape_index = 0 (input) : \([\ldots, R_1, R_2, \ldots, R_n] \longrightarrow [\ldots, \prod_{i=1}^n R_i]\)

  • reshape_index = -1 (output) : \([\ldots, \prod_{i=1}^n R_i] \longrightarrow [\ldots, R_1, R_2, \ldots, R_n]\)

  • None : no reshaping

Type:

list(int)
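
The two mappings above are ordinary tensor reshapes over the trailing dimensions; a minimal torch illustration (shapes are illustrative):

    import torch

    R = [8, 8]                    # reshape_shape
    x = torch.randn(10, 5, 8, 8)  # [..., R_1, R_2]

    flat = x.reshape(*x.shape[:-len(R)], -1)   # [..., R_1, R_2] -> [..., 64]
    back = flat.reshape(*flat.shape[:-1], *R)  # [..., 64] -> [..., R_1, R_2]

    assert flat.shape == (10, 5, 64) and torch.equal(back, x)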

act_type

Type of activation functions.

Type:

str

act

Activation function used throughout the layers.

Type:

torch.nn.Module

forward(x)

Evaluate through the module.

Parameters:

x (torch.Tensor) – Input data to pass into the module.

Note

For reshape_index = 0, the last \(n\) dimensions of x must match reshape_shape \(=[R_1, R_2, \ldots, R_n]\).

Returns:

Output tensor evaluated from the module.

Return type:

torch.Tensor

Note

For reshape_index = -1, the last dimension of the output tensor will be reshaped as reshape_shape \(=[R_1, R_2, \ldots, R_n]\).
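
Taken together, the two reshape directions pair naturally as encoder/decoder halves; a sketch with illustrative shapes:

    import torch

    from lasdi.latent_space import MultiLayerPerceptron

    # Encoder flattens [..., 8, 8] snapshots at the input layer.
    encoder = MultiLayerPerceptron([64, 32, 8], reshape_index=0, reshape_shape=[8, 8])
    # Decoder unflattens its size-64 output back to [..., 8, 8].
    decoder = MultiLayerPerceptron([8, 32, 64], reshape_index=-1, reshape_shape=[8, 8])

    u = torch.randn(100, 8, 8)  # batch of 2D snapshots
    z = encoder(u)              # -> [100, 8]
    u_hat = decoder(z)          # -> [100, 8, 8]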

init_weight()

Initialize weights of linear layers according to the Xavier uniform distribution.

print_architecture()

Print out the architecture of the module.

class lasdi.latent_space.CNN2D(layer_sizes, mode, strides, paddings, dilations, groups=1, bias=True, padding_mode='zeros', act_type='ReLU', data_shape=None)

Bases: torch.nn.Module

Two-dimensional convolutional neural networks.

Parameters:
  • layer_sizes (numpy.array) – 2D array of the tensor dimensions of each layer. See layer_sizes.

  • mode (str) – Direction of the CNN.

    • 'forward' : contracting direction

    • 'backward' : expanding direction

  • strides (list) – List of strides corresponding to each layer. Each stride is either an integer or a tuple.

  • paddings (list) – List of paddings corresponding to each layer. Each padding is either an integer or a tuple.

  • dilations (list) – List of dilations corresponding to each layer. Each dilation is either an integer or a tuple.

  • groups (int, optional) – Number of groups applied to all layers. By default 1.

  • bias (bool, optional) – Whether all layers include a bias term. By default True.

  • padding_mode (str, optional) – Padding mode applied to all layers. By default 'zeros'.

  • act_type (str, optional) – Activation function applied between all layers. By default 'ReLU'. See act_dict for available types.

  • data_shape (list(int), optional) – Data shape to/from which output/input data is reshaped. See data_shape for details.

Note

len(strides) == layer_sizes.shape[0] - 1

len(paddings) == layer_sizes.shape[0] - 1

len(dilations) == layer_sizes.shape[0] - 1
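
A construction sketch satisfying these length constraints (all sizes are illustrative assumptions; the kernel sizes are inferred automatically, see kernel_sizes):

    import numpy

    from lasdi.latent_space import CNN2D

    # Three layers ([channels, height, width] each) -> two convolutions.
    layer_sizes = numpy.array([[ 1, 32, 32],
                               [ 8, 14, 14],
                               [16,  5,  5]])

    cnn = CNN2D(layer_sizes,
                mode='forward',   # contracting direction
                strides=[2, 2],   # len == layer_sizes.shape[0] - 1
                paddings=[0, 0],
                dilations=[1, 1])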

class Mode

Bases: Enum

Enumeration to specify direction of CNN.

Forward = 1

Contracting direction

Backward

Expanding direction

n_layers

Depth of the network, counting input, hidden, and output layers.

Type:

int

layer_sizes

2D integer array of shape \([n\_layers, 3]\), giving the tensor dimension of each layer. For the \(k\)-th layer, the tensor dimension is

\[layer\_sizes[k] = [channels, height, width]\]
Type:

numpy.array

channels

List of channel sizes that determine the architecture of each layer. For details on how the architecture is determined, see the torch API documentation.

Type:

list(int)

strides

List of strides that determine the architecture of each layer. Each stride can be either an integer or a tuple. For details on how the architecture is determined, see the torch API documentation.

Type:

list

paddings

List of paddings that determine the architecture of each layer. Each padding can be either an integer or a tuple. For details on how the architecture is determined, see the torch API documentation.

Type:

list

dilations

List of dilations that determine the architecture of each layer. Each dilation can be either an integer or a tuple. For details on how the architecture is determined, see the torch API documentation.

Type:

list

groups

Number of groups that determines the architecture of all layers. For details on how the architecture is determined, see the torch API documentation.

Type:

int

bias

Bias flag that determines the architecture of all layers. For details on how the architecture is determined, see the torch API documentation.

Type:

bool

padding_mode

Padding mode that determines the architecture of all layers. For details on how the architecture is determined, see the torch API documentation.

Type:

str

act

Activation function applied between all layers.

Type:

torch.nn.Module

kernel_sizes = []

List of kernel sizes that determine the architecture of each layer. Each kernel_size can be either an integer or a tuple, and is automatically determined so that the output of the corresponding layer has the shape of the next layer.

For details on how the architecture is determined, see the torch API documentation.

Type:

list

fcs = []

Module list of torch.nn.Conv2d (forward) or torch.nn.ConvTranspose2d (backward).

Type:

torch.nn.ModuleList

data_shape

Tensor dimension of the training data that will be passed into/out of the module.

Type:

list(int)

batch_reshape = None

Tensor dimension to which input/output data is reshaped.

  • Forward mode: shape of 3d-/4d-array

  • Backward mode: shape of arbitrary nd-array

Determined by set_data_shape().

Type:

list(int)

set_data_shape(data_shape: list)

Set the batch reshape in order to reshape the input/output batches based on the given training data shape.

Forward mode:

For data_shape \(=[N_1,\ldots,N_m]\) and the first layer size of \([C_1, H_1, W_1]\),

\[batch\_reshape = [R_1, C_1, H_1, W_1],\]

where \(\prod_{i=1}^m N_i = R_1\times C_1\times H_1\times W_1\).

If \(m=2\) and \(C_1=1\), then

\[batch\_reshape = [C_1, H_1, W_1].\]

Note

For forward mode, data_shape[-2:] == self.layer_sizes[0, 1:] must be true.

Backward mode:

batch_reshape is the same as data_shape. The output tensor of the module is reshaped to data_shape.

Parameters:

data_shape (list(int)) – Shape of the input/output data tensor for forward/backward mode.
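
For example, continuing the sketch above with a first layer size of [1, 32, 32] (illustrative values):

    # data_shape = [500, 32, 32] and [C_1, H_1, W_1] = [1, 32, 32]:
    #   500 * 32 * 32 = R_1 * 1 * 32 * 32  ->  R_1 = 500
    #   batch_reshape = [500, 1, 32, 32]
    cnn.set_data_shape([500, 32, 32])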

print_data_shape()

Print out the data shape and architecture of the module.

forward(x)

Evaluate through the module.

Parameters:

x (torch.Tensor) – Input tensor to pass into the module.

  • Forward mode: nd array of shape data_shape

  • Backward mode: Same shape as the output tensor of forward mode

Returns:

Output tensor evaluated from the module.

  • Forward mode: 3d array of shape self.layer_sizes[-1], or 4d array of shape [self.batch_reshape[0]] + self.layer_sizes[-1]

  • Backward mode: nd array of shape data_shape (equal to batch_reshape)

Return type:

torch.Tensor

classmethod compute_kernel_size(input_shape, output_shape, stride, padding, dilation, mode)

Compute kernel size that produces desired output shape from given input shape.

The formula is based on torch API documentation for Conv2d and ConvTranspose2d.

Parameters:
  • input_shape (int or tuple(int))

  • output_shape (int or tuple(int))

  • stride (int or tuple(int))

  • padding (int or tuple(int))

  • dilation (int or tuple(int))

  • mode (CNN2D.Mode) – Direction of CNN. Either CNN2D.Mode.Forward or CNN2D.Mode.Backward

Returns:

List of two integers indicating height and width of kernel.

Return type:

list(int)
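
For forward mode the relation follows from torch's Conv2d shape formula, \(H_{out} = \lfloor (H_{in} + 2\,padding - dilation\,(kernel - 1) - 1)/stride \rfloor + 1\); solving for the kernel gives a sketch like the following (an independent re-derivation for one spatial dimension, not the library's code):

    def kernel_size_forward(h_in: int, h_out: int, stride: int,
                            padding: int, dilation: int) -> int:
        # Invert Conv2d's output-size formula from the torch documentation:
        #   h_out = floor((h_in + 2*padding - dilation*(kernel - 1) - 1) / stride) + 1
        return (h_in + 2 * padding - 1 - (h_out - 1) * stride) // dilation + 1

    # Mapping 32 -> 14 with stride 2, no padding, dilation 1 needs kernel size 6.
    assert kernel_size_forward(32, 14, stride=2, padding=0, dilation=1) == 6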

classmethod compute_input_layer_size(output_shape, kernel_size, stride, padding, dilation, mode)

Compute input layer size that produces desired output shape with given kernel size.

The formula is based on torch API documentation for Conv2d and ConvTranspose2d.

Parameters:
  • output_shape (int or tuple(int))

  • kernel_size (int or tuple(int))

  • stride (int or tuple(int))

  • padding (int or tuple(int))

  • dilation (int or tuple(int))

  • mode (CNN2D.Mode) – Direction of CNN. Either CNN2D.Mode.Forward or CNN2D.Mode.Backward

Returns:

List of two integers indicating height and width of input layer.

Return type:

list(int)

classmethod compute_output_layer_size(input_shape, kernel_size, stride, padding, dilation, mode)

Compute output layer size produced from given input shape and kernel size.

The formula is based on torch API documentation for Conv2d and ConvTranspose2d.

Parameters:
  • input_shape (int or tuple(int))

  • kernel_size (int or tuple(int))

  • stride (int or tuple(int))

  • padding (int or tuple(int))

  • dilation (int or tuple(int))

  • mode (CNN2D.Mode) – Direction of CNN. Either CNN2D.Mode.Forward or CNN2D.Mode.Backward

Returns:

List of two integers indicating height and width of output layer.

Return type:

list(int)
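
The forward-mode sizes can be checked directly against torch.nn.Conv2d (a quick numerical check with illustrative sizes):

    import torch

    conv = torch.nn.Conv2d(1, 8, kernel_size=6, stride=2, padding=0, dilation=1)
    y = conv(torch.randn(1, 1, 32, 32))
    assert y.shape[-2:] == (14, 14)  # floor((32 + 0 - 1*(6 - 1) - 1)/2) + 1 == 14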

init_weight()

Initialize weights of convolutional layers according to the Xavier uniform distribution.

lasdi.latent_space.initial_condition_latent(param_grid, physics, autoencoder)

Outputs the initial condition in the latent space: Z0 = encoder(U0)
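
Conceptually this encodes each parameter's full-order initial state; a hedged sketch of the loop (the physics.initial_condition call and the returned list-of-arrays format are assumptions about the interface, not confirmed by this page):

    import torch

    def initial_condition_latent_sketch(param_grid, physics, autoencoder):
        # For each training parameter: Z0 = encoder(U0).
        Z0 = []
        for param in param_grid:
            u0 = physics.initial_condition(param)  # assumed interface
            u0 = torch.as_tensor(u0, dtype=torch.float32)
            with torch.no_grad():
                Z0.append(autoencoder.encoder(u0).numpy())
        return Z0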

class lasdi.latent_space.LatentSpace(physics, config)

Bases: torch.nn.Module

qgrid_size
n_z
forward(x)
export()
load(dict_)

Notes

This abstract class only checks whether the variables in the restart file are the same as the instance attributes.

class lasdi.latent_space.Autoencoder(physics, config)

Bases: LatentSpace

space_dim
encoder
decoder
forward(x)
export()
load(dict_)

Notes

This abstract class only checks whether the variables in the restart file are the same as the instance attributes.
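
A hedged sketch of a save/restore round trip through export() and load() (assuming export() returns the restart dictionary that load() consumes, as the Notes suggest):

    # model and model2 are Autoencoder instances built from the same physics/config.
    restart = model.export()  # collect restart variables into a dict
    model2.load(restart)      # restore; mismatched variables are rejected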

class lasdi.latent_space.Conv2DAutoencoder(physics, config)

Bases: LatentSpace

encoder
decoder
forward(x)
export()
load(dict_)

Notes

This abstract class only checks whether the variables in the restart file are the same as the instance attributes.

set_batch_shape(batch_shape)
print_architecture()