mdlearn.nn.models.vde.symmetric_conv2d_vde

Warning

VDE models are still under development; use with caution!

Classes

SymmetricConv2dVDE(*args, **kwargs)

Convolutional variational autoencoder from the "Deep clustering of protein folding simulations" paper implemented as a time lagged autoencoder.

class mdlearn.nn.models.vde.symmetric_conv2d_vde.SymmetricConv2dVDE(*args: Any, **kwargs: Any)

Convolutional variational autoencoder from the “Deep clustering of protein folding simulations” paper implemented as a time lagged autoencoder. Inherits from mdlearn.nn.models.vae.VDE.

__init__(input_shape: Tuple[int, ...], init_weights: Optional[str] = None, filters: List[int] = [64, 64, 64], kernels: List[int] = [3, 3, 3], strides: List[int] = [1, 2, 1], affine_widths: List[int] = [128], affine_dropouts: List[float] = [0.0], latent_dim: int = 3, activation: str = 'ReLU', output_activation: str = 'Sigmoid')
Parameters
  • input_shape (Tuple[int, …]) – (height, width) dimensions of the input image.

  • init_weights (Optional[str]) – .pt weights file used to initialize the model weights.

  • filters (List[int]) – Convolutional filter dimensions.

  • kernels (List[int]) – Convolutional kernel dimensions (assumes square kernel).

  • strides (List[int]) – Convolutional stride lengths (assumes square strides).

  • affine_widths (List[int]) – Number of neurons in each linear layer.

  • affine_dropouts (List[float]) – Dropout probability for each linear layer. Dropout value of 0.0 will skip adding the dropout layer.

  • latent_dim (int) – Latent dimension of the \(\mu\) and \(\log\sigma\) layers.

  • activation (str) – Activation function to use between convolutional and linear layers.

  • output_activation (str) – Output activation function for last decoder layer.
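
A minimal construction sketch based on the signature above; the (28, 28) input shape and the contact-map interpretation are illustrative assumptions, not values taken from the paper, and the remaining arguments simply restate the documented defaults:

    from mdlearn.nn.models.vde.symmetric_conv2d_vde import SymmetricConv2dVDE

    # Assumed 28 x 28 contact-map inputs; adjust input_shape to your data.
    model = SymmetricConv2dVDE(
        input_shape=(28, 28),       # (height, width), per the parameter above
        filters=[64, 64, 64],       # documented defaults shown explicitly
        kernels=[3, 3, 3],
        strides=[1, 2, 1],
        affine_widths=[128],
        affine_dropouts=[0.0],
        latent_dim=3,
        activation="ReLU",
        output_activation="Sigmoid",
    )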

forward(x: torch.Tensor) Tuple[torch.Tensor, torch.Tensor]

Forward pass of variational autoencoder.

Parameters

x (torch.Tensor) – Input x data to encode and reconstruct.

Returns

  • torch.Tensor – \(z\), the latent space batch tensor.

  • torch.Tensor – recon_x, the reconstruction of x.
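
A hedged usage sketch of the forward pass, continuing the construction example above; the batch size and single channel dimension are assumptions for illustration only:

    import torch

    # Hypothetical batch of 64 single-channel 28 x 28 inputs.
    x = torch.randn(64, 1, 28, 28)

    # Dispatches to forward(x); returns the latent batch z and the reconstruction.
    z, recon_x = model(x)
    print(z.shape)        # expected: (64, latent_dim)
    print(recon_x.shape)  # expected: reconstruction matching the input shape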