eta_utility.eta_x.common.processors module

class eta_utility.eta_x.common.processors.Split1d(in_features: int, sizes: Sequence[None | int], net_arch: Sequence[th.nn.Module])[source]

Bases: ModuleList

Split1d defines a PyTorch module which splits a 1D input tensor into multiple parts and passes each part through a separate network. Afterwards, the outputs of all networks are concatenated, so Split1d returns a 1D output vector.

When configuring the network architecture, it is important to ensure that the output of all networks is 1D. Use torch.nn.Flatten to flatten the output of networks where the output is not one dimensional.

Use the parameters ‘sizes’ and ‘net_arch’ to determine how many of the input features are passed through which network. Each value in ‘sizes’ must have a corresponding value in ‘net_arch’. For the following example, assume that ‘in_features’ is 15. If ‘sizes’ is [3, 10, None], a valid configuration for ‘net_arch’ could be [th.nn.Linear(out_features=10), th.nn.Conv1d(out_channels=2), th.nn.Linear(out_features=2)]. The last value of ‘sizes’ is automatically calculated to be 2 (15 - 3 - 10 = 2). With this, 3 values are passed to the first Linear layer, 10 values to the Conv1d layer, and the final 2 values to the third layer in ‘net_arch’ (the Linear layer with 2 output features).

If you would like to use dictionaries to configure the net_arch, you can use the function eta_utility.eta_x.common.common.deserialize_net_arch() to create the torch network architecture.

Parameters:
  • in_features – Number of input features for the Module

  • sizes – List of sizes for splitting the input features. This list may contain the value None at most once. If it does, that entry is evaluated to cover all remaining input features.

  • net_arch – List of torch.nn Modules. Each value of this list corresponds to one value of the ‘sizes’ list.
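The split-apply-concatenate behavior described above can be sketched in plain Python (no torch). The function name and the toy "networks" here are hypothetical stand-ins that only illustrate the mechanism, not the library's implementation:

```python
from typing import Callable, Sequence


def split_apply_join(
    features: list[float],
    sizes: Sequence[int],
    nets: Sequence[Callable[[list[float]], list[float]]],
) -> list[float]:
    """Split 'features' into chunks of the given sizes, pass each chunk
    through its network, and concatenate the 1D outputs (hypothetical
    sketch of what Split1d's forward pass does conceptually)."""
    out: list[float] = []
    start = 0
    for size, net in zip(sizes, nets):
        out.extend(net(features[start:start + size]))
        start += size
    return out


# Three toy "networks": sum, max, and identity.
nets = [lambda xs: [sum(xs)], lambda xs: [max(xs)], lambda xs: xs]
result = split_apply_join([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], sizes=[3, 2, 1], nets=nets)
# result == [6.0, 5.0, 6.0]
```

Each sub-network receives only its slice of the input, and the 1D outputs are joined in order, which is why every sub-network must produce a one-dimensional result.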

extra_repr() str[source]

Add info about the module to its torch representation.

Returns:

String representation of the object.

forward(tensor: Tensor) Tensor[source]

Perform a forward pass through the layer.

Parameters:

tensor – Input tensor

Returns:

Output tensor

static get_full_sizes(in_features: int, sizes: Iterable[None | int]) list[int][source]

Use in_features and the sizes list to determine the missing value in ‘sizes’ in case ‘sizes’ contains a None value (see the class description for more information on how a None value in ‘sizes’ is interpreted).

Parameters:
  • in_features – Number of input features for the Module.

  • sizes – List of sizes for splitting the input features. This list may contain the value None at most once. If it does, that entry is evaluated to cover all remaining input features.

Returns:

List of sizes with the None value replaced by the computed number of remaining features.

class eta_utility.eta_x.common.processors.Fold1d(out_channels: int)[source]

Bases: Module

Fold a 1D tensor to create a multi-dimensional tensor. The parameter ‘out_channels’ determines how many dimensions the output tensor will have.

Parameters:

out_channels – Number of dimensions of the output tensor.
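One plausible reading of this folding operation, sketched in plain Python with nested lists: the flat input is reshaped into ‘out_channels’ equally sized rows. The function name is hypothetical, the actual module operates on torch tensors, and the exact reshaping convention is an assumption here:

```python
def fold_1d(values: list[float], out_channels: int) -> list[list[float]]:
    """Fold a flat list into 'out_channels' equally sized rows
    (hypothetical sketch of the reshaping idea; assumes the input
    length is divisible by out_channels)."""
    if len(values) % out_channels != 0:
        raise ValueError("Input length must be divisible by out_channels.")
    width = len(values) // out_channels
    return [values[i * width:(i + 1) * width] for i in range(out_channels)]


# fold_1d([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], out_channels=2)
# -> [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]
```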

extra_repr() str[source]

Add info about the module to its torch representation.

Returns:

String representation of the object.

forward(tensor: Tensor) Tensor[source]

Perform a forward pass through the layer.

Parameters:

tensor – Input tensor

Returns:

Output tensor