nn

Implementation of differentiable vectorization layers for persistent homology barcodes.

For a basic tutorial, see the torchph tutorials.

class torchph.nn.slayer.SLayerExponential(n_elements: int, point_dimension: int = 2, centers_init: torch.Tensor = None, sharpness_init: torch.Tensor = None)[source]

Proposed input layer for multisets [1].

__init__(n_elements: int, point_dimension: int = 2, centers_init: torch.Tensor = None, sharpness_init: torch.Tensor = None)[source]
Parameters
  • n_elements – Number of structure elements used.

  • point_dimension – Dimensionality of the points of which the input multiset consists.

  • centers_init – The initialization for the centers of the structure elements.

  • sharpness_init – Initialization for the sharpness of the structure elements.

forward(input) → torch.Tensor[source]

Defines the computation performed at every call.

Note

Although the forward pass is defined inside this method, the module instance itself should be called rather than forward directly, since the instance call runs the registered hooks, whereas calling forward directly silently ignores them.
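
Example (a minimal usage sketch; it assumes, consistent with prepare_batch below, that forward accepts a list of (N_i, point_dimension) tensors, one per multiset, and returns one row of n_elements activations per multiset):

>>> import torch
>>> from torchph.nn.slayer import SLayerExponential
>>> layer = SLayerExponential(n_elements=16, point_dimension=2)
>>> barcodes = [torch.rand(10, 2), torch.rand(25, 2)]  # two multisets of different cardinality
>>> out = layer(barcodes)  # expected shape: (2, 16), i.e. (batch_size, n_elements)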

class torchph.nn.slayer.SLayerRational(n_elements: int, point_dimension: int = 2, centers_init: torch.Tensor = None, sharpness_init: torch.Tensor = None, exponent_init: torch.Tensor = None, pointwise_activation_threshold=None, share_sharpness=False, share_exponent=False, freeze_exponent=True)[source]
__init__(n_elements: int, point_dimension: int = 2, centers_init: torch.Tensor = None, sharpness_init: torch.Tensor = None, exponent_init: torch.Tensor = None, pointwise_activation_threshold=None, share_sharpness=False, share_exponent=False, freeze_exponent=True)[source]
Parameters
  • n_elements – Number of structure elements used.

  • point_dimension – Dimensionality of the points of which the input multiset consists.

  • centers_init – The initialization for the centers of the structure elements.

  • sharpness_init – Initialization for the sharpness of the structure elements.

  • exponent_init – Initialization for the exponent of the structure elements.

  • pointwise_activation_threshold – Optional threshold applied point-wise to the activations of the structure elements.

  • share_sharpness – If True, a single sharpness value is shared by all structure elements.

  • share_exponent – If True, a single exponent is shared by all structure elements.

  • freeze_exponent – If True, the exponent is kept fixed and not learned during training.

forward(input) → torch.Tensor[source]

Defines the computation performed at every call.

Note

Although the forward pass is defined inside this method, the module instance itself should be called rather than forward directly, since the instance call runs the registered hooks, whereas calling forward directly silently ignores them.
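
Example (a minimal sketch under the same input-format assumption as above, showing the sharing/freezing flags):

>>> import torch
>>> from torchph.nn.slayer import SLayerRational
>>> layer = SLayerRational(n_elements=32, point_dimension=2, share_sharpness=True, freeze_exponent=True)
>>> barcodes = [torch.rand(5, 2), torch.rand(40, 2), torch.rand(12, 2)]
>>> out = layer(barcodes)  # expected shape: (3, 32)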

class torchph.nn.slayer.SLayerRationalHat(n_elements: int, point_dimension: int = 2, centers_init: torch.Tensor = None, radius_init: float = 1, exponent: int = 1)[source]
__init__(n_elements: int, point_dimension: int = 2, centers_init: torch.Tensor = None, radius_init: float = 1, exponent: int = 1)[source]
Parameters
  • n_elements – Number of structure elements used.

  • point_dimension – Dimensionality of the points of which the input multiset consists.

  • centers_init – The initialization for the centers of the structure elements.

  • radius_init – Initialization for the radius of the zero level-set of the hat.

  • exponent – Exponent of the rationals forming the hat.

forward(input) → torch.Tensor[source]

Defines the computation performed at every call.

Note

Although the forward pass is defined inside this method, the module instance itself should be called rather than forward directly, since the instance call runs the registered hooks, whereas calling forward directly silently ignores them.
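
Example (a minimal sketch under the same input-format assumption; it illustrates that the vectorization is differentiable with respect to the layer's parameters):

>>> import torch
>>> from torchph.nn.slayer import SLayerRationalHat
>>> layer = SLayerRationalHat(n_elements=8, point_dimension=2, radius_init=0.5)
>>> barcodes = [torch.rand(30, 2)]
>>> out = layer(barcodes)   # expected shape: (1, 8)
>>> out.sum().backward()    # gradients flow back to the layer's parameters
>>> any(p.grad is not None for p in layer.parameters())
True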

torchph.nn.slayer.prepare_batch(batch: List[torch.Tensor], point_dim: int = None) → Tuple[torch.Tensor, torch.Tensor, int, int][source]

This function ‘vectorizes’ the multisets in order to take advantage of GPU processing. The policy is to pad all multisets in the batch to the largest number of points occurring in the batch, i.e., max(t.size()[0] for t in batch).

Parameters
  • batch – The input batch to process as a list of tensors.

  • point_dim – The dimension of the points the inputs consist of.

Returns

A four-tuple consisting of (1) the constructed batch, i.e., a tensor of size batch_size x n_max_points x point_dim; (2) a tensor not_dummy of size batch_size x n_max_points, where a 1 at position (i, j) marks a real point of the i-th multiset and a 0 marks a dummy point used for padding; (3) the maximum number of points per multiset; and (4) the batch size.

Example:

>>> from torchph.nn.slayer import prepare_batch
>>> import torch
>>> x = [torch.rand(10,2), torch.rand(20,2)]
>>> batch, not_dummy, max_pts, batch_size = prepare_batch(x)
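
Given the Returns description above, the shapes in this example are expected to be:

>>> batch.shape        # (batch_size, n_max_points, point_dim)
torch.Size([2, 20, 2])
>>> not_dummy.shape    # 1 marks a real point, 0 a padding (dummy) point
torch.Size([2, 20])
>>> max_pts, batch_size
(20, 2)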