jax.lax.conv_transpose

jax.lax.conv_transpose(lhs, rhs, strides, padding, rhs_dilation=None, dimension_numbers=None, transpose_kernel=False, precision=None)[source]

Convenience wrapper for calculating the N-d convolution “transpose”.

This function directly calculates a fractionally strided conv rather than indirectly calculating the gradient (transpose) of a forward convolution.
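A sketch of what "fractionally strided" means: `conv_transpose` is equivalent to `lax.conv_general_dilated` with the strides moved into `lhs_dilation` (the shapes and layouts below are illustrative, not required):

```python
import jax.numpy as jnp
from jax import lax

x = jnp.arange(5.0).reshape(1, 5, 1)   # NHC: batch 1, width 5, 1 channel
k = jnp.ones((3, 1, 1))                # HIO: width 3, 1 in channel, 1 out channel

# conv_transpose dilates the *input* by `strides` ("fractional striding"),
# then applies an ordinary stride-1 convolution.
a = lax.conv_transpose(x, k, strides=(2,), padding=[(1, 1)])

# The same computation written directly with lhs_dilation:
b = lax.conv_general_dilated(
    x, k, window_strides=(1,), padding=[(1, 1)],
    lhs_dilation=(2,), dimension_numbers=('NHC', 'HIO', 'NHC'))

assert jnp.allclose(a, b)  # identical results, shape (1, 9, 1)
```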

Parameters
  • lhs (Any) – a rank n+2 dimensional input array.

  • rhs (Any) – a rank n+2 dimensional array of kernel weights.

  • strides (Sequence[int]) – sequence of n integers giving the fractional stride (equivalently, the input-dilation factor) in each spatial dimension.

  • padding (Union[str, Sequence[Tuple[int, int]]]) – either ‘SAME’ or ‘VALID’, which sets the padding to the transpose of the corresponding forward conv, or a sequence of n integer 2-tuples describing before-and-after padding for each of the n spatial dimensions.

  • rhs_dilation (Optional[Sequence[int]]) – None, or a sequence of n integers, giving the dilation factor to apply in each spatial dimension of rhs. RHS dilation is also known as atrous convolution.

  • dimension_numbers (Union[None, ConvDimensionNumbers, Tuple[str, str, str]]) – tuple of dimension descriptors as in lax.conv_general_dilated. Defaults to the TensorFlow convention (e.g. ('NHWC', 'HWIO', 'NHWC') for 2-D).

  • transpose_kernel (bool) – if True, flips the spatial axes and swaps the input/output channel axes of the kernel. This makes the output of this function identical to that of gradient-derived functions such as keras.layers.Conv2DTranspose applied to the same kernel. For typical use in neural networks this is unnecessary and only makes input/output channel specification confusing.

  • precision (Optional[Any]) – Optional. Either None, which means the default precision for the backend, or a lax.Precision enum value (Precision.DEFAULT, Precision.HIGH or Precision.HIGHEST).
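A short check of the transpose_kernel semantics described above: with transpose_kernel=True, conv_transpose reproduces the input gradient of a forward convolution computed with the same HWIO kernel (a sketch; the shapes here are illustrative):

```python
import jax
import jax.numpy as jnp
from jax import lax

x = jnp.ones((1, 8, 8, 3))            # NHWC input
k = 0.1 * jnp.ones((3, 3, 3, 16))     # HWIO kernel

def fwd(x):
  return lax.conv_general_dilated(
      x, k, window_strides=(2, 2), padding='SAME',
      dimension_numbers=('NHWC', 'HWIO', 'NHWC'))

y, vjp_fn = jax.vjp(fwd, x)
ct = jnp.ones_like(y)                  # cotangent for the backward pass
grad_x, = vjp_fn(ct)

# transpose_kernel=True flips the spatial axes and swaps the input/output
# channel axes, matching the gradient-derived transposed convolution.
gt = lax.conv_transpose(ct, k, strides=(2, 2), padding='SAME',
                        transpose_kernel=True)
assert jnp.allclose(grad_x, gt)
```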

Return type

Any

Returns

Transposed N-d convolution, with output padding following the conventions of keras.layers.Conv2DTranspose.
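A minimal usage example, assuming the default TensorFlow-style layouts (NHWC input, HWIO kernel):

```python
import jax.numpy as jnp
from jax import lax

x = jnp.ones((1, 8, 8, 3))     # NHWC: batch 1, 8x8 spatial, 3 channels
k = jnp.ones((3, 3, 3, 16))    # HWIO: 3x3 kernel, 3 in channels, 16 out channels

# 'SAME' padding upsamples each spatial dimension by the stride: 8 -> 16.
y = lax.conv_transpose(x, k, strides=(2, 2), padding='SAME')
print(y.shape)  # (1, 16, 16, 16)
```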