Type promotion semantics#
This document describes JAX's type promotion rules, i.e., the result of
jax.numpy.promote_types() for each pair of types.
For some background on the considerations that went into the design of what is described below, see Design of Type Promotion Semantics for JAX.
JAX's type promotion behavior is determined via the following type promotion lattice:

[Lattice figure not reproduced here; it shows each dtype, including weakly-typed variants, and the promotion paths between them. For more about weak types, see Weakly-typed values in JAX below.]

Promotion between any two types is given by their join on this lattice, which generates the following binary promotion table:

[Binary promotion table figure not reproduced here.]
JAX's type promotion rules differ from those of NumPy, as given by numpy.promote_types(), in those cells highlighted with a green background in the table above. There are three key classes of differences:
When promoting a weakly typed value against a typed JAX value of the same category, JAX always prefers the precision of the JAX value. For example, jnp.int16(1) + 1 will return int16 rather than promoting to int64 as in NumPy. Note that this applies only to Python scalar values; if the constant is a NumPy array then the lattice above is used for type promotion. For example, jnp.int16(1) + np.array(1) will return int64.
When promoting an integer or boolean type against a floating-point or complex type, JAX always prefers the type of the floating-point or complex type.
JAX supports the bfloat16 non-standard 16-bit floating point type (jax.numpy.bfloat16), which is useful for neural network training. The only notable promotion behavior is with respect to IEEE-754 float16, with which bfloat16 promotes to a float32.
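Each of these three behaviors can be checked directly; a minimal sketch using jax.numpy.promote_types() and a scalar addition:

```python
import jax.numpy as jnp

# 1. A Python scalar defers to the precision of the typed JAX value.
print((jnp.int16(1) + 1).dtype)                      # int16

# 2. Integer and boolean types defer to floating-point/complex types.
print(jnp.promote_types(jnp.int64, jnp.float16))     # float16

# 3. bfloat16 and float16 share no common 16-bit supertype,
#    so they promote to float32.
print(jnp.promote_types(jnp.bfloat16, jnp.float16))  # float32
```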
The differences between NumPy and JAX are motivated by the fact that accelerator devices, such as GPUs and TPUs, either pay a significant performance penalty to use 64-bit floating point types (GPUs) or do not support 64-bit floating point types at all (TPUs). Classic NumPy’s promotion rules are too willing to overpromote to 64-bit types, which is problematic for a system designed to run on accelerators.
JAX uses floating point promotion rules that are more suited to modern accelerator devices and are less aggressive about promoting floating point types. The promotion rules used by JAX for floating-point types are similar to those used by PyTorch.
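A concrete illustration of this difference, comparing numpy.promote_types() with its JAX counterpart:

```python
import numpy as np
import jax.numpy as jnp

# NumPy promotes int32 + float32 all the way to a 64-bit float...
print(np.promote_types(np.int32, np.float32))     # float64

# ...while JAX keeps the result at 32 bits, which is friendlier to
# accelerators with slow or absent 64-bit floating point support.
print(jnp.promote_types(jnp.int32, jnp.float32))  # float32
```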
Effects of Python operator dispatch#
Keep in mind that Python operators like + will dispatch based on the Python type of
the two values being added. This means that, for example,
np.int16(1) + 1 will
promote using NumPy rules, whereas
jnp.int16(1) + 1 will promote using JAX rules.
This can lead to potentially confusing non-associative promotion semantics when
the two types of promotion are combined;
for example with
np.int16(1) + 1 + jnp.int16(1).
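A sketch of this dispatch behavior follows. The result dtypes depend on which library's rules apply at each step, and NumPy's own scalar promotion has changed across versions, so no specific dtypes are asserted here:

```python
import numpy as np
import jax.numpy as jnp

# Evaluated left to right: NumPy rules apply to the first addition,
# then JAX rules apply to the second.
a = (np.int16(1) + 1) + jnp.int16(1)

# Grouped the other way: JAX rules apply first.
b = np.int16(1) + (1 + jnp.int16(1))

# The two groupings may produce different dtypes, depending on
# dispatch order and the NumPy version in use.
print(a.dtype, b.dtype)
```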
Weakly-typed values in JAX#
Weakly-typed values in JAX can in most cases be thought of as having promotion behavior
equivalent to that of Python scalars, such as the integer scalar
2 in the following:
>>> x = jnp.arange(5, dtype='int8')
>>> 2 * x
DeviceArray([0, 2, 4, 6, 8], dtype=int8)
JAX’s weak type framework is designed to prevent unwanted type promotion within
binary operations between JAX values and values with no explicitly user-specified type,
such as Python scalar literals. For example, if
2 were not treated as weakly-typed,
the expression above would lead to an implicit type promotion:
>>> jnp.int32(2) * x
DeviceArray([0, 2, 4, 6, 8], dtype=int32)
When used in JAX, Python scalars are sometimes promoted to DeviceArray objects, for example during JIT compilation. To maintain the desired promotion semantics in this case, DeviceArray objects carry a weak_type flag that can be seen in an array's string representation:
>>> jnp.asarray(2)
DeviceArray(2, dtype=int32, weak_type=True)
If the dtype is specified explicitly, it will instead result in a standard strongly-typed array value:

>>> jnp.asarray(2, dtype='int32')
DeviceArray(2, dtype=int32)
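The flag is also exposed programmatically; a minimal sketch using the array's weak_type attribute:

```python
import jax.numpy as jnp

x = jnp.asarray(2)                 # Python scalar: remains weakly typed
y = jnp.asarray(2, dtype='int32')  # explicit dtype: strongly typed

print(x.weak_type)  # True
print(y.weak_type)  # False
```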