{ "cells": [ { "cell_type": "markdown", "metadata": { "id": "6_117sy0CGEU" }, "source": [ "# JAX As Accelerated NumPy\n", "\n", "[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/google/jax/blob/main/docs/jax-101/01-jax-basics.ipynb)\n", "\n", "*Authors: Rosalia Schneider & Vladimir Mikulik*\n", "\n", "In this first section you will learn the very fundamentals of JAX." ] }, { "cell_type": "markdown", "metadata": { "id": "CXjHL4L6ku3-" }, "source": [ "## Getting started with JAX NumPy\n", "\n", "Fundamentally, JAX is a library that enables transformations of array-manipulating programs written with a NumPy-like API.\n", "\n", "Over the course of this series of guides, we will unpack exactly what that means. For now, you can think of JAX as *differentiable NumPy that runs on accelerators*.\n", "\n", "The code below shows how to import JAX and create a vector." ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "id": "ZqUzvqF1B1TO" }, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "WARNING:absl:No GPU/TPU found, falling back to CPU. (Set TF_CPP_MIN_LOG_LEVEL=0 and rerun for more info.)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "[0 1 2 3 4 5 6 7 8 9]\n" ] } ], "source": [ "import jax\n", "import jax.numpy as jnp\n", "\n", "x = jnp.arange(10)\n", "print(x)" ] }, { "cell_type": "markdown", "metadata": { "id": "rPBmlAxXlBAy" }, "source": [ "So far, everything is just like NumPy. A big appeal of JAX is that you don't need to learn a new API. Many common NumPy programs would run just as well in JAX if you substitute jnp for np. However, there are some important differences, which we touch on at the end of this section.\n", "\n", "You will notice the first difference if you check the type of x. It is a variable of type DeviceArray, which is the way JAX represents arrays."
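, "\n", "To illustrate the np-to-jnp substitution mentioned above, here is a minimal sketch in which the same program runs under either module:\n", "\n", "```python\n", "import numpy as np\n", "import jax.numpy as jnp\n", "\n", "print(np.dot(np.arange(3.0), np.arange(3.0)))    # NumPy\n", "print(jnp.dot(jnp.arange(3.0), jnp.arange(3.0))) # identical code with jnp\n", "```"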
] }, { "cell_type": "code", "execution_count": 2, "metadata": { "id": "3fLtgPUAn7mi" }, "outputs": [ { "data": { "text/plain": [ "DeviceArray([0, 1, 2, 3, 4, 5, 6, 7, 8, 9], dtype=int32)" ] }, "execution_count": 2, "metadata": { "tags": [] }, "output_type": "execute_result" } ], "source": [ "x" ] }, { "cell_type": "markdown", "metadata": { "id": "Yx8VofzzoHFH" }, "source": [ "One useful feature of JAX is that the same code can be run on different backends -- CPU, GPU and TPU.\n", "\n", "We will now perform a dot product to demonstrate that it can be done on different devices without changing the code. We use %timeit to check the performance.\n", "\n", "(Technical detail: when a JAX function is called (including jnp.array\n", "creation), the corresponding operation is dispatched to an accelerator to be\n", "computed asynchronously when possible. The returned array is therefore not\n", "necessarily 'filled in' as soon as the function returns. Thus, if we don't\n", "require the result immediately, the computation won't block Python execution.\n", "Therefore, unless we block_until_ready or convert the array to a regular\n", "Python type, we will only time the dispatch, not the actual computation. See\n", "[Asynchronous dispatch](https://jax.readthedocs.io/en/latest/async_dispatch.html#asynchronous-dispatch)\n", "in the JAX docs.)" ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "id": "mRvjVxoqo-Bi" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "The slowest run took 7.39 times longer than the fastest. 
This could mean that an intermediate result is being cached.\n", "100 loops, best of 5: 7.85 ms per loop\n" ] } ], "source": [ "long_vector = jnp.arange(int(1e7))\n", "\n", "%timeit jnp.dot(long_vector, long_vector).block_until_ready()" ] }, { "cell_type": "markdown", "metadata": { "id": "DKBB0zs-p-RC" }, "source": [ "**Tip**: Try running the code above twice, once without an accelerator, and once with a GPU runtime (while in Colab, click *Runtime* → *Change Runtime Type* and choose GPU). Notice how much faster it runs on a GPU." ] }, { "cell_type": "markdown", "metadata": { "id": "PkCpI-v0uQQO" }, "source": [ "## JAX first transformation: grad\n", "\n", "A fundamental feature of JAX is that it allows you to transform functions.\n", "\n", "One of the most commonly used transformations is jax.grad, which takes a numerical function written in Python and returns you a new Python function that computes the gradient of the original function. \n", "\n", "To use it, let's first define a function that takes an array and returns the sum of squares." ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "id": "LuaGUVRUvbzQ" }, "outputs": [], "source": [ "def sum_of_squares(x):\n", " return jnp.sum(x**2)" ] }, { "cell_type": "markdown", "metadata": { "id": "QAqloI1Wvtp2" }, "source": [ "Applying jax.grad to sum_of_squares will return a different function, namely the gradient of sum_of_squares with respect to its first parameter x. \n", "\n", "Then, you can use that function on an array to return the derivatives with respect to each element of the array." ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "id": "dKeorwJfvpeI" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "30.0\n", "[2. 4. 6. 
8.]\n" ] } ], "source": [ "sum_of_squares_dx = jax.grad(sum_of_squares)\n", "\n", "x = jnp.asarray([1.0, 2.0, 3.0, 4.0])\n", "\n", "print(sum_of_squares(x))\n", "\n", "print(sum_of_squares_dx(x))" ] }, { "cell_type": "markdown", "metadata": { "id": "VfBt5CYbyKUX" }, "source": [ "You can think of jax.grad by analogy to the $\\nabla$ operator from vector calculus. Given a function $f(x)$, $\\nabla f$ represents the function that computes $f$'s gradient, i.e.\n", "\n", "$$\n", "(\\nabla f)(x)_i = \\frac{\\partial f}{\\partial x_i}(x).\n", "$$\n", "\n", "Analogously, jax.grad(f) is the function that computes the gradient, so jax.grad(f)(x) is the gradient of f at x.\n", "\n", "(Like $\\nabla$, jax.grad will only work on functions with a scalar output -- it will raise an error otherwise.)\n", "\n", "This makes the JAX API quite different from other autodiff libraries like TensorFlow and PyTorch, where we compute the gradient using the loss tensor itself (e.g. by calling loss.backward()). The JAX API works directly with functions, staying closer to the underlying math. Once you become accustomed to this way of doing things, it feels natural: your loss function in code really is a function of parameters and data, and you find its gradient just like you would in the math.\n", "\n", "This approach makes it straightforward to control which variables to differentiate with respect to. By default, jax.grad will find the gradient with respect to the first argument. In the example below, the result of sum_squared_error_dx will be the gradient of sum_squared_error with respect to x."
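, "\n", "As a minimal sketch of this default behaviour, jax.grad of a two-argument function differentiates with respect to the first argument only:\n", "\n", "```python\n", "import jax\n", "\n", "f = lambda a, b: a * b\n", "print(jax.grad(f)(2.0, 3.0))  # d(a*b)/da = b\n", "```"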
] }, { "cell_type": "code", "execution_count": 6, "metadata": { "id": "f3NfaVu4yrQE" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[-0.20000005 -0.19999981 -0.19999981 -0.19999981]\n" ] } ], "source": [ "def sum_squared_error(x, y):\n", " return jnp.sum((x-y)**2)\n", "\n", "sum_squared_error_dx = jax.grad(sum_squared_error)\n", "\n", "y = jnp.asarray([1.1, 2.1, 3.1, 4.1])\n", "\n", "print(sum_squared_error_dx(x, y))" ] }, { "cell_type": "markdown", "metadata": { "id": "1tOztA5zpLWN" }, "source": [ "To find the gradient with respect to a different argument (or several), you can set argnums:" ] }, { "cell_type": "code", "execution_count": 7, "metadata": { "id": "FQSczVQkqIPY" }, "outputs": [ { "data": { "text/plain": [ "(DeviceArray([-0.20000005, -0.19999981, -0.19999981, -0.19999981], dtype=float32),\n", " DeviceArray([0.20000005, 0.19999981, 0.19999981, 0.19999981], dtype=float32))" ] }, "execution_count": 7, "metadata": { "tags": [] }, "output_type": "execute_result" } ], "source": [ "jax.grad(sum_squared_error, argnums=(0, 1))(x, y) # Find gradient wrt both x & y" ] }, { "cell_type": "markdown", "metadata": { "id": "yQAMTnZSqo-t" }, "source": [ "Does this mean that when doing machine learning, we need to write functions with gigantic argument lists, with an argument for each model parameter array? No. JAX comes equipped with machinery for bundling arrays together in data structures called 'pytrees', on which more in a [later guide](https://colab.research.google.com/github/google/jax/blob/main/docs/jax-101/05.1-pytrees.ipynb). So, most often, use of jax.grad looks like this:\n", "\n", "```python\n", "def loss_fn(params, data):\n", " ...\n", "\n", "grads = jax.grad(loss_fn)(params, data_batch)\n", "```" ] }, { "cell_type": "markdown", "metadata": { "id": "oBowiovisT97" }, "source": [ "where params is, for example, a nested dict of arrays, and the returned grads is another nested dict of arrays with the same structure."
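, "\n", "As a minimal sketch (the dict layout and parameter names here are purely illustrative), the structure of params carries over to grads:\n", "\n", "```python\n", "import jax\n", "import jax.numpy as jnp\n", "\n", "def loss_fn(params, data):\n", "    preds = params['w'] * data          # a toy 'model'\n", "    return jnp.sum((preds - 3.0) ** 2)  # scalar loss\n", "\n", "params = {'w': jnp.array(2.0)}\n", "grads = jax.grad(loss_fn)(params, jnp.array([1.0, 2.0]))\n", "print(grads)  # a dict with the same structure as params\n", "```"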
] }, { "cell_type": "markdown", "metadata": { "id": "LNjf9jUEsZZ8" }, "source": [ "## Value and Grad\n", "\n", "Often, you need to find both the value and the gradient of a function, e.g. if you want to log the training loss. JAX has a handy sister transformation for efficiently doing that:" ] }, { "cell_type": "code", "execution_count": 8, "metadata": { "id": "dWg4_-h3sYwl" }, "outputs": [ { "data": { "text/plain": [ "(DeviceArray(0.03999995, dtype=float32),\n", " DeviceArray([-0.20000005, -0.19999981, -0.19999981, -0.19999981], dtype=float32))" ] }, "execution_count": 8, "metadata": { "tags": [] }, "output_type": "execute_result" } ], "source": [ "jax.value_and_grad(sum_squared_error)(x, y)" ] }, { "cell_type": "markdown", "metadata": { "id": "QVT2EWHJsvvv" }, "source": [ "which returns a tuple of, you guessed it, (value, grad). To be precise, for any f,\n", "\n", "```python\n", "jax.value_and_grad(f)(*xs) == (f(*xs), jax.grad(f)(*xs))\n", "```" ] }, { "cell_type": "markdown", "metadata": { "id": "QmHTVpAks3OX" }, "source": [ "## Auxiliary data\n", "\n", "In addition to wanting to log the value, we often want to report some intermediate results obtained in computing the loss function. 
But if we try doing that with regular jax.grad, we run into trouble:" ] }, { "cell_type": "code", "execution_count": 9, "metadata": { "id": "ffGCEzT4st41", "tags": [ "raises-exception" ] }, "outputs": [ { "ename": "TypeError", "evalue": "ignored", "output_type": "error", "traceback": [ "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m", "\u001b[0;31mFilteredStackTrace\u001b[0m Traceback (most recent call last)", "\u001b[0;32m\u001b[0m in \u001b[0;36m\u001b[0;34m()\u001b[0m\n\u001b[1;32m 3\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m----> 4\u001b[0;31m \u001b[0mjax\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mgrad\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0msquared_error_with_aux\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mx\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0my\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m", "\u001b[0;31mFilteredStackTrace\u001b[0m: TypeError: Gradient only defined for scalar-output functions. Output was (DeviceArray(0.03999995, dtype=float32), DeviceArray([-0.10000002, -0.0999999 , -0.0999999 , -0.0999999 ], dtype=float32)).\n\nThe stack trace above excludes JAX-internal frames." ] } ], "source": [ "def squared_error_with_aux(x, y):\n", " return sum_squared_error(x, y), x-y\n", "\n", "jax.grad(squared_error_with_aux)(x, y)" ] }, { "cell_type": "markdown", "metadata": { "id": "IUubno3nth4i" }, "source": [ "This is because jax.grad is only defined on scalar functions, and our new function returns a tuple. But we need to return a tuple to return our intermediate results! 
This is where has_aux comes in:" ] }, { "cell_type": "code", "execution_count": 10, "metadata": { "id": "uzUFihyatgiF" }, "outputs": [ { "data": { "text/plain": [ "(DeviceArray([-0.20000005, -0.19999981, -0.19999981, -0.19999981], dtype=float32),\n", " DeviceArray([-0.10000002, -0.0999999 , -0.0999999 , -0.0999999 ], dtype=float32))" ] }, "execution_count": 10, "metadata": { "tags": [] }, "output_type": "execute_result" } ], "source": [ "jax.grad(squared_error_with_aux, has_aux=True)(x, y)" ] }, { "cell_type": "markdown", "metadata": { "id": "g5s3UiFauwDk" }, "source": [ "has_aux signifies that the function returns a pair, (out, aux). It makes jax.grad ignore aux, passing it through to the user, while differentiating the function as if only out was returned." ] }, { "cell_type": "markdown", "metadata": { "id": "fk4FUXe7vsW4" }, "source": [ "## Differences from NumPy\n", "\n", "The jax.numpy API closely follows that of NumPy. However, there are some important differences. We cover many of these in future guides, but it's worth pointing some out now.\n", "\n", "The most important difference, and in some sense the root of all the rest, is that JAX is designed to be _functional_, as in _functional programming_. The reason behind this is that the kinds of program transformations that JAX enables are much more feasible in functional-style programs.\n", "\n", "An introduction to functional programming (FP) is out of scope of this guide. If you already are familiar with FP, you will find your FP intuition helpful while learning JAX. If not, don't worry! The important feature of functional programming to grok when working with JAX is very simple: don't write code with side-effects.\n", "\n", "A side-effect is any effect of a function that doesn't appear in its output. 
One example is modifying an array in place:" ] }, { "cell_type": "code", "execution_count": 11, "metadata": { "id": "o_YBuLQC1wPJ" }, "outputs": [ { "data": { "text/plain": [ "array([123, 2, 3])" ] }, "execution_count": 11, "metadata": { "tags": [] }, "output_type": "execute_result" } ], "source": [ "import numpy as np\n", "\n", "x = np.array([1, 2, 3])\n", "\n", "def in_place_modify(x):\n", " x[0] = 123\n", " return None\n", "\n", "in_place_modify(x)\n", "x" ] }, { "cell_type": "markdown", "metadata": { "id": "JTtUihVZ13F6" }, "source": [ "The side-effectful function modifies its argument, but returns a completely unrelated value. The modification is a side-effect. \n", "\n", "The code below will run in NumPy. However, JAX arrays won't allow themselves to be modified in-place:" ] }, { "cell_type": "code", "execution_count": 12, "metadata": { "id": "u6grTYIVcZ3f", "tags": [ "raises-exception" ] }, "outputs": [ { "ename": "TypeError", "evalue": "ignored", "output_type": "error", "traceback": [ "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m", "\u001b[0;31mTypeError\u001b[0m Traceback (most recent call last)", "\u001b[0;32m\u001b[0m in \u001b[0;36m\u001b[0;34m()\u001b[0m\n\u001b[0;32m----> 1\u001b[0;31m \u001b[0min_place_modify\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mjnp\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0marray\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mx\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;31m# Raises error when we cast input to jnp.ndarray\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m", "\u001b[0;32m\u001b[0m in \u001b[0;36min_place_modify\u001b[0;34m(x)\u001b[0m\n\u001b[1;32m 4\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 5\u001b[0m \u001b[0;32mdef\u001b[0m \u001b[0min_place_modify\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mx\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m----> 6\u001b[0;31m 
\u001b[0mx\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0;36m0\u001b[0m\u001b[0;34m]\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0;36m123\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 7\u001b[0m \u001b[0;32mreturn\u001b[0m \u001b[0;32mNone\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 8\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n", "\u001b[0;32m/usr/local/lib/python3.7/dist-packages/jax/_src/numpy/lax_numpy.py\u001b[0m in \u001b[0;36m_unimplemented_setitem\u001b[0;34m(self, i, x)\u001b[0m\n\u001b[1;32m 6594\u001b[0m \u001b[0;34m\"or another .at[] method: \"\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 6595\u001b[0m \"https://jax.readthedocs.io/en/latest/jax.ops.html\")\n\u001b[0;32m-> 6596\u001b[0;31m \u001b[0;32mraise\u001b[0m \u001b[0mTypeError\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mmsg\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mformat\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mtype\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 6597\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 6598\u001b[0m \u001b[0;32mdef\u001b[0m \u001b[0m_operator_round\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mnumber\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mndigits\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;32mNone\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n", "\u001b[0;31mTypeError\u001b[0m: '' object does not support item assignment. JAX arrays are immutable. 
Instead of x[idx] = y, use x = x.at[idx].set(y) or another .at[] method: https://jax.readthedocs.io/en/latest/_autosummary/jax.numpy.ndarray.at.html" ] } ], "source": [ "in_place_modify(jnp.array(x)) # Raises error when we cast input to jnp.ndarray" ] }, { "cell_type": "markdown", "metadata": { "id": "RGqVfYSpc49s" }, "source": [ "Helpfully, the error points us to JAX's side-effect-free way of doing the same thing via the [jax.numpy.ndarray.at](https://jax.readthedocs.io/en/latest/_autosummary/jax.numpy.ndarray.at.html) index update operators (note that the older [jax.ops.index_*](https://jax.readthedocs.io/en/latest/jax.ops.html#indexed-update-functions-deprecated) functions are deprecated). They are analogous to in-place modification by index, but create a new array with the corresponding modifications made:" ] }, { "cell_type": "code", "execution_count": 13, "metadata": { "id": "Rmklk6BB2xF0" }, "outputs": [ { "data": { "text/plain": [ "DeviceArray([123, 2, 3], dtype=int32)" ] }, "execution_count": 13, "metadata": { "tags": [] }, "output_type": "execute_result" } ], "source": [ "def jax_in_place_modify(x):\n", " return x.at[0].set(123)\n", "\n", "y = jnp.array([1, 2, 3])\n", "jax_in_place_modify(y)" ] }, { "cell_type": "markdown", "metadata": { "id": "91tn_25vdrNf" }, "source": [ "Note that the old array was untouched, so there is no side-effect:" ] }, { "cell_type": "code", "execution_count": 14, "metadata": { "id": "KQGXig4Hde6T" }, "outputs": [ { "data": { "text/plain": [ "DeviceArray([1, 2, 3], dtype=int32)" ] }, "execution_count": 14, "metadata": { "tags": [] }, "output_type": "execute_result" } ], "source": [ "y" ] }, { "cell_type": "markdown", "metadata": { "id": "d5TibzPO25qa" }, "source": [ "Side-effect-free code is sometimes called *functionally pure*, or just *pure*.\n", "\n", "Isn't the pure version less efficient? Strictly, yes; we are creating a new array. 
However, as we will explain in the next guide, JAX computations are often compiled before being run using another program transformation, jax.jit. If we don't use the old array after modifying it 'in place' using indexed update operators, the compiler can recognise that it can in fact compile it to an in-place modification, resulting in efficient code in the end.\n", "\n", "Of course, it's possible to mix side-effectful Python code and functionally pure JAX code, and we will touch on this more later. As you get more familiar with JAX, you will learn how and when this can work. As a rule of thumb, however, any functions intended to be transformed by JAX should avoid side-effects, and the JAX primitives themselves will try to help you do that.\n", "\n", "We will explain other places where the JAX idiosyncrasies become relevant as they come up. There is even a section that focuses entirely on getting used to the functional programming style of handling state: [Part 7: Problem of State](https://colab.research.google.com/github/google/jax/blob/main/docs/jax-101/07-state.ipynb). However, if you're impatient, you can find a [summary of JAX's sharp edges](https://jax.readthedocs.io/en/latest/notebooks/Common_Gotchas_in_JAX.html) in the JAX docs." ] }, { "cell_type": "markdown", "metadata": { "id": "dFn_VBFFlGCz" }, "source": [ "## Your first JAX training loop\n", "\n", "We still have much to learn about JAX, but you already know enough to understand how we can use JAX to build a simple training loop.\n", "\n", "To keep things simple, we'll start with a linear regression.\n", "\n", "Our data is sampled according to $y = w_{true} x + b_{true} + \\epsilon$."
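, "\n", "Such sampling can be sketched as follows (the parameter values and sample size here are illustrative only):\n", "\n", "```python\n", "import numpy as np\n", "\n", "w_true, b_true = 2.0, -1.0  # illustrative 'true' parameters\n", "xs = np.random.normal(size=(100,))\n", "noise = 0.1 * np.random.normal(size=(100,))\n", "ys = w_true * xs + b_true + noise\n", "```"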
] }, { "cell_type": "code", "execution_count": 15, "metadata": { "id": "WGgyEWFqrPq1" }, "outputs": [ { "data": { "image/png": "iVBORw0KGgoAAAANSUhEUgAAAXIAAAD4CAYAAADxeG0DAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4yLjIsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy+WH4yJAAAVT0lEQVR4nO3df6zddX3H8dfrHk6Xc/3BqeE60wt3ZZt2EQt0XhHXbE5kFp1CbZTphplzWTMzjUxWQoUI23Ql63SaaLY0kSyLREGtVzJ1FYLOaAbz1ttSoNQRI8KpxhK96OgVbm/f++PeA7en5/f3e358v+f5SJrc8+N+z+cEePXD+/v+fD6OCAEAsmts0AMAACRDkANAxhHkAJBxBDkAZBxBDgAZd8YgPvSss86K9evXD+KjASCz9u/f/3hETNQ+P5AgX79+vWZnZwfx0QCQWbYfqfc8pRUAyDiCHAAyjiAHgIwjyAEg4whyAMi4gXStAMComZmraPe+Izo6v6B15ZJ2bNmgrZsmU7k2QQ4APTYzV9HOvYe0sLgkSarML2jn3kOSlEqYU1oBgB7bve/IMyFetbC4pN37jqRyfYIcAHrs6PxCR893KpUgt122/XnbD9k+bPtVaVwXAPJgXbnU0fOdSmtG/nFJ/xkRvyXpAkmHU7ouAGTeji0bVCoWTnmuVCxox5YNqVw/8c1O22dK+j1J75SkiHha0tNJrwsAeVG9odmrrhUnPbPT9oWS9kh6UMuz8f2S3hcRT9a8b7uk7ZI0NTX18kceqbv3CwCgAdv7I2K69vk0SitnSPptSf8SEZskPSnputo3RcSeiJiOiOmJidN2YQQAdCmNIH9M0mMRce/K489rOdgBAH2QOMgj4seSHrVdrdq/VstlFgBAH6S1svO9km61vUbS9yX9WUrXBQC0kEqQR8QBSacV4AEAvcfKTgDIOIIcADKOIAeAjGMbWwBY0cs9w3uJIAcA9X7P8F6itAIA6v2e4b3EjBzASFhdNimPFxUhPbGw+EwJpdd7hvcSQQ4g92rLJj87vvjMa9USypmlouYXFk/73bT2DO8lSisAcq9e2WS1hcUl2erpnuG9RJADyL12yiPzxxe1a9tGTZZLsqTJckm7tm0c+hudEqUVACNgXbmkSoswX1cuaeumyUwEdy1m5AByr1V5JCsllEYIcgC5MTNX0eab79a5131Zm2++WzNzFUnLfeDlUrHu7xTszJRQGiHIAeRCtTOlMr+g0LPdKNUwv+ny8+rezPzIlRdkOsQlghxATrRa0LN102Rmb2a2ws1OALnQzoKerN7MbIUZOYBcKI/Xr4FnYUFPUqkFue2C7Tnb/5HWNQGgHTNzFf3fL0+c9nyx4Ex3o7QrzRn5+yQdTvF6ANCW3fuOaPFknPb8c9ackctSSq1UauS2z5b0h5I+LOn9aVwTwGjrZG/wRvXxJ+rsnZJHac3IPybpWkknU7oegBHWqpWwVqM6+CjUx6UUgtz2GyX9JCL2t3jfdtuztmePHTuW9GMB5Fine4Pv2LIhsxtepSGNGflmSZfb/oGkz0q6xPana98UEXsiYjoipicmJlL4WAB51ene4HnuEW9H4hp5ROyUtFOSbP++pL+JiKuSXhfA6Gq0yVWzUklee8TbQR85gKEz6qWSTqW6sjMiviHpG2leE8Doqc6ss3ii/SCwRB/AUBrlUkmnKK0AQMYR5ACQcQQ5AGQcQQ4AGUeQA0DG0bUCIBXVTa4q8wsq2FqK0CRtg31BkANIrLrJVXV/lKVY3lK2utmVJMK8hyitAEis3iZXVc02u0I6CHIAiTXazKrd15EMQQ4gsTNL9c/LrBqVfcEHhRo5gI6tPr3n
zFJRv3jq9PMyV2Ozq94iyAG0ZXVXiiVVT8icb3GcWrlU5EZnjxHkAFqq7Uo5/Zjj+krFgm66/LzeDQySqJEDaEOzrpRGCvZIndIzSAQ5gJY67TopFQv6yJUXEOJ9QpADaGpmrqIxu+l7imPW2vHiSJ6XOQyokQM4TaMbm6tVn2cZ/uAlDnLb50j6d0m/quV/rnsi4uNJrwtgMNq5sVmwKZ0MkTRm5CckXRMR37X9PEn7bd8ZEQ+mcG0AfbC6L3xsZcOrZk5GEOJDJHGQR8SPJP1o5edf2D4saVISQQ5kQKMNr5phpeZwSbVGbnu9pE2S7q3z2nZJ2yVpamoqzY8F0IGZuYpuuuOBZxbyjFk62W5juJY7UlipOVxS61qx/VxJX5B0dUT8vPb1iNgTEdMRMT0xMZHWxwLowMxcRTs+d/CU1ZjthHi1Z4WOlOGUyozcdlHLIX5rROxN45oA0jUzV9E1tx9sq3SyGl0pwy+NrhVL+pSkwxHx0eRDApC2G2YO6dZ7ftj20vqqyXJJ377ukp6MCelJo7SyWdI7JF1i+8DKnzekcF0AKZiZq+jTXYQ4tfDsSKNr5Vt6toQGYEisXtTTjuesKag8vkZH5xe0jnJKprCyE8ihTkspxYL14TdzEzOrCHIgJ2rbCtvFzczsI8iBDOu0fLLaeHFMD/7963swKvQbQQ5kVO2KzE4Ux6x/2HZ+D0aFQSDIgYzq5rAHiVJKHhHkQMZ0W0656uIpfWjrxh6NCoNEkAMZUl1iv9jJ5iiSNv/GCwjxHCPIgSG1emvZal/3TXc80HGIMxPPP4IcGEK1NzIr8wu6+rYDHV2jVCywwdWIIMiBIdTtjcwqbmiOFoIcGDIzc5WOb2SuHS9q7oOv69GIMOxS248cQHLVkkonigXrxjed16MRIQuYkQND4oaZQ/r0PT9s+/2W2NwKkghyYKC67Qkvl4o6cCOlFCwjyIEB6XaJfXHMuulySil4FjVyYEA66Uwpl4qylrtRdr/1AkopOAUzcmBAjrZZTqGMglZSmZHbvsz2EdsP274ujWsCWTczV9Hmm+/Wudd9WZtvvlszc5VTXl9XLrW8xphEGQUtpXH4ckHSJyX9gaTHJH3H9h0R8WDSawNZVXtCT2V+QTv3HtLsIz/V1x86pqPzCzqzVFSxYC0u1V9yXyqOade28ymjoKU0SisXSXo4Ir4vSbY/K+kKSQQ5RtLMXKXuMWsLi0untBfOLyyqOGatHS9q/vgirYToWhpBPinp0VWPH5P0yto32d4uabskTU1NpfCxwHDave9I22dlLp4Mja85g1WZSKRvXSsRsScipiNiemJiol8fC/TNzFxFm/7uax33hLd70xNoJI0ZeUXSOasen73yHDAyZuYq2vH5gw3r3c20c9MTaCaNIP+OpBfbPlfLAf42SX+cwnWBoVW7V/iTT53oKsRLxYJ2bNnQgxFilCQO8og4Yfs9kvZJKki6JSIeSDwyYEjV2yu8G2vHi7rxTedxcxOJpbIgKCK+IukraVwLGHZJ9wqnrRBpY2Un0KFOb04y80avEeRAh9aVS22XUz72RxcS4Og5Ns0COrRjywaVioWW75sslwhx9AUzcqCJeifZV8O52WHIdKOgn5iRAw3MzFX0/tsOqDK/oNByd8r7bzugmbmKtm6a1GSD/u+Czen16CuCHGhg5977dLLmuZMrz0v1SyylYkEfuZL9wtFflFaAGjfMHNJn7n1US1F/gc/C4nK8V8O6UekF6BeCHFil0wOQt26aJLgxcJRWgFU+c++jrd8EDBlm5BhZM3MV3XTHA5pfWJS0vHCnUTllteesad16CPQTQY6RNDNX0Y7PHdTiyWeD+2fHF1v+XmHM+vCbN/ZyaEDHCHKMpN37jpwS4u2Y5GYmhhRBjpHUaol9wdZShAq23v7Kc/ShrczCMbwIcoycmbmKLDU8jm2yXNK3r7ukn0MCEqFrBSOn2ZmaxYJZWo/MYUaO3Gq0T0qzbWh3v4VV
mcgeghy5VO8Un517D0lqvA0tuxUiqxKVVmzvtv2Q7ftsf9F2Oa2BAd2Ymato88136+rbDpx2is/C4pJ27zvScI8USirIqqQ18jslvSwizpf0PUk7kw8J6E51Ft6sI6Uyv6Ctmya1a9tGTZZLspZn4uxWiCxLVFqJiK+teniPpLckGw7QvXbO0izYktgjBfmSZtfKuyR9tdGLtrfbnrU9e+zYsRQ/FljWzlma7SzBB7KmZZDbvsv2/XX+XLHqPddLOiHp1kbXiYg9ETEdEdMTExPpjB5YZV2Dgx5Wa3QYBJBlLUsrEXFps9dtv1PSGyW9NoLpDnqj2ZFrVTu2bDilU6UWNzSRV4lq5LYvk3StpFdHxPF0hgScql4r4dW3HdAH9t6nhcWTpwV7NfDL40VFSE8sLHLoA3ItaR/5JyT9iqQ7vXwT6Z6I+MvEowJWaXQT8/jKST2re8S5iYlRlLRr5TfTGghQzw0zzdsJq6o94oQ4RhErOzF0ag98aFc7XStAHhHkGCr1DnxoVztdK0Aesfshhko3Bz5IdKRgtDEjx1BptzxSLhVlS/PH6UgBCHIMlUY7E6521cVTnNgDrEJpBUNlx5YNKo657msWIQ7Uw4wcfdVqhWb159VdK2vHi7rxTedROgEaIMjRN80Oe6gNc0IbaB9Bjp6pnX0/+dSJuoc9XHP7QUkivIEuEeToiXqz70aWIurOzAG0h5ud6Il2DnlYrbrEHkDnCHL0RDfL5VliD3SHIEdPNFouv3a8+Mxxa+3+DoDmqJGja7WbW1XbBCXpyadOnPb+UrHwzOu1B0CwxB7oHkGOrtTb3Opnxxd1zecOakw6bb+Uer3grU78AdAeghxdabS51dLJUL1bnONrzqBXHOgRauToSqc3JrmRCfQOQY6ulIqd/avDjUygd1IJctvX2A7bZ6VxPQy3mbnKM+dltoMbmUBvJa6R2z5H0usk/TD5cJAFrRburB0vanzNGdzIBPokjZud/yzpWklfSuFayIBm9W5L7FQI9Fmi0ortKyRVIuJgG+/dbnvW9uyxY8eSfCwGrFm9+08uniLEgT5rGeS277J9f50/V0j6gKQPtvNBEbEnIqYjYnpiYiLpuDFAO7ZsUKlYOOU5Dn0ABqdlaSUiLq33vO2Nks6VdNDLS67PlvRd2xdFxI9THSWGSnXGzYIeYDh0XSOPiEOSXlh9bPsHkqYj4vEUxoUBanWKj8SCHmCYsLITp2j3FB8AwyO1II+I9WldC/1RnXlX5hdUsLUUoTFLtSvvq3uFE+TAcGJGPqJqZ95LsZzedbZPkcQSe2CYEeQ516je3ekJPiyxB4YXQZ5jzerdnc6wWWIPDC82zcqxerPuar27kxl2uVSkPg4MMYI8xxrNuo/OL9Rd1FNPqVjQTZefl/bQAKSIIM+xRrPudeWStm6a1K5tGzW58p7qOZprx4sql4qypMlySbu2bWQ2Dgw5auQ5tmPLhqZnY7KoB8gHgjwnmq3GZCk9kG8EeQ60Wo1JcAP5Ro08B5p1pwDIP4I8B5p1pwDIP4I8B5p1pwDIP4I8B+r1hHPgMTA6uNmZA3SnAKONIM8JulOA0UVpBQAyjiAHgIxLHOS232v7IdsP2P7HNAYFAGhfohq57ddIukLSBRHxlO0XtvodNNfOwccAsFrSm53vlnRzRDwlSRHxk+RDGl0cfAygG0lLKy+R9Lu277X9X7Zf0eiNtrfbnrU9e+zYsYQfm08stQfQjZYzctt3SXpRnZeuX/n9F0i6WNIrJN1u+9cj4rQjfCNij6Q9kjQ9Pd3giN/RxlJ7AN1oGeQRcWmj12y/W9LeleD+H9snJZ0liSl3F9aVS6rUCW2W2gNoJmlpZUbSayTJ9kskrZH0eNJBjSqW2gPoRtKbnbdIusX2/ZKelvSn9coqo6yTLhSW2gPohgeRu9PT0zE7O9v3z+232i4UaXmGzTmYALphe39ETNc+z14rKVs9Ax+ztVTzF+XC4pKuuf2gJFoK
AaSDIE9R7Qy8NsSrliLoDweQGvZaSVG9PvBG6A8HkBZm5F2qdxOz035v+sMBpIEg70KjpfRnloqaX1hs+zr0hwNIA6WVLjRaSm+rbh/4VRdP0R8OoGcI8i40KonMH1/Urm0bNVkuyZImyyXt2rZRH9q6se7z3OgEkAZKK11otpS+0ZFrHMUGoFeYkXeBpfQAhgkz8i6wlB7AMCHIu0SpBMCwoLQCABlHkANAxhHkAJBxBDkAZBxBDgAZR5ADQMYlCnLbF9q+x/YB27O2L0prYACA9iSdkf+jpL+NiAslfXDlMQCgj5IGeUh6/srPZ0o6mvB6AIAOJV3ZebWkfbb/Sct/KfxOozfa3i5puyRNTU0l/FgAQFXLILd9l6QX1XnpekmvlfTXEfEF21dK+pSkS+tdJyL2SNojSdPT0/UPs0yg3ok9LKEHMAocDQ4IbuuX7ScklSMibFvSExHx/Fa/Nz09HbOzs11/bq3aE3uk5d0I2fMbQJ7Y3h8R07XPJ62RH5X06pWfL5H0vwmv15VGJ/ZwuDGAUZC0Rv4Xkj5u+wxJv9RKDbzfGp3Yw+HGAEZBoiCPiG9JenlKY+lasxN7ACDvcrGykxN7AIyyzBws0awrhRN7AIyyTAR5bVdKZX5BO/cekqRTwpzgBjCKMlFaoSsFABrLRJDTlQIAjWUiyBt1n9CVAgAZCXK6UgCgsUzc7KQrBQAay0SQS3SlAEAjmSitAAAaI8gBIOMIcgDIOIIcADKOIAeAjEt0QlDXH2ofk/RI3z+4d86S9PigB9FHfN/8G7XvnJXv+2sRMVH75ECCPG9sz9Y7fimv+L75N2rfOevfl9IKAGQcQQ4AGUeQp2PPoAfQZ3zf/Bu175zp70uNHAAyjhk5AGQcQQ4AGUeQp8D2btsP2b7P9hdtlwc9pl6z/VbbD9g+aTuzbVut2L7M9hHbD9u+btDj6SXbt9j+ie37Bz2WfrB9ju2v235w5d/l9w16TN0iyNNxp6SXRcT5kr4naeeAx9MP90vaJumbgx5Ir9guSPqkpNdLeqmkt9t+6WBH1VP/JumyQQ+ij05IuiYiXirpYkl/ldV/vgR5CiLiaxFxYuXhPZLOHuR4+iEiDkdE3k+/vkjSwxHx/Yh4WtJnJV0x4DH1TER8U9JPBz2OfomIH0XEd1d+/oWkw5IyeegBQZ6+d0n66qAHgVRMSnp01ePHlNH/0NGc7fWSNkm6d7Aj6U5mTggaNNt3SXpRnZeuj4gvrbznei3/79qt/Rxbr7TznYGss/1cSV+QdHVE/HzQ4+kGQd6miLi02eu23ynpjZJeGzlpzm/1nUdARdI5qx6fvfIccsJ2UcshfmtE7B30eLpFaSUFti+TdK2kyyPi+KDHg9R8R9KLbZ9re42kt0m6Y8BjQkpsW9KnJB2OiI8OejxJEOTp+ISk50m60/YB2/866AH1mu03235M0qskfdn2vkGPKW0rN7DfI2mflm+E3R4RDwx2VL1j+zOS/lvSBtuP2f7zQY+pxzZLeoekS1b+uz1g+w2DHlQ3WKIPABnHjBwAMo4gB4CMI8gBIOMIcgDIOIIcADKOIAeAjCPIASDj/h/USuotBmiqlQAAAABJRU5ErkJggg==\n", "text/plain": [ "