This chapter explains the NDArray library that is available in Apache MXNet.
mxnet.ndarray
Apache MXNet’s NDArray library defines the core data structures for all mathematical computations. Two fundamental jobs of NDArray are as follows −
- It supports fast execution on a wide range of hardware configurations.
- It automatically parallelises multiple operations across available hardware.
The example given below shows how one can create 1-D and 2-D NDArrays from a regular Python list −
import mxnet as mx
from mxnet import nd

x = nd.array([1,2,3,4,5,6,7,8,9,10])
print(x)
Output
The output is given below:
[ 1. 2. 3. 4. 5. 6. 7. 8. 9. 10.] <NDArray 10 @cpu(0)>
Example
y = nd.array([[1,2,3,4,5,6,7,8,9,10],
              [1,2,3,4,5,6,7,8,9,10],
              [1,2,3,4,5,6,7,8,9,10]])
print(y)
Output
This produces the following output −
[[ 1. 2. 3. 4. 5. 6. 7. 8. 9. 10.]
 [ 1. 2. 3. 4. 5. 6. 7. 8. 9. 10.]
 [ 1. 2. 3. 4. 5. 6. 7. 8. 9. 10.]]
<NDArray 3x10 @cpu(0)>
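NDArrays also support NumPy-style operations on the created arrays. The short sketch below, which uses only standard documented NDArray operations, shows element-wise arithmetic and conversion back to NumPy −

import mxnet as mx
from mxnet import nd

a = nd.array([1, 2, 3])
b = nd.array([4, 5, 6])

print(a + b)             # element-wise addition -> [5. 7. 9.]
print(a * b)             # element-wise multiplication -> [4. 10. 18.]
print(a.shape, a.dtype)  # (3,) <class 'numpy.float32'>
print(a.asnumpy())       # copies the data back into a NumPy array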
Now let us discuss in detail the classes, functions, and parameters of MXNet's ndarray API.
Classes
The following table lists the classes of MXNet's ndarray API −
Class | Definition |
---|---|
CachedOp(sym[, flags]) | A cached operator handle. |
NDArray(handle[, writable]) | It is used as an array object that represents a multi-dimensional, homogeneous array of fixed-size items. |
Functions and their parameters
Following are some of the important functions and their parameters covered by mxnet.ndarray API −
Function & its Parameters | Definition |
---|---|
Activation([data, act_type, out, name]) | It applies an activation function element-wise to the input. It supports relu, sigmoid, tanh, softrelu, softsign activation functions. |
BatchNorm([data, gamma, beta, moving_mean, …]) | It is used for batch normalisation. This function normalises a data batch by mean and variance. It applies a scale gamma and offset beta. |
BilinearSampler([data, grid, cudnn_off, …]) | This function applies bilinear sampling to the input feature map. It is the key component of “Spatial Transformer Networks”. If you are familiar with the remap function in OpenCV, the usage of this function is quite similar; the only difference is that it also has a backward pass. |
BlockGrad([data, out, name]) | As the name specifies, this function stops gradient computation. It stops the accumulated gradient of the inputs from flowing back through this operator. |
cast([data, dtype, out, name]) | This function will cast all elements of the input to a new type. |
Implementation Examples
In the examples below, we will be using the function BilinearSampler(), first to zoom the data out by a factor of two and then to shift the data horizontally by -1 pixel −
import mxnet as mx
from mxnet import nd

data = nd.array([[[[2, 5, 3, 6],
                   [1, 8, 7, 9],
                   [0, 4, 1, 8],
                   [2, 0, 3, 4]]]])
affine_matrix = nd.array([[2, 0, 0], [0, 2, 0]])
affine_matrix = nd.reshape(affine_matrix, shape=(1, 6))
grid = nd.GridGenerator(data=affine_matrix, transform_type='affine', target_shape=(4, 4))
output = nd.BilinearSampler(data, grid)
output
Output
When you execute the above code, you should see the following output:
[[[[0. 0. 0. 0. ]
   [0. 4.0000005 6.25 0. ]
   [0. 1.5 4. 0. ]
   [0. 0. 0. 0. ]]]]
<NDArray 1x1x4x4 @cpu(0)>
The above output shows the data zoomed out by a factor of two.
An example of shifting the data horizontally by -1 pixel is as follows −
import mxnet as mx
from mxnet import nd

data = nd.array([[[[2, 5, 3, 6],
                   [1, 8, 7, 9],
                   [0, 4, 1, 8],
                   [2, 0, 3, 4]]]])
warp_matrix = nd.array([[[[1, 1, 1, 1],
                          [1, 1, 1, 1],
                          [1, 1, 1, 1],
                          [1, 1, 1, 1]],
                         [[0, 0, 0, 0],
                          [0, 0, 0, 0],
                          [0, 0, 0, 0],
                          [0, 0, 0, 0]]]])
grid = nd.GridGenerator(data=warp_matrix, transform_type='warp')
output = nd.BilinearSampler(data, grid)
output
Output
The output is stated below −
[[[[5. 3. 6. 0.]
   [8. 7. 9. 0.]
   [4. 1. 8. 0.]
   [0. 3. 4. 0.]]]]
<NDArray 1x1x4x4 @cpu(0)>
Similarly, the following example shows the use of the cast() function −
nd.cast(nd.array([300, 10.1, 15.4, -1, -2]), dtype='uint8')
Output
Upon execution, you will receive the following output −
[ 44 10 15 255 254] <NDArray 5 @cpu(0)>
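The Activation() function from the table above can be tried out in the same way. The following is a minimal sketch that applies relu and sigmoid element-wise −

import mxnet as mx
from mxnet import nd

x = nd.array([-2.0, -0.5, 0.0, 1.5, 3.0])
# relu clamps negative values to zero -> [0. 0. 0. 1.5 3.]
print(nd.Activation(data=x, act_type='relu'))
# sigmoid squashes every value into the range (0, 1)
print(nd.Activation(data=x, act_type='sigmoid'))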
ndarray.contrib
The Contrib NDArray API is defined in the ndarray.contrib package. It typically provides many useful experimental APIs for new features, giving the community a place to try out those features and the feature contributors a way to gather feedback.
Functions and their parameters
Following are some of the important functions and their parameters covered by mxnet.ndarray.contrib API −
Function & its Parameters | Definition |
---|---|
rand_zipfian(true_classes, num_sampled, …) | This function draws random samples from an approximately Zipfian distribution. The base distribution of this function is Zipfian distribution. This function randomly samples num_sampled candidates and the elements of sampled_candidates are drawn from the base distribution given above. |
foreach(body, data, init_states) | As the name implies, this function runs a for loop with user-defined computation over NDArrays on dimension 0. It simulates a for loop, and body holds the computation for one iteration of the loop. |
while_loop(cond, func, loop_vars[, …]) | As the name implies, this function runs a while loop with user-defined computation and a loop condition. It simulates a while loop that iteratively performs customised computation as long as the condition is satisfied. |
cond(pred, then_func, else_func) | As the name implies, this function runs an if-then-else with a user-defined condition and computation. It simulates an if-like branch that chooses to perform one of two customised computations according to the specified condition. |
isinf(data) | This function performs an element-wise check to determine if the NDArray contains an infinite element or not. |
getnnz([data, axis, out, name]) | This function gives us the number of stored values for a sparse tensor. It also includes explicit zeros. It only supports CSR matrix on CPU. |
requantize([data, min_range, max_range, …]) | This function requantises the given data, quantised in int32 together with the corresponding thresholds, into int8 using min and max thresholds either calculated at runtime or obtained from calibration. |
Implementation Examples
In the example below, we will be using the function rand_zipfian for drawing random samples from an approximately Zipfian distribution −
import mxnet as mx
from mxnet import nd

trueclass = mx.nd.array([2])
samples, exp_count_true, exp_count_sample = mx.nd.contrib.rand_zipfian(trueclass, 3, 4)
samples
Output
You will see the following output −
[0 0 1] <NDArray 3 @cpu(0)>
Example
exp_count_true
Output
The output is given below:
[0.53624076] <NDArray 1 @cpu(0)>
Example
exp_count_sample
Output
This produces the following output:
[1.29202967 1.29202967 0.75578891] <NDArray 3 @cpu(0)>
In the example below, we will be using the function while_loop to run a while loop with user-defined computation and a loop condition:
cond = lambda i, s: i <= 7
func = lambda i, s: ([i + s], [i + 1, s + i])
loop_vars = (mx.nd.array([0], dtype="int64"), mx.nd.array([1], dtype="int64"))
outputs, states = mx.nd.contrib.while_loop(cond, func, loop_vars, max_iterations=10)
outputs
Output
The output is shown below. Note that outputs is padded to max_iterations rows, so the two entries after the eighth and final iteration hold arbitrary, uninitialised values −
[
[[ 1]
 [ 2]
 [ 4]
 [ 7]
 [ 11]
 [ 16]
 [ 22]
 [ 29]
 [3152434450384]
 [ 257]]
<NDArray 10x1 @cpu(0)>]
Example
states
Output
This produces the following output −
[ [8] <NDArray 1 @cpu(0)>, [29] <NDArray 1 @cpu(0)>]
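The foreach() function from the table above can be illustrated with a small cumulative-sum sketch; the step function and the shapes used here are chosen purely for illustration −

import mxnet as mx
from mxnet import nd

# body(data_slice, states) returns (step_output, new_states);
# foreach iterates over the leading axis (dimension 0) of data
def step(data, states):
    total = data + states[0]
    return total, [total]

data = nd.array([[1, 2], [3, 4], [5, 6]])   # three slices of shape (2,)
init_states = [nd.zeros((2,))]              # running sum starts at zero
outputs, final_states = nd.contrib.foreach(step, data, init_states)
print(outputs)        # cumulative sums along dimension 0
print(final_states)   # final running sum -> [9. 12.]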
ndarray.image
The Image NDArray API is defined in the ndarray.image package. As the name implies, it is typically used for images and their features.
Functions and their parameters
Following are some of the important functions & their parameters covered by mxnet.ndarray.image API−
Function & its Parameters | Definition |
---|---|
adjust_lighting([data, alpha, out, name]) | As the name implies, this function adjusts the lighting level of the input. It follows the AlexNet style. |
crop([data, x, y, width, height, out, name]) | With the help of this function, we can crop an image NDArray of shape (H x W x C) or (N x H x W x C) to the size given by the user. |
normalize([data, mean, std, out, name]) | It will normalise a tensor of shape (C x H x W) or (N x C x H x W) with mean and standard deviation (SD). |
random_crop([data, xrange, yrange, width, …]) | Similar to crop(), it randomly crops an image NDArray of shape (H x W x C) or (N x H x W x C) to the size given by the user. It will upsample the result if src is smaller than the size. |
random_lighting([data, alpha_std, out, name]) | As the name implies, this function adds PCA noise randomly. It also follows the AlexNet style. |
random_resized_crop([data, xrange, yrange, …]) | It also randomly crops an image NDArray of shape (H x W x C) or (N x H x W x C) to the given size, and will upsample the result if src is smaller than the size. It randomises the area and aspect ratio as well. |
resize([data, size, keep_ratio, interp, …]) | As the name implies, this function resizes an image NDArray of shape (H x W x C) or (N x H x W x C) to the size given by the user. |
to_tensor([data, out, name]) | It converts an image NDArray of shape (H x W x C) or (N x H x W x C) with the values in the range [0, 255] to a tensor NDArray of shape (C x H x W) or (N x C x H x W) with the values in the range [0, 1]. |
Implementation Examples
In the example below, we will be using the function to_tensor to convert an image NDArray of shape (H x W x C) or (N x H x W x C) with values in the range [0, 255] to a tensor NDArray of shape (C x H x W) or (N x C x H x W) with values in the range [0, 1].
import numpy as np
import mxnet as mx

img = mx.nd.random.uniform(0, 255, (4, 2, 3)).astype(dtype=np.uint8)
mx.nd.image.to_tensor(img)
Output
You will see the following output −
[[[0.972549 0.5058824 ]
  [0.6039216 0.01960784]
  [0.28235295 0.35686275]
  [0.11764706 0.8784314 ]]

 [[0.8745098 0.9764706 ]
  [0.4509804 0.03529412]
  [0.9764706 0.29411766]
  [0.6862745 0.4117647 ]]

 [[0.46666667 0.05490196]
  [0.7372549 0.4392157 ]
  [0.11764706 0.47843137]
  [0.31764707 0.91764706]]]
<NDArray 3x4x2 @cpu(0)>
Example
img = mx.nd.random.uniform(0, 255, (2, 4, 2, 3)).astype(dtype=np.uint8)
mx.nd.image.to_tensor(img)
Output
When you run the code, you will see the following output −
[[[[0.0627451 0.5647059 ]
   [0.2627451 0.9137255 ]
   [0.57254905 0.27450982]
   [0.6666667 0.64705884]]

  [[0.21568628 0.5647059 ]
   [0.5058824 0.09019608]
   [0.08235294 0.31764707]
   [0.8392157 0.7137255 ]]

  [[0.6901961 0.8627451 ]
   [0.52156866 0.91764706]
   [0.9254902 0.00784314]
   [0.12941177 0.8392157 ]]]

 [[[0.28627452 0.39607844]
   [0.01960784 0.36862746]
   [0.6745098 0.7019608 ]
   [0.9607843 0.7529412 ]]

  [[0.2627451 0.58431375]
   [0.16470589 0.00392157]
   [0.5686275 0.73333335]
   [0.43137255 0.57254905]]

  [[0.18039216 0.54901963]
   [0.827451 0.14509805]
   [0.26666668 0.28627452]
   [0.24705882 0.39607844]]]]
<NDArray 2x3x4x2 @cpu(0)>
In the example below, we will be using the function normalize to normalise a tensor of shape (C x H x W) or (N x C x H x W) with mean and standard deviation (SD).
img = mx.nd.random.uniform(0, 1, (3, 4, 2))
mx.nd.image.normalize(img, mean=(0, 1, 2), std=(3, 2, 1))
Output
This produces the following output −
[[[ 0.29391178 0.3218054 ]
  [ 0.23084386 0.19615503]
  [ 0.24175143 0.21988946]
  [ 0.16710812 0.1777354 ]]

 [[-0.02195817 -0.3847335 ]
  [-0.17800489 -0.30256534]
  [-0.28807247 -0.19059572]
  [-0.19680339 -0.26256624]]

 [[-1.9808068 -1.5298678 ]
  [-1.6984252 -1.2839255 ]
  [-1.3398265 -1.712009 ]
  [-1.7099224 -1.6165378 ]]]
<NDArray 3x4x2 @cpu(0)>
Example
img = mx.nd.random.uniform(0, 1, (2, 3, 4, 2))
mx.nd.image.normalize(img, mean=(0, 1, 2), std=(3, 2, 1))
Output
When you execute the above code, you should see the following output −
[[[[ 2.0600514e-01 2.4972327e-01]
   [ 1.4292289e-01 2.9281738e-01]
   [ 4.5158025e-02 3.4287784e-02]
   [ 9.9427439e-02 3.0791296e-02]]

  [[-2.1501756e-01 -3.2297665e-01]
   [-2.0456362e-01 -2.2409186e-01]
   [-2.1283737e-01 -4.8318747e-01]
   [-1.7339960e-01 -1.5519112e-02]]

  [[-1.3478968e+00 -1.6790028e+00]
   [-1.5685816e+00 -1.7787373e+00]
   [-1.1034534e+00 -1.8587360e+00]
   [-1.6324382e+00 -1.9027401e+00]]]

 [[[ 1.4528830e-01 3.2801408e-01]
   [ 2.9730779e-01 8.6780310e-02]
   [ 2.6873133e-01 1.7900752e-01]
   [ 2.3462953e-01 1.4930873e-01]]

  [[-4.4988656e-01 -4.5021546e-01]
   [-4.0258706e-02 -3.2384416e-01]
   [-1.4287934e-01 -2.6537544e-01]
   [-5.7649612e-04 -7.9429924e-02]]

  [[-1.8505517e+00 -1.0953522e+00]
   [-1.1318740e+00 -1.9624406e+00]
   [-1.8375070e+00 -1.4916846e+00]
   [-1.3844404e+00 -1.8331525e+00]]]]
<NDArray 2x3x4x2 @cpu(0)>
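The resize() function listed in the table above follows the same pattern. The sketch below assumes a recent MXNet release in which mx.nd.image.resize() accepts an (H x W x C) uint8 image and a target size −

import numpy as np
import mxnet as mx

# resize a 4x2 RGB image to 3x3; the size may also be given as a single int
img = mx.nd.random.uniform(0, 255, (4, 2, 3)).astype(dtype=np.uint8)
mx.nd.image.resize(img, (3, 3))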
ndarray.random
The Random NDArray API is defined in the ndarray.random package. As the name implies, it is MXNet's random distribution generator NDArray API.
Functions and their parameters
Following are some of the important functions and their parameters covered by mxnet.ndarray.random API −
Function and its Parameters | Definition |
---|---|
uniform([low, high, shape, dtype, ctx, out]) | It generates random samples from a uniform distribution. |
normal([loc, scale, shape, dtype, ctx, out]) | It generates random samples from a normal (Gaussian) distribution. |
randn(*shape, **kwargs) | It generates random samples from a normal (Gaussian) distribution. |
exponential([scale, shape, dtype, ctx, out]) | It generates samples from an exponential distribution. |
gamma([alpha, beta, shape, dtype, ctx, out]) | It generates random samples from a gamma distribution. |
multinomial(data[, shape, get_prob, out, dtype]) | It generates concurrent sampling from multiple multinomial distributions. |
negative_binomial([k, p, shape, dtype, ctx, out]) | It generates random samples from a negative binomial distribution. |
generalized_negative_binomial([mu, alpha, …]) | It generates random samples from a generalised negative binomial distribution. |
shuffle(data, **kwargs) | It shuffles the elements randomly. |
randint(low, high[, shape, dtype, ctx, out]) | It generates random samples from a discrete uniform distribution. |
exponential_like([data, lam, out, name]) | It generates random samples from an exponential distribution according to the input array shape. |
gamma_like([data, alpha, beta, out, name]) | It generates random samples from a gamma distribution according to the input array shape. |
generalized_negative_binomial_like([data, …]) | It generates random samples from a generalised negative binomial distribution, according to the input array shape. |
negative_binomial_like([data, k, p, out, name]) | It generates random samples from a negative binomial distribution, according to the input array shape. |
normal_like([data, loc, scale, out, name]) | It generates random samples from a normal (Gaussian) distribution, according to the input array shape. |
poisson_like([data, lam, out, name]) | It generates random samples from a Poisson distribution, according to the input array shape. |
uniform_like([data, low, high, out, name]) | It generates random samples from a uniform distribution, according to the input array shape. |
Implementation Examples
In the example below, we are going to draw random samples from a uniform distribution. For this, we will be using the function uniform().
mx.nd.random.uniform(0, 1)
Output
The output is mentioned below −
[0.12381998] <NDArray 1 @cpu(0)>
Example
mx.nd.random.uniform(-1, 1, shape=(2,))
Output
The output is given below −
[0.558102 0.69601643] <NDArray 2 @cpu(0)>
Example
low = mx.nd.array([1,2,3])
high = mx.nd.array([2,3,4])
mx.nd.random.uniform(low, high, shape=2)
Output
You will see the following output −
[[1.8649333 1.8073189]
 [2.4113967 2.5691009]
 [3.1399727 3.4071832]]
<NDArray 3x2 @cpu(0)>
In the example below, we are going to draw random samples from a generalized negative binomial distribution. For this, we will be using the function generalized_negative_binomial().
mx.nd.random.generalized_negative_binomial(10, 0.5)
Output
When you execute the above code, you should see the following output −
[1.] <NDArray 1 @cpu(0)>
Example
mx.nd.random.generalized_negative_binomial(10, 0.5, shape=(2,))
Output
The output is given herewith −
[16. 23.] <NDArray 2 @cpu(0)>
Example
mu = mx.nd.array([1,2,3])
alpha = mx.nd.array([0.2,0.4,0.6])
mx.nd.random.generalized_negative_binomial(mu, alpha, shape=2)
Output
Given below is the output of the code −
[[0. 0.]
 [4. 1.]
 [9. 3.]]
<NDArray 3x2 @cpu(0)>
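The function normal() works in exactly the same way as uniform() and generalized_negative_binomial() above; a minimal sketch −

import mxnet as mx

# scalar parameters: loc (mean) 0 and scale (standard deviation) 1
print(mx.nd.random.normal(0, 1, shape=(2,)))

# per-distribution parameters given as NDArrays; two samples are drawn for each pair
loc = mx.nd.array([0, 10])
scale = mx.nd.array([1, 0.1])
print(mx.nd.random.normal(loc, scale, shape=2))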
ndarray.utils
The utility NDArray API is defined in the ndarray.utils package. As the name implies, it provides utility functions for NDArray and BaseSparseNDArray.
Functions and their parameters
Following are some of the important functions and their parameters covered by mxnet.ndarray.utils API −
Function and its Parameters | Definition |
---|---|
zeros(shape[, ctx, dtype, stype]) | This function will return a new array of given shape and type, filled with zeros. |
empty(shape[, ctx, dtype, stype]) | It returns a new array of the given shape and type, without initialising the entries. |
array(source_array[, ctx, dtype]) | As the name implies, this function creates an array from any object exposing the array interface. |
load(fname) | It loads an array from a file. |
load_frombuffer(buf) | As the name implies, this function loads an array dictionary or list from a buffer. |
save(fname, data) | This function will save a list of arrays or a dict of str->array to file. |
Implementation Examples
In the example below, we are going to return a new array of given shape and type, filled with zeros. For this, we will be using the function zeros().
mx.nd.zeros((1,2), mx.cpu(), stype='csr')
Output
This produces the following output −
<CSRNDArray 1x2 @cpu(0)>
Example
mx.nd.zeros((1,2), mx.cpu(), 'float16', stype='row_sparse').asnumpy()
Output
You will receive the following output −
array([[0., 0.]], dtype=float16)
In the example below, we are going to save a list of arrays and a dict of str->array to file, and then load them back. For this, we will be using the functions save() and load().
Example
x = mx.nd.zeros((2,3))
y = mx.nd.ones((1,4))
mx.nd.save('list', [x,y])
mx.nd.save('dict', {'x':x, 'y':y})
mx.nd.load('list')
Output
Upon execution, you will receive the following output −
[
[[0. 0. 0.]
 [0. 0. 0.]]
<NDArray 2x3 @cpu(0)>,
[[1. 1. 1. 1.]]
<NDArray 1x4 @cpu(0)>]
Example
mx.nd.load('dict')
Output
The output is shown below −
{'x':
[[0. 0. 0.]
 [0. 0. 0.]]
<NDArray 2x3 @cpu(0)>,
'y':
[[1. 1. 1. 1.]]
<NDArray 1x4 @cpu(0)>}
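Similarly, the array() and empty() functions from the table above can be used as follows; this is a minimal sketch using only the signatures listed there −

import numpy as np
import mxnet as mx

# array() builds an NDArray from any object exposing the array interface
a = mx.nd.array([[1, 2, 3], [4, 5, 6]])
b = mx.nd.array(np.ones((2, 2)), dtype='float16')
print(a.shape, a.dtype)   # (2, 3) <class 'numpy.float32'>
print(b.shape, b.dtype)   # (2, 2) <class 'numpy.float16'>

# empty() allocates an array of the given shape without initialising its values
c = mx.nd.empty((1, 3), mx.cpu(), 'float32')
print(c.shape)            # (1, 3)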