# numpy - matrix multiply 3x3 and 100x100x3 arrays?

I have the following:

```python
import numpy as np

XYZ_to_sRGB_mat_D50 = np.asarray([
[3.1338561, -1.6168667, -0.4906146],
[-0.9787684, 1.9161415, 0.0334540],
[0.0719453, -0.2289914, 1.4052427],
])

XYZ_1 = np.asarray([0.25, 0.4, 0.1])
XYZ_2 = np.random.rand(100,100,3)

np.matmul(XYZ_to_sRGB_mat_D50, XYZ_1) # valid operation
np.matmul(XYZ_to_sRGB_mat_D50, XYZ_2) # makes no sense mathematically
```

How do I perform the same operation on XYZ_2 that I would on XYZ_1? Do I somehow reshape the array first?


It seems you are trying to `sum-reduce` the last axis of `XYZ_to_sRGB_mat_D50` (`axis=1`) against the last axis of `XYZ_2` (`axis=2`). So, you can use `np.tensordot` like so -

```python
np.tensordot(XYZ_2, XYZ_to_sRGB_mat_D50, axes=((2),(1)))
```

Related post to understand `tensordot`.
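To see why this is the right pairing of axes, here is a quick check (a minimal sketch using the question's matrix) that the `tensordot` result matches applying the `3x3` matrix to one pixel at a time:

```python
import numpy as np

XYZ_to_sRGB_mat_D50 = np.asarray([
    [3.1338561, -1.6168667, -0.4906146],
    [-0.9787684, 1.9161415, 0.0334540],
    [0.0719453, -0.2289914, 1.4052427],
])
XYZ_2 = np.random.rand(100, 100, 3)

# Sum-reduce axis=2 of XYZ_2 against axis=1 of the matrix:
out = np.tensordot(XYZ_2, XYZ_to_sRGB_mat_D50, axes=((2), (1)))

# Each output pixel equals the 3x3 matrix applied to the input pixel
expected = np.matmul(XYZ_to_sRGB_mat_D50, XYZ_2[0, 0])
assert out.shape == (100, 100, 3)
assert np.allclose(out[0, 0], expected)
```

So `out[i, j]` is exactly `XYZ_to_sRGB_mat_D50 @ XYZ_2[i, j]` for every pixel.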

For completeness, we can surely use `np.matmul` too after swapping the last two axes of `XYZ_2`, like so -

```python
np.matmul(XYZ_to_sRGB_mat_D50, XYZ_2.swapaxes(1,2)).swapaxes(1,2)
```

This won't be as efficient as the `tensordot` one.

Runtime test -

```python
In : XYZ_to_sRGB_mat_D50 = np.asarray([
...:     [3.1338561, -1.6168667, -0.4906146],
...:     [-0.9787684, 1.9161415, 0.0334540],
...:     [0.0719453, -0.2289914, 1.4052427],
...: ])
...:
...: XYZ_1 = np.asarray([0.25, 0.4, 0.1])
...: XYZ_2 = np.random.rand(100,100,3)

# @Julien's soln
In : %timeit XYZ_2.dot(XYZ_to_sRGB_mat_D50.T)
1000 loops, best of 3: 450 µs per loop

In : %timeit np.tensordot(XYZ_2, XYZ_to_sRGB_mat_D50, axes=((2),(1)))
10000 loops, best of 3: 73.1 µs per loop
```

Generally speaking, when it comes to `sum-reductions` on tensors, `tensordot` is much more efficient. Since the axis of `sum-reduction` is just one here, we can also make the tensor a `2D` array by reshaping, use `np.dot`, and reshape the result back to `3D`.
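The reshape-then-`np.dot` route described above can be sketched like this (reusing the question's matrix; note the transpose so that each row is left-multiplied by the matrix):

```python
import numpy as np

M = np.asarray([
    [3.1338561, -1.6168667, -0.4906146],
    [-0.9787684, 1.9161415, 0.0334540],
    [0.0719453, -0.2289914, 1.4052427],
])
XYZ_2 = np.random.rand(100, 100, 3)

# Collapse the leading axes into one: (100, 100, 3) -> (10000, 3),
# transform every row, then restore the original 3D shape.
out = XYZ_2.reshape(-1, 3).dot(M.T).reshape(XYZ_2.shape)

# Same result as the tensordot approach
assert np.allclose(out, np.tensordot(XYZ_2, M, axes=((2), (1))))
```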

Saturday, August 7, 2021


The `@` operator calls the array's `__matmul__` method, not `dot`. This method is also present in the API as the function `np.matmul`.

```python
>>> a = np.random.rand(8,13,13)
>>> b = np.random.rand(8,13,13)
>>> np.matmul(a, b).shape
(8, 13, 13)
```

From the documentation:

`matmul` differs from `dot` in two important ways.

• Multiplication by scalars is not allowed.
• Stacks of matrices are broadcast together as if the matrices were elements.

The last point makes it clear that `dot` and `matmul` methods behave differently when passed 3D (or higher dimensional) arrays. Quoting from the documentation some more:

For `matmul`:

If either argument is N-D, N > 2, it is treated as a stack of matrices residing in the last two indexes and broadcast accordingly.

For `np.dot`:

For 2-D arrays it is equivalent to matrix multiplication, and for 1-D arrays to inner product of vectors (without complex conjugation). For N dimensions it is a sum product over the last axis of a and the second-to-last of b.
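The difference shows up directly in the output shapes. With two stacks of matrices, `matmul` pairs them up element-wise along the stack axis, while `dot` sum-products every stack of `a` against every stack of `b`:

```python
import numpy as np

a = np.random.rand(8, 13, 13)
b = np.random.rand(8, 13, 13)

# matmul broadcasts the leading "stack" axis: 8 independent 13x13 products
print(np.matmul(a, b).shape)   # (8, 13, 13)

# dot contracts the last axis of a with the second-to-last of b,
# crossing every matrix in a with every matrix in b
print(np.dot(a, b).shape)      # (8, 13, 8, 13)
```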

Monday, June 7, 2021


You are looking for `np.argwhere` -

```python
np.argwhere(arr)
```

Sample run -

```python
In : arr
Out:
array([[False, False,  True],
       [ True, False, False],
       [ True,  True, False]], dtype=bool)

In : np.argwhere(arr)
Out:
array([[0, 2],
       [1, 0],
       [2, 0],
       [2, 1]])
```
Friday, July 30, 2021


Option 1: Reshape your initial `All` array to 3 columns so that the number of columns matches `h`:

```python
All = np.array([]).reshape((0, 3))

for i in data:
    h = i * Weights
    All = np.concatenate((All, h))

All
#array([[  8.,   8.,   8.],
#       [  8.,   8.,   8.],
#       [  8.,   8.,   8.],
#       [ 12.,  12.,  12.],
#       [ 12.,  12.,  12.],
#       [ 12.,  12.,  12.]])
```

Option 2: Use an if-else statement to handle the initial empty-array case:

```python
All = np.array([])
for i in data:
    h = i * Weights
    if len(All) == 0:
        All = h
    else:
        All = np.concatenate((All, h))

All
#array([[ 8,  8,  8],
#       [ 8,  8,  8],
#       [ 8,  8,  8],
#       [12, 12, 12],
#       [12, 12, 12],
#       [12, 12, 12]])
```

Option 3: Use `itertools.product()`:

```python
import itertools
np.array([i*j for i,j in itertools.product(data, Weights)])

#array([[ 8,  8,  8],
#       [ 8,  8,  8],
#       [ 8,  8,  8],
#       [12, 12, 12],
#       [12, 12, 12],
#       [12, 12, 12]])
```
Thursday, September 2, 2021


You can extend the dimensions of `A` and `B` with `np.newaxis/None` to bring in `broadcasting` for a vectorized solution, like so -

```python
A[...,None]*B[:,None,:]
```

Explanation : `np.outer(np.transpose(A[i]), B[i])` basically does elementwise multiplications between a columnar version of `A[i]` and `B[i]`. You are repeating this for all rows in `A` against corresponding rows in `B`. Please note that the `np.transpose()` doesn't seem to have any effect, as `np.outer` takes care of the intended elementwise multiplications.

These steps can be described in vectorized terms and implemented like so -

1. Extend dimensions of `A` and `B` to form `3D` shapes for both of them such that we keep `axis=0` aligned and keep as `axis=0` in both of those extended versions too. Thus, we are left with deciding the last two axes.
2. To bring in the elementwise multiplications, push `axis=1` of A in its original 2D version to `axis=1` in its `3D` version, thus creating a singleton dimension at `axis=2` for extended version of `A`.
3. This last singleton dimension of `3D` version of `A` has to align with the elements from `axis=1` in original `2D` version of `B` to let `broadcasting` happen. Thus, extended version of `B` would have the elements from `axis=1` in its 2D version being pushed to `axis=2` in its `3D` version, thereby creating a singleton dimension for `axis=1`.

Finally, the extended versions would be : `A[...,None]` & `B[:,None,:]`, multiplying whom would give us the desired output.
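A quick check (with small assumed shapes) that the broadcast expression reproduces the per-row `np.outer` loop:

```python
import numpy as np

A = np.random.rand(5, 4)
B = np.random.rand(5, 6)

# (5, 4, 1) * (5, 1, 6) broadcasts to (5, 4, 6)
out = A[..., None] * B[:, None, :]

# Same result as computing each row's outer product in a loop
expected = np.array([np.outer(A[i], B[i]) for i in range(len(A))])
assert out.shape == (5, 4, 6)
assert np.allclose(out, expected)
```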

Sunday, October 24, 2021