Wherein we discover yet again that it’s turtles all the way down. It’s always turtles all the way down…

#### THE WEEKLY CHALLENGE – PERL & RAKU #170 – Task 2

**“Only those who attempt the absurd…will achieve the impossible. I think …I think it’s in my basement…Let me go upstairs and check.”**

**— M.C. Escher**

## Kronecker Product

**Submitted by:** Mohammad S Anwar

You are given 2 matrices.

Write a script to implement the Kronecker Product on the given 2 matrices.

For more information, please refer to the **wikipedia page**.

For example,

```
A = [ 1 2 ]
    [ 3 4 ]

B = [ 5 6 ]
    [ 7 8 ]

A x B = [ 1 x [ 5 6 ]   2 x [ 5 6 ] ]
        [     [ 7 8 ]       [ 7 8 ] ]
        [ 3 x [ 5 6 ]   4 x [ 5 6 ] ]
        [     [ 7 8 ]       [ 7 8 ] ]

      = [ 1x5 1x6 2x5 2x6 ]
        [ 1x7 1x8 2x7 2x8 ]
        [ 3x5 3x6 4x5 4x6 ]
        [ 3x7 3x8 4x7 4x8 ]

      = [  5  6 10 12 ]
        [  7  8 14 16 ]
        [ 15 18 20 24 ]
        [ 21 24 28 32 ]
```

#### Discussion

The Kronecker Product is an interesting operation on two matrices, **A** and **B**, where the results of multiplying the element pairs produced from the Cartesian Product are arranged in an expanded matrix, with one cell for each operation. In other words, each position in matrix **A** is paired with and multiplied by every element in matrix **B**, and the results populate a new matrix according to a specific bijective mapping: each position in **A** is replaced by a sub-matrix matching matrix **B**, with its elements multiplied by the value at that position.

The operation is not commutative. That is:

**A** ⊗ **B** ≢ **B** ⊗ **A**

They are not the same. But as both result matrices contain all the products of every pairing between elements of the two matrices **A** and **B**, we can conclude they will contain exactly the same products, just in a different arrangement.

The size of the Kronecker Product matrix for two matrices [m,n] and [p,q] will be

`[m x p, n x q]`

which is to say the corresponding axes are multiplied to get the new height and width. The number of elements will be the product of these dimensions, or the product of all four coefficients from the original matrices.

#### METHOD

As we know the size of the output matrix based on the dimensions of the two inputs, the problem becomes a question of deriving the Cartesian pairings between elements in the arrays and then producing a subsequent mapping based on the source coordinates to place each new element in the output.

The elements in each array are addressed left-to-right across individual rows, then top to bottom in a raster pattern. Each selected element from the first array is then paired with each element in the second, requiring a total of four nested loops to ratchet through each position exactly once. Even though there are a lot of loops, each position in the final matrix is only visited once, so there is no inherent inefficiency or duplication of effort. The product matrix will get large quickly, however, given large input. The number of cells in each input matrix is the height times the width, so the size of the product matrix will be these two sizes multiplied, or the product of all four coefficients:

size( **A**[*m*,*n*] ⊗ **B**[*p*,*q*] ) = *m* × *n* × *p* × *q*

Conceivably there could be quite a large number of positions to compute, and we can’t get around that.
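As a quick sanity check on that dimension arithmetic, the output size can be computed directly from the input arrays. This is a minimal sketch using a 3×2 **A** and a 2×2 **B**; the variable names are mine:

```perl
use strict;
use warnings;
use feature 'say';

my @A = ( [1,2], [3,4], [5,6] );   # 3 rows, 2 columns
my @B = ( [5,6], [7,8] );          # 2 rows, 2 columns

# Corresponding axes multiply: [m,n] x [p,q] -> [m*p, n*q]
my $rows = @A * @B;                     # 3 x 2 = 6
my $cols = $A[0]->@* * $B[0]->@*;      # 2 x 2 = 4

say "Kronecker product is ${rows}x${cols}, ", $rows * $cols, " elements";
```

An array in scalar (here, multiplicative) context gives its element count, so no explicit `scalar` calls are needed.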

The actual positions of the new element placements follow a simple formula. Each pairing is placed according to the row and column indices of its components:

**A**_{[m][n]}, **B**_{[p][q]} → **K**_{[m × Brows + p][n × Bcols + q]}

Each row and column position in **A** creates an entirely new sub-matrix the size of **B**. The new row position, then, is the row index for **A** times the number of rows in **B**, plus the row index for **B**; likewise the new column position is the column index for **A** times the number of columns in **B**, plus the column index for **B**.
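To make that mapping concrete, here is a hypothetical little helper (the sub and its arguments are my own framing, not part of the solution below) that computes the target cell for one pairing:

```perl
use strict;
use warnings;
use feature 'say';

# Map (A row, A col, B row, B col) to a Kronecker-matrix coordinate,
# given B's dimensions.
sub kron_pos {
    my ($m, $n, $p, $q, $b_rows, $b_cols) = @_;
    return ( $m * $b_rows + $p, $n * $b_cols + $q );
}

# A[1][0] paired with B[1][1], with B sized 2x2,
# lands at row 1*2+1 = 3, column 0*2+1 = 1
my ($row, $col) = kron_pos(1, 0, 1, 1, 2, 2);
say "($row, $col)";    # (3, 1)
```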

Easy peasy.

##### PERL 5 SOLUTION

Seeing four nested loops is somewhat frightening, computationally speaking, but as each product populating the output is a unique pairing from the input data, this complexity is unavoidable and simply the required result. Unless the source data has some other quality we can optimize on, which is outside the scope of today’s discussion, there is no way around it.

The tricky part is calculating the new row and column coordinates for each piece of the Kronecker matrix, but this again breaks down into the application of a straightforward formula.

```
use warnings;
use strict;
use utf8;
use feature ":5.26";
use feature qw(signatures);
no warnings 'experimental::signatures';

my @A = ( [1,2], [3,4], [5,6] );
my @B = ( [5,6], [7,8] );

my @K;
for my $m (0..$#A) {
    for my $n (0..$A[0]->$#*) {
        for my $p (0..$#B) {
            for my $q (0..$B[0]->$#*) {
                $K[$m * @B + $p][$n * $B[0]->@* + $q] = $A[$m][$n] * $B[$p][$q];
            }
        }
    }
}

say "@$_" for @K;
```
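As a sketch of how the same loops can be checked against the worked 2×2 example from the task description, wrapped in a subroutine (the wrapper is my own framing, not part of the original solution):

```perl
use strict;
use warnings;
use feature qw(say signatures);
no warnings 'experimental::signatures';

# Same four-loop algorithm, taking array references instead of globals
sub kronecker ($A, $B) {
    my @K;
    for my $m (0 .. $A->$#*) {
        for my $n (0 .. $A->[0]->$#*) {
            for my $p (0 .. $B->$#*) {
                for my $q (0 .. $B->[0]->$#*) {
                    $K[$m * $B->@* + $p][$n * $B->[0]->@* + $q]
                        = $A->[$m][$n] * $B->[$p][$q];
                }
            }
        }
    }
    return \@K;
}

my $K = kronecker( [ [1,2], [3,4] ], [ [5,6], [7,8] ] );
say "@$_" for @$K;
```

Running it reproduces the 4×4 result worked out by hand above: `5 6 10 12` / `7 8 14 16` / `15 18 20 24` / `21 24 28 32`.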

##### Raku Solution

*The Perl Weekly Challenge, that idyllic glade wherein we stumble upon the holes for these sweet descents, is now known as *

**The Weekly Challenge – Perl and Raku**

*It is the creation of the lovely Mohammad Sajid Anwar and a veritable swarm of contributors from all over the world, who gather, as might be expected, weekly online to solve puzzles. Everyone is encouraged to visit, learn and contribute at *

I can imagine what is $B[0]->$#*: give the last index of element $B[0]. But I could not find any documentation about $#* or @*. Any advice? (A perl user but not a perl expert)


The post-dereferencing syntax is at first a little weird, but I was an early adopter and now find it much clearer than piling on sigils with and without brackets, which can get confusing quickly. In the modern way, the thing on the left is the reference, of course, with its sigil unchanged. This is right, as we aren’t changing it in any way. It’s still a pointer held in a scalar variable, which is perfectly fine.

The arrow points to what information we want out of it: an element, all the elements, a slice of elements, the last element or whatever. That last is a little play on words, as the star, the splat, *, is a wildcard for “whatever”. So $ref->@* is “an array of whatever elements are there” or the complete array. And $ref->$#* is the same standard sigil change on this array that finds the last element index. It’s easy to think of as just a sigil.

Super handy is slicing right out of a reference, with something like $ref->@[2..5], giving you an array with just the elements indexed in the range 2 through 5. Lists also work here all according to standard slice rules. In short, the thing on the left side of the arrow is the reference, and the thing on the right is the data you want from the thing it refers to.
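A tiny demonstration of the forms described above, assuming any Perl from 5.24 on (where postfix dereference is stable):

```perl
use strict;
use warnings;
use feature 'say';

my $ref = [ 10, 20, 30, 40, 50, 60 ];

my @all   = $ref->@*;       # the whole array: 10 20 30 40 50 60
my $last  = $ref->$#*;      # the last index: 5
my @slice = $ref->@[2..4];  # a slice, indices 2 through 4: 30 40 50

say "@all";
say $last;
say "@slice";
```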

This notation can be used anywhere — on the left or right of an assignment, and will even work on anonymous arrays, if you’re into that:

`$ref->@[2..5] = [13..16]->@*;`

drops the new values into the array referred to without complaint.

Here I wanted to just use the last index directly instead of taking the scalar value and subtracting 1, but accessing that index value is difficult using the old notation. I like building big crazy data-structures, so this flexibility to get exactly what I want at a given stage of indirection is very welcome.

https://perldoc.perl.org/perlref#Arrow-Notation


Thanks a lot for this explanation. This will be a reference for me.
