- Explanation of tensor contraction via summation over shared indices.
- Introduction to Einstein summation notation for simplified tensor expressions.
- Distinction between original and modern Einstein notation.
- Application in machine learning and linear algebra with Einsum.
Transcript

A tensor contraction is essentially a summation over pairs of repeated indices. For example, to contract two tensors A_ijk and B_kln that share one index k, the contraction is computed by summing over the common index k. Mathematically, this tensor contraction is the computation of the following expression: C_ijln = Σ_k A_ijk · B_kln.
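As a sketch of the contraction just described, the nested-loop sum below computes C_ijln = Σ_k A_ijk · B_kln directly and checks it against `np.tensordot`; the tensor shapes are hypothetical and chosen only for illustration.

```python
import numpy as np

# Hypothetical small dimensions for the indices i, j, k, l, n.
I, J, K, L, N = 2, 3, 4, 5, 6
rng = np.random.default_rng(0)
A = rng.standard_normal((I, J, K))  # A_ijk
B = rng.standard_normal((K, L, N))  # B_kln

# Explicit contraction over the shared index k:
# C[i, j, l, n] = sum_k A[i, j, k] * B[k, l, n]
C = np.zeros((I, J, L, N))
for i in range(I):
    for j in range(J):
        for l in range(L):
            for n in range(N):
                for k in range(K):
                    C[i, j, l, n] += A[i, j, k] * B[k, l, n]

# The same contraction in one call: sum over A's last axis
# and B's first axis.
C_td = np.tensordot(A, B, axes=([2], [0]))
assert np.allclose(C, C_td)
```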
The expression can be simplified using the Einstein summation notation, introduced in 1916 by Einstein, which allows for the succinct representation of tensor expressions. The summation is implied over indices that occur more than once, enabling the same contraction to be expressed as C_ijln = A_ijk · B_kln.
The literature distinguishes between this original notation and modern Einstein notation, which is widely used in machine learning and linear algebra libraries and frameworks such as Optimized Einsum or NumPy. In modern notation, the indices of the output tensor are specified after an arrow, and all indices not listed there are summed over, so the same contraction can be written as "ijk,kln->ijln". To simplify naming, Einstein summation expressions are referred to as Einsum expressions in this paper, and both the traditional and modern notation styles are used to represent Einsum expressions, depending on the context.
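The modern arrow notation maps directly onto `np.einsum`. The sketch below evaluates the contraction in both the explicit (arrow) form and the implicit form, where repeated indices are summed and the remaining indices are ordered alphabetically; shapes are again hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((2, 3, 4))  # indices i, j, k
B = rng.standard_normal((4, 5, 6))  # indices k, l, n

# Modern Einstein notation: output indices appear after the
# arrow; the omitted index k is summed over.
C = np.einsum("ijk,kln->ijln", A, B)

# Implicit form: indices occurring more than once are summed,
# and the surviving indices (i, j, l, n) form the output in
# alphabetical order.
C_implicit = np.einsum("ijk,kln", A, B)
assert np.allclose(C, C_implicit)
```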