“Wake up, Neo …” scene, The Matrix Trilogy.

Informational Transducers

Aesth · Published in Bootcamp · 6 min read · Dec 21, 2021


Amid the pandemic, February 2020. Everything had to close, and the Master's project I had decided to conduct on my own became even lonelier. My request not to drag incompetence along was answered with: what about no supervision? Perfect. I had freedom of conduct and discovery. I wrapped up the project by the end of May, and what I will tell you about here is a glimpse of the extra.

My work was a showcase of a relatively novel formalism for performing a next-to-leading-order type of calculation in high-energy physics. As part of the theoretical toolkit, Feynman diagrams were on the list. They are graphical mnemonics for amplitude expressions. Technicalities aside, these diagrams were the first realization of “abstract” circuits I had ever encountered.

Feynman diagrams are a descriptive technology. They come with translation rules that match each component of the graphic to its formulaic representation (in terms of symbols or tensors). To most, the purpose of such a visual technique is to offer a simpler way to retain complicated formulae. To others, there is more: the possibility of topological manipulations that cannot be effected on linear expressions.

Two months in, my thesis was taking shape. At that point, I had to learn the ropes of handling group-theoretic weights, and I was doing so. On a whim, something about “spinors”, “algebra” and “binor calculus” clicked. I already knew the terms; yet I had to see it through. A few clicks later, I found myself scanning the preface of the book that was going to change everything. What I had had in mind all those years was right in front of me; or at least a good part of it.

The book develops group-theoretic calculus diagrammatically. In the first half, the author lays out the basic grammar of the graphical approach. The rest of the manuscript is pure application to the classical and exceptional semi-simple Lie algebras, two important mathematical constructions in modern physics. And on various occasions we are presented with arguments as to why the showcased technique is superior to the canonical ways of conducting the same calculus.

While scanning the contents, I experienced multiple “aha” moments. The grammar described matched, in substance, my recipe for “abstract” circuits. The group-theoretic weights, when cast in graphical form, matched in structure the Feynman diagrams they are part of. These and many other stimulating connections motivated me to proceed in my search for visual languages.

To make this discussion both self-contained and efficient, I will present you with my conceptualization of tensors and introduce the associated graphical language along the way.

The invention of tensor technology started with the use of alphabet letters to represent quantities. A letter works as a placeholder (one slot) for values: it can contain one value at a time and is most often called a variable when the value in question can vary.

The completing step was the addition of indices, either as sub- or superscripts. As a consequence, we obtained multi-slot variables, which can hold more than one value. Indices are yet another iteration of using alphabet letters to stand for values. Only this time, the various values an index can take serve to identify the slots that exist within the variable, much like addresses.

Both one-slot and multi-slot variables serve the same purpose and differ only in size. They both generalize to the concept of a tensor: a container of information offering an interface to interact with. The interface is realized through our ability to identify the various components of a tensor and to alter them.

What I expressed as a difference in size is more commonly spoken of as a difference in rank, the rank being the number of indices a tensor possesses. Simple variables are then zero-rank tensors.
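To make the slot-and-rank picture concrete, here is a minimal sketch in Python with NumPy (my choice of language for illustration; the shapes and values are made up):

```python
import numpy as np

# A one-slot variable: a rank-0 tensor. No indices, a single value.
x = np.array(3.14)
print(x.ndim)    # 0 -> zero indices, hence zero rank

# A multi-slot variable: T has two indices, hence rank 2.
# Each pair of index values addresses one slot, much like an address.
T = np.zeros((2, 3))
T[1, 2] = 7.0    # the pair (1, 2) selects a single slot
print(T.ndim)    # 2 -> two indices, rank 2
print(T.size)    # 6 -> number of slots
```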

Graphically, we represent tensors as vertices. Each index is a line and all lines converge to a single point.

Tensor Vertex, author, December 2021.

For a long while, I was in the habit of calling the point where the lines converge the kernel. Later I realized that the distinction wasn't meaningless, if we recall that the tensor's content is supposed to be generated by an underlying structure. The kernel stands for that structure and is thus the core of the tensor.

Since two tensors can share the same index signature while having distinct underlying structures, the use of shapes (like circles, squares and triangles) to represent the various kernels is advised, to add more distinction.

Note that a zero-rank tensor is represented as just a kernel with no lines.

Kernels, author, December 2021.

Indices belong to spaces. The number of possible values an index can take equals the tensor's degrees of freedom within that space. A tensor might hold indices from different origins, and thus have freedoms in all the respective spaces; in that case, the tensor exists as a link between them. In other cases, a tensor contains indices from one origin only and acts as a container of information within that space.

I want to warn here against confusing the degrees of freedom of our tensor object, that is, the number of components (or slots) we obtain via the various combinations of index values, with the in-value freedom (the variable aspect) of each component. After specifying a component, we have merely selected a slot in which we have not yet put any value; the freedom of the slot is the set of values it can take.
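A small NumPy sketch of that distinction, with two hypothetical spaces of dimensions 3 and 4:

```python
import numpy as np

# L carries one index from a 3-dimensional space and one from a
# 4-dimensional space: it links the two spaces.
L = np.zeros((3, 4))

# Degrees of freedom of the tensor object: the number of slots,
# i.e. the number of index-value combinations.
print(L.size)    # 12 = 3 * 4

# Specifying a component only selects a slot. The slot's in-value
# freedom remains: it can still hold any number we choose.
L[0, 1] = -2.5
```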

In order to differentiate indices from various spaces, we associate a certain line styling to each space.

To Each Space a Different Line Styling, author, December 2021.

Lines drawn on their own are called propagators. They are used to connect two indices from the same space, enabling information flow between them: index contraction.
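In code, drawing a propagator between two indices corresponds to summing over them. A minimal sketch with NumPy's einsum (the matrices here are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two vertices carrying indices from the same 3-dimensional space.
A = rng.random((3, 3))
B = rng.random((3, 3))

# A propagator wiring A's second index to B's first index is the
# diagrammatic picture of the contraction sum_k A_ik B_kj.
C = np.einsum('ik,kj->ij', A, B)

# The same wiring, spelled out as an explicit sum over the shared index.
C_check = sum(np.outer(A[:, k], B[k, :]) for k in range(3))
print(np.allclose(C, C_check))   # True
```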

Indices, being channels of communication, can be of two types: covariant and contravariant. In tensor notation, they are denoted by sub- and superscripts respectively. Graphically, they are represented by an arrow defining a direction on the line. A passage from my article on variance clarifies the choice of direction for each type: covariance is associated with outputs (the arrow flows outward from the kernel) and contravariance with inputs.

Directed vertex, author, December 2021.

Note: if the contra-/covariant distinction is dropped, so is the use of the arrow.
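One way to read the arrows in code, under the article's convention that contravariant indices are inputs and covariant ones outputs (a hedged sketch; the kernel values are made up):

```python
import numpy as np

# A rank-2 vertex read as a transducer: one line carries information
# in, the other carries the converted information out.
M = np.array([[0.0, 1.0],
              [1.0, 0.0]])     # a made-up 2x2 kernel (a swap)

v_in = np.array([2.0, 5.0])    # signal arriving on the input line

# Wiring v_in into M's second index: the contraction realizes the flow.
v_out = np.einsum('ij,j->i', M, v_in)
print(v_out)                    # [5. 2.] -> the vertex converted the signal
```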

Armed with the visual grammar I have laid out so far, anyone can start using graphical calculus. The unique outlook I have presented here and in previous articles (and possibly future ones), combined with this technique, furnishes unparalleled leverage.

Being an index-free notation, this approach takes care of highly indexed tensor expressions (a nightmare for most of us) in a seamless manner. While a formulaic expression is readable from a specific orientation and along a specific direction, the graphical alternative can be drawn in any way, as long as we don't alter the structure, and it will still stand for the same constraint.
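A quick illustration of that invariance: the trace of a product of matrices is a closed loop of vertices, and reading the loop from any vertex yields the same number (NumPy again, made-up matrices):

```python
import numpy as np

rng = np.random.default_rng(1)
A, B, C = (rng.random((3, 3)) for _ in range(3))

# Diagrammatically, tr(ABC) is a closed loop of three vertices.
# Rotating the drawing changes the linear order of the formula,
# not the structure of the diagram, so all readings agree.
t1 = np.einsum('ij,jk,ki->', A, B, C)   # loop read starting at A
t2 = np.einsum('ij,jk,ki->', B, C, A)   # same loop, read from B
t3 = np.einsum('ij,jk,ki->', C, A, B)   # same loop, read from C
print(np.allclose([t1, t2], t3))        # True
```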

The lines and vertices have more than a stylistic role: they model the informational circuitry within the constraint. Lines propagate information while vertices convert it. Calculus is then analogous to rewiring the internal structure.

Dealing with constraints as non-linear structures promises a wealth of insights yet to be uncovered.

Of course, this is just an introduction to the matter and an answer to the question of alternative graphical languages: an attempt to cover unexplored paths, to ask whether humanity has missed more powerful means of expression and inquiry.

Whether theory is nothing but abstract engineering.
