Required Reading

Mathematics and physics principles required to understand the theory described on this site.
The notation used throughout the rest of the site is rows, or lower indices, for covectors (covariant vectors) and columns, or upper indices, for vectors (contravariant vectors), as per Einstein index notation. The covector is obtained by taking the complex conjugate transpose of the vector. For an in-depth explanation, please expand the various subjects below:

**Einstein Notation** In mathematics, especially in applications of linear algebra to physics, the Einstein notation or Einstein summation convention is a notational convention that implies summation over a set of indexed terms in a formula, thus achieving notational brevity. As part of mathematics it is a notational subset of Ricci calculus; however, it is often used in applications in physics that do not distinguish between tangent and cotangent spaces. It was introduced to physics by Albert Einstein in 1916.

Einstein notation is used throughout the rest of this document, with the standard convention of lower indices for covectors (rows) and upper indices for contravariant vectors (columns). The covector is obtained by taking the complex conjugate transpose of the vector.

**Bra-Ket Notation** In quantum mechanics, bra–ket notation is a common notation for quantum states, i.e. vectors in a \CC Hilbert space on which an algebra of observables acts. More generally, the notation uses angle brackets (the \rangle and \langle symbols) and a vertical bar (the | symbol): a ket (for example, \ket{A}) denotes a vector in an abstract, usually \CC, vector space A, and a bra (for example, \bra{f}) denotes a linear functional f on A.

The natural pairing of a linear functional f = \bra{f} with a vector v = \ket{v} is then written as \braket{f \vert v}. On Hilbert spaces, the scalar product (\ ,\ ) (with antilinear first argument) gives an (anti-linear) identification of a vector ket \psi = \ket{\psi} with a linear functional bra (\phi,\ ) = \bra{\phi}. Using this notation, the scalar product is (\phi,\psi) = \braket{\phi \vert \psi}. For the vector space \CC^n, kets can be identified with column vectors, and bras with row vectors.
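As a concrete sketch (pure Python, with vectors as plain lists of complex numbers; `braket` is an illustrative name, not a library call), the pairing \braket{\phi \vert \psi} is just the conjugate-transpose dot product:

```python
# The bra <phi| is the conjugate transpose of the ket |phi>, so the pairing
# <phi|psi> is a sum of conjugated components (antilinear in the first slot).

def braket(phi, psi):
    """Return <phi|psi> = sum_i conj(phi_i) * psi_i."""
    return sum(p.conjugate() * q for p, q in zip(phi, psi))

phi = [1 + 2j, 3 - 1j]
psi = [2 + 0j, 1 + 1j]

# Conjugate symmetry: <phi|psi> = conj(<psi|phi>)
assert braket(phi, psi) == braket(psi, phi).conjugate()
print(braket(phi, psi))  # (4+0j)
```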

**Wikipedia Articles**

**Example Math**

__Einstein Notation__

A_{\mu} B^{\mu} = A \cdot B

A^{\mu} B_{\nu} = A \otimes B

__Bra-Ket Notation__

\braket{A \vert B} = A \cdot B

\ket{A} \bra{B} = A \otimes B

**Inner, Dot & Scalar Product** Geometrically, the dot product of two vectors is the product of their Euclidean magnitudes and the cosine of the angle between them. In the case of vector spaces, the dot product is used for defining lengths (the length of a vector is the square root of the dot product of the vector by itself) and angles (the cosine of the angle of two vectors is the quotient of their dot product by the product of their lengths).

An inner product space is a vector space with an additional structure called an inner product. This additional structure associates each pair of vectors in the space with a scalar quantity known as the inner product of the vectors. Inner products allow the rigorous introduction of intuitive geometrical notions such as the length of a vector or the angle between two vectors. They also provide the means of defining orthogonality between vectors (zero inner product). Inner product spaces generalize Euclidean spaces (in which the inner product is the dot product, also known as the scalar product) to vector spaces of any (possibly infinite) dimension, and are studied in functional analysis.

More precisely, an inner product A_{\mu} B^{\mu} satisfies the properties shown below.

The real dot product is commutative, and the complex inner product is conjugate-symmetric (Hermitian), meaning:

A \cdot B = \overline{B \cdot A}

For all vectors A and B.

*Inner Product*

**Wikipedia Articles**

**Example Math** \begin{aligned} A_{\mu} B^{\mu} &= \braket{A \vert B} \\ &= A \cdot B \\ &= {A}^\dagger{B} \end{aligned}

\begin{aligned} A_{\mu} B^{\mu} &= \overline{B_{\mu} A^{\mu}} \\ &= \overline{B \cdot A} \\ &= \overline{{B}^\dagger{A}} \end{aligned}

\begin{aligned} A_{\mu} B^{\mu} &= \begin{bmatrix} a & b \end{bmatrix} \begin{bmatrix} c \\ d \end{bmatrix} \\ &= ac + bd \end{aligned}

**Cross Product** The cross product or vector product (occasionally directed area product, to emphasize the geometric significance) is a binary operation on two vectors in three-dimensional space (\RR^3) and is denoted by the symbol \times. Given two linearly independent vectors A and B, the cross product A \times B is defined as a vector C that is perpendicular (orthogonal) to both A and B, with a direction given by the right-hand rule and a magnitude equal to the area of the parallelogram that the vectors span.

**Exterior & Wedge Product** The exterior product or wedge product of vectors is an algebraic construction used in geometry to study areas, volumes, and their higher-dimensional analogues. The exterior product of two vectors A and B, denoted by A \wedge B, is called a bivector and lives in a space called the exterior square, a vector space that is distinct from the original space of vectors. The magnitude of A \wedge B can be interpreted as the area of the parallelogram with sides A and B, which in three dimensions can also be computed using the cross product of the two vectors.

Both the cross product and wedge product are anti-commutative, meaning:

A \times B = - B \times A

A \wedge B = - B \wedge A

For all vectors A and B. Unlike the cross product, the wedge product is associative.

*Cross & Wedge Product*

**Wikipedia Articles**

**Example Math** \begin{aligned} A^{\mu} \times B^{\nu} &= \begin{bmatrix} a \\ b \end{bmatrix} e_i \times \begin{bmatrix} c \\ d \end{bmatrix} e_j \\ &= \det \begin{bmatrix} a & c \\ b & d \end{bmatrix} e_k \\ &= (ad - bc)\, e_k \end{aligned}

\begin{aligned} A^{\mu} \wedge B^{\nu} &= \begin{bmatrix} a \\ b \end{bmatrix} e_i \wedge \begin{bmatrix} c \\ d \end{bmatrix} e_j \\ &= \det \begin{bmatrix} a & c \\ b & d \end{bmatrix} e_i \wedge e_j \\ &= (ad - bc)\, e_i \wedge e_j \end{aligned}
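The 2 × 2 determinant above can be sketched in pure Python (`wedge2` is an illustrative helper name):

```python
def wedge2(A, B):
    """Signed area ad - bc of the parallelogram spanned by 2-D vectors A and B."""
    return A[0] * B[1] - A[1] * B[0]

# Anti-commutativity: swapping the arguments flips the sign.
assert wedge2([1, 2], [3, 4]) == -wedge2([3, 4], [1, 2])
print(wedge2([1, 2], [3, 4]))  # -2
```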

**Outer & Tensor Product** The outer product of two coordinate vectors is a matrix. If the two vectors have dimensions n and m, then their outer product is an n × m matrix. More generally, given two tensors (multidimensional arrays of numbers), their outer product is a tensor. The outer product of tensors is also referred to as their tensor product and can be used to define the tensor algebra.

**Kronecker Product** The Kronecker product, sometimes denoted by \otimes is an operation on two matrices of arbitrary size resulting in a block matrix. It is a generalization of the outer product (which is denoted by the same symbol) from vectors to matrices, and gives the matrix of the tensor product with respect to a standard choice of basis. The Kronecker product should not be confused with the usual matrix multiplication, which is an entirely different operation.
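A minimal pure-Python sketch of the Kronecker product, with matrices as lists of lists (`kron` is a hypothetical helper, not a library call):

```python
def kron(A, B):
    """Kronecker product A ⊗ B: every entry a_ij of A scales a full copy of B."""
    return [[a * b for a in row_a for b in row_b]
            for row_a in A for row_b in B]

# A 2x2 ⊗ 2x2 Kronecker product gives a 4x4 block matrix.
print(kron([[1, 2], [3, 4]], [[0, 1], [1, 0]]))
```

The gamma matrices listed further below are built in exactly this way, as Kronecker products of Pauli matrices.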

Unlike the cross product, the outer product is not anti-commutative; reversing the order instead gives the conjugate transpose, meaning:

A \otimes B = (B \otimes A)^\dagger

For all vectors A and B.

**Wikipedia Articles**

**Example Math** \begin{aligned} A^{\mu} B_{\nu} &= \ket{A} \bra{B} \\ &= A \otimes B \\ &= {A}{B}^\dagger \end{aligned}

\begin{aligned} A^{\mu} B_{\nu} &= \overline{B^{\nu} A_{\mu}} \\ &= (B \otimes A)^\dagger \\ &= ({B}{A}^\dagger)^\dagger \end{aligned}

\begin{aligned} A^{\mu} B_{\nu} &= \begin{bmatrix} a \\ b \end{bmatrix} \begin{bmatrix} c & d \end{bmatrix} \\ &= \begin{bmatrix} ac & ad \\ bc & bd \end{bmatrix} \end{aligned}
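The outer product A B^\dagger can be sketched in pure Python with vectors as lists (`outer` is an illustrative name; the conjugation only matters for complex entries):

```python
def outer(A, B):
    """Outer product A ⊗ B = A B^dagger: column vector times conjugated row."""
    return [[a * b.conjugate() for b in B] for a in A]

# Real example matching [a, b] ⊗ [c, d] = [[ac, ad], [bc, bd]].
print(outer([1, 2], [3, 4]))  # [[3, 4], [6, 8]]
```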

**Hadamard Product** The Hadamard product (also known as the element-wise, entrywise or Schur product) is a binary operation that takes two matrices of the same dimensions and produces another matrix of the same dimension as the operands where each element i, j is the product of elements i, j of the original two matrices. It should not be confused with the more common matrix product.

The Hadamard product is associative and distributive. Unlike the matrix product, it is also commutative.

**Wikipedia Articles**

**Example Math** \begin{aligned} A^{\mu} \odot B^{\nu} &= \ket{A} \odot \ket{B} \\ &= B^{\nu} \odot A^{\mu} \\ &= {A}{B} \end{aligned}

\begin{aligned} A^{\mu} \odot B^{\nu} &= \begin{bmatrix} a \\ b \end{bmatrix} \odot \begin{bmatrix} c \\ d \end{bmatrix} \\ &= \begin{bmatrix} ac \\ bd \end{bmatrix} \end{aligned}

\begin{aligned} A_{\mu} \odot B_{\nu} &= \bra{A} \odot \bra{B} \\ &= B_{\nu} \odot A_{\mu} \\ &= {B}{A} \end{aligned}

\begin{aligned} A_{\mu} \odot B_{\nu} &= \begin{bmatrix} a & b \end{bmatrix} \odot \begin{bmatrix} c & d \end{bmatrix} \\ &= \begin{bmatrix} ac & bd \end{bmatrix} \end{aligned}
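The element-wise product above can be sketched in pure Python (`hadamard` is an illustrative name):

```python
def hadamard(A, B):
    """Element-wise (Hadamard) product of two equal-length vectors."""
    return [a * b for a, b in zip(A, B)]

# Commutative, unlike the matrix product.
assert hadamard([1, 2], [3, 4]) == hadamard([3, 4], [1, 2])
print(hadamard([1, 2], [3, 4]))  # [3, 8]
```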

**Hodge Star Operator** In mathematics, the Hodge star operator or Hodge star is a linear map defined on the exterior algebra of a finite-dimensional oriented vector space endowed with a nondegenerate symmetric bilinear form. Applying the operator to an element of the algebra produces the Hodge dual of the element. This map was introduced by W. V. D. Hodge.

For example, in an oriented 3-dimensional Euclidean space, an oriented plane can be represented by the exterior product of two basis vectors, and its Hodge dual is the normal vector given by their cross product; conversely, any vector is dual to the oriented plane perpendicular to it, endowed with a suitable bivector.

**Wikipedia Articles**

**Example Math**

__Two Dimensions__

- \star 1 = dx \wedge dy
- \star dx = dy
- \star dy = -dx
- \star (dx \wedge dy) = 1

__Three Dimensions__

A common example of the Hodge star operator is the case n = 3, when it can be taken as the correspondence between vectors and bivectors. Specifically, for Euclidean \RR^3 with the basis dx, dy, dz of one-forms often used in vector calculus, one finds that:

- \star dx = dy \wedge dz
- \star dy = dz \wedge dx
- \star dz = dx \wedge dy

The Hodge star relates the exterior and cross product in three dimensions:

- \star (u \wedge v) = u \times v
- \star (u \times v) = u \wedge v

Applied to three dimensions, the Hodge star provides an isomorphism between axial vectors and bivectors, so each axial vector a is associated with a bivector A and vice versa, that is: A = \star a, a = \star A.
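The identity \star (u \wedge v) = u \times v can be made concrete in pure Python: the components of the bivector u \wedge v on (dy \wedge dz,\ dz \wedge dx,\ dx \wedge dy), sent through the basis rules above, are exactly the components of u \times v (illustrative helper names):

```python
def cross(u, v):
    """3-D cross product u x v."""
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

def star_wedge(u, v):
    """Hodge dual of u ^ v: the coefficients of dy^dz, dz^dx, dx^dy map to
    the e_x, e_y, e_z components of a vector."""
    return [u[1] * v[2] - u[2] * v[1],   # coefficient of dy^dz -> e_x
            u[2] * v[0] - u[0] * v[2],   # coefficient of dz^dx -> e_y
            u[0] * v[1] - u[1] * v[0]]   # coefficient of dx^dy -> e_z

u, v = [1, 2, 3], [4, 5, 6]
assert star_wedge(u, v) == cross(u, v)
print(cross(u, v))  # [-3, 6, -3]
```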

__Four Dimensions__

In case n = 4, the Hodge star acts as an endomorphism of the second exterior power (i.e. it maps 2-forms to 2-forms, since 4 − 2 = 2). If the signature of the metric tensor is all positive, i.e. on a Riemannian manifold, then the Hodge star is an involution; if the signature is mixed, then applying it twice returns the argument up to a sign. For example, in Minkowski spacetime where n = 4 with metric signature [+,-,-,-] and coordinates [t,x,y,z] where (using \varepsilon_{0123} = 1):

- \star dt = dx \wedge dy \wedge dz
- \star dx = dt \wedge dy \wedge dz
- \star dy = dx \wedge dt \wedge dz
- \star dz = dx \wedge dy \wedge dt

The duals above are of one-forms; for the two-forms:

- \star (dt \wedge dx) = dz \wedge dy
- \star (dt \wedge dy) = dx \wedge dz
- \star (dt \wedge dz) = dy \wedge dx
- \star (dx \wedge dy) = dt \wedge dz
- \star (dx \wedge dz) = dy \wedge dt
- \star (dy \wedge dz) = dt \wedge dx

Because their determinants are the same in both [+,-,-,-] and [-,+,+,+], the signs of the Minkowski space 2-form duals depend only on the chosen orientation.

An easy rule to remember for the above Hodge operations is that given a form \alpha, its Hodge dual \star \alpha may be obtained by writing the components not involved in \alpha in an order such that \alpha \wedge \star \alpha = dt \wedge dx \wedge dy \wedge dz. An extra minus sign will enter only if \alpha does not contain dt. The latter convention stems from the choice [+,-,-,-] for the metric signature. For [-,+,+,+], one puts in a minus sign only if \alpha involves dt.

**Pauli Matrices** In mathematical physics and mathematics, the Pauli matrices are a set of three 2 × 2 \CC matrices which are Hermitian and unitary. Usually indicated by the Greek letter sigma (\sigma), they are occasionally denoted by tau (\tau) when used in connection with isospin symmetries.

- \sigma_0 = \sigma_t = I_2
- \sigma_1 = \sigma_x
- \sigma_2 = \sigma_y
- \sigma_3 = \sigma_z
- \HH : \{1, \hat{i}, \hat{j}, \hat{k}\} \mapsto \{\sigma_0, -i\sigma_1, -i\sigma_2, -i\sigma_3\}

**Gell-Mann Matrices** The Gell-Mann matrices, developed by Murray Gell-Mann, are a set of eight linearly independent 3 \times 3 traceless Hermitian matrices used in the study of the strong interaction in particle physics. They span the Lie algebra of the SU(3) group in the defining representation.

**Gamma Matrices** In mathematical physics, the gamma matrices, also known as the Dirac matrices, are a set of conventional matrices with specific anti-commutation relations that ensure they generate a matrix representation of the Clifford algebra Cℓ_{1,3}(\RR). It is also possible to define higher-dimensional gamma matrices. When interpreted as the matrices of the action of a set of orthogonal basis vectors for contravariant vectors in Minkowski space, the column vectors on which the matrices act become a space of spinors, on which the Clifford algebra of spacetime acts. This in turn makes it possible to represent infinitesimal spatial rotations and Lorentz boosts. Spinors facilitate spacetime computations in general, and in particular are fundamental to the Dirac equation for relativistic spin-½ particles.

- \gamma^0 = \gamma^t = \sigma_3 \otimes \sigma_0
- \gamma^1 = \gamma^x = i\sigma_2 \otimes \sigma_1
- \gamma^2 = \gamma^y = i\sigma_2 \otimes \sigma_2
- \gamma^3 = \gamma^z = i\sigma_2 \otimes \sigma_3
- \gamma^5 = \sigma_1 \otimes \sigma_0 = i \gamma^0 \gamma^1 \gamma^2 \gamma^3

\gamma^0 is the time-like, Hermitian matrix.

\gamma^j are the space-like, anti-Hermitian matrices.

**Wikipedia Articles**

**Example Math**

__Pauli Matrices__

\sigma_0 = \begin{bmatrix} +1 & 0 \\ 0 & +1 \end{bmatrix}

\sigma_1 = \begin{bmatrix} 0 & +1 \\ +1 & 0 \end{bmatrix}

\sigma_2 = \begin{bmatrix} 0 & -i \\ +i & 0 \end{bmatrix}

\sigma_3 = \begin{bmatrix} +1 & 0 \\ 0 & -1 \end{bmatrix}
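A quick pure-Python check of the algebra behind the matrices above (illustrative helper names; entries are Python complex numbers): each \sigma_j squares to \sigma_0, \sigma_1 \sigma_2 = i\sigma_3, and the quaternion units are represented as -i\sigma_j.

```python
s0 = [[1, 0], [0, 1]]
s1 = [[0, 1], [1, 0]]
s2 = [[0, -1j], [1j, 0]]
s3 = [[1, 0], [0, -1]]

def matmul(A, B):
    """2x2 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def scale(c, A):
    """Scalar multiple c * A."""
    return [[c * x for x in row] for row in A]

# Each Pauli matrix squares to the identity.
assert all(matmul(s, s) == s0 for s in (s1, s2, s3))

# sigma_1 sigma_2 = i sigma_3 (and cyclic permutations).
assert matmul(s1, s2) == scale(1j, s3)

# Quaternion units as -i sigma_j: i-hat j-hat = k-hat and i-hat^2 = -1.
qi, qj, qk = (scale(-1j, s) for s in (s1, s2, s3))
assert matmul(qi, qj) == qk
assert matmul(qi, qi) == scale(-1, s0)
```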

__Gell-Mann Matrices__

\lambda_1 = \begin{bmatrix} 0 & +1 & 0 \\ +1 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}

\lambda_2 = \begin{bmatrix} 0 & -i & 0 \\ +i & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}

\lambda_3 = \begin{bmatrix} +1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 0 \end{bmatrix}

\lambda_4 = \begin{bmatrix} 0 & 0 & +1 \\ 0 & 0 & 0 \\ +1 & 0 & 0 \end{bmatrix}

\lambda_5 = \begin{bmatrix} 0 & 0 & -i \\ 0 & 0 & 0 \\ +i & 0 & 0 \end{bmatrix}

\lambda_6 = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & +1 \\ 0 & +1 & 0 \end{bmatrix}

\lambda_7 = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & -i \\ 0 & +i & 0 \end{bmatrix}

\lambda_8 = \begin{bmatrix} +1 & 0 & 0 \\ 0 & +1 & 0 \\ 0 & 0 & -2 \end{bmatrix} \frac{1}{\sqrt{3}}

__Gamma Matrices__

\gamma^0 = \begin{bmatrix} +\sigma_0 & 0 \\ 0 & -\sigma_0 \end{bmatrix}

\gamma^1 = \begin{bmatrix} 0 & +\sigma_1 \\ -\sigma_1 & 0 \end{bmatrix}

\gamma^2 = \begin{bmatrix} 0 & +\sigma_2 \\ -\sigma_2 & 0 \end{bmatrix}

\gamma^3 = \begin{bmatrix} 0 & +\sigma_3 \\ -\sigma_3 & 0 \end{bmatrix}

\gamma^5 = \begin{bmatrix} 0 & +\sigma_0 \\ +\sigma_0 & 0 \end{bmatrix}

__Identity Matrices__

I_2 = \begin{bmatrix} +1 & 0 \\ 0 & +1 \end{bmatrix}

I_4 = \begin{bmatrix} +\sigma_0 & 0 \\ 0 & +\sigma_0 \end{bmatrix}
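The block forms above can be checked in pure Python by building the gamma matrices as Kronecker products of Pauli matrices, per the list in the Gamma Matrices section, and verifying the defining Clifford relation \{\gamma^\mu, \gamma^\nu\} = 2\eta^{\mu\nu} I_4 for the [+,-,-,-] signature (illustrative helper names):

```python
s0 = [[1, 0], [0, 1]]
s1 = [[0, 1], [1, 0]]
s2 = [[0, -1j], [1j, 0]]
s3 = [[1, 0], [0, -1]]

def kron(A, B):
    """Kronecker product: entry a_ij of A scales a full copy of B."""
    return [[a * b for a in ra for b in rb] for ra in A for rb in B]

def matmul(A, B):
    """Square matrix product."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def scale(c, A):
    return [[c * x for x in row] for row in A]

def add(A, B):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

# Dirac basis: gamma^0 = s3 ⊗ s0, gamma^j = i s2 ⊗ s_j.
gammas = [kron(s3, s0)] + [scale(1j, kron(s2, s)) for s in (s1, s2, s3)]
eta = [1, -1, -1, -1]  # metric signature [+,-,-,-]

# Clifford algebra: {gamma^mu, gamma^nu} = 2 eta^{mu nu} I_4.
for mu in range(4):
    for nu in range(4):
        anti = add(matmul(gammas[mu], gammas[nu]),
                   matmul(gammas[nu], gammas[mu]))
        want = 2 * eta[mu] if mu == nu else 0
        assert all(anti[i][j] == (want if i == j else 0)
                   for i in range(4) for j in range(4))
```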

**Complex Numbers** A complex number is a number that can be expressed in the form a + b\hat{i}, where a and b are \RR numbers, and \hat{i} represents the imaginary unit, satisfying the equation \hat{i}^2 = -1. Because no real number satisfies this equation, \hat{i} is called an imaginary number. For the complex number a + b\hat{i}, a is called the real part, and b is called the imaginary part. The set of complex numbers is denoted using the symbol \CC. Despite the historical nomenclature "imaginary", complex numbers are regarded in the mathematical sciences as just as "real" as the real numbers, and are fundamental in many aspects of the scientific description of the natural world.

**Quaternion Numbers** In mathematics, the quaternions are a number system, denoted using the letter \HH, that extends the complex numbers. They were first described by the Irish mathematician William Rowan Hamilton in 1843 and applied to mechanics in three-dimensional space. A feature of quaternions is that the multiplication of two quaternions is noncommutative. Hamilton defined a quaternion as the quotient of two directed lines in a three-dimensional space, or equivalently as the quotient of two vectors.

\HH are generally represented in the form: a + \hat{i}b + \hat{j}c + \hat{k}d where a, b, c, and d are \RR numbers, and \hat{i}, \hat{j}, \hat{k} are the fundamental quaternion unit vectors.

**Octonion Numbers** In mathematics, the octonions, represented by the letter \OO, are a normed division algebra over the real numbers, meaning they form a hypercomplex number system. Octonions have eight dimensions, twice the number of dimensions of the quaternions, of which they are an extension. They are noncommutative and nonassociative, but satisfy a weaker form of associativity; namely, they are alternative. They are also power associative.

Octonions are not as well known as the quaternions and complex numbers, which are much more widely studied and used. Octonions are related to exceptional structures in mathematics, among them the exceptional Lie groups. Octonions have applications in fields such as string theory, special relativity and quantum logic. Applying the Cayley–Dickson construction to the octonions produces the sedenions.

**Wikipedia Articles**

- \CC :\rightarrow Complex
- \HH :\rightarrow Quaternion
- \OO :\rightarrow Octonion
- \SS :\rightarrow Sedenion

**Example Math**

__Complex Numbers__

- \CC = \RR \oplus \RR \star
- \CC = \RR \oplus \RR i
- \CC = 1 + i

- i^2 = -1

\CC \otimes \CC = \left[ \def \arraystretch{1.2} \begin{array}{c:c} +1 & +i \\ \hdashline +i & -1 \end{array} \right]

\CC \otimes \CC = \left[ \def \arraystretch{1.2} \begin{array}{c:c} \RR \times 1 \otimes \RR \times 1 & \RR \times 1 \otimes \RR \times \star \\ \hdashline \RR \times \star \otimes \RR \times 1 & \RR \times \star \otimes \RR \times \star \end{array} \right]

__Quaternion Numbers__

- \HH = \CC \oplus \CC \star
- \HH = \CC \oplus \CC j
- \HH = 1 + i + j + k

- i = jk = -kj
- j = ki = -ik
- k = ij = -ji
- i^2 = j^2 = k^2 = ijk =-1

\HH \otimes \HH = \left[ \def \arraystretch{1.2} \begin{array}{c:ccc} +1 & +i & +j & +k \\ \hdashline +i & -1 & +k & -j \\ +j & -k & -1 & +i \\ +k & +j & -i & -1 \end{array} \right]

\HH \otimes \HH = \left[ \def \arraystretch{1.2} \begin{array}{c:c} \CC \times 1 \otimes \CC \times 1 & \CC \times 1 \otimes \CC \times \star \\ \hdashline \CC \times \star \otimes \CC \times 1 & \CC \times \star \otimes \CC \times \star \end{array} \right]
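The quaternion multiplication table above can be sketched as a pure-Python Hamilton product (`qmul` is an illustrative name; quaternions are 4-tuples (a, b, c, d) meaning a + b\hat{i} + c\hat{j} + d\hat{k}):

```python
def qmul(p, q):
    """Hamilton product of quaternions given as (a, b, c, d) = a + bi + cj + dk."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)

assert qmul(i, j) == k              # ij = k
assert qmul(j, i) == (0, 0, 0, -1)  # ji = -k (noncommutative)
assert qmul(i, i) == (-1, 0, 0, 0)  # i^2 = -1
```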

__Octonion Numbers__

- \OO = \HH \oplus \HH \star
- \OO = \HH \oplus \HH m
- \OO = 1 + i + j + k + m + I + J + K

- I = kJ = Kj
- J = iK = Ik
- K = jI = Ji
- i = mI = KJ
- j = mJ = IK
- k = mK = JI
- m = Ii = Jj = Kk = IJK
- m^2 = I^2 = J^2 = K^2 = -1

\OO \otimes \OO = \left[ \def \arraystretch{1.2} \begin{array}{c:ccc:c:ccc} +1 & +i & +j & +k & +m & +I & +J & +K \\ \hdashline +i & -1 & +k & -j & +I & -m & -K & +J \\ +j & -k & -1 & +i & +J & +K & -m & -I \\ +k & +j & -i & -1 & +K & -J & +I & -m \\ \hdashline +m & -I & -J & -K & -1 & +i & +j & +k \\ \hdashline +I & +m & -K & +J & -i & -1 & -k & +j \\ +J & +K & +m & -I & -j & +k & -1 & -i \\ +K & -J & +I & +m & -k & -j & +i & -1 \end{array} \right]

\OO \otimes \OO = \left[ \def \arraystretch{1.2} \begin{array}{c:c} \HH \times 1 \otimes \HH \times 1 & \HH \times 1 \otimes \HH \times \star \\ \hdashline \HH \times \star \otimes \HH \times 1 & \HH \times \star \otimes \HH \times \star \end{array} \right]

**Penrose Spacetime** In theoretical physics, a Penrose diagram (named after mathematical physicist Roger Penrose) is a two-dimensional diagram capturing the causal relations between different points in spacetime through a conformal treatment of infinity. It is an extension of a Minkowski diagram where the vertical dimension represents time, and the horizontal dimension represents a space dimension, and diagonal lines at an angle of ±45° correspond to real and virtual light rays. The biggest difference is that locally, the metric on a Penrose diagram is conformally equivalent to the actual metric in spacetime. The conformal factor is chosen such that the entire infinite spacetime is transformed into a Penrose diagram of finite size, with infinity on the boundary of the diagram. For spherically symmetric spacetime, every point in the Penrose diagram corresponds to a 2-dimensional sphere.

**Formulation** Penrose diagrams are formulated using the transform \tan(u \pm v) = t \pm x, which has zeros at 0 and \pm \pi and diverges to \pm \infty at \pm \pi/2. So the outer box represents \pm \infty for both light and vacuum energy (real and virtual bosons), and the inner cross the 0 point, or event horizon, between a time-like, causally connected event and a space-like event, showing us the special relativity light cone.
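A small pure-Python sketch of this compactification, assuming the convention above, \tan(u \pm v) = t \pm x (`penrose` is an illustrative name):

```python
import math

def penrose(t, x):
    """Map (t, x) to Penrose coordinates (u, v) via tan(u ± v) = t ± x."""
    p = math.atan(t + x)   # u + v, confined to (-pi/2, +pi/2)
    q = math.atan(t - x)   # u - v, confined to (-pi/2, +pi/2)
    return (p + q) / 2, (p - q) / 2

# The origin maps to the centre of the diagram...
assert penrose(0, 0) == (0.0, 0.0)

# ...while distant times approach the finite boundary u = pi/2.
u, v = penrose(1e12, 0)
assert abs(u - math.pi / 2) < 1e-9 and v == 0.0

# Light rays t = x stay on 45-degree lines: u == v.
u, v = penrose(7.0, 7.0)
assert u == v
```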

*Penrose Spacetime*

**Wikipedia Articles**