In random matrix theory, the Gaussian ensembles are specific probability distributions over self-adjoint matrices whose entries are independently sampled from the Gaussian distribution. They are among the most commonly studied matrix ensembles, fundamental to both mathematics and physics. The three main examples are the Gaussian orthogonal (GOE), unitary (GUE), and symplectic (GSE) ensembles. These are classified by the Dyson index β, which takes the values 1, 2, and 4 respectively, counting the number of real components per matrix element (1 for real elements, 2 for complex elements, 4 for quaternions). The index can be extended to take any positive real value.
The gaussian ensembles are also called the Wigner ensembles,[1] or the Hermite ensembles.[2]
λ_1 ≤ λ_2 ≤ ⋯ ≤ λ_N: the eigenvalues of the matrix, which are all real, since the matrices are always assumed to be self-adjoint.
σ_d²: the variance of on-diagonal matrix entries. We assume that for each β, all on-diagonal matrix entries have the same variance. It is always defined as σ_d² = 2/β.
σ²: the variance of off-diagonal matrix entries. We assume that for each β, all off-diagonal matrix entries have the same variance. It is always defined as σ² = σ_d²/2 = 1/β, where σ² is the variance of each of the β real components of an off-diagonal entry (so that E|H_ij|² = βσ² = 1).
When referring to the main reference works, it is necessary to translate their formulas, since each choice of convention leads to different constant scaling factors in the formulas.
For all cases, the GβE(N) ensemble is defined with density function p(H) = (1/Z_{β,N}) exp(−(β/4) Tr H²), where the partition function is the normalizing constant Z_{β,N} = ∫ exp(−(β/4) Tr H²) dH, the integral taken over the independent real components of H.
The Gaussian orthogonal ensemble GOE(N) is defined as the probability distribution over N×N real symmetric matrices H with density function p(H) = (1/Z_{1,N}) exp(−(1/4) Tr H²), where the partition function is Z_{1,N} = 2^{N/2} (2π)^{N(N+1)/4}.
Explicitly, since there are only N(N+1)/2 degrees of freedom, the parameterization is as follows: p(H) ∝ ∏_i exp(−H_ii²/4) ∏_{i<j} exp(−H_ij²/2), where we pick the entries on and above the diagonal as the degrees of freedom.
The Gaussian unitary ensemble GUE(N) is defined as the probability distribution over N×N Hermitian matrices H with density function p(H) = (1/Z_{2,N}) exp(−(1/2) Tr H²), where the partition function is Z_{2,N} = 2^{N/2} π^{N²/2}.
Explicitly, since there are only N² real degrees of freedom, the parameterization is as follows: p(H) ∝ ∏_i exp(−H_ii²/2) ∏_{i<j} exp(−|H_ij|²), where we pick the entries on and above the diagonal (the real diagonal entries and the complex upper-triangular entries) as the degrees of freedom.
The Gaussian symplectic ensemble GSE(N) is defined as the probability distribution over N×N self-adjoint quaternionic matrices H with density function p(H) = (1/Z_{4,N}) exp(−Tr H²), where the partition function is Z_{4,N} = 2^{−N(N−1)} π^{N(2N−1)/2}.
Explicitly, since there are only N(2N−1) real degrees of freedom, the parameterization is as follows: p(H) ∝ ∏_i exp(−H_ii²) ∏_{i<j} exp(−2|H_ij|²), where we write H_ij = h⁰_ij + h¹_ij i + h²_ij j + h³_ij k and pick the entries on and above the diagonal as the degrees of freedom.
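For concreteness, the convention above can be checked with a short numerical sketch (the sampler names below are illustrative, not standard): sample GOE(N) and GUE(N) and verify σ_d² = 2/β and per-component off-diagonal variance 1/β. GSE(N) can be handled analogously by representing quaternions as 2×2 complex blocks.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_goe(n):
    """GOE(n): diagonal variance 2, off-diagonal variance 1."""
    a = rng.standard_normal((n, n))
    return (a + a.T) / np.sqrt(2)

def sample_gue(n):
    """GUE(n): diagonal variance 1, off-diagonal components variance 1/2."""
    a = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (a + a.conj().T) / 2

# Empirical check of the variance convention.
goe = np.array([sample_goe(2) for _ in range(100_000)])
print(goe[:, 0, 0].var(), goe[:, 0, 1].var())            # ~2.0 and ~1.0
gue = np.array([sample_gue(2) for _ in range(100_000)])
print(gue[:, 0, 0].real.var(), gue[:, 0, 1].real.var())  # ~1.0 and ~0.5
```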
For all cases, the GβE(N) ensemble is uniquely characterized (up to affine transformation) by its symmetries, that is, invariance under appropriate transformations.[3]
For GOE, consider a probability distribution over symmetric matrices satisfying the following properties:
Invariance under orthogonal transformation: For any fixed (not random) orthogonal matrix Q, let H be a random sample from the distribution. Then QHQᵀ has the same distribution as H.
Independence: The entries on and above the diagonal are independently distributed.
For GUE, consider a probability distribution over Hermitian matrices satisfying the following properties:
Invariance under unitary transformation: For any fixed (not random) unitary matrix U, let H be a random sample from the distribution. Then UHU* has the same distribution as H.
Independence: The entries on and above the diagonal are independently distributed.
For GSE, consider a probability distribution over self-adjoint quaternionic matrices satisfying the following properties:
Invariance under symplectic transformation: For any fixed (not random) symplectic matrix S (a quaternionic unitary matrix, SS* = I), let H be a random sample from the distribution. Then SHS* has the same distribution as H.
Independence: The entries on and above the diagonal are independently distributed.
In all 3 cases, these conditions force the distribution to have the form p(H) ∝ exp(−a Tr H² + b Tr H), where a > 0 and b ∈ ℝ. Thus, with the further specification of the mean and the variance, we recover the GOE, GUE, GSE.[4] Notably, if mere invariance is demanded, without independence, then any spectral distribution can be produced by multiplying with a function of the form f(Tr H, Tr H², …, Tr H^N).[5]
More succinctly stated, each of GOE, GUE, GSE is uniquely specified by invariance, independence, the mean, and the variance.
In this way, the GβE(N) ensemble may be defined by first specifying the spectral density, so that any method that motivates the spectral density also motivates the GβE(N) ensemble, and vice versa.
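The invariance property can be verified numerically. The following sketch (helper names are ours) conjugates GOE samples by one fixed orthogonal matrix and compares entry statistics:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4

def sample_goe(n):
    a = rng.standard_normal((n, n))
    return (a + a.T) / np.sqrt(2)

# One fixed orthogonal matrix Q, obtained from the QR decomposition
# of a gaussian matrix; it is held constant across all samples.
q, _ = np.linalg.qr(rng.standard_normal((n, n)))

orig, conj = [], []
for _ in range(50_000):
    h = sample_goe(n)
    orig.append(h[0, 1])
    conj.append((q @ h @ q.T)[0, 1])

# Both entries should be ~N(0, 1): same mean and variance.
print(np.mean(orig), np.var(orig))
print(np.mean(conj), np.var(conj))
```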
For all cases, the GβE(N) ensemble is uniquely characterized as the absolutely continuous probability distribution over real symmetric / complex Hermitian / quaternionic self-adjoint matrices that maximizes entropy, under the constraint E[Tr H²] = const.[6]
The spectrum of GUE(N) is a determinantal point process with kernel

K_N(x, y) = ∑_{k=0}^{N−1} φ_k(x) φ_k(y),

where φ_k(x) = (√(2π) k!)^{−1/2} He_k(x) e^{−x²/4} are the orthonormal Hermite functions for the weight e^{−x²/2} (He_k being the probabilists' Hermite polynomials), and by the Christoffel–Darboux formula,

K_N(x, y) = √N (φ_N(x) φ_{N−1}(y) − φ_{N−1}(x) φ_N(y)) / (x − y).

Using the confluent form of Christoffel–Darboux and the three-term recurrence of Hermite polynomials, the spectral density of GUE(N) for finite values of N is:[8]

ρ_N(x) = K_N(x, x) = N φ_N(x)² − √(N(N+1)) φ_{N+1}(x) φ_{N−1}(x).

The spectral distributions of GOE(N) and GSE(N) can also be written as quaternionic determinantal (Pfaffian) point processes involving skew-orthogonal polynomials.[9][10]
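Assuming the conventions above, the finite-N density formula can be checked numerically with NumPy's probabilists' Hermite module (function names below are illustrative):

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

def phi(k, x):
    """Orthonormal Hermite function for the weight e^{-x^2/2}."""
    c = np.zeros(k + 1)
    c[k] = 1.0
    norm = math.sqrt(math.sqrt(2 * math.pi) * math.factorial(k))
    return He.hermeval(x, c) * np.exp(-x**2 / 4) / norm

def gue_density(n, x):
    """rho_n(x) = K_n(x, x), the mean eigenvalue density of GUE(n)."""
    return sum(phi(k, x) ** 2 for k in range(n))

n = 6
x = np.linspace(-8, 8, 2001)
rho = gue_density(n, x)
print(np.sum(rho) * (x[1] - x[0]))   # ~6: the density integrates to n

# The confluent Christoffel-Darboux form agrees with the direct sum:
rho_cd = n * phi(n, x)**2 - math.sqrt(n * (n + 1)) * phi(n + 1, x) * phi(n - 1, x)
print(np.max(np.abs(rho - rho_cd)))  # ~0
```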
For all cases, given a sampled matrix from the GβE(N) ensemble, we can perform a Householder tridiagonalization on it to obtain a tridiagonal matrix which has the same distribution as (1/√β)·T, where T is the N×N symmetric tridiagonal matrix with diagonal entries a_1, …, a_N and off-diagonal entries b_1, …, b_{N−1}; each a_i ~ N(0, 2) is gaussian-distributed, each b_k ~ χ_{β(N−k)} is chi-distributed, and all are independent. The β = 1 case was first noted in 1984,[11] and the general case was noted in 2002.[12] Just as the Laplace differential operator can be discretized to the Laplacian matrix, this tridiagonal form of the gaussian ensemble allows a reinterpretation of the gaussian ensembles as an ensemble not over matrices but over differential operators, specifically a "stochastic Airy operator". This leads more generally to the study of random matrices as stochastic operators.[13]
Computationally, this allows efficient sampling of eigenvalues: from O(N³) time on the full matrix to just O(N²) on the tridiagonal matrix. If one only requires a histogram of the eigenvalues with k bins, the time can be further decreased to O(kN) by using Sturm sequences.[14] Theoretically, this definition allows extension to all β > 0, leading to the gaussian beta ensembles,[15][12] and "anti-symmetric" gaussian beta ensembles.[16]
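A minimal sampler along these lines (the function name gbe_eigvals is ours), using scipy.linalg.eigh_tridiagonal for the O(N²) tridiagonal eigenvalue solve:

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

rng = np.random.default_rng(2)

def gbe_eigvals(n, beta):
    """Eigenvalues of one G(beta)E(n) sample via the tridiagonal model."""
    diag = np.sqrt(2.0 / beta) * rng.standard_normal(n)  # a_i / sqrt(beta), a_i ~ N(0, 2)
    dfs = beta * np.arange(n - 1, 0, -1)                 # beta*(n-1), ..., beta*1
    off = np.sqrt(rng.chisquare(dfs) / beta)             # b_k / sqrt(beta), b_k ~ chi
    return eigh_tridiagonal(diag, off, eigvals_only=True)

lam = gbe_eigvals(2000, beta=2.5)  # any beta > 0, not just 1, 2, 4
print(lam.min(), lam.max())        # ~ -2*sqrt(2000) and ~ +2*sqrt(2000)
```

Because the tridiagonal model makes sense for every β > 0, the same code samples, for example, β = 2.5, which no dense matrix model provides directly.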
Relatedly, let A be an N×N matrix with all entries IID sampled from the corresponding standard normal distribution – for example, if β = 2, then A_ij ~ N(0, 1/2) + i·N(0, 1/2). Then applying repeated Householder transforms on only the left side of A results in A = Q_1 Q_2 ⋯ Q_{N−1} R, where each Q_i is a Householder matrix, and R is an upper triangular matrix with independent entries, such that each √β·R_ii ~ χ_{β(N−i+1)} for i = 1, …, N, and each R_ij for i < j is a standard gaussian of the same type as the entries of A.[17]
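This factorization can be spot-checked numerically in the real case (β = 1); a sketch: the squared diagonal of R from a QR factorization of a gaussian matrix should have mean N, N−1, …, 1, as expected for χ²-distributed entries.

```python
import numpy as np

rng = np.random.default_rng(3)
n, trials = 8, 20_000

acc = np.zeros(n)
for _ in range(trials):
    a = rng.standard_normal((n, n))
    r = np.linalg.qr(a, mode='r')   # R from Householder triangularization
    acc += np.abs(np.diag(r)) ** 2  # |R_ii|^2 ~ chi^2 with n-i+1 degrees of freedom

print(acc / trials)  # ~ [8, 7, 6, 5, 4, 3, 2, 1]
```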
The requirement that the matrix ensemble be a gaussian ensemble is stronger than necessary for the Wigner semicircle law. Indeed, the theorem applies to much more generic matrix ensembles.
The joint eigenvalue density can be written as a Gibbs measure:

p(λ_1, …, λ_N) = (1/Z) e^{−βH(λ)}

with the energy function (also called the Hamiltonian)

H(λ) = (1/4) ∑_i λ_i² − ∑_{i<j} ln|λ_i − λ_j|.

This can be interpreted physically as the Boltzmann distribution, at inverse temperature β, of a physical system consisting of N identical unit electric charges constrained to move on the real line, repelling each other via the two-dimensional Coulomb potential −ln|λ_i − λ_j|, while being attracted to the origin via a quadratic potential λ²/4. This is the Coulomb gas model for the eigenvalues.
In the macroscopic limit, one rescales λ_i = √N x_i and defines the empirical measure μ_N = (1/N) ∑_i δ_{x_i}, obtaining H = N² I[μ_N] + c_N, where c_N = −(N(N−1)/4) ln N is a deterministic constant and the mean-field functional is

I[μ] = (1/4) ∫ x² dμ(x) − (1/2) ∬_{x≠y} ln|x − y| dμ(x) dμ(y).
The minimizer of I over probability measures is the Wigner semicircle law

ρ_sc(x) = (1/2π) √(4 − x²), |x| ≤ 2,

which gives the limiting eigenvalue density.[20] The value I[ρ_sc] yields the leading-order (order N²) term in ln Z, termed the Coulomb gas free energy.
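Numerically, the rescaled spectrum of a single large sample already hugs the semicircle; a sketch for GUE (the closed-form CDF below follows by integrating the semicircle density):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1000

# One GUE(n) sample, eigenvalues rescaled by sqrt(n).
a = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
h = (a + a.conj().T) / 2
x = np.linalg.eigvalsh(h) / np.sqrt(n)

def semicircle_cdf(t):
    """CDF of the semicircle law on [-2, 2]."""
    t = np.clip(t, -2, 2)
    return 0.5 + (t * np.sqrt(4 - t**2) + 4 * np.arcsin(t / 2)) / (4 * np.pi)

# Compare the empirical CDF at a few points to the semicircle CDF.
for t in (-1.0, 0.0, 1.0):
    print((x <= t).mean(), semicircle_cdf(t))
```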
Alternatively, suppose that there exists a charge density ρ such that the quadratic electric potential can be recreated (up to an additive constant) via

x²/4 = ∫ ρ(y) ln|x − y| dy + C for all x in the support of ρ.

Then, imposing a fixed background negative electric charge of density ρ exactly cancels out the electric repulsion between the freely moving positive charges. Such a function does exist: ρ = ρ_sc, the semicircle density, which can be found by solving an integral equation. This indicates that the Wigner semicircle distribution is the equilibrium distribution.[21][22][23]
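The Coulomb gas can also be simulated directly with a Metropolis chain on the energy H(λ); a minimal sketch (the step size and run length are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
n, beta = 50, 2.0

def local_energy(lam, i, xi):
    """Terms of H(lam) involving coordinate i when it is placed at xi."""
    others = np.delete(lam, i)
    return 0.25 * xi**2 - np.sum(np.log(np.abs(xi - others)))

lam = rng.standard_normal(n) * np.sqrt(n)  # initial configuration
for _ in range(100_000):
    i = rng.integers(n)
    xi = lam[i] + rng.standard_normal()    # random-walk proposal
    d_e = local_energy(lam, i, xi) - local_energy(lam, i, lam[i])
    if rng.random() < np.exp(-beta * d_e): # Metropolis acceptance rule
        lam[i] = xi

# After burn-in, lam/sqrt(n) is approximately semicircle-distributed on [-2, 2].
print(lam.min() / np.sqrt(n), lam.max() / np.sqrt(n))
```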
Gaussian fluctuations about ρ_sc, obtained by expanding the energy functional to second order, produce the sine kernel in the bulk and the Airy kernel at the soft edge, after proper rescaling.
The largest eigenvalue λ_max for GβE(N) follows the Tracy–Widom distribution TW_β after proper translation and scaling: N^{1/6}(λ_max − 2√N) converges in distribution to TW_β.[24] It can be efficiently sampled by the shift-invert Lanczos algorithm applied to the upper-left corner (of size O(N^{1/3})) of the tridiagonal matrix form.[25]
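A sketch of this for β = 2, using the tridiagonal model with a direct solve for the largest eigenvalue only (simpler, though less efficient, than shift-invert Lanczos on the corner); the sample mean and variance should approach the known TW₂ values, roughly −1.77 and 0.81:

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

rng = np.random.default_rng(6)
n, beta, trials = 400, 2, 2000

samples = []
for _ in range(trials):
    diag = np.sqrt(2.0 / beta) * rng.standard_normal(n)
    off = np.sqrt(rng.chisquare(beta * np.arange(n - 1, 0, -1)) / beta)
    lam_max = eigh_tridiagonal(diag, off, eigvals_only=True,
                               select='i', select_range=(n - 1, n - 1))[0]
    samples.append(n**(1 / 6) * (lam_max - 2 * np.sqrt(n)))

print(np.mean(samples), np.var(samples))  # ~ -1.77, ~ 0.81
```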
From ordered eigenvalues λ_1 ≤ ⋯ ≤ λ_N, define the normalized spacings s_i = (λ_{i+1} − λ_i)/⟨s⟩, where ⟨s⟩ is the mean spacing at that point of the spectrum. This normalizes the spacings to have unit mean. With this, the approximate spacing distributions (the Wigner surmises) are

p_1(s) = (π/2) s e^{−πs²/4},
p_2(s) = (32/π²) s² e^{−4s²/π},
p_4(s) = (2¹⁸/(3⁶π³)) s⁴ e^{−64s²/(9π)}

for β = 1, 2, 4 respectively.
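The surmise can be compared against sampled spacings (a sketch; the central half of the spectrum is used so that the local mean spacing is roughly constant):

```python
import numpy as np

rng = np.random.default_rng(7)
n, trials = 400, 200

s_all = []
for _ in range(trials):
    a = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    lam = np.linalg.eigvalsh((a + a.conj().T) / 2)
    bulk = lam[n // 4 : 3 * n // 4]  # central half: roughly constant density
    s = np.diff(bulk)
    s_all.append(s / s.mean())       # normalize to unit mean spacing

s = np.concatenate(s_all)
hist, edges = np.histogram(s, bins=30, range=(0, 3), density=True)
mid = (edges[:-1] + edges[1:]) / 2
surmise = (32 / np.pi**2) * mid**2 * np.exp(-4 * mid**2 / np.pi)
print(np.max(np.abs(hist - surmise)))  # close to the beta = 2 surmise
```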
The Gaussian ensemble was first motivated in theoretical physics. In the 1940s, Eugene Wigner studied the irregular spacings of slow-neutron resonances in heavy nuclei. Working with the few dozen levels then available, he noticed a pronounced repulsion between neighbouring lines.
In 1951, he modelled the Hamiltonian of a compound nucleus in a minimal way.[26] He noted that, by symmetry considerations, it must be a real symmetric operator, so he modelled it as a random sample from the GOE(N). He solved the 2×2 case and found the two-level spacing law p(s) = (πs/2) e^{−πs²/4}, which matched well with the data. He disseminated his guess ("the Wigner surmise") during a conference on Neutron Physics by Time-of-Flight in 1956:[27][28][29]
Perhaps I am now too courageous when I try to guess the distribution of the distances between successive levels (of energies of heavy nuclei). Theoretically, the situation is quite simple if one attacks the problem in a simpleminded fashion. The question is simply what are the distances of the characteristic values of a symmetric matrix with random coefficients.
— Eugene Wigner, Results and theory of resonance absorption
Freeman Dyson stated the project as a statistical theory of nuclear energy levels, to be contrasted with precise calculations based on an analytic model of the nucleus. He argued that a statistical theory was necessary, because the energy levels then being measured were on the order of the millionth level of the excitation spectrum, and at such a high order precise calculation was simply impossible. The idea differed from the then-understood form of statistical mechanics: instead of a system with precisely stated dynamical laws but too many interacting particles, so that the particles must be treated statistically, he would model the dynamical laws themselves as unknown, and thus treat them statistically.[30]
In 1962, Dyson proposed the "Threefold Way" to motivate the three ensembles, by showing that in three fields (group representation theory, quantum mechanics, random matrix theory) there is a three-fold disjunction, which he traced back to the Frobenius theorem stating that there are only three associative real division algebras: the real numbers, the complex numbers, and the quaternions.[31] A random matrix representing a Hamiltonian can be classified by an anti-unitary operator T that describes time-reversal symmetry. The classification depends on whether T exists and, if so, on whether T² = +1 or T² = −1. Each symmetry class produces a constraint on the possible form of the Hamiltonian, and the corresponding gaussian ensemble can then be motivated as a maximal-entropy distribution, as described previously.
If T² = +1, the Hamiltonian must be real symmetric. This typically occurs in systems with no magnetic field and either spinless particles or integer-spin particles with negligible spin–orbit interaction. This is the case for the level-spacing distribution of nuclear compound states, the original motivation for Wigner.
If T does not exist, then the Hamiltonian is only required to be Hermitian. Time-reversal symmetry can be broken by a homogeneous magnetic field, random magnetic fluxes, or spin-selective lasers. In these cases, the off-diagonal matrix elements acquire independent complex phases.
Chaotic microwave cavities with a ferrite: Adding a strong axial magnetic field causes the level statistics to transition continuously from GOE to GUE, which was a confirmation of the Bohigas–Giannoni–Schmit (BGS) conjecture.[32]
Quantum Hall effect: The physics of quantum Hall edge states and Landau levels is modelled by the GUE due to the strong perpendicular magnetic field breaking time-reversal symmetry.
If T² = −1 (which, by Kramers' theorem, is the case for systems with half-integer spin) and there is significant spin–orbit interaction, the resulting Hamiltonians are naturally described by quaternion-Hermitian matrices. This has been observed in Kramers doublets[34] and many quantum chaotic systems. It is also possible to construct such a system without spin.[35]
^ Chiani, M. (2014). "Distribution of the largest eigenvalue for real Wishart and Gaussian random matrices and a simple approximation for the Tracy–Widom distribution". Journal of Multivariate Analysis. 129: 69–81. arXiv:1209.3394. doi:10.1016/j.jmva.2014.04.002. S2CID 15889291.
Deift, Percy (2000). Orthogonal Polynomials and Random Matrices: A Riemann–Hilbert Approach. Courant Lecture Notes in Mathematics. Providence, R.I.: American Mathematical Society. ISBN 978-0-8218-2695-9.
Mehta, M.L. (2004). Random Matrices. Amsterdam: Elsevier/Academic Press. ISBN 0-12-088409-7.
Deift, Percy; Gioev, Dimitri (2009). Random Matrix Theory: Invariant Ensembles and Universality. Courant Lecture Notes in Mathematics. New York; Providence, R.I.: Courant Institute of Mathematical Sciences; American Mathematical Society. ISBN 978-0-8218-4737-4.
Forrester, Peter (2010). Log-Gases and Random Matrices. London Mathematical Society Monographs. Princeton: Princeton University Press. ISBN 978-0-691-12829-0.
Anderson, G.W.; Guionnet, A.; Zeitouni, O. (2010). An Introduction to Random Matrices. Cambridge: Cambridge University Press. ISBN 978-0-521-19452-5.
Akemann, G.; Baik, J.; Di Francesco, P. (2011). The Oxford Handbook of Random Matrix Theory. Oxford: Oxford University Press. ISBN 978-0-19-957400-1.
Tao, Terence (2012). Topics in Random Matrix Theory. Graduate Studies in Mathematics. Providence, R.I.: American Mathematical Society. ISBN 978-0-8218-7430-1.
Mingo, James A.; Speicher, Roland (2017). Free Probability and Random Matrices. Fields Institute Monographs. New York, NY: Springer. ISBN 978-1-4939-6942-5.
Potters, Marc; Bouchaud, Jean-Philippe (2020). A First Course in Random Matrix Theory: For Physicists, Engineers and Data Scientists. Cambridge: Cambridge University Press. doi:10.1017/9781108768900. ISBN 978-1-108-76890-0.