Uncertainty Principle

In quantum mechanics, the Heisenberg uncertainty principle states a fundamental limit on the accuracy with which certain pairs of physical properties of a particle, such as position and momentum, can be simultaneously known. In layman's terms, the more precisely one property is measured, the less precisely the other can be controlled, determined, or known. In his Nobel Laureate speech, Max Born said: “...To measure space coordinates and instants of time, rigid measuring rods and clocks are required. On the other hand, to measure momenta and energies, devices are necessary with movable parts to absorb the impact of the test object and to indicate the size of its momentum. Paying regard to the fact that quantum mechanics is competent for dealing with the interaction of object and apparatus, it is seen that no arrangement is possible that will fulfill both requirements simultaneously...”

Published by Werner Heisenberg in 1927, the uncertainty principle was a key discovery in the early development of quantum theory. It implies that it is impossible to simultaneously measure the present position of a particle while also determining its future motion, and the same holds for any system small enough to require quantum mechanical treatment. Intuitively, the principle can be understood by considering a typical measurement of a particle. As Born indicates above, it is impossible to determine both momentum and position by means of the same measurement. Assume that the particle's initial momentum has been accurately calculated by measuring its mass, the force applied to it, and the length of time it was subjected to that force. Measuring its position after it is no longer being accelerated would then require another measurement, performed by scattering light or other particles off it. But each such interaction alters its momentum by an unknown and indeterminable increment, degrading our knowledge of its momentum while augmenting our knowledge of its position. So Heisenberg argues that every measurement destroys part of our knowledge of the system that was obtained by previous measurements.

The uncertainty principle states a fundamental property of quantum systems, and is not a statement about the observational success of current technology. Specifically, the product of the uncertainties in position and momentum is always equal to or greater than one half of the reduced Planck constant ħ, which is defined as the re-scaling h/(2π) of the Planck constant h. Mathematically, the uncertainty relation between position and momentum arises because the expressions of the wavefunction in the two corresponding bases are Fourier transforms of one another (i.e., position and momentum are conjugate variables). In the mathematical formulation of quantum mechanics, any pair of non-commuting operators is subject to similar uncertainty limits.
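The bound above can be checked numerically. The sketch below (an illustrative check, not part of any standard library API; grid sizes and the width parameter are arbitrary choices) works in units where ħ = 1 and shows that a Gaussian wave packet saturates the bound, giving σ_x · σ_p = ħ/2. The momentum-space distribution is obtained from the position-space wavefunction via an FFT, reflecting the conjugate-variable relationship described above.

```python
import numpy as np

# Numerical check (units: hbar = 1) that a Gaussian wave packet
# saturates the Heisenberg bound sigma_x * sigma_p = hbar / 2.
# Grid parameters and the width sigma are illustrative choices.

hbar = 1.0
n = 8192
x = np.linspace(-100.0, 100.0, n, endpoint=False)
dx = x[1] - x[0]

sigma = 1.7                                     # position-space width
psi = np.exp(-x**2 / (4 * sigma**2))            # Gaussian wave packet, <x> = 0

prob_x = np.abs(psi)**2 / np.sum(np.abs(psi)**2)
sigma_x = np.sqrt(np.sum(x**2 * prob_x))        # position uncertainty

k = 2 * np.pi * np.fft.fftfreq(n, d=dx)         # wavenumber grid, p = hbar * k
prob_k = np.abs(np.fft.fft(psi))**2
prob_k /= np.sum(prob_k)
sigma_p = hbar * np.sqrt(np.sum(k**2 * prob_k)) # momentum uncertainty

print(sigma_x * sigma_p)                        # ~ 0.5 = hbar / 2
```

For any non-Gaussian wave packet the same computation yields a strictly larger product, which is one way to see that the inequality is sharp and that the Gaussian is the extremal state.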

Harmonic analysis
In the context of harmonic analysis, a branch of mathematics, the uncertainty principle implies that one cannot at the same time localize the value of a function and its Fourier transform. To wit, the following inequality holds:


 * $$\left(\int_{-\infty}^\infty x^2 |f(x)|^2\,dx\right)\left(\int_{-\infty}^\infty \xi^2 |\hat{f}(\xi)|^2\,d\xi\right)\ge \frac{\|f\|_2^4}{16\pi^2}.$$
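This inequality can be verified numerically for the Gaussian $$f(x)=e^{-\pi x^2}$$, which, under the convention $$\hat{f}(\xi)=\int f(x)e^{-2\pi i x\xi}\,dx$$, is its own Fourier transform and turns the inequality into an equality. The sketch below (an illustrative check; the integration grid is an arbitrary choice) approximates the integrals by Riemann sums:

```python
import numpy as np

# Check the Fourier uncertainty inequality for f(x) = exp(-pi x^2).
# With the convention f-hat(xi) = int f(x) exp(-2 pi i x xi) dx,
# this f equals its own transform, so both second moments coincide
# and the inequality holds with equality (the Gaussian is extremal).

x = np.linspace(-12.0, 12.0, 48001)
dx = x[1] - x[0]
f = np.exp(-np.pi * x**2)

moment = np.sum(x**2 * np.abs(f)**2) * dx   # int x^2 |f(x)|^2 dx
norm2 = np.sum(np.abs(f)**2) * dx           # ||f||_2^2

lhs = moment * moment                       # f-hat = f, so the xi-moment is identical
rhs = norm2**2 / (16 * np.pi**2)
print(lhs, rhs)                             # equal: the Gaussian saturates the bound
```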

Other purely mathematical formulations of uncertainty exist between a function f and its Fourier transform – see Fourier transform. A variety of such results, along with short surveys of them, can be found in the literature.

Signal processing
In the context of signal processing, particularly time–frequency analysis, uncertainty principles are referred to as the Gabor limit, after Dennis Gabor, or sometimes the Heisenberg–Gabor limit. The basic result, which follows from Benedicks's theorem, below, is that a function cannot be both time limited and band limited (a function and its Fourier transform cannot both have bounded domain) – see bandlimited versus timelimited. Stated alternatively, "one cannot simultaneously localize a signal (function) in both the time domain (f) and frequency domain (Fourier transform)". When applied to filters, the result is that one cannot achieve high temporal resolution and frequency resolution at the same time; a concrete example is the resolution trade-off of the short-time Fourier transform – if one uses a wide window, one achieves good frequency resolution at the cost of temporal resolution, while a narrow window has the opposite trade-off.

Alternative theorems give more precise quantitative results, and in time–frequency analysis, rather than interpreting the (1-dimensional) time and frequency domains separately, one instead interprets the limit as a lower limit on the support of a function in the (2-dimensional) time–frequency plane. In practice the Gabor limit limits the simultaneous time–frequency resolution one can achieve without interference; it is possible to achieve higher resolution, but at the cost of different components of the signal interfering with each other.
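The trade-off can be quantified for window functions: the product of a window's RMS duration and RMS bandwidth cannot fall below 1/(4π) (with frequency measured in Hz), and only the Gaussian window attains this bound. The sketch below (an illustrative check; `rms_widths` and the grid parameters are ad hoc choices, not a standard API) compares a Gaussian window against a Hann window, which exceeds the bound:

```python
import numpy as np

# Gabor limit for windows: sigma_t * sigma_f >= 1/(4*pi), with
# equality only for a Gaussian window.  A Hann window exceeds the
# bound.  Sampling rate and window widths are illustrative choices.

def rms_widths(g, t, dt):
    """RMS time width and RMS bandwidth (in Hz) of a sampled window g."""
    w = np.abs(g)**2
    sigma_t = np.sqrt(np.sum(t**2 * w) / np.sum(w))
    G2 = np.abs(np.fft.fft(g))**2             # only magnitudes matter here
    f = np.fft.fftfreq(g.size, d=dt)
    sigma_f = np.sqrt(np.sum(f**2 * G2) / np.sum(G2))
    return sigma_t, sigma_f

dt = 0.001                                    # 1 kHz sampling
t = (np.arange(8192) - 4096) * dt

gauss = np.exp(-t**2 / (2 * 0.05**2))         # Gaussian window, 50 ms width
hann = np.where(np.abs(t) < 0.1,              # Hann window, 200 ms long
                np.cos(np.pi * t / 0.2)**2, 0.0)

pg = np.prod(rms_widths(gauss, t, dt))
ph = np.prod(rms_widths(hann, t, dt))
print(pg, ph, 1 / (4 * np.pi))                # pg sits at the bound, ph above it
```

Rescaling either window trades σ_t against σ_f but leaves the product unchanged, which is exactly the wide-window/narrow-window trade-off of the short-time Fourier transform described above.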

Benedicks's theorem
The Amrein–Berthier and Benedicks theorem intuitively says that the set of points where f is non-zero and the set of points where $$\hat{f}$$ is non-zero cannot both be small. Specifically, it is impossible for a function f in L2(R) and its Fourier transform to both be supported on sets of finite Lebesgue measure. A more quantitative version is due to Nazarov: whenever $$S$$ and $$\Sigma$$ are subsets of $$\mathbf{R}^d$$ of finite Lebesgue measure,


 * $$\|f\|_{L^2(\mathbf{R}^d)}\leq Ce^{C|S||\Sigma|} \bigl(\|f\|_{L^2(S^c)} + \| \hat{f} \|_{L^2(\Sigma^c)} \bigr)$$

One expects that the factor $$Ce^{C|S||\Sigma|}$$ may be replaced by $$Ce^{C(|S||\Sigma|)^{1/d}}$$, but this is so far known only when either $$S$$ or $$\Sigma$$ is convex.
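A finite-dimensional analogue of this phenomenon is the Donoho–Stark uncertainty principle: a nonzero vector in $$\mathbf{C}^n$$ cannot have $$|\operatorname{supp} x|\cdot|\operatorname{supp}\hat{x}| < n$$. The sketch below (an illustrative experiment; the support sizes and iteration count are arbitrary choices) exploits this: since no nonzero signal can be confined to 32 time samples and 8 frequency bins when 32 × 8 < 1024, alternately projecting any signal onto the two constraint sets drives it to zero, mirroring the continuous statement for sets of finite measure.

```python
import numpy as np

# Discrete analogue of the Benedicks phenomenon (Donoho-Stark): with
# |supp x| * |supp x-hat| = 32 * 8 < n = 1024, the only vector that is
# both time-limited and band-limited is zero, so alternating
# projections onto the two constraints annihilate any input.

n = 1024
rng = np.random.default_rng(0)
x = rng.standard_normal(n)
x0_norm = np.linalg.norm(x)

tmask = np.zeros(n)
tmask[:32] = 1.0                   # time support: 32 samples
fmask = np.zeros(n)
fmask[:4] = 1.0                    # frequency support: 8 bins,
fmask[-4:] = 1.0                   # symmetric so projections stay real

for _ in range(500):
    x = x * tmask                                       # project: time-limit
    x = np.real(np.fft.ifft(np.fft.fft(x) * fmask))     # project: band-limit

print(np.linalg.norm(x) / x0_norm)  # essentially zero
```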

Hardy's uncertainty principle
The mathematician G. H. Hardy formulated the following uncertainty principle: it is not possible for f and $$\hat{f}$$ to both be "very rapidly decreasing". Specifically, if f in L2(R) is such that


 * $$|f(x)|\leq C(1+|x|)^Ne^{-a\pi x^2}$$

and


 * $$|\hat{f}(\xi)|\leq C(1+|\xi|)^Ne^{-b\pi \xi^2}$$ (with $$C>0$$ and $$N$$ an integer),

then, if $$ab>1$$, we have $$f=0$$, while if $$ab=1$$, there is a polynomial $$P$$ of degree $$\leq N$$ such that


 * $$f(x)=P(x)e^{-a\pi x^2}. \, $$
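The threshold $$ab=1$$ is realized by the Gaussian itself: under the convention $$\hat{f}(\xi)=\int f(x)e^{-2\pi ix\xi}\,dx$$, the function $$f(x)=e^{-a\pi x^2}$$ has transform $$\hat{f}(\xi)=a^{-1/2}e^{-\pi\xi^2/a}$$, so its decay parameter is $$b=1/a$$ and $$ab=1$$ exactly. The sketch below (an illustrative check; the grid and sample points are arbitrary choices) recovers $$b$$ numerically from the transform and confirms $$ab=1$$:

```python
import numpy as np

# Hardy's critical case ab = 1: the transform of exp(-a pi x^2) decays
# like exp(-b pi xi^2) with b = 1/a.  We evaluate the transform by a
# Riemann sum and fit b from its values at two sample frequencies.

a = 2.0
x = np.linspace(-20.0, 20.0, 40001)
dx = x[1] - x[0]
f = np.exp(-a * np.pi * x**2)

def ft(xi):
    """Riemann-sum approximation of f-hat(xi) = int f(x) e^{-2 pi i x xi} dx."""
    return np.sum(f * np.exp(-2j * np.pi * x * xi)) * dx

# |f-hat(xi)| = C exp(-b pi xi^2), so b follows from a log-ratio:
v1, v2 = abs(ft(0.5)), abs(ft(1.0))
b = np.log(v1 / v2) / (np.pi * (1.0**2 - 0.5**2))

print(a * b)  # -> 1.0 up to discretization error
```

Making the true decay any faster on both sides simultaneously (so that $$ab>1$$) is impossible for a nonzero function, which is the content of the theorem.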

This was later improved as follows: if $$f\in L^2(\mathbf{R}^d)$$ is such that


 * $$\int_{\mathbf{R}^d}\int_{\mathbf{R}^d}|f(x)||\hat{f}(\xi)|\frac{e^{\pi|\langle x,\xi\rangle|}}{(1+|x|+|\xi|)^N} \, dx \, d\xi < +\infty$$

then


 * $$f(x)=P(x)e^{-\pi\langle Ax,x\rangle}$$

where $$P$$ is a polynomial of degree $$<\frac{N-d}{2}$$ and $$A$$ is a real $$d\times d$$ positive definite matrix.

This result was stated in Beurling's complete works without proof, and proved by Hörmander (for the case $$d=1,N=0$$) and by Bonami, Demange, and Jaming for the general case. Note that the Hörmander–Beurling version implies the case $$ab>1$$ in Hardy's theorem, while the version by Bonami–Demange–Jaming covers the full strength of Hardy's theorem.

A full description of the case $$ab<1$$, as well as the following extension to Schwartz class distributions, appears in Demange:

Theorem. If a tempered distribution $$f\in\mathcal{S}'(\mathbf{R}^d)$$ is such that


 * $$e^{\pi|x|^2}f\in\mathcal{S}'(\mathbf{R}^d)$$

and


 * $$e^{\pi|\xi|^2}\hat f\in\mathcal{S}'(\mathbf{R}^d)$$

then


 * $$f(x)=P(x)e^{-\pi\langle Ax,x\rangle}$$

for some polynomial $$P$$ and some real positive definite $$d\times d$$ matrix $$A$$.