Statistical description of the behaviour of bosons
In quantum statistics, Bose–Einstein (B–E) statistics describe one of two possible ways in which a collection of noninteracting, indistinguishable particles may occupy a set of available discrete energy states at thermodynamic equilibrium. The aggregation of particles in the same state, which is a characteristic of particles obeying Bose–Einstein statistics, accounts for the cohesive streaming of laser light and the frictionless creeping of superfluid helium. The theory of this behaviour was developed (1924–25) by Satyendra Nath Bose, who recognized that a collection of identical and indistinguishable particles can be distributed in this way. The idea was later adopted and extended by Albert Einstein in collaboration with Bose.
Bose–Einstein statistics apply only to particles that are not limited to single occupancy of the same state—that is, particles free of the restrictions of the Pauli exclusion principle. Such particles have integer values of spin and are named bosons, after the statistics that correctly describe their behaviour. There must also be no significant interaction between the particles.
Comparison of average occupancy of the ground state for three statistics
Bose–Einstein distribution
At low temperatures, bosons behave differently from fermions (which obey Fermi–Dirac statistics) in that an unlimited number of them can "condense" into the same energy state. This apparently unusual property also gives rise to a special state of matter – the Bose–Einstein condensate. Fermi–Dirac and Bose–Einstein statistics apply when quantum effects are important and the particles are "indistinguishable". Quantum effects appear if the concentration of particles satisfies
 ${\frac {N}{V}}\geq n_{q},$
where N is the number of particles, V is the volume, and n_{q} is the quantum concentration, for which the interparticle distance is equal to the thermal de Broglie wavelength, so that the wavefunctions of the particles are barely overlapping.
Fermi–Dirac statistics apply to fermions (particles that obey the Pauli exclusion principle), and Bose–Einstein statistics apply to bosons. As the quantum concentration depends on temperature, most systems at high temperatures obey the classical (Maxwell–Boltzmann) limit, unless they also have a very high density, as for a white dwarf. Both Fermi–Dirac and Bose–Einstein become Maxwell–Boltzmann statistics at high temperature or at low concentration.
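As a rough numerical illustration of this criterion (a sketch of our own; the helium-4 mass, the 4 K temperature, and the liquid-helium density below are assumed, illustrative values, not figures from the text), one can compare N/V with the quantum concentration:

```python
import math

h = 6.62607015e-34      # Planck constant, J s
k_B = 1.380649e-23      # Boltzmann constant, J/K

def quantum_concentration(mass, T):
    """n_q = 1 / lambda^3, where lambda = h / sqrt(2 pi m k_B T)
    is the thermal de Broglie wavelength."""
    lam = h / math.sqrt(2 * math.pi * mass * k_B * T)
    return lam ** -3

# Assumed example: helium-4 atoms at 4 K vs. the rough density of liquid helium
m_He = 6.646e-27        # kg (helium-4 atomic mass)
n_q = quantum_concentration(m_He, 4.0)
n_over_V = 2.2e28       # m^-3, approximate number density of liquid helium

print(f"n_q ≈ {n_q:.2e} m^-3, N/V ≈ {n_over_V:.2e} m^-3")
print("quantum statistics required:", n_over_V >= n_q)
```

At these values N/V is comparable to n_q, consistent with liquid helium requiring a quantum-statistical treatment.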
B–E statistics was introduced for photons in 1924 by Bose and generalized to atoms by Einstein in 1924–25.
The expected number of particles in an energy state i for B–E statistics is:
${\bar {n}}_{i}={\frac {g_{i}}{e^{(\varepsilon _{i}-\mu )/k_{\text{B}}T}-1}}$
with $\varepsilon _{i}>\mu$ and where ${\bar {n}}_{i}$ is the average number of particles in state i, $g_{i}$ is the degeneracy of energy level i, ε_{i} is the energy of the ith state, μ is the chemical potential, k_{B} is the Boltzmann constant, and T is the absolute temperature.
The variance of this distribution, $V(n)$, is calculated directly from the expression above for the average number:^{[1]}
 $V(n)=k_{\text{B}}T{\frac {\partial }{\partial \mu }}{\bar {n}}_{i}=\langle n\rangle (1+\langle n\rangle )={\bar {n}}+{\bar {n}}^{2}.$
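As a minimal numerical sketch of these two formulas (the function name and parameter values are our own; units are chosen so that k_B = 1):

```python
import math

def bose_einstein(eps, mu, T, g=1.0, k_B=1.0):
    """Mean occupancy g / (exp((eps - mu) / (k_B T)) - 1); requires eps > mu."""
    if eps <= mu:
        raise ValueError("Bose-Einstein statistics requires eps > mu")
    return g / math.expm1((eps - mu) / (k_B * T))

# The occupancy diverges as eps approaches mu from above (illustrative values)
for eps in (1.0, 0.5, 0.1, 0.01):
    n = bose_einstein(eps, mu=0.0, T=1.0)
    print(f"eps = {eps:5.2f}: n = {n:10.4f}, variance = {n * (1 + n):12.4f}")
```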
For comparison, the average number of fermions with energy $\varepsilon _{i}$ given by the Fermi–Dirac particle-energy distribution has a similar form:
 ${\bar {n}}_{i}(\varepsilon _{i})={\frac {g_{i}}{e^{(\varepsilon _{i}-\mu )/k_{\text{B}}T}+1}}.$
As mentioned above, both the Bose–Einstein distribution and the Fermi–Dirac distribution approach the Maxwell–Boltzmann distribution in the limit of high temperature and low particle density, without the need for any ad hoc assumptions:
 In the limit of low particle density, ${\bar {n}}_{i}={\frac {g_{i}}{e^{(\varepsilon _{i}-\mu )/k_{\text{B}}T}\pm 1}}\ll 1$, therefore $e^{(\varepsilon _{i}-\mu )/k_{\text{B}}T}\pm 1\gg 1$ or equivalently $e^{(\varepsilon _{i}-\mu )/k_{\text{B}}T}\gg 1$. In that case, ${\bar {n}}_{i}\approx g_{i}e^{-(\varepsilon _{i}-\mu )/k_{\text{B}}T}$, which is the result from Maxwell–Boltzmann statistics.
 In the limit of high temperature, the particles are distributed over a large range of energy values, so the occupancy of each state (especially the high-energy ones with $\varepsilon _{i}-\mu \gg k_{\text{B}}T$) is again very small, ${\bar {n}}_{i}={\frac {g_{i}}{e^{(\varepsilon _{i}-\mu )/k_{\text{B}}T}\pm 1}}\ll 1$. This again reduces to Maxwell–Boltzmann statistics.
In addition to reducing to the Maxwell–Boltzmann distribution in the limit of high $T$ and low density, B–E statistics also reduce to the Rayleigh–Jeans law distribution for low-energy states with $\varepsilon _{i}-\mu \ll k_{\text{B}}T$, namely
 ${\begin{aligned}{\bar {n}}_{i}&={\frac {g_{i}}{e^{(\varepsilon _{i}-\mu )/k_{\text{B}}T}-1}}\\&\approx {\frac {g_{i}}{(\varepsilon _{i}-\mu )/k_{\text{B}}T}}={\frac {g_{i}k_{\text{B}}T}{\varepsilon _{i}-\mu }}.\end{aligned}}$
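Both limiting behaviours are easy to check numerically. A small sketch in the dimensionless variable x = (ε_i − μ)/k_BT, with g_i = 1 (our notation, not from the text):

```python
import math

def n_BE(x):
    """Bose-Einstein occupancy for g = 1, with x = (eps - mu) / (k_B T)."""
    return 1.0 / math.expm1(x)

# Maxwell-Boltzmann limit: for x >> 1 the occupancy approaches exp(-x)
x = 10.0
print(n_BE(x), math.exp(-x))     # ~4.54e-5 for both

# Rayleigh-Jeans-type limit: for x << 1 the occupancy approaches 1/x
x = 1e-3
print(n_BE(x), 1.0 / x)          # ~999.5 vs 1000.0
```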
History
While presenting a lecture at the University of Dhaka (in what was then British India and is now Bangladesh) on the theory of radiation and the ultraviolet catastrophe, Satyendra Nath Bose intended to show his students that the contemporary theory was inadequate, because it predicted results not in accordance with experimental results. During this lecture, Bose committed an error in applying the theory, which unexpectedly gave a prediction that agreed with experiment. The error was a simple mistake—similar to arguing that flipping two fair coins will produce two heads one-third of the time—that would appear obviously wrong to anyone with a basic understanding of statistics (remarkably, this error resembled the famous blunder by d'Alembert known from his Croix ou Pile article^{[2]}^{[3]}). However, the results it predicted agreed with experiment, and Bose realized it might not be a mistake after all. For the first time, he took the position that the Maxwell–Boltzmann distribution would not be true for all microscopic particles at all scales. Thus, he studied the probability of finding particles in various states in phase space, where each state is a little patch having phase volume of h^{3}, and the position and momentum of the particles are not kept particularly separate but are considered as one variable.
Bose adapted this lecture into a short article called Planck's Law and the Hypothesis of Light Quanta^{[4]}^{[5]} and submitted it to the Philosophical Magazine. However, the referee's report was negative, and the paper was rejected. Undaunted, he sent the manuscript to Albert Einstein requesting publication in the Zeitschrift für Physik. Einstein immediately agreed, personally translated the article from English into German (Bose had earlier translated Einstein's article on the theory of general relativity from German to English), and saw to it that it was published. Bose's theory achieved respect when Einstein sent his own paper in support of Bose's to Zeitschrift für Physik, asking that they be published together. The paper came out in 1924.^{[6]}
The reason Bose produced accurate results was that since photons are indistinguishable from each other, one cannot treat any two photons having equal quantum numbers (e.g., polarization and momentum vector) as being two distinct identifiable photons. By analogy, if in an alternate universe coins were to behave like photons and other bosons, the probability of producing two heads would indeed be one-third, as would the probability of getting a head and a tail (which equals one-half for conventional, classical, distinguishable coins). Bose's "error" leads to what is now called Bose–Einstein statistics.
Bose and Einstein extended the idea to atoms, and this led to the prediction of the existence of the phenomenon that became known as the Bose–Einstein condensate, a dense collection of bosons (particles with integer spin, named after Bose), which was demonstrated to exist by experiment in 1995.
Derivation
Derivation from the microcanonical ensemble
In the microcanonical ensemble, one considers a system with fixed energy, volume, and number of particles. We take a system composed of $N=\sum _{i}n_{i}$ identical bosons, $n_{i}$ of which have energy $\varepsilon _{i}$ and are distributed over $g_{i}$ levels or states with the same energy $\varepsilon _{i}$, i.e. $g_{i}$ is the degeneracy associated with energy $\varepsilon _{i}$; the total energy is $E=\sum _{i}n_{i}\varepsilon _{i}$. Calculation of the number of arrangements of $n_{i}$ particles distributed among $g_{i}$ states is a problem of combinatorics. Since the particles are indistinguishable in the quantum mechanical context here, the number of ways of arranging $n_{i}$ particles in $g_{i}$ boxes (for the $i$th energy level) is (see the image below)
The image represents one possible distribution of bosonic particles in different boxes. The box partitions (green) can be moved around to change the size of the boxes and, as a result, the number of bosons each box can contain.
 $w_{i,{\text{BE}}}={\frac {(n_{i}+g_{i}-1)!}{n_{i}!(g_{i}-1)!}}=C_{n_{i}}^{n_{i}+g_{i}-1},$
where $C_{k}^{m}$ is the number of $k$-combinations of a set with $m$ elements. The total number of arrangements in an ensemble of bosons is simply the product of the binomial coefficients $C_{n_{i}}^{n_{i}+g_{i}-1}$ above over all the energy levels, i.e.
 $W_{\text{BE}}=\prod _{i}w_{i,{\text{BE}}}=\prod _{i}{\frac {(n_{i}+g_{i}-1)!}{(g_{i}-1)!\,n_{i}!}}.$
The maximum number of arrangements determining the corresponding occupation number $n_{i}$ is obtained by maximizing the entropy, or equivalently, setting $\mathrm {d} (\ln W_{\text{BE}})=0$ and taking the subsidiary conditions $N=\sum n_{i},E=\sum _{i}n_{i}\varepsilon _{i}$ into account (as Lagrange multipliers).^{[7]} The result for $n_{i}\gg 1$, $g_{i}\gg 1$, $n_{i}/g_{i}=O(1)$ is the Bose–Einstein distribution.
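The product over levels can be illustrated with a short sketch (the occupation numbers and degeneracies below are made-up values for demonstration):

```python
from math import comb, prod

def W_BE(ns, gs):
    """Total number of arrangements: the product over energy levels of
    C(n_i + g_i - 1, n_i)."""
    return prod(comb(n + g - 1, n) for n, g in zip(ns, gs))

# Illustrative two-level system: C(4, 2) * C(2, 1) = 6 * 2 = 12
print(W_BE(ns=[2, 1], gs=[3, 2]))
```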
Derivation from the grand canonical ensemble
The Bose–Einstein distribution, which applies only to a quantum system of noninteracting bosons, is naturally derived from the grand canonical ensemble without any approximations.^{[8]} In this ensemble, the system is able to exchange energy and exchange particles with a reservoir (temperature T and chemical potential µ fixed by the reservoir).
Due to the non-interacting quality, each available single-particle level (with energy ε) forms a separate thermodynamic system in contact with the reservoir. That is, the number of particles within the overall system that occupy a given single-particle state forms a sub-ensemble that is also a grand canonical ensemble; hence, it may be analysed through the construction of a grand partition function.
Every single-particle state has a fixed energy, $\varepsilon$. As the sub-ensemble associated with a single-particle state varies only by its number of particles, the total energy of the sub-ensemble is directly proportional to that number; where $N$ is the number of particles, the total energy of the sub-ensemble will be $N\varepsilon$. Beginning with the standard expression for a grand partition function and replacing $E$ with $N\varepsilon$, the grand partition function takes the form
 ${\mathcal {Z}}=\sum _{N}\exp((N\mu -N\varepsilon )/k_{\text{B}}T)=\sum _{N}\exp(N(\mu -\varepsilon )/k_{\text{B}}T).$
This formula applies to fermionic systems as well as bosonic systems. Fermi–Dirac statistics arise when considering the effect of the Pauli exclusion principle: whilst the number of fermions occupying the same single-particle state can only be either 1 or 0, the number of bosons occupying a single-particle state may be any non-negative integer. Thus, the grand partition function for bosons can be considered a geometric series and may be evaluated as such:
 ${\begin{aligned}{\mathcal {Z}}&=\sum _{N=0}^{\infty }\exp(N(\mu -\varepsilon )/k_{\text{B}}T)=\sum _{N=0}^{\infty }[\exp((\mu -\varepsilon )/k_{\text{B}}T)]^{N}\\&={\frac {1}{1-\exp((\mu -\varepsilon )/k_{\text{B}}T)}}.\end{aligned}}$
Note that the geometric series is convergent only if $e^{(\mu -\varepsilon )/k_{\text{B}}T}<1$, including the case where $\varepsilon =0$. This implies that the chemical potential for the Bose gas must be negative, i.e., $\mu <0$, whereas the Fermi gas is allowed to take both positive and negative values for the chemical potential.^{[9]}
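The geometric-series evaluation can be checked numerically; the sketch below (illustrative parameter values with μ < ε, so the series converges) compares a truncated sum with the closed form:

```python
import math

def Z_truncated(mu, eps, T, N_max, k_B=1.0):
    """Grand partition function summed over N = 0 .. N_max particles."""
    r = math.exp((mu - eps) / (k_B * T))
    return sum(r ** N for N in range(N_max + 1))

def Z_closed(mu, eps, T, k_B=1.0):
    """Closed form 1 / (1 - exp((mu - eps) / (k_B T))), valid for mu < eps."""
    return 1.0 / (1.0 - math.exp((mu - eps) / (k_B * T)))

mu, eps, T = -0.5, 1.0, 1.0          # assumed values with mu < eps
print(Z_truncated(mu, eps, T, 200))  # ~1.2872
print(Z_closed(mu, eps, T))          # same to machine precision
```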
The average particle number for that single-particle substate is given by
 $\langle N\rangle =k_{\text{B}}T{\frac {1}{\mathcal {Z}}}\left({\frac {\partial {\mathcal {Z}}}{\partial \mu }}\right)_{V,T}={\frac {1}{\exp((\varepsilon -\mu )/k_{\text{B}}T)-1}}$
This result applies for each singleparticle level and thus forms the Bose–Einstein distribution for the entire state of the system.^{[10]}^{[11]}
The variance in particle number (due to thermal fluctuations) may also be derived; the result can be expressed in terms of the value of $\langle N\rangle$ just derived:
 $\langle \sigma _{N}^{2}\rangle =k_{\text{B}}T\left({\frac {d\langle N\rangle }{d\mu }}\right)_{V,T}={\frac {\exp((\varepsilon -\mu )/k_{\text{B}}T)}{(\exp((\varepsilon -\mu )/k_{\text{B}}T)-1)^{2}}}=\langle N\rangle (1+\langle N\rangle ).$
As a result, for highly occupied states the standard deviation of the particle number of an energy level is very large, slightly larger than the particle number itself: $\sigma _{N}\approx \langle N\rangle$. This large uncertainty is due to the fact that the probability distribution for the number of bosons in a given energy level is a geometric distribution; somewhat counterintuitively, the most probable value for N is always 0. (In contrast, classical particles have instead a Poisson distribution in particle number for a given state, with a much smaller uncertainty of $\sigma _{N,{\rm {classical}}}={\sqrt {\langle N\rangle }}$, and with the mostprobable N value being near $\langle N\rangle$.)
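This geometric-distribution picture can be verified by sampling. A minimal sketch (parameter values are our own; sampling is by inversion of P(N ≥ n) = r^n with r = e^{(μ−ε)/k_BT}):

```python
import math
import random

random.seed(1)
mu, eps, T, k_B = -0.1, 0.0, 1.0, 1.0       # illustrative values (eps > mu)
r = math.exp((mu - eps) / (k_B * T))        # Boltzmann factor, r < 1

# P(N) = (1 - r) r^N is geometric on N = 0, 1, 2, ...; sample by inversion:
# since P(N >= n) = r^n, N = floor(ln U / ln r) for uniform U in (0, 1).
samples = [int(math.log(random.random()) / math.log(r)) for _ in range(100_000)]

mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
exact = r / (1 - r)
print(f"<N>: sampled {mean:.2f}, exact {exact:.2f}")
print(f"var: sampled {var:.1f}, exact <N>(1+<N>) = {exact * (1 + exact):.1f}")
```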
Derivation in the canonical approach
It is also possible to derive approximate Bose–Einstein statistics in the canonical ensemble.
These derivations are lengthy and only yield the above results in the asymptotic limit of a large number of particles.
The reason is that the total number of bosons is fixed in the canonical ensemble. The Bose–Einstein distribution in this case can be derived, as in most texts, by maximization, but the mathematically cleanest derivation is by the Darwin–Fowler method of mean values, as emphasized by Dingle.^{[12]} See also Müller-Kirsten.^{[7]} The fluctuations of the ground state in the condensed region are, however, markedly different in the canonical and grand-canonical ensembles.^{[13]}
Derivation
Suppose we have a number of energy levels, labeled by index $i$, each level having energy $\varepsilon _{i}$ and containing a total of $n_{i}$ particles. Suppose each level contains $g_{i}$ distinct sublevels, all of which have the same energy, and which are distinguishable. For example, two particles may have different momenta, in which case they are distinguishable from each other, yet they can still have the same energy. The value of $g_{i}$ associated with level $i$ is called the "degeneracy" of that energy level. Any number of bosons can occupy the same sublevel.
Let $w(n,g)$ be the number of ways of distributing $n$ particles among the $g$ sublevels of an energy level. There is only one way of distributing $n$ particles in one sublevel, therefore $w(n,1)=1$. It is easy to see that there are $(n+1)$ ways of distributing $n$ particles in two sublevels, which we write as:
 $w(n,2)={\frac {(n+1)!}{n!1!}}.$
With a little thought (see Notes below) it can be seen that the number of ways of distributing $n$ particles in three sublevels is
 $w(n,3)=w(n,2)+w(n-1,2)+\cdots +w(1,2)+w(0,2)$
so that
 $w(n,3)=\sum _{k=0}^{n}w(n-k,2)=\sum _{k=0}^{n}{\frac {(n-k+1)!}{(n-k)!1!}}={\frac {(n+2)!}{n!2!}}$
where we have used the following theorem involving binomial coefficients:
 $\sum _{k=0}^{n}{\frac {(k+a)!}{k!a!}}={\frac {(n+a+1)!}{n!(a+1)!}}.$
Continuing this process, we can see that $w(n,g)$ is just a binomial coefficient (see Notes below):
 $w(n,g)={\frac {(n+g-1)!}{n!(g-1)!}}.$
For example, the population numbers for two particles in three sublevels are 200, 110, 101, 020, 011, or 002 for a total of six which equals 4!/(2!2!). The number of ways that a set of occupation numbers $\displaystyle n_{i}$ can be realized is the product of the ways that each individual energy level can be populated:
 $W=\prod _{i}w(n_{i},g_{i})=\prod _{i}{\frac {(n_{i}+g_{i}-1)!}{n_{i}!(g_{i}-1)!}}\approx \prod _{i}{\frac {(n_{i}+g_{i})!}{n_{i}!(g_{i})!}}$
where the approximation assumes that $n_{i}\gg 1$.
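The counting formula is easy to verify by brute force; the following sketch (our own check) enumerates all occupation vectors of g sublevels that sum to n and compares the count with the binomial coefficient:

```python
from itertools import product
from math import comb

def w_bruteforce(n, g):
    """Count occupation vectors (n_1, ..., n_g) of g sublevels summing to n."""
    return sum(1 for occ in product(range(n + 1), repeat=g) if sum(occ) == n)

for n, g in [(2, 3), (4, 3), (5, 4)]:
    assert w_bruteforce(n, g) == comb(n + g - 1, n)
    print(n, g, comb(n + g - 1, n))   # prints 6, 15, 56
```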
Following the same procedure used in deriving the Maxwell–Boltzmann statistics, we wish to find the set of $\displaystyle n_{i}$ for which W is maximised, subject to the constraint that there be a fixed total number of particles, and a fixed total energy. The maxima of $\displaystyle W$ and $\displaystyle \ln(W)$ occur at the same value of $\displaystyle n_{i}$ and, since it is easier to accomplish mathematically, we will maximise the latter function instead. We constrain our solution using Lagrange multipliers forming the function:
 $f(n_{i})=\ln(W)+\alpha (N-\sum n_{i})+\beta (E-\sum n_{i}\varepsilon _{i})$
Using the $n_{i}\gg 1$ approximation and using Stirling's approximation for the factorials $\left(x!\approx x^{x}\,e^{-x}\,{\sqrt {2\pi x}}\right)$ gives
 $f(n_{i})=\sum _{i}(n_{i}+g_{i})\ln(n_{i}+g_{i})-n_{i}\ln(n_{i})+\alpha \left(N-\sum n_{i}\right)+\beta \left(E-\sum n_{i}\varepsilon _{i}\right)+K,$
where $K$ is the sum of a number of terms that are not functions of the $n_{i}$. Taking the derivative with respect to $n_{i}$, setting the result to zero, and solving for $n_{i}$ yields the Bose–Einstein population numbers:
 $n_{i}={\frac {g_{i}}{e^{\alpha +\beta \varepsilon _{i}}-1}}.$
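This result can be checked numerically: the stationarity condition reads ln((n_i + g_i)/n_i) = α + βε_i, whose left-hand side decreases monotonically in n_i. The sketch below (illustrative values for g, α, β, and ε_i, chosen by us) solves it by bisection and compares with the closed form:

```python
import math

g, alpha, beta, eps_i = 50.0, 0.2, 1.0, 0.5    # assumed illustrative values
c = alpha + beta * eps_i

# Stationarity of f: d/dn [(n + g) ln(n + g) - n ln n] = c,
# i.e. ln((n + g) / n) = c, which decreases monotonically in n.
def residual(n):
    return math.log((n + g) / n) - c

lo, hi = 1e-12, 1e12
for _ in range(200):                            # bisection on the root
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if residual(mid) > 0 else (lo, mid)

print("bisection root:        ", lo)
print("closed form g/(e^c - 1):", g / math.expm1(c))   # both ~49.33
```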
By a process similar to that outlined in the Maxwell–Boltzmann statistics article, it can be seen that:
 $d\ln W=\alpha \,dN+\beta \,dE$
which, using Boltzmann's famous relationship $S=k_{\text{B}}\,\ln W$ becomes a statement of the second law of thermodynamics at constant volume, and it follows that $\beta ={\frac {1}{k_{\text{B}}T}}$ and $\alpha ={\frac {\mu }{k_{\text{B}}T}}$ where S is the entropy, $\mu$ is the chemical potential, k_{B} is Boltzmann's constant and T is the temperature, so that finally:
 $n_{i}={\frac {g_{i}}{e^{(\varepsilon _{i}-\mu )/k_{\text{B}}T}-1}}.$
Note that the above formula is sometimes written:
 $n_{i}={\frac {g_{i}}{e^{\varepsilon _{i}/k_{\text{B}}T}/z-1}},$
where $z=\exp(\mu /k_{\text{B}}T)$ is the absolute activity, as noted by McQuarrie.^{[14]}
Also note that when the particle number is not conserved, removing the constraint of particle-number conservation is equivalent to setting $\alpha$, and therefore the chemical potential $\mu$, to zero. This will be the case for photons and massive particles in mutual equilibrium, and the resulting distribution will be the Planck distribution.
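For photons, setting μ = 0 gives the Planck occupancy per mode, n = g/(e^{ε/k_BT} − 1). A small sketch (the 100 GHz mode and the 2.725 K temperature are an assumed example of our own):

```python
import math

h = 6.62607015e-34     # Planck constant, J s
k_B = 1.380649e-23     # Boltzmann constant, J/K

def planck_occupancy(nu, T):
    """Mean photon number per mode of frequency nu (Hz) at temperature T (K):
    the mu = 0 case of the Bose-Einstein distribution."""
    return 1.0 / math.expm1(h * nu / (k_B * T))

# Assumed example: a 100 GHz mode at the cosmic microwave background temperature
print(planck_occupancy(100e9, 2.725))   # ~0.21 photons per mode
```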
Notes
A much simpler way to think of the Bose–Einstein distribution function is to consider that n particles are denoted by identical balls and g shells are marked by g − 1 line partitions. It is clear that the permutations of these n balls and g − 1 partitions will give different ways of arranging bosons in different energy levels. Say, for 3 (= n) particles and 3 (= g) shells, hence (g − 1) = 2 partitions, the arrangements might be ●●●||, ●|●●|, or |●|●●, etc. Hence the number of distinct permutations of n + (g − 1) objects, of which n are identical balls and (g − 1) are identical partitions, will be:
 ${\frac {(g-1+n)!}{(g-1)!\,n!}}$
See the image described above (one possible distribution of bosonic particles in different boxes, with movable partitions) for a visual representation of one such distribution of n particles in g boxes that can be represented as g − 1 partitions.
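The stars-and-bars count is easy to reproduce by enumeration; the sketch below (our own check of the n = 3, g = 3 example above) deduplicates the permutations of identical balls and partitions:

```python
from itertools import permutations
from math import comb

n, g = 3, 3                          # the example from the note above
symbols = ["*"] * n + ["|"] * (g - 1)

# Distinct orderings of n identical balls and g - 1 identical partitions
arrangements = set(permutations(symbols))
print(len(arrangements), comb(n + g - 1, n))   # both print 10
```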
OR
The purpose of these notes is to clarify some aspects of the derivation of the Bose–Einstein (B–E) distribution for beginners. The enumeration of cases (or ways) in the B–E distribution can be recast as follows. Consider a game of dice throwing in which there are $n$ dice, with each die taking values in the set $\{1,\dots ,g\}$, for $g\geq 1$. The constraint of the game is that the value $m_{i}$ of die $i$ has to be greater than or equal to the value $m_{i-1}$ of die $(i-1)$ in the previous throw, i.e., $m_{i}\geq m_{i-1}$. Thus a valid sequence of die throws can be described by an $n$-tuple $(m_{1},m_{2},\dots ,m_{n})$ such that $m_{i}\geq m_{i-1}$. Let $S(n,g)$ denote the set of these valid $n$-tuples:
 $S(n,g)={\Big \{}(m_{1},m_{2},\dots ,m_{n}){\Big |}\,m_{i}\geq m_{i-1},\ m_{i}\in \left\{1,\ldots ,g\right\},\ \forall i=1,\dots ,n{\Big \}}.\qquad (1)$
Then the quantity $w(n,g)$ (defined above as the number of ways to distribute $n$ particles among the $g$ sublevels of an energy level) is the cardinality of $S(n,g)$, i.e., the number of elements (or valid $n$-tuples) in $S(n,g)$. Thus the problem of finding an expression for $w(n,g)$ becomes the problem of counting the elements in $S(n,g)$.
Example n = 4, g = 3:
 $S(4,3)=\left\{\underbrace {(1111),(1112),(1113)} _{(a)},\underbrace {(1122),(1123),(1133)} _{(b)},\underbrace {(1222),(1223),(1233),(1333)} _{(c)},\right.$
 $\left.\underbrace {(2222),(2223),(2233),(2333),(3333)} _{(d)}\right\}$
 $w(4,3)=15$ (there are $15$ elements in $S(4,3)$)
Subset $(a)$ is obtained by fixing all indices $m_{i}$ to $1$, except for the last index, $m_{n}$, which is incremented from $1$ to $g=3$. Subset $(b)$ is obtained by fixing $m_{1}=m_{2}=1$ and incrementing $m_{3}$ from $2$ to $g=3$. Due to the constraint $m_{i}\geq m_{i-1}$ on the indices in $S(n,g)$, the index $m_{4}$ must automatically take values in $\left\{2,3\right\}$. The construction of subsets $(c)$ and $(d)$ follows in the same manner.
Each element of $S(4,3)$ can be thought of as a multiset of cardinality $n=4$; the elements of such a multiset are taken from the set $\left\{1,2,3\right\}$ of cardinality $g=3$, and the number of such multisets is the multiset coefficient
 $\left\langle {\begin{matrix}3\\4\end{matrix}}\right\rangle ={3+4-1 \choose 3-1}={3+4-1 \choose 4}={\frac {6!}{4!2!}}=15$
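The set S(n, g) can also be generated directly: in Python, itertools.combinations_with_replacement produces exactly the nondecreasing tuples (a minimal check of the example above):

```python
from itertools import combinations_with_replacement

def S(n, g):
    """All nondecreasing n-tuples with entries from {1, ..., g}."""
    return list(combinations_with_replacement(range(1, g + 1), n))

tuples = S(4, 3)
print(len(tuples))    # 15, i.e. w(4, 3)
print(tuples[:3])     # [(1, 1, 1, 1), (1, 1, 1, 2), (1, 1, 1, 3)]
```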
More generally, each element of $S(n,g)$ is a multiset of cardinality $n$ (the number of dice) with elements taken from the set $\left\{1,\dots ,g\right\}$ of cardinality $g$ (the number of possible values of each die), and the number of such multisets, i.e., $w(n,g)$, is the multiset coefficient
 $w(n,g)=\left\langle {\begin{matrix}g\\n\end{matrix}}\right\rangle ={g+n-1 \choose g-1}={g+n-1 \choose n}={\frac {(g+n-1)!}{n!(g-1)!}},\qquad (2)$
which is exactly the same as the formula for $w(n,g)$ derived above with the aid of a theorem involving binomial coefficients, namely
 $\sum _{k=0}^{n}{\frac {(k+a)!}{k!a!}}={\frac {(n+a+1)!}{n!(a+1)!}}.\qquad (3)$
To understand the decomposition
 $w(n,g)=\sum _{k=0}^{n}w(n-k,g-1)=w(n,g-1)+w(n-1,g-1)+\cdots +w(1,g-1)+w(0,g-1),\qquad (4)$
or, for example, for $n=4$ and $g=3$,
 $w(4,3)=w(4,2)+w(3,2)+w(2,2)+w(1,2)+w(0,2),$
let us rearrange the elements of $S(4,3)$ as follows
 $S(4,3)=\left\{\underbrace {(1111),(1112),(1122),(1222),(2222)} _{(\alpha )},\underbrace {(111{\color {Red}{\underset {=}{3}}}),(112{\color {Red}{\underset {=}{3}}}),(122{\color {Red}{\underset {=}{3}}}),(222{\color {Red}{\underset {=}{3}}})} _{(\beta )},\right.$
 $\left.\underbrace {(11{\color {Red}{\underset {==}{33}}}),(12{\color {Red}{\underset {==}{33}}}),(22{\color {Red}{\underset {==}{33}}})} _{(\gamma )},\underbrace {(1{\color {Red}{\underset {===}{333}}}),(2{\color {Red}{\underset {===}{333}}})} _{(\delta )}\underbrace {({\color {Red}{\underset {====}{3333}}})} _{(\omega )}\right\}.$
Clearly, the subset $(\alpha )$ of $S(4,3)$ is the same as the set
 $S(4,2)=\left\{(1111),(1112),(1122),(1222),(2222)\right\}$.
By deleting the index $m_{4}=3$ (shown in red with double underline) in the subset $(\beta )$ of $S(4,3)$, one obtains the set
 $S(3,2)=\left\{(111),(112),(122),(222)\right\}$.
In other words, there is a one-to-one correspondence between the subset $(\beta )$ of $S(4,3)$ and the set $S(3,2)$. We write
 $(\beta )\longleftrightarrow S(3,2)$.
Similarly, it is easy to see that
 $(\gamma )\longleftrightarrow S(2,2)=\left\{(11),(12),(22)\right\}$
 $(\delta )\longleftrightarrow S(1,2)=\left\{(1),(2)\right\}$
 $(\omega )\longleftrightarrow S(0,2)=\varnothing$ (empty set).
Thus we can write
 $S(4,3)=\bigcup _{k=0}^{4}S(4-k,2)$
or, more generally,
 $S(n,g)=\bigcup _{k=0}^{n}S(n-k,g-1);\qquad (5)$
and, since the sets
 $S(i,g-1),{\text{ for }}i=0,\dots ,n$
are non-intersecting, we thus have
 $w(n,g)=\sum _{k=0}^{n}w(n-k,g-1),\qquad (6)$
with the convention that
 $w(0,g)=1\ ,\forall g,{\text{ and }}w(n,1)=1\ ,\forall n.\qquad (7)$
Continuing the process, we arrive at the following formula
 $w(n,g)=\sum _{k_{1}=0}^{n}\sum _{k_{2}=0}^{nk_{1}}w(nk_{1}k_{2},g2)=\sum _{k_{1}=0}^{n}\sum _{k_{2}=0}^{nk_{1}}\cdots \sum _{k_{g}=0}^{n\sum _{j=1}^{g1}k_{j}}w(n\sum _{i=1}^{g}k_{i},0).$
Using the convention (7)$_{2}$ above, we obtain the formula
 $w(n,g)=\sum _{k_{1}=0}^{n}\sum _{k_{2}=0}^{n-k_{1}}\cdots \sum _{k_{g-1}=0}^{n-\sum _{j=1}^{g-2}k_{j}}1,\qquad (8)$
keeping in mind that, for $q$ and $p$ constants, we have
 $\sum _{k=0}^{q}p=(q+1)p.\qquad (9)$
It can then be verified that (8) and (2) give the same result for $w(4,3)$, $w(3,3)$, $w(3,2)$, etc.
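As a final check, recursion (6) with conventions (7) can be coded directly and compared against closed form (2) (a minimal sketch):

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def w(n, g):
    """Recursion (6) with conventions (7): w(0, g) = 1 and w(n, 1) = 1."""
    if n == 0 or g == 1:
        return 1
    return sum(w(n - k, g - 1) for k in range(n + 1))

for n, g in [(4, 3), (3, 3), (3, 2)]:
    print(n, g, w(n, g), comb(n + g - 1, n))   # the recursion agrees with (2)
```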
Interdisciplinary applications
Viewed as a pure probability distribution, the Bose–Einstein distribution has found application in other fields:
 In recent years, Bose–Einstein statistics have also been used as a method for term weighting in information retrieval. The method is one of a collection of DFR ("Divergence From Randomness") models,^{[15]} the basic notion being that Bose–Einstein statistics may be a useful indicator in cases where a particular term and a particular document have a significant relationship that would not have occurred purely by chance. Source code for implementing this model is available from the Terrier project at the University of Glasgow.
 The evolution of many complex systems, including the World Wide Web, business, and citation networks, is encoded in the dynamic web describing the interactions between the system's constituents. Despite their irreversible and non-equilibrium nature, these networks follow Bose statistics and can undergo Bose–Einstein condensation. Addressing the dynamical properties of these non-equilibrium systems within the framework of equilibrium quantum gases predicts that the "first-mover-advantage", "fit-get-rich (FGR)", and "winner-takes-all" phenomena observed in competitive systems are thermodynamically distinct phases of the underlying evolving networks.^{[16]}
Notes
 ^ Pearsall, Thomas (2020). Quantum Photonics, 2nd edition. Graduate Texts in Physics. Springer. doi:10.1007/978-3-030-47325-9. ISBN 9783030473242.
 ^ d'Alembert, Jean (1754). "Croix ou pile". L'Encyclopédie (in French). 4.
 ^ d'Alembert, Jean (1754). "CROIX OU PILE" [Translated by Richard J. Pulskamp] (PDF). Xavier University. Retrieved 20190114.
 ^ See p. 14, note 3, of the thesis: Michelangeli, Alessandro (October 2007). Bose–Einstein condensation: Analysis of problems and rigorous results (PDF) (Ph.D.). International School for Advanced Studies. Archived (PDF) from the original on 3 November 2018. Retrieved 14 February 2019.
 ^ Bose (2 July 1924). "Planck's law and the hypothesis of light quanta" (PostScript). University of Oldenburg. Retrieved 30 November 2016.
 ^ Bose (1924), "Plancks Gesetz und Lichtquantenhypothese", Zeitschrift für Physik (in German), 26 (1): 178–181, Bibcode:1924ZPhy...26..178B, doi:10.1007/BF01327326, S2CID 186235974
 ^ ^{a} ^{b} H. J. W. Müller-Kirsten, Basics of Statistical Physics, 2nd ed., World Scientific (2013), ISBN 9789814449533.
 ^ Srivastava, R. K.; Ashok, J. (2005). "Chapter 7". Statistical Mechanics. New Delhi: PHI Learning Pvt. Ltd. ISBN 9788120327825.
 ^ Landau, L. D.; Lifshitz, E. M.; Pitaevskii, L. P. (1980). Statistical Physics (Vol. 5). Pergamon Press.
 ^ "Chapter 6". Statistical Mechanics. January 2005. ISBN 9788120327825.
 ^ The B–E distribution can also be derived from thermal field theory.
 ^ R.B. Dingle, Asymptotic Expansions: Their Derivation and Interpretation, Academic Press (1973), pp. 267–271.
 ^ Ziff, R. M.; Kac, M.; Uhlenbeck, G. E. (1977). "The ideal Bose–Einstein gas, revisited". Physics Reports 32: 169–248.
 ^ See McQuarrie in citations
 ^ Amati, G.; Van Rijsbergen, C. J. (2002). "Probabilistic models of information retrieval based on measuring the divergence from randomness". ACM TOIS 20(4): 357–389.
 ^ Bianconi, G.; Barabási, A.-L. (2001). "Bose–Einstein Condensation in Complex Networks". Phys. Rev. Lett. 86: 5632–5635.
References
 Annett, James F. (2004). Superconductivity, Superfluids and Condensates. New York: Oxford University Press. ISBN 0198507550.
 Carter, Ashley H. (2001). Classical and Statistical Thermodynamics. Upper Saddle River, New Jersey: Prentice Hall. ISBN 0137792085.
 Griffiths, David J. (2005). Introduction to Quantum Mechanics (2nd ed.). Upper Saddle River, New Jersey: Pearson, Prentice Hall. ISBN 0131911759.
 McQuarrie, Donald A. (2000). Statistical Mechanics (1st ed.). Sausalito, California: University Science Books. p. 55. ISBN 1891389157.