Notes on Bayesian spectrum analysis
Notes on Bretthorst’s thesis on Bayesian spectrum analysis (Bretthorst 1988), with applications to speech analysis in mind.
Some history: Bretthorst’s thesis was joint work with Jaynes exploring the ideas first set out in Jaynes (1987), and it ultimately formed the foundation for contemporary ideas like time-frequency analysis as probabilistic inference (Turner and Sahani 2014).
Chapter 1
-
FT terminology and stuff
The discrete-time Fourier transform (DTFT) produces from an infinite discrete-time input array $x_n$ a continuous spectrum $X(f)$. This means that the transformations are asymmetrical: we sum over the time series indexed by $n$ but integrate over the continuous $f$. The discrete Fourier transform (DFT) produces from a finite discrete-time input array of length $N$ a finite spectrum at the Nyquist frequencies $f_k = k\,\Delta f$, where $\Delta f = f_s/N$ is known as one Nyquist step. Thus the transformations are symmetrical; both are summations. But the DFT is a well-behaved function for any continuous $f$; seen in this way, it is like the DTFT except that the input array is finite. Thus the DFT at continuous $f$ is subject to finite-size effects such as wiggles and peak broadening, and these are apparent also at the Nyquist frequencies $f_k$.
The resolution of the DFT is defined by the Nyquist step: two sinusoids with frequencies $f_1, f_2$ will be resolved when $|f_1 - f_2| \gtrsim \Delta f$. Here “resolved” means that two peaks show up in the spectrum. Although the DFT is defined for all values of $f$, $N$ points actually suffice to capture all the information in the input series $x_n$. These points are any $N$ consecutive frequencies separated by the Nyquist step. The DFT with continuous $f$ is often limited to $|f| \leq f_s/2$, so the DFT is often expressed in the frequencies $f_k$ with $-N/2 < k \leq N/2$. With this convention, for real $x_n$ the DFT values at negative frequencies are the complex conjugates of the DFT values for positive frequencies. Thus when plotting a spectrum, only the values for positive $f$ are shown. Likewise, when doing spectral analysis only the positive frequencies are relevant, as the negative ones contain equivalent information. But for power calculations etc., the negative frequencies still have to be included, typically resulting in a factor 2 in the analysis. The conjugate symmetry for real signals holds for continuous and discrete $f$. Remember, the DFT at the Nyquist frequencies $f_k$ is just the DFT with continuous $f$ evaluated at those frequencies.
If the $x_n$ are also samples of a real continuous signal $x(t)$, then the DFT can still represent the continuous $x(t)$ (i.e. not only the discrete samples $x_n$) if $x(t)$ does not contain frequencies $\geq f_s/2$; i.e. it is sufficiently smooth.
The FFT is an implementation of the DFT and also uses the same Nyquist frequencies $f_k$. To obtain the best frequency estimates from the spectrum, you start out with the FFT and obtain the spectrum at the points $f_k$. Then it is necessary to evaluate the DFT at intermediate values of $f$ by search routines (see Press et al. (1992)).
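A minimal numpy sketch of this distinction (my own toy signal, sampling rate taken as 1, with the test frequency deliberately placed halfway between two Nyquist frequencies; `dft_at` is my helper name):

```python
import numpy as np

# The FFT gives the DFT only on the Nyquist grid f_k = k/N, but the same
# DFT is well defined at any continuous f, where the true peak may lie
# between grid points.
N = 64
f_true = 10.5 / N            # exactly halfway between two Nyquist frequencies
t = np.arange(N)
x = np.cos(2 * np.pi * f_true * t)

X_grid = np.fft.rfft(x)      # spectrum on the Nyquist grid only
k_max = int(np.argmax(np.abs(X_grid)))

def dft_at(x, f):
    """DFT of the finite array x evaluated at a continuous frequency f."""
    n = np.arange(len(x))
    return np.sum(x * np.exp(-2j * np.pi * f * n))

# The grid maximum is off by at most half a Nyquist step, and the continuous
# magnitude at the true frequency exceeds the best on-grid magnitude.
assert abs(k_max / N - f_true) <= 0.5 / N
assert np.abs(dft_at(x, f_true)) > np.abs(X_grid[k_max])
```

This is the situation the search routines mentioned above address: the on-grid maximum is only a coarse estimate of the true peak location.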
The relation between all FTs is explained in the “FT family tree diagram” in (Fessler 2004, 121).
-
Sampling rate for speech
Since the vocal tract filter is expected from theoretical considerations (e.g. [13]) to have a clear resonant structure only in the range below 5.5 kHz or so, it is common to use a digital speech signal that is sampled at around 10 or 11 kHz. (Fulop 2011, 184)
In other words, “no actual physical resonances of the vocal system are generally expected above at most 5.5 kHz” (Fulop 2011, 188). But I don’t know if this bandwidth is sufficient to describe non-voiced speech. According to Chen, “for unvoiced consonants, the frequency range is 16 kHz.” (Chen and Miller 2019, 7)
-
p. 2
Remember: little prior information implies that the estimates of the model parameters cannot differ much from least squares (LS) or maximum likelihood (ML), apparently even after marginalization. Advantages of Bayesian probability theory are marginalization, straightforward evaluation of an estimate’s accuracy, model comparison, and updating hyperpriors as more data are processed, thus increasing the significance of the prior information and gradually moving away from the LS or ML estimates.
-
p. 6
In order to know which features of the spectrum are relevant, and which can be ignored, the goal is to find the exact conditions under which the discrete Fourier transform is the optimal frequency estimator. Irrelevant features can be induced by noise in the time series, but also by properties of the DFT itself (such as wiggles coming from the finite input length, effected by convolution in the frequency domain with the Dirichlet kernel).
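The wiggle mechanism can be checked directly. A small numpy sketch (my own illustration; `dft_at` is my helper name): the DFT of a length-$N$ rectangular window evaluated at continuous $f$ is exactly the Dirichlet kernel, whose sidelobes are the wiggles that the finite record convolves into every spectrum.

```python
import numpy as np

# The DFT of a length-N all-ones window at continuous f equals the
# Dirichlet kernel sin(pi*f*N)/sin(pi*f) times a linear phase factor.
N = 32
n = np.arange(N)

def dft_at(x, f):
    return np.sum(x * np.exp(-2j * np.pi * f * n))

f = 0.3                                   # any f not a multiple of 1/N
window = np.ones(N)
dirichlet = (np.sin(np.pi * f * N) / np.sin(np.pi * f)
             * np.exp(-1j * np.pi * f * (N - 1)))
assert np.allclose(dft_at(window, f), dirichlet)
```

A finite sinusoid is the infinite sinusoid times this window, so its spectrum is the sinusoid’s line convolved with this kernel, hence the sidelobe wiggles and the broadened peak.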
-
Why use Bayesian probability theory in SR?
(i) To improve noise handling in situations with low SNR.
(ii) To use model comparison as a regularization device in autonomous building-block discovery.
(iii) Clear interpretability and ability to extend the models easily.
-
DC component in speech
Because of the DC-blocking capacitor in the amplifier circuit, there is no DC component in any voice signal. And the DC component is never audible (Chen 2016, 22).
Chapter 2
-
p. 15: Deriving the Gaussian prior from ME
The notes in the margin relate the noise values and the noise power via the “Lebesgue integral”. The noise function can be seen as an “everywhere discontinuous function” or a Wiener process, but this does not matter in the “Lebesgue measure” approach outlined in the margins.
The philosophy of using ME to find a pdf for the noise is reiterated many times by Jaynes. For example:
In practice, common sense usually tells us that any observed fine details of past noise are irrelevant for predicting fine details of future noise, but that coarser features, such as past mean square values, may be expected reasonably to persist, and thus be relevant for predicting future mean square values. Then our probability assignment for future noise should make use only of those coarse features of past noise which we believe to have this persistence. That is, it should have maximum entropy subject to the constraints of the coarse features that we retain because we expect them to be reproducible. [Emphasis mine] (Jaynes 2003, 208)
There are also other rationales, like the central limit theorem. See p. 16.
-
p. 17: When is the approximation valid?
The result “the DFT magnitude is a sufficient statistic for Bayesian spectrum analysis in the case of a single frequency” is valid for frequencies that are not too low, in the case of uniform sampling. In words, the approximation is valid for a sufficient amount of data that does not contain a low frequency. The precise convergence conditions can be obtained by summing the series on p. 16 explicitly to some kind of “Dirichlet kernels” and seeing where the sums are exactly zero (p. 70a). It turns out that this is always the case for uniform sampling at the Nyquist frequencies (p. 70b-c).
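The exact-vanishing claim is easy to verify numerically. A quick check (my own): under uniform sampling, the cross sums $\sum_t \cos(2\omega t)$ and $\sum_t \sin(2\omega t)$ that must vanish for the approximation to hold are zero to machine precision exactly at the Nyquist frequencies $\omega_k = 2\pi k/N$, except at the degenerate values $k = 0$ and $k = N/2$.

```python
import numpy as np

# At w = 2*pi*k/N the sums over a uniform grid t = 0..N-1 are geometric
# series over an integer number of periods, hence exactly zero
# (analytically) unless 2k is a multiple of N.
N = 64
t = np.arange(N)
for k in range(1, N // 2):
    w = 2 * np.pi * k / N
    assert abs(np.sum(np.cos(2 * w * t))) < 1e-9
    assert abs(np.sum(np.sin(2 * w * t))) < 1e-9
```

Off the Nyquist grid these sums are small but nonzero Dirichlet-kernel values, which is where the low-frequency condition comes from.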
-
p. 20: The DFT will give optimal frequency estimates if six conditions are met:[^1]
- The number of data values is large,
- There is no constant (DC) component in the data,
- There is no evidence of a low frequency; i.e. the single frequency present in the signal is not a low one,
- The data contain only one frequency,
- The frequency must be stationary (i.e. constant amplitude and phase),
- The noise is white.
These conditions define the specific question that we are asking and for which the DFT is a sufficient statistic. If any of these six conditions is not met, our question is inaccurate and the DFT will give suboptimal or plainly false answers. In practice, violating these conditions can have limited effects, affecting only the accuracy with which the frequencies (or other model parameters) are estimated.
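A toy illustration (my own) of condition (ii): a constant offset makes the largest $|$DFT$|$ value sit at bin 0 instead of at the sinusoid’s bin, so naive peak-picking fails outright.

```python
import numpy as np

# A strong DC component dominates the spectrum and hijacks the argmax.
N = 128
t = np.arange(N)
clean = np.cos(2 * np.pi * (10 / N) * t)   # on-grid sinusoid, bin k = 10
offset = clean + 5.0                       # same sinusoid plus a DC component

k_clean = int(np.argmax(np.abs(np.fft.rfft(clean))))
k_offset = int(np.argmax(np.abs(np.fft.rfft(offset))))
assert k_clean == 10      # the DFT points at the right frequency
assert k_offset == 0      # the DC term dominates the spectrum
```

Subtracting the sample mean before transforming restores the correct peak, which is why mean removal is standard practice before a DFT-based frequency estimate.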
Note: in Numerical Recipes there is a section about estimating the power spectrum based on the FFT with orthodox statistics: (Press et al. 1992, 549).
-
p. 20:
The curvature at the maximum of the spectrum can be evaluated numerically in several ways. The first choice is whether to set $\hat f = f_k$, i.e. we simply take the $k$th Nyquist frequency at which the FFT magnitude is largest as our best estimate. This means that the actual best estimate differs from our approximation by at most half a Nyquist step. This strategy is supposed to be viable because there are always at least three FFT points in the peak: for increasing $N$ the peak becomes narrower, but at the same time the frequency grid becomes finer, and these effects counterbalance each other. [From expand-spectrum-maximum.wxmx; proof also somewhere in Bretthorst (1988).] There is a very interesting text about this in Numerical Recipes: (Press et al. 1992, 549), section 13.4, Power Spectrum Estimation Using the FFT.
When the signal is an actual sinusoid, the peak in the spectrum will be, for given $N$, narrower than for any other signal.[^2]
We can estimate the curvature at the maximum by:
- an analytical approximation,
- a finite difference method based on (Press et al. 1992),
- a polynomial expansion based on the same points as in (ii); see expand-spectrum-maximum.wxmx for this,
- using an exact result that follows from DFT properties, evaluated at the maximum. See the math paper “Finding with the help of DFT properties” for more details and a few interesting corollaries. Based on complexity this is still a viable alternative (needing only several runs next to the original one) while being exact. However, I am not so sure about its numerical sensitivity.
Alternatively, we can first look for the location $\hat f$ of the maximum (generally $\hat f \neq f_k$ for any integer $k$) with search routines (Press et al. 1992) and then apply the evaluation strategies listed above.
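A minimal numeric sketch of the coarse-then-refine strategy (my own, using plain parabolic interpolation through the three FFT points in the peak rather than Bretthorst’s exact expressions):

```python
import numpy as np

# Coarse estimate: the largest FFT bin. Refinement: fit a parabola through
# the three points around the peak (which, as noted above, always exist).
N = 256
t = np.arange(N)
f_true = 0.1234                       # off-grid test frequency, sampling rate 1
x = np.cos(2 * np.pi * f_true * t)

X = np.abs(np.fft.rfft(x))
k = int(np.argmax(X))                 # coarse: off by at most half a step
ym, y0, yp = X[k - 1], X[k], X[k + 1]
delta = 0.5 * (ym - yp) / (ym - 2 * y0 + yp)   # parabola vertex, in bins
f_hat = (k + delta) / N

assert abs(f_hat - f_true) < abs(k / N - f_true)   # refinement helps
assert abs(f_hat - f_true) * N < 0.3               # well under half a step
```

Parabolic interpolation of the Dirichlet-shaped peak is biased, so for serious work the refined value would itself be the starting point for a search on the continuous-$f$ DFT magnitude.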
Chapter 3
-
p. 32: Orthogonal model functions
The data $d_i$ taken at times $t_i$ define an $N$-dimensional vector of reals. Each model function $G_j$ becomes an $N$-dimensional vector as well when evaluated at the sample times $t_i$; together these model vectors span the subspace
$$ S_f = \operatorname{span}[G_1(t_i), G_2(t_i), \ldots, G_m(t_i)]. $$
Here $S_f$ depends implicitly on the grid $t_i$ and the parameters $\omega$. In general it is impossible to expand the data $d_i$ into the basis $G_j$ unless $d_i \in S_f.$ In that case (3.17) blows up as discussed on p. 35 section 3.4.
In numerical implementations it is essential to avoid values of $\omega$ that cause the basis $G_j, (1 \leq j \leq m)$ to become linearly dependent. For example, this happens with the sinusoid + constant model for $\omega = 0, \omega = \pi$. (But in that case we can exclude these values anyway, because the constant in the model takes care of this situation.) Numerically, linearly dependent basis functions will produce near-zero eigenvalues in Eq. (3.5a), causing the normalization in (3.5b) to blow up. In many cases we may simply remove the values of $\omega$ for which this happens.
Because we assume that the basis functions are linearly independent, in this basis the vector $f(t_i)$ has the coordinates $(B_1, B_2, \cdots, B_m)$. We also define the scalar product between two $N$-dimensional vectors $a, b$ as $a \cdot b = \sum_{i=1}^N a(t_i) b(t_i)$. In this notation we have that
$$ g_{jk} = G_j \cdot G_k. $$
To diagonalize $g_{jk}$ we need to express it in a new basis $H_k, (1 \leq k \leq m)$ as $g'_{jk}$, where the basis functions satisfy orthonormality:
$$ H_j \cdot H_k = \delta_{jk}. $$
The sought basis functions are proportional to the eigenvectors of $g_{jk}$ in the old basis. This can be seen by the fact that the eigenvector equations become trivial when formulated in the eigenbasis of $g_{jk}$.
The reason why we can make this basis substitution (coordinate transformation) in $Q$ without changing the value is that the actual values of the vector $f(t_i)$ are unmodified:
$$ f(t_i) = \sum_{j=1}^m B_j G_j(t_i, \omega) = \sum_{k=1}^m A_k H_k(t_i, \omega) .$$
The substitution does however affect the volume elements.
When $\omega$ is given, all of this is finite-dimensional linear algebra, where the subspace $S_f \subseteq R^N$ is expanded in $m \leq N$ finite-dimensional vectors. If the $t_i$ were to go into a continuum ($N \rightarrow \infty$), with $\omega$ still given, this would become infinite-dimensional linear algebra, where a subspace of an infinite-dimensional space is expanded in $m < \infty$ infinite-dimensional vectors. With $m \rightarrow N$ linearly independent model functions we always recover the original space $R^N$, home to the data vector $d_i$, i.e. $S_f \rightarrow R^N$.
The diagonalization of $g_{jk}$ must in principle be done for every set of sample times $t_i$ (usually this dependence can be simplified to a dependence on $N$ only) and every value of $\omega$. Therefore analytical approximations will be a must.
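A concrete sketch of this linear algebra (my own toy model, sinusoid plus constant on a uniform grid at one fixed $\omega$): build the Gram matrix $g_{jk} = G_j \cdot G_k$, eigendecompose it, and normalize the eigenvectors into the orthonormal basis $H_k$.

```python
import numpy as np

# Gram matrix of the model functions on the sample grid, then the
# orthonormal basis H_k from its eigen-decomposition.
N, omega = 100, 0.37
t = np.arange(N)
G = np.stack([np.cos(omega * t), np.sin(omega * t), np.ones(N)])  # m = 3
g = G @ G.T                        # g_jk = sum_i G_j(t_i) G_k(t_i)

lam, e = np.linalg.eigh(g)         # near-zero eigenvalues would flag a
assert np.all(lam > 1e-6)          # (numerically) dependent basis; fine here

H = (e.T @ G) / np.sqrt(lam[:, None])   # rows H_k, normalized by eigenvalues
assert np.allclose(H @ H.T, np.eye(3))  # orthonormal: H_j . H_k = delta_jk
```

At $\omega = 0$ this same code would hit the degeneracy discussed above (the cosine column becomes a constant, the smallest eigenvalue collapses to zero, and the normalization blows up).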
-
p. 36: The Bessel inequality
This inequality and the deviation from equality say almost everything there is to say about how a model performs in the theory. I adapted the following definition of the BI from Wikipedia.
Let $f$ be a vector in an $N$-dimensional inner product space and $H_1, H_2, \ldots$ an orthonormal “sequence”. Then
$$ \sum_k (f \cdot H_k)^2 \leq \lVert f \rVert^2, $$
with equality when $f$ lies in the span of the $H_k$.
-
On finite differences: for the forward-difference approximation
$$ \dv{f}{x}(x) \approx \frac{f(x + \epsilon) - f(x)}{\epsilon}, $$
it is clearly necessary that $\epsilon \ll x$. Therefore, presumably we can choose the step size $\epsilon$ as a function of $x$, and no scaling of $x$ would be necessary. However, this is false, because there are round-off errors which depend on the absolute numerical values of $x$ and $\epsilon$, and don't care about their relative magnitude (i.e. $\epsilon/x$).
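A quick demonstration of the round-off claim (my own, with $f(x) = x^2$ at $x = 1$, so $\epsilon/x = \epsilon$): shrinking the step first reduces the truncation error, but past a point round-off makes the estimate worse again.

```python
# Forward-difference error against the exact derivative 2x: the error is
# truncation-dominated for large eps and round-off-dominated for tiny eps.
def fd_error(eps, x=1.0):
    return abs(((x + eps) ** 2 - x ** 2) / eps - 2 * x)

assert fd_error(1e-8) < fd_error(1e-2)    # smaller eps beats truncation...
assert fd_error(1e-8) < fd_error(1e-14)   # ...until round-off takes over
```

The crossover near $\epsilon \sim \sqrt{\text{machine epsilon}} \cdot x$ is set by the absolute values involved, which is exactly why relative-magnitude reasoning alone fails.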
Chapter 5
-
Approximations in deriving the model evidence
We assume that Laplace's method is accurate, i.e. that the integral in Eq. (5.5c) $\approx$ the integral in (5.6). A sufficient condition for this is that the data determines the model parameters well, i.e.
1. $p(\omega|D,I)$ is unimodal, so there is a well-defined estimate $\hat\omega$;
2. the peak at $\hat\omega$ contains almost all the mass.
When these conditions are not fulfilled, I still think that model comparison based on the expression for the model evidence (5.9) will yield qualitatively correct results, because of the strong dependence on the log peak value $\log p(\hat\omega|D,I)$. This could work as a "last-resort arbiter."
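A toy numeric check (my own unnormalized, unimodal "posterior", not Eq. (5.5c) itself) of when Laplace's method is accurate:

```python
import numpy as np

# Laplace's method replaces int exp(logp) dw by exp(logp_max)*sqrt(2*pi/c),
# with c the negative second derivative of logp at the mode. It is accurate
# when the peak is sharp, i.e. when the data determine w well.
c = 400.0                                # sharp peak
w = np.linspace(-1.0, 1.0, 200001)
logp = -0.5 * c * w**2 + 0.1 * w**4      # unimodal on [-1, 1], mode at w = 0

exact = np.sum(np.exp(logp)) * (w[1] - w[0])        # brute-force quadrature
laplace = np.exp(logp.max()) * np.sqrt(2 * np.pi / c)
assert abs(exact - laplace) / exact < 1e-3
```

Shrinking `c` (a broad, weakly determined peak) or adding a second mode makes the two numbers drift apart, which is exactly the failure mode of conditions 1 and 2 above.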
-
Relationship between $p(D|f_j,I)$ and BIC
I think that the model evidence as derived in this chapter is related to, or maybe even equal to, the BIC when applied to this situation. The Gaussian priors for $\omega$ and the subsequent use of Laplace's method to integrate $\omega$ out are also part of the derivation of the BIC. However, there are some differences. Off the top of my head:
1. Our model parameters are not only $\omega$ but rather $(A,\omega,\sigma)$, and we use not the Gaussian but the Jeffreys prior for $\sigma$.
2. Likewise, two new scale parameters for the Gaussian priors for $(A, \omega)$ are introduced which give rise to the "scale ranges" $R_\delta, R_\gamma, R_\sigma$ used to bound the Jeffreys priors for $\delta, \gamma, \sigma$ respectively (Eq. 5.9).
3. The integration of $A$ is exact.
See "On the derivation of the Bayesian Information Criterion" [@Bhat2010].
-
p. 60
In the case of unimodal distributions, the "Laplace approximation" expanding the peak quadratically is proper, because the $\omega$-dependency of the peak is defined by (i) the $\omega$-dependency of our self-chosen model functions and (ii) the degree of their orthogonality. This means that the quality of the Laplace approximation is essentially in our own hands.
An illustration of this principle is the expression for $\dv[2]{\omega} \bar{h^2}(\omega) \approx C''(\omega)$ in the case of the approximate single frequency model.
-
p. 60
Nice trick: choose priors with very large scale parameters $\gamma$ (very large compared to $\sigma$, with $\sigma$ a scale parameter imposed on us by the problem) and then use $\gamma \gg \sigma$ to our benefit to make approximations in the calculations.
[^1]: These conditions seem to be all but forgotten, but are in fact a gold mine for people working in signal analysis. This neat list smartly enumerates every pitfall we encounter in daily practice... and popped out of a Bayesian analysis of the frequency estimation problem. This is not the only field in which fundamental tools known to be effective workhorses can be reframed as sufficient statistics; see e.g. @Hennig2022.
[^2]: I am not 100% sure about this, but I think it is true for signals encountered in real situations. One can construct a signal with a narrower peak than a sinusoid by taking the inverse DFT of such a narrow peak. Or does it follow that this signal then has frequencies higher than $f_s/2$? Also, on p. 28 I see peaks resolved by only one FFT point. Perhaps a square wave, or a wave less smooth than a sinusoid, has peaks narrower than three FFT points. (But "less smooth" implies higher frequency content... Unresolved.)