Windowing

When we say that a signal is non-stationary we mean that its properties, such as the spectrum, change over time. To analyse signals like this, we first need to assume that these properties do not change over some short period of time, called the frame. We can then analyse individual frames of the signal, one at a time – we perform a short-term analysis. When extracting a frame, it's important to apply a window with tapered edges, to remove discontinuities at the start and end of the frame. Here we see why that is, and what would happen if we forgot to apply a tapered window.

Try it for yourself – here are the materials to download:

  • A frame of speech extracted from a larger waveform: framed
  • The same frame of speech, after a tapered window has been applied: framed_windowed
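
As a minimal sketch of the windowing step in Python (assuming the waveform is already a NumPy array of samples; the frame position and length are illustrative parameters):

    import numpy as np

    def extract_windowed_frame(signal, start, frame_length):
        # Cut a fixed-length frame out of the longer waveform...
        frame = signal[start:start + frame_length]
        # ...then taper its edges so the frame begins and ends near zero,
        # removing the discontinuities that a rectangular cut would leave.
        return frame * np.hamming(frame_length)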

Bitrate

The bitrate (or bit rate) of a signal is the number of bits required to store, or transmit, 1 s of that signal. A bit is a binary digit: either 0 or 1. Let’s calculate the bitrate of a digital waveform. First you should revise the concepts of sampling and quantisation from this module of the […]
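
The calculation itself is just multiplication. A hypothetical example in Python, for an uncompressed 16 kHz, 16-bit mono waveform (the numbers are illustrative, not from the post):

    sample_rate = 16000      # samples per second (16 kHz)
    bits_per_sample = 16     # quantisation precision
    bitrate = sample_rate * bits_per_sample
    print(bitrate, "bits per second")   # 256000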

Continue reading...

A super-simple speech recogniser

We make what is possibly the world’s simplest speech recognition system. It can only recognise two different words, but will help you understand the basic idea of pattern recognition using template matching. The templates are just pre-recorded words, with known labels. The features extracted are just two formant frequencies in the middle of the word, […]
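
A sketch of that idea in Python, with made-up feature values; in the real system the (F1, F2) templates would be measured from the pre-recorded words:

    import math

    # Hypothetical templates: (F1, F2) in Hz, measured mid-word, with known labels.
    templates = {"yes": (500.0, 1800.0), "no": (450.0, 900.0)}

    def recognise(f1, f2):
        # Return the label of the template closest to the input features,
        # using Euclidean distance as the match score.
        return min(templates, key=lambda word: math.dist((f1, f2), templates[word]))

    print(recognise(480.0, 1000.0))   # "no"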

Continue reading...

Autocorrelation for estimating F0

Most methods for estimating F0 start from autocorrelation. The idea is pretty simple: we are just looking for a repeating pattern in the waveform, which corresponds to the periodic vocal fold activity. For some waveforms, it might be possible to do that directly in the time domain, but in general that doesn’t work very well. […]
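
As a rough sketch of the method in Python (assuming a single frame of voiced speech as a NumPy array; the F0 search range is an illustrative choice):

    import numpy as np

    def estimate_f0(frame, sample_rate, f_min=60.0, f_max=400.0):
        # Correlate the frame with itself at every non-negative lag.
        ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
        # Only search lags corresponding to plausible F0 values.
        lag_min, lag_max = int(sample_rate / f_max), int(sample_rate / f_min)
        best_lag = lag_min + np.argmax(ac[lag_min:lag_max])
        # The lag of the strongest self-similarity is one period of the waveform.
        return sample_rate / best_lag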

Continue reading...

Pipeline architecture for TTS

Most text-to-speech systems split the problem into two main stages. The first stage is called the front end and contains many separate processes which gradually build up a linguistic specification from the input text. The second stage typically uses language-independent techniques (although they still require a language-specific speech corpus) to generate a waveform. Here we see those two […]
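
A purely structural sketch of that split in Python; the function names and placeholder return values are illustrative only, not a real API:

    def front_end(text):
        # Stage 1 (the front end, largely language-specific): many separate
        # processes gradually build up a linguistic specification.
        return {"text": text, "phones": [], "prosody": []}   # placeholder

    def generate_waveform(linguistic_specification):
        # Stage 2 (language-independent techniques, trained on a
        # language-specific speech corpus): placeholder output of silence.
        return [0.0] * 16000

    def synthesise(text):
        return generate_waveform(front_end(text))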

Continue reading...

The speed of sound

At the Parque de las Ciencias in Granada, Spain, there is a long tube, open at the end nearest you and closed at the far end. We can calculate the length of this tube just from the audio recording, because we know the speed of sound. Here’s the waveform of part of the recording, showing […]
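
The calculation is a one-liner: sound travels down the tube, reflects off the closed end, and the echo arrives after covering twice the tube's length. A hypothetical example in Python (the round-trip time here is made up; the real value is read off the waveform):

    SPEED_OF_SOUND = 343.0    # m/s in air at roughly 20 degrees C

    round_trip_time = 0.29    # seconds between the impulse and its echo (illustrative)

    tube_length = SPEED_OF_SOUND * round_trip_time / 2   # halve it: there and back
    print(round(tube_length, 1), "m")   # 49.7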

Continue reading...

Classification and regression trees (CART)

A quick introduction to a very simple but widely applicable model that can perform classification (predicting a discrete label) or regression (predicting a continuous value). The tree is learned from labelled data, using supervised learning. Before watching this video, you might want to check that you understand what Entropy is.
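
For the classification case, the heart of learning the tree is choosing the question that best splits the data. A sketch in Python of picking one threshold by information gain (illustrative code, not the implementation from the video):

    import numpy as np

    def entropy(labels):
        _, counts = np.unique(labels, return_counts=True)
        p = counts / counts.sum()
        return -np.sum(p * np.log2(p))

    def best_split(x, y):
        # Try every threshold on feature x; keep the one whose split of the
        # labels y most reduces entropy, i.e. has the highest information gain.
        best = None
        for t in np.unique(x):
            left, right = y[x <= t], y[x > t]
            if len(left) == 0 or len(right) == 0:
                continue
            remainder = (len(left) * entropy(left) + len(right) * entropy(right)) / len(y)
            gain = entropy(y) - remainder
            if best is None or gain > best[1]:
                best = (t, gain)
        return best

    x = np.array([1.0, 2.0, 3.0, 4.0])
    y = np.array(["a", "a", "b", "b"])
    print(best_split(x, y))   # (2.0, 1.0): splitting at 2.0 gains a full bit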

Continue reading...

Aliasing

In sampling and quantisation we saw that sampling a signal at a fixed rate means that there is an upper limit on the frequencies that can be represented. This limit is called the Nyquist frequency. Before sampling a signal, we must remove all energy above the Nyquist frequency, and here we will see what would […]
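
A small Python demonstration of the problem: a sine wave above the Nyquist frequency produces exactly the same samples as a lower-frequency alias, so once sampled the two cannot be told apart (the frequencies here are illustrative):

    import numpy as np

    fs = 8000                     # sampling rate, so the Nyquist frequency is 4000 Hz
    n = np.arange(16)             # sample indices
    f_high = 5000                 # above the Nyquist frequency
    f_alias = fs - f_high         # 3000 Hz: where that energy shows up instead

    s_high = np.cos(2 * np.pi * f_high * n / fs)
    s_alias = np.cos(2 * np.pi * f_alias * n / fs)
    print(np.allclose(s_high, s_alias))   # True: the samples are identical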

Continue reading...

Sampling and quantisation

Is digital better than analogue? Here we discover that there are limitations when storing waveforms digitally. We learn that the consequence of sampling at a fixed rate is an upper limit on the frequencies that can be represented, called the Nyquist frequency. In addition to the limitations of sampling, storing each sample of the waveform as a […]
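
The quantisation side of this is easy to see in code. A minimal sketch in Python (assuming samples normalised to [-1, 1]):

    import numpy as np

    def quantise(signal, bits):
        # Snap each sample to a uniform grid with step 1 / 2**(bits - 1);
        # whatever is lost in the snapping is quantisation error.
        step = 1.0 / 2 ** (bits - 1)
        return np.round(signal / step) * step

    x = np.sin(2 * np.pi * np.linspace(0.0, 1.0, 100))
    error = np.max(np.abs(x - quantise(x, bits=4)))
    print(error)   # at most step / 2 = 0.0625; more bits means less error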

Continue reading...

Token passing

Token passing is a really nice way to understand (and even to implement) Viterbi search for Hidden Markov Models. Here we see token passing in action, and you can look at the spreadsheet to see the calculations. To keep things simple, we are ignoring transition probabilities in this example. It would be simple to add them […]
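
A sketch of the algorithm in Python for a single left-to-right HMM, ignoring transition probabilities as in the video; emission(state, obs) is a hypothetical function returning the (non-zero) observation probability:

    import math

    def token_passing(n_states, observations, emission):
        # One token per state, holding the best log probability found so far;
        # only the first state holds a live token before anything is observed.
        tokens = [0.0] + [-math.inf] * (n_states - 1)
        for obs in observations:
            new_tokens = []
            for s in range(n_states):
                # Keep the best token arriving here (stay in s, or move on from s-1),
                # then add the log probability of emitting this observation.
                best = tokens[s] if s == 0 else max(tokens[s], tokens[s - 1])
                new_tokens.append(best + math.log(emission(s, obs)))
            tokens = new_tokens
        # The winning token is the one that ends up in the final state.
        return tokens[-1]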

Continue reading...

Entropy: understanding the equation

The equation for entropy is very often presented in textbooks without much explanation, other than to say it has the desired properties. Here, I attempt an informal derivation of the equation starting from uniform probability distributions. A good way to think about information is in terms of sending messages. In the video, we send messages […]
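
The equation itself is H = −Σ p(x) log₂ p(x), and the "bits per message" view is easy to check numerically (the distributions below are illustrative):

    import math

    def entropy(p):
        # H = -sum of p_i * log2(p_i): the average number of bits needed
        # per message when outcomes are drawn from distribution p.
        return -sum(p_i * math.log2(p_i) for p_i in p if p_i > 0)

    print(entropy([0.25, 0.25, 0.25, 0.25]))   # 2.0 bits: uniform over 4 messages
    print(entropy([0.5, 0.25, 0.25]))          # 1.5 bits: more predictable, fewer bits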

Continue reading...

My inaugural lecture

I talk about how speech synthesis works, in what I hope is a non-technical and accessible way, and finish off with an application of speech synthesis that gives personalised voices to people who are losing the ability to speak. I also try to mention bicycles as many times as possible. For a more up-to-date, slightly more technical, […]

Continue reading...