A super-simple speech recogniser

We make what is possibly the world's simplest speech recognition system. It can only recognise two different words, but it will help you understand the basic idea of pattern recognition using template matching. The templates are just pre-recorded words with known labels. The features extracted are just two formant frequencies measured in the middle of the word, and the distance measure between the unknown word and each template is simply the Euclidean distance.

Try it for yourself with these materials (a zip file containing the three waveforms used in this video).

Look out for later videos, where we extend the idea of template matching to use sequences of features; then we will have to solve the problem of aligning the two words before comparing them.
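For the curious, here is a minimal sketch of the idea in Python (not the code used in the video): the templates are hypothetical (F1, F2) pairs with made-up values, and the unknown word is assigned the label of the template at the smallest Euclidean distance.

```python
import numpy as np

# Hypothetical templates: (F1, F2) in Hz measured mid-word, with known labels.
templates = {
    "heed": np.array([280.0, 2250.0]),   # made-up formant values, for illustration
    "hard": np.array([700.0, 1100.0]),
}

def recognise(unknown, templates):
    """Return the label of the template closest to the unknown word,
    using Euclidean distance between (F1, F2) feature vectors."""
    distances = {label: np.linalg.norm(unknown - feats)
                 for label, feats in templates.items()}
    return min(distances, key=distances.get)

# An unknown word with F1 = 650 Hz, F2 = 1200 Hz (again, made-up values)
print(recognise(np.array([650.0, 1200.0]), templates))   # -> "hard"
```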

 

Spectrum and spectrogram

The spectrum and the spectrogram are much more useful ways of analysing speech signals than the waveform. We look at how to create them using Wavesurfer and what effect the analysis window size has on what we see.    
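If you prefer to compute spectrograms in code rather than in Wavesurfer, here is a small sketch using scipy (the filename and the exact window lengths are just assumptions): a long analysis window gives a narrow-band spectrogram that resolves the harmonics, while a short window gives a wide-band spectrogram that resolves the formants and individual pitch periods.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

fs, x = wavfile.read("speech.wav")      # hypothetical filename; assumed mono
x = x.astype(float)

# Long window (~30 ms): narrow-band spectrogram, individual harmonics visible
f1, t1, S_narrow = spectrogram(x, fs, window="hann",
                               nperseg=int(0.030 * fs), noverlap=int(0.015 * fs))

# Short window (~4 ms): wide-band spectrogram, formant structure visible
f2, t2, S_wide = spectrogram(x, fs, window="hann",
                             nperseg=int(0.004 * fs), noverlap=int(0.002 * fs))

print(S_narrow.shape, S_wide.shape)
```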


Aliasing


In sampling and quantisation we saw that sampling a signal at a fixed rate means that there is an upper limit on the frequencies that can be represented. This limit is called the Nyquist frequency. Before sampling a signal, we must remove all energy above the Nyquist frequency, and here we will see what would […]
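Here is a tiny numerical illustration of the problem (not from the video): a 7 kHz cosine sampled at 10 kHz produces exactly the same samples as a 3 kHz cosine, so once the signal has been sampled, the two are indistinguishable.

```python
import numpy as np

fs = 10000                               # Nyquist frequency is fs/2 = 5000 Hz
t = np.arange(0, 0.01, 1.0 / fs)         # sample times

x_high = np.cos(2 * np.pi * 7000 * t)    # 7 kHz: above the Nyquist frequency
x_alias = np.cos(2 * np.pi * 3000 * t)   # its alias at fs - 7000 = 3000 Hz

print(np.allclose(x_high, x_alias))      # True: the samples are identical
```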


Autocorrelation for estimating F0


Most methods for estimating F0 start from autocorrelation. The idea is pretty simple: we are just looking for a repeating pattern in the waveform, which corresponds to the periodic vocal fold activity. For some waveforms, it might be possible to do that directly in the time domain, but in general that doesn’t work very well. […]
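As a rough illustration of the idea (a sketch, not a robust pitch tracker), here is an autocorrelation-based F0 estimate for a single voiced frame; the frame length and the search range for the pitch period are arbitrary choices.

```python
import numpy as np

def estimate_f0(frame, fs, fmin=60.0, fmax=400.0):
    """Very rough F0 estimate for one voiced frame, via autocorrelation:
    find the lag (within a plausible pitch-period range) at which the
    frame is most similar to a shifted copy of itself."""
    frame = frame - np.mean(frame)
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]   # lags 0..N-1
    lo, hi = int(fs / fmax), int(fs / fmin)                         # lag search range
    best_lag = lo + np.argmax(ac[lo:hi])
    return fs / best_lag

# A synthetic 120 Hz "voiced" waveform, just to exercise the function
fs = 16000
t = np.arange(0, 0.04, 1.0 / fs)
frame = np.sign(np.sin(2 * np.pi * 120 * t))   # crude periodic waveform
print(estimate_f0(frame, fs))                  # approximately 120 Hz
```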


A simple synthetic vowel

Using Praat, we synthesise a simple vowel-like sound, starting with a pulse train, which we pass through a filter with resonant peaks.
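If you'd rather script it than use Praat's interface, here is a comparable sketch in Python: an impulse train passed through two second-order resonators standing in for formants. The formant frequencies and bandwidths are made-up, roughly /a/-like values.

```python
import numpy as np
from scipy.signal import lfilter
from scipy.io import wavfile

fs = 16000
f0 = 120                               # pulse train at 120 Hz
source = np.zeros(int(0.5 * fs))
source[::fs // f0] = 1.0               # one impulse per pitch period

def resonator(x, freq, bw, fs):
    """Pass x through a single second-order resonance (one 'formant')."""
    r = np.exp(-np.pi * bw / fs)
    theta = 2 * np.pi * freq / fs
    a = [1.0, -2 * r * np.cos(theta), r * r]   # pole pair at the resonance
    return lfilter([1.0], a, x)

# Two assumed formants: F1 = 700 Hz, F2 = 1100 Hz
y = resonator(resonator(source, 700, 100, fs), 1100, 120, fs)
y = 0.9 * y / np.max(np.abs(y))
wavfile.write("synthetic_vowel.wav", fs, (y * 32767).astype(np.int16))
```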


The speed of sound

At the Parque de las Ciencias in Granada, Spain, there is this long tube, open at the end nearest you and closed at the far end. We can calculate the length of this tube just from the audio recording, because we know the speed of sound. Here’s the waveform of part of the recording, showing […]
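The arithmetic behind that calculation is very simple: the sound travels to the closed end and back, so the echo delay covers twice the length of the tube. Here it is with an invented delay; the real value has to be read off the waveform.

```python
# Round trip: length = speed * delay / 2
speed_of_sound = 343.0   # m/s at roughly 20 °C
echo_delay = 0.06        # seconds -- a made-up delay; measure the real one from the waveform

length = speed_of_sound * echo_delay / 2
print(f"tube length ≈ {length:.1f} m")   # ≈ 10.3 m for this assumed delay
```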


Token passing

Token passing is a really nice way to understand (and even to implement) Viterbi search for Hidden Markov Models. Here we see token passing in action, and you can look at the spreadsheet to see the calculations. To keep things simple, we are ignoring transition probabilities in this example. It would be simple to add them […]
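Here is a minimal sketch of token passing in Python (not the spreadsheet from the video): it assumes a simple left-to-right model in which each state can self-loop or move one state to the right, ignores transition probabilities exactly as the video does, and tracks only the best score rather than the winning state sequence.

```python
import math

def token_passing(log_emissions):
    """Viterbi search by token passing on a left-to-right HMM.
    log_emissions[t][s] is the log probability of state s emitting frame t.
    Transition probabilities are ignored (treated as 1)."""
    n_states = len(log_emissions[0])
    NEG_INF = float("-inf")
    # One token per state, holding its score so far; we start in state 0.
    tokens = [0.0] + [NEG_INF] * (n_states - 1)
    for frame in log_emissions:
        # Each state keeps the best token arriving via self-loop or from the left...
        new_tokens = [max(tokens[s], tokens[s - 1] if s > 0 else NEG_INF)
                      for s in range(n_states)]
        # ...then adds its own emission log probability for this frame.
        tokens = [t + e for t, e in zip(new_tokens, frame)]
    return tokens[-1]   # best score for a path ending in the final state

# Three frames, two states, made-up emission probabilities
frames = [[math.log(0.7), math.log(0.1)],
          [math.log(0.5), math.log(0.4)],
          [math.log(0.2), math.log(0.6)]]
print(token_passing(frames))
```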


Pipeline architecture for TTS


Most text-to-speech systems split the problem into two main stages. The first stage is called the front end and contains many separate processes which gradually build up a linguistic specification from the input text. The second stage typically uses language-independent techniques (although they still require a language-specific speech corpus) to generate a waveform. Here we see those two […]
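As a purely illustrative sketch of that architecture (the function names and toy processing steps are inventions, not any real system's API), the pipeline is just a composition of the two stages:

```python
def front_end(text):
    """Stage 1: gradually build a linguistic specification from the input text."""
    spec = {"text": text}
    spec["normalised"] = text.lower()           # stand-in for text normalisation
    spec["phones"] = list(spec["normalised"])   # stand-in for pronunciation lookup
    spec["prosody"] = {"phrase_breaks": []}     # stand-in for prosody prediction
    return spec

def waveform_generator(spec):
    """Stage 2: generate a waveform from the linguistic specification
    (a real system needs a speech corpus or a trained model here)."""
    return b""   # placeholder for audio samples

def synthesise(text):
    return waveform_generator(front_end(text))

audio = synthesise("Hello world")
```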


My inaugural lecture

I talk about how speech synthesis works, in what I hope is a non-technical and accessible way, and finish off with an application of speech synthesis that gives personalised voices to people who are losing the ability to speak. I also try to mention bicycles as many times as possible.


Sampling and quantisation

Is digital better than analogue? Here we discover that there are limitations when storing waveforms digitally. We learn that the consequence of sampling at a fixed rate is an upper limit on the frequencies that can be represented, called the Nyquist frequency. In addition to the limitations of sampling, storing each sample of the waveform as a […]
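Here is a small numerical illustration of the quantisation half of the story (the signal and bit depths are arbitrary choices): fewer bits per sample means larger rounding error, which we hear as quantisation noise.

```python
import numpy as np

def quantise(x, bits):
    """Quantise samples in [-1, 1] to the given number of bits."""
    levels = 2 ** (bits - 1)
    return np.round(x * levels) / levels

fs = 16000                        # sampling rate: Nyquist frequency is 8 kHz
t = np.arange(0, 0.01, 1.0 / fs)
x = 0.8 * np.sin(2 * np.pi * 440 * t)

for bits in (16, 8, 4):
    err = x - quantise(x, bits)
    print(bits, "bits, max quantisation error:", np.max(np.abs(err)))
```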


TD-PSOLA …the hard way

Time-Domain Pitch Synchronous Overlap and Add (TD-PSOLA) can modify the fundamental frequency and duration of speech signals without affecting the segment identity – that is, without changing the formants. Normally, it’s an automatic algorithm, but here we do it the hard way – by hand! If you want to follow along, you will need Audacity and these materials (a […]
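For a sense of what the algorithm does, here is a very rough Python sketch of the pitch-modification part, assuming the glottal epochs have already been marked (by hand, as in the video, or by a pitch marker); duration modification and all the practical details are left out.

```python
import numpy as np

def tdpsola_pitch_shift(x, epochs, f0_scale):
    """Rough TD-PSOLA sketch: Hann-windowed, two-period frames centred on
    known glottal epochs are overlap-added at re-spaced positions.
    f0_scale > 1 raises F0, < 1 lowers it. Epoch detection is not included."""
    y = np.zeros(len(x))
    out_centre = float(epochs[0])
    for i in range(1, len(epochs) - 1):
        period = epochs[i] - epochs[i - 1]
        start, end = epochs[i] - period, epochs[i] + period
        c = int(round(out_centre))
        if start >= 0 and end <= len(x) and c - period >= 0 and c + period <= len(y):
            y[c - period: c + period] += x[start:end] * np.hanning(2 * period)
        out_centre += period / f0_scale   # epochs closer together -> higher F0
    return y

# Hypothetical usage: epochs would come from hand-marking or a pitch marker
# fs, x = ...; epochs = np.array([...])
# y = tdpsola_pitch_shift(x, epochs, f0_scale=1.2)   # raise F0 by 20%
```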


Classification and regression trees (CART)

A quick introduction to a very simple but widely applicable model that can perform classification (predicting a discrete label) or regression (predicting a continuous value). The tree is learned from labelled data, using supervised learning. Before watching this video, you might want to check that you understand what Entropy is.
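One common splitting criterion for classification trees is information gain, which is defined in terms of entropy. Here is a tiny worked example (with made-up labels) of the gain a candidate question would give:

```python
import math
from collections import Counter

def entropy(labels):
    """Entropy (in bits) of a list of discrete labels."""
    counts = Counter(labels)
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Splitting on a good question reduces entropy, i.e. gives information gain
parent = ["yes", "yes", "yes", "no", "no", "no"]
left, right = ["yes", "yes", "yes"], ["no", "no", "no"]
gain = entropy(parent) - 0.5 * entropy(left) - 0.5 * entropy(right)
print(gain)   # 1.0 bit: a perfect split of a 50/50 set
```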
