Speech signal modelling

After we parameterise a speech signal, we need to decide how best to represent those parameters for use in statistical modelling, and eventually how to reconstruct the waveform from them.

00:0500:25 Having considered speech signal analysis - epoch detection (which is really just for signal processing in the time domain), then F0 estimation (which is useful for all sorts of things both in unit selection and statistical parametric speech synthesis), and estimating the smooth spectral envelope - it is now time to think about representing those speech parameters.
00:2500:31 What we have so far is just analysis.
00:3100:40 It takes speech signals and converts them, or extracts from them various pieces of useful information: epochs, F0, spectral envelope.
00:4000:44 The thing we still haven't covered in detail is the aperiodic energy.
00:4400:46 That's coming up pretty soon.
00:4600:49 What we're going to do now is model these things.
00:4900:53 Or, more specifically, we're going to get ready for modelling.
00:5300:59 We're going to get the representations suitable for statistical modelling.
00:5901:04 Everything that's going to happen here we can really just describe as "feature engineering".
01:0401:13 To motivate our choice of which speech parameters are important and what their representation should be, we'll just have a quick look forward to statistical parametric speech synthesis.
01:1301:16 Here's the big block diagram that says it all.
01:1601:22 We're going to take our standard front end, which is going to extract linguistic features from the text.
01:2201:28 We're going to perform some big complicated and definitely non-linear regression.
01:2801:31 The eventual output is going to be the waveform.
01:3101:38 We already know from what we've done in Speech Processing that the waveform isn't always the best choice of representation of speech signals.
01:3801:41 It's often better to parametrize it, which is what we're working our way up to.
01:4101:46 So, we're going to assume that the waveform's not a suitable output for this regression function.
01:4601:49 We're going to regress on to speech parameters.
01:4901:53 Our choice of parameters is going to be motivated by a couple of things.
01:5301:55 One is that we can reconstruct the waveform from it.
01:5501:57 So it must be complete.
01:5702:07 We think a smooth spectral envelope, the fundamental frequency, and this thing that we still have to cover - aperiodic energy - will be enough to completely reconstruct a speech waveform.
02:0702:10 The second thing is that they have to be suitable for modelling.
02:1002:18 We might want to massage them: to transform them in certain ways to make them amenable to certain sorts of statistical model.
02:1802:29 So, if we've established that the parameters are spectral envelope + F0 + aperiodic energy, how would we represent them in a way that's suitable for modelling?
02:2902:43 Remember that this thing that we're constructing - that can analyse and then reconstruct speech - we can call that a "vocoder" (voice coder) - it has an excitation signal driving some filter and the filter has a response which is the spectral envelope.
02:4302:47 In voiced speech, the excitation signal will be F0.
02:4702:57 Additionally, we need to know if there is a value of F0, so typically we'll have F0 and a binary flag indicating whether the signal is voiced or unvoiced.
02:5702:59 How would we represent that for modelling?
02:5903:02 Well we could just use the raw value of F0.
03:0203:06 But if we plot it, we'll realise it has a very non-Gaussian distribution.
03:0603:09 We might want to do something to make it look a bit more Gaussian.
03:0903:11 The common thing to do will be to take the log.
03:1103:17 So we might take the log of F0 as that representation, plus this binary flag.
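As a minimal sketch of what that representation might look like in practice (assuming a frame-level F0 track stored as a NumPy array, with 0 marking unvoiced frames; the helper name encode_f0 is just for illustration):

```python
import numpy as np

def encode_f0(f0_hz):
    """Turn a raw F0 track (Hz, with 0 meaning unvoiced) into log-F0 plus a
    binary voicing flag. Unvoiced frames get flag 0 and a placeholder log-F0
    of 0.0, which the model can ignore or interpolate over."""
    f0_hz = np.asarray(f0_hz, dtype=float)
    voiced = (f0_hz > 0).astype(int)              # 1 = voiced, 0 = unvoiced
    safe_f0 = np.where(f0_hz > 0, f0_hz, 1.0)     # avoid log(0)
    log_f0 = np.where(voiced == 1, np.log(safe_f0), 0.0)
    return log_f0, voiced

# Example: five frames (one every 5 ms), two of them unvoiced
log_f0, voiced = encode_f0([120.0, 118.5, 0.0, 0.0, 95.2])
```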
03:1703:20 The spectral envelope needs a little bit more thought.
03:2003:26 For the moment we have a smooth spectral envelope that's hopefully already independent of the source.
03:2603:36 That's good, but we're going to find out in a moment it's still very high dimensional and strongly correlated and those aren't good properties for some sorts of statistical model.
03:3603:44 Then we'd better finally tackle this problem of what to do about the other sort of energy in speech which is involved in - for example - fricatives.
03:4403:45 Let's write a wish-list.
03:4503:47 What do we want our parameters to be like?
03:4703:50 Well, we're going to use machine learning: statistical modelling.
03:5003:54 It's always convenient if the number of parameters is the same over time.
03:5403:58 It doesn't vary - for example - with the type of speech segment.
03:5804:00 We don't want a different number of parameters for vowels and consonants.
04:0004:02 That would be really messy in machine learning.
04:0204:08 So we want a fixed number of parameters (fixed dimensionality) and we'd like it to be low-dimensional.
04:0804:13 There's no reason to have 2000 parameters per frame if 100 will do.
04:1304:29 For engineering reasons, it's much nicer to work at fixed frame rates - say, every 5ms for synthetic speech or every 10ms for speech recognition - than at a variable frame rate such as - for example - pitch-synchronous signal processing.
04:2904:33 So we're just going to go for a fixed frame rate here because it's easier to deal with.
04:3304:39 Of course we want what we've been aiming at all the time which is to separate out different aspects of speech, so we can model and manipulate them separately.
04:3904:47 There are some other important properties of this parametrization and I'm going to group them under this rather informal term of being "well-behaved".
04:4705:13 What I mean by that is that when we perturb them - when we add little errors to each of their values, which is going to happen whenever we average them with others or when we model them and reconstruct them, whether that's consecutive frames or frames pooled across similar sounds to train a single hidden Markov model, say - we would like them to still reconstruct valid speech waveforms and not become unstable.
05:1305:21 So we want them to do the "right thing" when we average them, smooth them, or introduce errors to them.
05:2105:30 Finally, depending on our choice of statistical model, we might need to do some other processing to make the parameters have the correct statistical properties.
05:3005:41 Specifically, if we're going to use Gaussian distributions to model them, and we would like to avoid modelling covariance because that adds a lot of extra parameters, we'd like statistically uncorrelated parameters.
05:4105:48 That's probably not necessary for neural networks, but it's quite necessary for Gaussians, which we're going to use in hidden Markov models.
05:4805:54 We've talked about STRAIGHT and there's a reading to help you fill in all the details about that.
05:5405:59 Let's just clarify precisely what we get out of STRAIGHT and whether it's actually suitable for modelling.
05:5906:08 It gives us the spectral envelope, which is smooth and free from the effects of F0; good, we need that!
06:0806:09 It gives us a value for F0.
06:0906:15 Now, we could also use any external F0 estimator or the one inside the STRAIGHT vocoder.
06:1506:18 That doesn't matter: that can be an external thing.
06:1806:24 It gives us also the non-periodic energy, which we'll look at and parametrize in a moment.
06:2406:32 The smooth spectral envelope is of the same resolution as the original FFT that we computed it from.
06:3206:39 Remember that when we draw diagrams like this rather colourful spectrogram in 3D, the underlying data is of course discrete.
06:3906:43 Just because we join things up with smooth lines doesn't mean that it's not discrete.
06:4306:52 So this spectral envelope here is a set of discrete bins.
06:5206:54 It's the same as the FFT bins.
06:5406:56 It's just been smoothed.
06:5607:03 Also, because consecutive values ... if we zoom in on this bit here...
07:0307:07 consecutive values (i.e., consecutive FFT bins) will be highly correlated.
07:0707:12 They'll go up and down together in the same way as the outputs of a filterbank are highly correlated.
07:1207:19 That high resolution and that high correlation make this representation less than ideal for modelling with Gaussians.
07:1907:21 We need to do something about that.
07:2107:25 We need to improve the representation of the spectral envelope.
07:2507:34 While we're doing that, we might as well also warp the frequency scale, because we know that perceptual scales are normally a better way of representing the spectrum for speech processing.
07:3407:36 We'll warp it on to the Mel scale.
07:3607:42 We'll decorrelate, and we're going to do that using a standard technique: the cepstrum.
07:4207:48 We're going to then reduce the dimensionality of that representation simply by truncating the cepstrum.
07:4807:51 What we will end up with is something called the Mel cepstrum.
07:5107:59 That sounds very similar to MFCCs and it's motivated by all the same things, but it's calculated in a different way.
07:5908:06 That's because we need to be able to reconstruct the speech signal, which we don't need to do in speech recognition.
08:0608:13 In speech recognition, we warp the frequency scale with a filterbank: a triangular filterbank spaced on a Mel scale; that loses a lot of information.
08:1308:15 Here, we're not going to do that.
08:1508:19 We're going to work with a continuous function rather than the discrete filterbank.
08:1908:22 We'll omit the details of that because it's not important.
08:2208:30 Once we're on that warped scale (probably the Mel scale, but you could choose some other perceptual frequency scale which would also be fine), we're going to decorrelate.
08:3008:34 We'll first do that by converting the spectrum to the cepstrum.
08:3408:40 The cepstrum is just another representation of the spectral envelope as a sum of cosine basis functions.
08:4008:45 Then we can reduce the dimensionality of that by keeping only the first N coefficients.
08:4508:52 The more we keep, the more detail we'll keep in that spectral envelope representation, so the choice of number is empirical.
08:5208:59 In speech recognition we kept very few, just 12: a very coarse spectral envelope, but that's good enough for pattern recognition.
08:5909:01 It will give very poor reconstruction though.
09:0109:07 So, in synthesis, we're going to keep many more: perhaps 40 to 60 cepstral coefficients.
09:0709:12 So that finalizes the representation of the spectral envelope.
09:1209:19 We use an F0-adaptive window to get the smoothest envelope we can, do an FFT, then do a little additional smoothing, as described in the STRAIGHT paper.
09:1909:25 Then we will warp onto a Mel scale, convert to the cepstrum, and truncate.
09:2509:35 That gives us a set of relatively uncorrelated parameters, reasonably small in number, from which we can reconstruct the speech waveform.
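Purely as an illustrative sketch of that pipeline - not STRAIGHT's actual mel-cepstral analysis, which uses its own continuous warping and algorithm - the general idea of warping the envelope, taking a cosine transform of its logarithm, and truncating could look like this:

```python
import numpy as np
from scipy.fft import dct

def hz_to_mel(hz):
    # One common mel-scale formula; the exact warping function is a design choice.
    return 2595.0 * np.log10(1.0 + hz / 700.0)

def truncated_mel_cepstrum(envelope, fs=16000, n_coefs=40):
    """Toy sketch: resample a smooth spectral envelope at points equally spaced
    on a mel-like scale, take the log, apply a DCT (a sum of cosine basis
    functions), and keep only the first n_coefs coefficients."""
    envelope = np.asarray(envelope, dtype=float)
    n_bins = len(envelope)
    hz = np.linspace(0.0, fs / 2.0, n_bins)
    mel_points = np.linspace(hz_to_mel(0.0), hz_to_mel(fs / 2.0), n_bins)
    warped = np.interp(mel_points, hz_to_mel(hz), envelope)   # warp onto mel scale
    log_warped = np.log(np.maximum(warped, 1e-10))
    cepstrum = dct(log_warped, type=2, norm='ortho')
    return cepstrum[:n_coefs]                                  # truncate: keep first N
```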
09:3509:39 So let's finally crack the mystery of the aperiodic energy!
09:3909:40 What is it?
09:4009:42 How do we get it out of the spectrum?
09:4209:46 Let's go back to our favourite spectrum, of this particular sound here.
09:4609:51 The assumption is that this spectrum contains both periodic (voiced) and aperiodic (unvoiced) energy.
09:5109:54 So what we're seeing is the complete spectrum of the speech signal.
09:5409:58 In general, speech signals have both periodic and non-periodic energy.
09:5810:02 Even vowel sounds have some non-periodic energy: maybe turbulence at the vocal folds.
10:0210:10 So we'll assume that this spectrum is made up of a perfectly-voiced part which, if we drew the idealized spectrum, would be a perfect line spectrum....
10:1010:24 ...plus some aperiodic energy which also has a spectral shape but has no structure (no line spectrum), so: some shaped noise.
10:2410:28 These two things have been added together in what we see on this spectrum here.
10:2810:46 So the assumption STRAIGHT makes is that the difference between the peaks, which are the perfect periodic part, and the troughs, which are being - if you like - "filled in" by this aperiodic spectrum sitting behind them, is what tells us how much aperiodic energy there is in this spectrum.
10:4610:49 So we're just going to measure the difference.
10:4910:58 The way STRAIGHT does that is to fit one envelope to the periodic energy - that's the tips of all the harmonics.
10:5811:04 It would fit another envelope to the troughs in-between.
11:0411:13 In-between two harmonics we assume that all energy present at that point (at that frequency) is non-periodic because it's not at a multiple of F0.
11:1311:17 Then we're just going to look at the ratio between these two things.
11:1711:23 If the red and blue lines are very close together, there's a lot of aperiodic energy relative to the amount of periodic (i.e., voiced) energy.
11:2311:32 The key point here about STRAIGHT is that we're essentially looking at the difference between these upper and lower envelopes of the spectrum: the ratio between these things.
11:3211:37 That's telling us something about the ratio between periodic and aperiodic energy.
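As a rough sketch of that idea (not STRAIGHT's actual procedure), one could interpolate an upper envelope through the harmonic peaks, a lower envelope through the troughs between them, and take their ratio at every frequency bin; the peak and trough locations are assumed to be known here:

```python
import numpy as np

def aperiodicity_ratio(power_spectrum, peak_bins, trough_bins):
    """Toy illustration: upper envelope through the harmonic peaks, lower
    envelope through the troughs, and their per-bin ratio. A ratio near 1
    means the troughs are nearly as high as the peaks, i.e. mostly aperiodic
    energy; a ratio near 0 means strongly periodic."""
    power_spectrum = np.asarray(power_spectrum, dtype=float)
    peak_bins = np.asarray(peak_bins, dtype=int)
    trough_bins = np.asarray(trough_bins, dtype=int)
    bins = np.arange(len(power_spectrum))
    upper = np.interp(bins, peak_bins, power_spectrum[peak_bins])
    lower = np.interp(bins, trough_bins, power_spectrum[trough_bins])
    return lower / np.maximum(upper, 1e-10)
```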
11:3711:51 That's another parameter that we'll need to estimate from our speech signals, and store so that we can reconstruct it: so we can add back in an appropriate amount of aperiodic energy at each frequency when we resynthesise.
11:5111:56 Because all of this is done at the same resolution as the original FFT spectrum, it's all very high resolution.
11:5611:59 That's a bad thing: we need to fix that.
11:5912:08 Again, just for the same reasons as always, the parameters are highly correlated because neighbouring bins will often have similar values.
12:0812:12 So we also need to improve the representation of the aperiodic energy.
12:1212:16 We don't need a very high-resolution representation of aperiodic energy.
12:1612:19 We're not perceptually very sensitive to fine structure.
12:1912:24 So we just have a "broad-brush" representation.
12:2412:35 The standard way to do that would be to divide the spectrum into broad frequency bands and just average the amount of energy in each of those bands, at each moment in time (at each frame, say every 5ms).
12:3512:42 If we did that on a linear frequency scale we might - for example - divide it into these bands.
12:4212:47 Then, for each time window ...let's take a particular time window...
12:4712:51 we just average the energy and use that as the representation.
12:5113:04 Because it's always better to do things on a perceptual scale, our bands might look more like this: getting wider as we go up in frequency.
13:0413:06 We'll do the same thing.
13:0613:09 The number of bands is a parameter we can choose.
13:0913:20 In older papers you'll often see just 5 bands used and newer papers (perhaps with higher bandwidth speech) use more bands - maybe 25 bands - but we don't really need any more than that.
13:2013:25 That's relatively low-resolution compared to the envelope capturing the periodic energy.
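A minimal sketch of that band-averaging, assuming we already have a per-bin aperiodicity value for one frame and a list of band edges given as bin indices (in practice the bands would be spaced on a perceptual scale, as described above):

```python
import numpy as np

def band_average(per_bin_values, band_edges):
    """Average a per-bin measurement (e.g. the aperiodicity ratio) within
    broad frequency bands; band_edges are FFT-bin indices delimiting the bands."""
    per_bin_values = np.asarray(per_bin_values, dtype=float)
    return np.array([per_bin_values[lo:hi].mean()
                     for lo, hi in zip(band_edges[:-1], band_edges[1:])])

# Example: collapse 513 bins into 5 broad bands (edges chosen arbitrarily here)
edges = [0, 64, 128, 256, 384, 513]
# band_ap = band_average(aperiodicity_per_bin, edges)
```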
13:2513:32 Let's finish off by looking, at a relatively high level, at how we actually reconstruct the speech waveform.
13:3213:35 It's pretty straightforward because it's really just a source-filter model again.
13:3513:38 The source and filter are not the true physical source and filter.
13:3813:44 They're the excitation and the spectral envelope that we've estimated from the waveform.
13:4413:45 So they're a signal model.
13:4513:50 What we've covered up to this point is all of this analysis phase.
13:5013:53 The synthesis phase is pretty straightforward.
13:5313:57 We take the value of F0 and we create a pulse train at that frequency.
13:5714:04 We take this non-periodic (i.e., aperiodic) energy in various bands and we just create some shaped noise.
14:0414:12 We just have a random number generator and put a different amount of energy into the various frequency bands according to that aperiodicity ratio.
14:1214:25 For the spectral envelope (possibly collapsed down into the Mel cepstrum and then inverted back up to the full spectrum), we just need to create a filter that has the same frequency response as that.
14:2514:35 We take the aperiodic energy and mix it with the periodic energy - so, mix these two things together - and the ratio (the "band aperiodicity ratio") tells us in what proportion to mix them.
14:3514:37 We excite the filter with that and get our output signal.
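Glossing over how a filter is actually built from the cepstral coefficients, a toy frame-by-frame version of this reconstruction might look like the sketch below. It is illustrative only: real vocoders use overlap-add, proper filter design and the phase manipulation mentioned in a moment, and all names and parameter values here are made up for the example.

```python
import numpy as np

def synthesise_frame(f0, voiced, envelope, band_ap, band_edges,
                     frame_len=400, fs=16000):
    """Toy sketch of reconstructing one frame: a pulse train (if voiced) is
    mixed with noise shaped by the band aperiodicity ratios, then the spectral
    envelope is applied in the frequency domain. `envelope` must have
    frame_len // 2 + 1 magnitude values; `band_edges` are rfft-bin indices."""
    # Periodic excitation: a pulse train at F0 for voiced frames
    excitation = np.zeros(frame_len)
    if voiced:
        period = int(round(fs / f0))
        excitation[::period] = 1.0
    # Aperiodic excitation: noise with a per-band gain from the aperiodicity ratio
    noise_spec = np.fft.rfft(np.random.randn(frame_len))
    gains = np.ones(len(noise_spec))
    for (lo, hi), ap in zip(zip(band_edges[:-1], band_edges[1:]), band_ap):
        gains[lo:hi] = ap
    shaped_noise = np.fft.irfft(noise_spec * gains, frame_len)
    # Mix the two excitations, then impose the spectral envelope as the "filter"
    mixed_spec = np.fft.rfft(excitation + shaped_noise) * envelope
    return np.fft.irfft(mixed_spec, frame_len)
```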
14:3714:44 In this course, we're not going to go into the deep details of exactly how you make a filter that has a particular frequency response.
14:4414:52 We're just going to state without proof that it's possible, and it can be done from those Mel cepstral coefficients.
14:5214:58 So STRAIGHT, as sophisticated as it is, still uses a pulse train to simulate voiced energy.
14:5815:01 That's something that's just going to have a simple line spectrum.
15:0115:07 We already know that that might sound quite "buzzy": that's a rather artificial source.
15:0715:12 STRAIGHT is doing something a little bit better than just a pulse train.
15:1215:21 Instead of using raw pulses, it performs a little bit of phase manipulation, and the pulses become like this.
15:2115:23 That's just smearing the phase.
15:2315:27 Those two signals both have the same magnitude spectrum but different phase spectra.
15:2715:35 This is one situation where moving from the pure pulse to this phase-manipulated pulse actually is perceived as better by listeners.
15:3515:44 The other thing that STRAIGHT does better than our old source-filter model, as we knew it before, is that it can mix together periodic and non-periodic energy.
15:4415:50 We can see here that there's non-periodic energy mixed in with these pulses.
15:5015:59 Good: we've decomposed speech into an appropriate set of speech parameters that's complete (that we can reconstruct from).
15:5916:06 It's got the fundamental frequency plus a binary flag telling us if there is a value of F0 or not (i.e., whether it's voiced or unvoiced).
16:0616:13 We have a smooth spectral envelope, which we've parametrized as the Mel cepstrum because it decorrelates and reduces dimensionality.
16:1316:21 Aperiodic energy is represented as essentially a shaped noise spectrum, and the shaping is just a set of broad frequency bands.
16:2116:25 We've seen just in broad terms how to reconstruct the waveform.
16:2516:32 So, there's an analysis phase and that produces these speech parameters.
16:3216:37 Then there's a synthesis phase that reconstructs a waveform.
16:3716:42 What we're going to do now is split apart the analysis and synthesis phases.
16:4216:47 We're going to put something in the middle, and that thing is going to be a statistical model.
16:4717:01 We're going to need to use the model because our input signals will be our training data (perhaps a thousand sentences of someone in the studio) and our output signal will be different sentences: the things we want to create at text-to-speech time.
17:0117:09 This model needs to generalize from the signals it has seen (represented as vocoder parameters) to unseen signals.
