Target cost and join cost

To choose between the many possible sequences of candidate units, we need to quantify how good each possible sequence will sound.

00:04–00:15 We've retrieved from the database a number of possible candidate waveform fragments to use in each target position. The task now is to choose amongst them.
00:15–00:32 There are many, many possible sequences of candidates, even for this very small example here. Let's just pick one of them for illustration...
00:32–00:41 and you can imagine how many more there are. We want to measure how well each of those will sound. We want to quantify it: put a number on it.
00:41–00:45 Then we're going to pick the one that we predict will sound the best.
00:45–01:19 So, what do we need to take into account when selecting from amongst those many, many possible candidate sequences? Perhaps the most obvious one is that, when we're choosing a candidate - let's say for this position - from these available candidates, we could consider the linguistic context of the target: in other words, its linguistic environment in this target sentence. We could consider the linguistic environment of each individual candidate, and measure how close they are.
01:19–01:30 We're going to look at the similarity between a candidate and a target in terms of their linguistic contexts. The motivation for that is pretty obvious.
01:30–01:41 If we could find candidates from identical linguistic contexts to those in the target unit sequence, we'd effectively be pulling out the entire target sentence from the database.
01:41–01:56 Now, that's not possible in general, because there's an infinite number of sentences that our system will have to synthesize. So we're not (in general) going to find exactly-matched candidate units, measured in terms of their linguistic context.
01:56–02:04 We're going to have to use candidate units from mismatched, non-identical linguistic contexts.
02:04–02:09 So we need a function to measure this mismatch. We need to quantify it.
02:09–02:20 We're going to do that with a function. The function is going to return a cost (we might call that a distance). The function is called the target cost function.
02:20–02:30 A target cost of zero means that the linguistic context - measured using whatever features are available to us - was identical between target and candidate.
02:30–03:08 That's rarely (if ever) going to be the case, so we're going to try and look for ones that have low target costs. The way I've just described that is in terms of linguistic features: effectively counting how many linguistic features (for example left phonetic context, or syllable stress, or position in phrase) match and how many mismatch. The number of mismatches will lead us to a cost. Taylor, in his book, proposes two possible formulations of the target cost function. One of them is what we've just described.
03:08–03:14 It basically counts up the number of mismatched linguistic features.
03:14–03:24 He calls that the "Independent Feature Formulation" because a mismatch in one feature and a mismatch in another feature both count independently towards the total cost.
03:24–03:29 The function won't do anything clever about particular combinations of mismatch.
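To make that concrete, here is a minimal Python sketch of an Independent-Feature-Formulation-style target cost: a weighted count of mismatched linguistic features. The feature names and weights are invented for illustration, not taken from any particular system.

```python
# A minimal sketch of the Independent Feature Formulation (IFF) target cost.
# Feature names and weights are illustrative assumptions, not from a real system.

TARGET_COST_WEIGHTS = {
    "left_phonetic_context": 1.0,
    "syllable_stress": 0.5,
    "position_in_phrase": 0.5,
}

def target_cost(target_features: dict, candidate_features: dict) -> float:
    """Weighted count of mismatched linguistic features.

    Returns 0.0 only when every feature matches, i.e. the candidate
    comes from an identical linguistic context to the target.
    """
    cost = 0.0
    for feature, weight in TARGET_COST_WEIGHTS.items():
        if target_features.get(feature) != candidate_features.get(feature):
            cost += weight  # each mismatch contributes independently
    return cost
```

Because each feature contributes independently, the function can't express anything clever about particular combinations of mismatches - that's exactly the limitation noted above.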
03:29–03:43 Another way to think about measuring the mismatch between a candidate and a target is in terms of their acoustic features, but we can't do that directly because the targets don't have any acoustic features. We're trying to synthesize them.
03:43–03:47 They're just abstract linguistic specifications at this point.
03:47–04:22 So, if we wanted to measure the difference between a target and a candidate acoustically (which really is what we want to do: we want to know if they're going to sound the same or not) we would have to make a prediction about the acoustic properties of the target units. The target cost is a very important part of unit selection, so we're going to devote a later part of the course to it, and not go into the details just at this moment. All we need at this point is to know that we can have a function that measures how close a candidate is to the target.
04:22–04:42 That closeness could be measured in terms of whether they have similar linguistic environments, or whether they "sound the same". That measure of "sounding the same" involves an extra step of making some prediction of the acoustic properties of those target units.
04:42–05:12 Measuring similarity between an individual candidate and its target position is only part of the story. What are we going to do with those candidates after we've selected them? We're going to concatenate their waveforms, play that back, and hope a listener doesn't notice that we've made a new utterance by concatenating fragments of other utterances. The most perceptible artefacts we get in unit selection synthesis are often those concatenation points, or "joins".
05:12–05:28 Therefore, we're going to have to quantify how good each join is, and take that into account when choosing the sequence of candidates. So the second part of quantifying the best-sounding candidate sequence is to measure this concatenation quality.
05:28–05:48 Let's focus on this target position, and let's imagine we've decided that this candidate has got the lowest overall target cost. It's tempting just to choose that - making an instant local decision - and then repeat that for each target position, choosing its candidate with the lowest target cost.
05:48–06:05 However, that fails to take into account whether this candidate will concatenate well with the candidates on either side. So, before choosing this particular candidate, we need to quantify how well it will concatenate with each of the things it needs to join to.
06:05–06:38 The same will be true to the left as well. We can see now that the choice of candidate in this position depends (i.e., it's going to change, potentially) on the choice of candidate in the neighbouring positions. So, in general, we're going to have to measure the join cost - the potential quality of the concatenation - between every possible pair of units... and so on for all the other positions.
06:38–06:56 So we have to compute all of these costs, and they have to be taken into account when deciding which overall sequence of candidates is best. Our join cost function has to make a prediction about how perceptible the join will be. Will a listener notice there's a join?
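The video doesn't go into the search itself here, but to show how all these costs get combined, here's a hedged sketch of the standard bookkeeping: the total cost of a candidate sequence is the sum of its target costs plus the sum of its join costs, and dynamic programming finds the cheapest sequence without enumerating every one. It assumes `target_cost` and `join_cost` functions shaped like the sketches in this section.

```python
# A sketch of choosing the best candidate sequence by dynamic programming.
# candidates[t] is the list of candidate units for target position t.

def best_sequence(targets, candidates, target_cost, join_cost):
    """Minimise total cost = sum of target costs + sum of join costs."""
    # best[t][i] = (lowest total cost of any sequence ending with
    #               candidates[t][i], index of its predecessor at t-1)
    best = [[(target_cost(targets[0], c), None) for c in candidates[0]]]
    for t in range(1, len(targets)):
        row = []
        for c in candidates[t]:
            tc = target_cost(targets[t], c)
            # Best way to reach this candidate from any previous candidate.
            cost, prev = min(
                (best[t - 1][i][0] + join_cost(p, c) + tc, i)
                for i, p in enumerate(candidates[t - 1])
            )
            row.append((cost, prev))
        best.append(row)
    # Trace back from the cheapest final candidate.
    i = min(range(len(best[-1])), key=lambda j: best[-1][j][0])
    path = []
    for t in range(len(targets) - 1, -1, -1):
        path.append(candidates[t][i])
        i = best[t][i][1]
    return list(reversed(path))
```

Notice how this captures the point made above: the cheapest candidate at one position may lose out if it joins badly with its neighbours, because the join costs are part of the total being minimised.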
06:56–06:59 Why would a listener notice there's been a join in some speech?
06:59–07:03 Well, that's because there'll be a mismatch in the acoustic properties around the join.
07:03–07:28 That mismatch - that change in acoustic properties - will be larger than is normal in natural connected speech. For example, sudden discontinuities in F0 don't happen in natural speech. So, if they do happen in synthetic speech, they are likely to be heard by listeners. Our join cost function is going to measure the sorts of things that we think listeners can hear.
07:28–07:47 The obvious ones are going to be the pitch (or the physical underlying property: fundamental frequency / F0), the energy - if speech suddenly gets louder or quieter in an unnatural way, we will notice that - and, more generally, the overall spectral characteristics.
07:47–08:08 Underlying all of this there's an assumption: that measuring acoustic mismatch is a prediction of the perceived discontinuity that a listener will experience when listening to this speech. If we're going to use multiple acoustic properties in the join cost function, then we have to combine those mismatches in some way.
08:08–08:13 A typical approach is the one taken by Festival's Multisyn unit selection engine.
08:13–08:25 That's to measure the mismatch in each of those three properties separately and then sum them together. Since some might be more important than others, there'll be some weights. So, it'll be a weighted sum of mismatches.
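Here's a minimal sketch of that weighted sum. It shows the general shape, not Multisyn's actual implementation: the `Unit` container, the distance measures, and the weight values are all assumptions made for illustration.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Unit:
    """A candidate unit with per-frame acoustic features (assumed layout)."""
    phone: str
    f0: np.ndarray        # one F0 value per frame
    energy: np.ndarray    # one energy value per frame
    cepstrum: np.ndarray  # one cepstral vector per frame (frames x coeffs)

# Illustrative weights: the relative perceptual importance of each property
# would need to be chosen (or tuned) for a real system.
W_F0, W_ENERGY, W_SPECTRAL = 1.0, 0.5, 1.0

def join_cost(left: Unit, right: Unit) -> float:
    """Weighted sum of acoustic mismatches at the concatenation point:
    the last frame of the left candidate vs the first frame of the right."""
    f0_mismatch = abs(left.f0[-1] - right.f0[0])
    energy_mismatch = abs(left.energy[-1] - right.energy[0])
    # Euclidean distance between the boundary cepstral vectors.
    spectral_mismatch = float(np.linalg.norm(left.cepstrum[-1] - right.cepstrum[0]))
    return W_F0 * f0_mismatch + W_ENERGY * energy_mismatch + W_SPECTRAL * spectral_mismatch
```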
08:25–08:34 It's also quite common to inject a little bit of phonetic knowledge into the join cost.
08:34–08:41 We know that listeners are much more sensitive to some sorts of discontinuities than others.
08:41–08:49 A simple way of expressing that is to say that they are much more likely to notice a join in some segment types than in others.
08:49–08:58 For example, making joins in unvoiced fricatives is fairly straightforward: the spectral envelope doesn't have much detail, and there's no pitch to have a mismatch in.
08:58–09:02 So we can quite easily splice those things together.
09:02–09:12 Whereas in a more complex sound, such as a liquid or a diphthong, with a complex and changing spectral envelope, it's much more difficult to hide a join.
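One toy way to express that knowledge, building on the `join_cost` sketch above, is to scale the acoustic cost by a per-segment-type factor so the search is steered towards making joins in "safe" segments. The phone labels and penalty values below are entirely invented; a real system would encode its rules differently.

```python
# Invented per-segment scaling factors: joins in unvoiced fricatives are
# cheap (easy to splice); joins in liquids and diphthongs are penalised.
JOIN_PENALTY = {
    "s": 0.5, "f": 0.5, "sh": 0.5,   # unvoiced fricatives: easy to join
    "l": 2.0, "r": 2.0, "ai": 2.0,   # liquids / diphthongs: hard to join
}

def phonetically_weighted_join_cost(left: Unit, right: Unit) -> float:
    """Scale the acoustic join cost by a rule-based per-phone factor."""
    factor = JOIN_PENALTY.get(right.phone, 1.0)  # default: neutral
    return factor * join_cost(left, right)
```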
09:12–09:38 So, very commonly, join costs will also include some rules which express phonetic knowledge about where the joins are best placed. Here's a graphical representation of what the join cost is doing. We have a diphone on the left, and a diphone on the right (or, in our simple example, just whole phones). We have their waveforms, because these are candidates from the database.
09:38–09:42 Because we have their waveforms, we can extract any acoustic properties that we like.
09:42–09:48 In this example, we've extracted fundamental frequency, energy and the spectral envelope.
09:48–10:13 It's plotted here as a spectrogram. We could parameterize that spectral envelope any way we like. This picture is using formants to make things obvious. More generally, we wouldn't use formants: they're rather hard to track automatically. We'd use a more generalized representation like the cepstrum. We're going to measure the mismatch in each of these properties. For example...
10:13–10:18 the F0 is slightly discontinuous, so that's going to contribute something to the cost.
10:18–10:25 The energy is continuous here, so there's very low mismatch (so, low cost) in the energy.
10:25–10:40 We're similarly going to quantify the difference in the spectral envelope just before the join and just after the join. We're going to sum up those mismatches with some weights that express their relative perceptual importance.
10:40–10:51 That's a really simple join cost. It will work perfectly well, but its main limitation is that it's extremely local.
10:51–11:06 We just take the last frame (maybe 20ms) of one diphone and the first frame (maybe 20ms) of the next diphone (the next candidate that we're considering concatenating), and we measure the very local mismatch between those.
11:06–11:11 That will fail to capture things like sudden changes of direction.
11:11–11:31 Maybe F0 has no discontinuity, but in the left diphone it was increasing and in the right diphone it was decreasing. That sudden change from increasing to decreasing will also be unnatural: listeners might notice. So we could improve that: we could take several frames around the join and measure the join cost across multiple frames.
11:31–12:01 We could look at the rate of change (the deltas). Or we could generalize that much further and build some probabilistic model of what trajectories of natural speech parameters normally look like, compare that model's prediction to the concatenated diphones, and measure how natural they are under this model. Now, eventually we are going to go there: we're going to have a statistical model that does that for us. But we're not ready for that yet, because we don't know about statistical models.
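As a small illustration of the delta idea, here's a sketch that compares several frames either side of the join, in both level and slope, so a sudden change of direction in F0 is also penalised. The window size and the way the slopes are combined are assumptions, not a prescribed method.

```python
def delta_aware_f0_mismatch(left: Unit, right: Unit, n_frames: int = 3) -> float:
    """Penalise both a level discontinuity and a change of direction in F0.

    Compares the last few frames of the left unit against the first few
    frames of the right unit, in both value and rate of change (delta).
    """
    left_f0 = left.f0[-n_frames:]
    right_f0 = right.f0[:n_frames]
    level_mismatch = abs(left_f0[-1] - right_f0[0])
    # Average slope (delta) on each side of the join.
    left_delta = (left_f0[-1] - left_f0[0]) / (n_frames - 1)
    right_delta = (right_f0[-1] - right_f0[0]) / (n_frames - 1)
    delta_mismatch = abs(left_delta - right_delta)
    return level_mismatch + delta_mismatch
```

With this, an F0 contour that rises into the join and falls out of it scores a cost even when the values at the boundary frames match exactly.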
12:01–12:24 So we're going to defer that until later. Once we've understood statistical models and how they can be used to synthesize speech themselves, we'll come back to unit selection and see how such a statistical model can help us compute the join cost, and in fact also the target cost. When we use a statistical model underlying our unit selection system, we call that "hybrid synthesis".
12:24–12:26 But that's for later: we'll come back to that.
