Proceedings of the ACL 2010 Conference Short Papers, pages 27–32, Uppsala, Sweden, 11-16 July 2010.
© 2010 Association for Computational Linguistics
Efficient Path Counting Transducers for Minimum Bayes-Risk Decoding
of Statistical Machine Translation Lattices
Graeme Blackwood, Adrià de Gispert, William Byrne
Machine Intelligence Laboratory
Cambridge University Engineering Department
Trumpington Street, CB2 1PZ, U.K.
{gwb24|ad465|wjb31}@cam.ac.uk
Abstract
This paper presents an efficient imple-
mentation of linearised lattice minimum
Bayes-risk decoding using weighted finite
state transducers. We introduce transduc-
ers to efficiently count lattice paths con-
taining n-grams and use these to gather
the required statistics. We show that these
procedures can be implemented exactly
through simple transformations of word
sequences to sequences of n-grams. This
yields a novel implementation of lattice
minimum Bayes-risk decoding which is
fast and exact even for very large lattices.
1 Introduction
This paper focuses on an exact implementation
of the linearised form of lattice minimum Bayes-
risk (LMBR) decoding using general purpose
weighted finite state transducer (WFST) opera-
tions.¹ The LMBR decision rule in Tromble et al.
(2008) has the form
$$\hat{E} = \mathop{\mathrm{argmax}}_{E' \in \mathcal{E}} \Big\{ \theta_0 |E'| + \sum_{u \in \mathcal{N}} \theta_u \#_u(E')\, p(u \mid \mathcal{E}) \Big\} \qquad (1)$$
where E is a lattice of translation hypotheses, N
is the set of all n-grams in the lattice (typically,
n = 1 . . . 4), and the parameters θ are constants
estimated on held-out data. The quantity p(u|E)
we refer to as the path posterior probability of the
n-gram u. This particular posterior is defined as
$$p(u \mid \mathcal{E}) = p(\mathcal{E}_u \mid \mathcal{E}) = \sum_{E \in \mathcal{E}_u} P(E \mid F), \qquad (2)$$
where E_u = {E ∈ E : #_u(E) > 0} is the subset of lattice paths containing the n-gram u at least once.

¹ We omit an introduction to WFSTs for space reasons. See Mohri et al. (2008) for details of the general purpose WFST operations used in this paper.

It is the efficient computation of these path
posterior n-gram probabilities that is the primary
focus of this paper. We will show how general
purpose WFST algorithms can be employed to ef-
ficiently compute p(u|E) for all u ∈ N.
Tromble et al. (2008) use Equation (1) as an
approximation to the general form of statistical
machine translation MBR decoder (Kumar and
Byrne, 2004):
$$\hat{E} = \mathop{\mathrm{argmin}}_{E' \in \mathcal{E}} \sum_{E \in \mathcal{E}} L(E, E')\, P(E \mid F) \qquad (3)$$
The approximation replaces the sum over all paths
in the lattice by a sum over lattice n-grams. Even
though a lattice may have many n-grams, it is
possible to extract and enumerate them exactly
whereas this is often impossible for individual
paths. Therefore, while the Tromble et al. (2008)
linearisation of the gain function in the decision
rule is an approximation, Equation (1) can be com-
puted exactly even over very large lattices. The
challenge is to do so efficiently.
If the quantity p(u|E) had the form of a condi-
tional expected count
$$c(u \mid \mathcal{E}) = \sum_{E \in \mathcal{E}} \#_u(E)\, P(E \mid F), \qquad (4)$$
it could be computed efficiently using counting
transducers (Allauzen et al., 2003). The statis-
tic c(u|E) counts the number of times an n-gram
occurs on each path, accumulating the weighted
count over all paths. By contrast, what is needed
by the approximation in Equation (1) is to iden-
tify all paths containing an n-gram and accumulate
their probabilities. The accumulation of probabil-
ities at the path level, rather than the n-gram level,
makes the exact computation of p(u|E) hard.
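To make the distinction concrete, the following minimal sketch (not from the paper; the hypotheses, probabilities, and function names are invented for illustration) computes both quantities for a toy two-hypothesis "lattice" in which a unigram repeats on one path:

# Toy illustration: two hypotheses with posteriors P(E|F). The unigram "the"
# occurs twice on the first path, so its expected count c(u|E) exceeds its
# path posterior p(u|E).
hypotheses = [
    (("the", "cat", "sat", "on", "the", "mat"), 0.6),
    (("a", "dog", "sat"), 0.4),
]

def ngram_count(words, u):
    """#_u(E): number of occurrences of n-gram u in the word sequence."""
    n = len(u)
    return sum(1 for i in range(len(words) - n + 1) if tuple(words[i:i + n]) == u)

def path_posterior(u):
    """p(u|E): total posterior probability of paths containing u at least once."""
    return sum(p for words, p in hypotheses if ngram_count(words, u) > 0)

def expected_count(u):
    """c(u|E): posterior-weighted count of occurrences of u over all paths."""
    return sum(ngram_count(words, u) * p for words, p in hypotheses)

print(path_posterior(("the",)))   # 0.6 -- only the first path contains "the"
print(expected_count(("the",)))   # 1.2 -- it is counted twice on that path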
Tromble et al. (2008) approach this problem by
building a separate word sequence acceptor for
each n-gram in N and intersecting this acceptor
with the lattice to discard all paths that do not con-
tain the n-gram; they then sum the probabilities of
all paths in the filtered lattice. We refer to this as
the sequential method, since p(u|E) is calculated
separately for each u in sequence.
Allauzen et al. (2010) introduce a transducer
for simultaneous calculation of p(u|E) for all unigrams u ∈ N_1 in a lattice. This transducer is
effective for finding path posterior probabilities of
unigrams because there are relatively few unique
unigrams in the lattice. As we will show, however,
it is less efficient for higher-order n-grams.
Allauzen et al. (2010) use exact statistics for
the unigram path posterior probabilities in Equa-
tion (1), but use the conditional expected counts
of Equation (4) for higher-order n-grams. Their
hybrid MBR decoder has the form
$$\hat{E} = \mathop{\mathrm{argmax}}_{E' \in \mathcal{E}} \Big\{ \theta_0 |E'| + \sum_{u \in \mathcal{N}:\, 1 \le |u| \le k} \theta_u \#_u(E')\, p(u \mid \mathcal{E}) + \sum_{u \in \mathcal{N}:\, k < |u| \le 4} \theta_u \#_u(E')\, c(u \mid \mathcal{E}) \Big\}, \qquad (5)$$
where k determines the range of n-gram orders
at which the path posterior probabilities p(u|E)
of Equation (2) and conditional expected counts
c(u|E) of Equation (4) are used to compute the
expected gain. For k < 4, Equation (5) is thus
an approximation to the approximation. In many cases it will be adequate, depending on how closely p(u|E) and c(u|E) agree for higher-order n-grams. Experimentally, Allauzen et al. (2010) find this approximation works well at k = 1 for MBR decoding of statistical machine translation
lattices. However, there may be scenarios in which
p(u|E) and c(u|E) differ so that Equation (5) is no
longer useful in place of the original Tromble et
al. (2008) approximation.
In the following sections, we present an efficient
method for simultaneous calculation of p(u|E) for
n-grams of a fixed order. While other fast MBR
approximations are possible (Kumar et al., 2009),
we show how the exact path posterior probabilities
can be calculated and applied in the implementa-
tion of Equation (1) for efficient MBR decoding
over lattices.
2 N-gram Mapping Transducer
We make use of a trick to count higher-order n-grams. We build a transducer Φ_n to map word sequences to n-gram sequences of order n. Φ_n has a similar form to the WFST implementation of an n-gram language model (Allauzen et al., 2003). For each n-gram u = w_1^n, Φ_n includes arcs of the form:

$$w_1^{n-1} \xrightarrow{\;w_n : u\;} w_2^n$$

that is, an arc from the history state w_1^{n-1} to the state w_2^n with input word w_n and output n-gram symbol u.
The n-gram lattice of order n is called E_n and is found by composing E ◦ Φ_n, projecting on the output, removing ǫ-arcs, determinizing, and minimising. The construction of E_n is fast even for large lattices and is memory efficient. E_n itself may have more states than E due to the association of distinct n-gram histories with states. However, the counting transducer for unigrams is simpler than the corresponding counting transducer for higher-order n-grams. As a result, counting unigrams in E_n is easier than counting n-grams in E.
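As an illustration of the mapping that Φ_n performs, the sketch below (a simplification, not the WFST construction itself) applies the word-to-n-gram rewrite directly to a single path; in the actual implementation the same transformation is applied to every path of E at once by the composition E ◦ Φ_n.

# Illustrative only: the effect of Phi_n on a single word sequence. Each
# position is rewritten as the order-n n-gram ending there, so a path of
# length L becomes a path of L - n + 1 n-gram symbols.
def to_ngram_sequence(words, n):
    """Map a word sequence to its sequence of order-n n-gram symbols."""
    return [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]

print(to_ngram_sequence(["the", "cat", "sat", "on", "the", "mat"], 2))
# [('the', 'cat'), ('cat', 'sat'), ('sat', 'on'), ('on', 'the'), ('the', 'mat')]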
3 Efficient Path Counting
Associated with each E_n we have a transducer Ψ_n which can be used to calculate the path posterior probabilities p(u|E) for all u ∈ N_n. In Figures 1 and 2 we give two possible forms² of Ψ_n that can be used to compute path posterior probabilities over n-grams u_1, u_2 ∈ N_n for some n. No modification to the ρ-arc matching mechanism is required even in counting higher-order n-grams since all n-grams are represented as individual symbols after application of the mapping transducer Φ_n.
Transducer Ψ_n^L is used by Allauzen et al. (2010) to compute the exact unigram contribution to the conditional expected gain in Equation (5). For example, in counting paths that contain u_1, Ψ_n^L retains the first occurrence of u_1 and maps every other symbol to ǫ. This ensures that in any path containing a given u, only the first u is counted, avoiding multiple counting of paths.
We introduce an alternative path counting transducer Ψ_n^R that effectively deletes all symbols except the last occurrence of u on any path by ensuring that any paths in composition which count earlier instances of u do not end in a final state. Multiple counting is avoided by counting only the last occurrence of each symbol u on a path.
We note that initial ǫ:ǫ arcs in Ψ_n^L effectively create |N_n| copies of E_n in composition while searching for the first occurrence of each u. Composing with Ψ_n^R creates a single copy of E_n while searching for the last occurrence of u; we find this to be much more efficient for large N_n.

² The special composition symbol σ matches any arc; ρ matches any arc other than those with an explicit transition. See the OpenFst documentation: http://openfst.org

[Figure 1: Path counting transducer Ψ_n^L matching the first (left-most) occurrence of each u ∈ N_n.]

[Figure 2: Path counting transducer Ψ_n^R matching the last (right-most) occurrence of each u ∈ N_n.]
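The following sketch spells out one way the right-most counting transducer of Figure 2 could be written down as a plain arc list over a small n-gram vocabulary. It is an illustration under our own reading of the figure and the description above, not the authors' code; SIGMA and RHO stand for the OpenFst-style special matchers described in the footnote, and the generic (source, input, output, destination) tuple representation is invented for this example.

SIGMA, RHO, EPS = "<sigma>", "<rho>", "<eps>"

def psi_right(ngram_symbols):
    """Arcs and final states of a Psi^R-style counter over the given symbols.

    State 0 skips any prefix (sigma:eps self-loop). Taking u:u from state 0
    counts one occurrence of u; the destination state consumes any later
    symbol other than u (rho:eps self-loop) and is final. A later u forces
    the explicit u:eps arc into a non-final dead state, so only the branch
    that counted the last occurrence of u survives in composition.
    """
    arcs = [(0, SIGMA, EPS, 0)]
    finals = set()
    next_state = 1
    for u in ngram_symbols:
        keep, dead = next_state, next_state + 1
        arcs.append((0, u, u, keep))          # count this occurrence of u
        arcs.append((keep, RHO, EPS, keep))   # delete any later non-u symbol
        arcs.append((keep, u, EPS, dead))     # a later u: this was not the last one
        finals.add(keep)
        next_state += 2
    return arcs, finals

arcs, finals = psi_right(["u1", "u2"])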
Path posterior probabilities are calculated over each E_n by composing with Ψ_n in the log semiring, projecting on the output, removing ǫ-arcs, determinizing, minimising, and pushing weights to the initial state (Allauzen et al., 2010). Using either Ψ_n^L or Ψ_n^R, the resulting counts acceptor is X_n. It has a compact form with one arc from the start state for each u_i ∈ N_n:

$$0 \xrightarrow{\;u_i \,/\, -\log p(u_i \mid \mathcal{E})\;} i$$
3.1 Efficient Path Posterior Calculation
Although X_n has a convenient and elegant form, it can be difficult to build for large N_n because the composition E_n ◦ Ψ_n results in millions of states and arcs. The log semiring ǫ-removal and determinization required to sum the probabilities of paths labelled with each u can be slow.
However, if we use the proposed Ψ_n^R, then each path in E_n ◦ Ψ_n^R has only one non-ǫ output label u and all paths leading to a given final state share the same u. A modified forward algorithm can be used to calculate p(u|E) without the costly ǫ-removal and determinization. The modification simply requires keeping track of which symbol u is encountered along each path to a final state. More than one final state may gather probabilities for the same u; to compute p(u|E) these probabilities are added. The forward algorithm requires that E_n ◦ Ψ_n^R be topologically sorted; although sorting can be slow, it is still quicker than log semiring ǫ-removal and determinization.
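A minimal sketch of this modified forward pass is given below, assuming the composed machine E_n ◦ Ψ_n^R is supplied as a plain arc list in topological order with ordinary probabilities (the real implementation works in the log semiring); the data layout and function name are ours, not the paper's.

from collections import defaultdict

EPS = "<eps>"

def modified_forward(arcs, start, finals):
    """Forward pass over E_n o Psi^R_n that also tracks the single non-eps
    output symbol u seen on paths into each state, then sums the forward
    probability mass reaching final states per symbol to give p(u|E).

    arcs:   list of (src, dst, out_label, prob), topologically sorted by src
    finals: dict mapping final state -> final probability
    """
    alpha = defaultdict(float)   # forward probability of each state
    symbol = {}                  # n-gram symbol carried by paths into a state
    alpha[start] = 1.0
    for src, dst, out, prob in arcs:
        if alpha[src] == 0.0:
            continue                     # unreachable source state
        alpha[dst] += alpha[src] * prob
        if out != EPS:
            symbol[dst] = out            # the one non-eps label on this path
        elif src in symbol:
            symbol[dst] = symbol[src]    # propagate the label already seen
    posterior = defaultdict(float)
    for state, weight in finals.items():
        posterior[symbol[state]] += alpha[state] * weight
    return dict(posterior)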
The statistics gathered by the forward algo-
rithm could also be gathered under the expectation
semiring (Eisner, 2002) with suitably defined fea-
tures. We take the view that the full complexity of
that approach is not needed here, since only one
symbol is introduced per path and per exit state.
Unlike E_n ◦ Ψ_n^R, the composition E_n ◦ Ψ_n^L does not segregate paths by u such that there is a direct association between final states and symbols. The forward algorithm does not readily yield the per-symbol probabilities, although an arc weight vector indexed by symbols could be used to correctly aggregate the required statistics (Riley et al., 2009). For large N_n this would be memory intensive. The association between final states and symbols could also be found by label pushing, but we find this slow for large E_n ◦ Ψ_n.
4 Efficient Decoder Implementation
In contrast to Equation (5), we use the exact values of p(u|E) for all u ∈ N_n at orders n = 1 . . . 4 to compute

$$\hat{E} = \mathop{\mathrm{argmax}}_{E' \in \mathcal{E}} \Big\{ \theta_0 |E'| + \sum_{n=1}^{4} g_n(\mathcal{E}, E') \Big\}, \qquad (6)$$
where

$$g_n(\mathcal{E}, E') = \sum_{u \in \mathcal{N}_n} \theta_u \#_u(E')\, p(u \mid \mathcal{E}),$$

using the exact path posterior probabilities at each order. We make acceptors Ω_n such that E ◦ Ω_n assigns order n partial gain g_n(E, E') to all paths E ∈ E. Ω_n is derived from Φ_n directly by assigning arc weight θ_u × p(u|E) to arcs with output label u and then projecting on the input labels. For each n-gram u = w_1^n in N_n, arcs of Ω_n have the form:

$$w_1^{n-1} \xrightarrow{\;w_n \,/\, \theta_u \times p(u \mid \mathcal{E})\;} w_2^n$$
To apply θ_0 we make a copy of E, called E_0, with fixed weight θ_0 on all arcs. The decoder is formed as the composition E_0 ◦ Ω_1 ◦ Ω_2 ◦ Ω_3 ◦ Ω_4 and Ê is extracted as the maximum cost string, i.e. the path of greatest total gain.
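The composition E_0 ◦ Ω_1 ◦ Ω_2 ◦ Ω_3 ◦ Ω_4 simply accumulates, along each path, the score that Equation (6) assigns to that hypothesis. A hedged sketch of that per-hypothesis score, with illustrative parameter names of our own choosing (theta is assumed to depend only on the n-gram order, as in Tromble et al. (2008)):

def lmbr_score(words, theta0, theta, posterior):
    """Score of one hypothesis E' under Equation (6):
    theta_0 |E'| + sum_n sum_{u in N_n} theta_u #_u(E') p(u|E).

    theta:     maps n-gram order n (1..4) to the factor theta_u
    posterior: maps an n-gram tuple u to its path posterior p(u|E)
    """
    score = theta0 * len(words)
    for n in range(1, 5):
        for i in range(len(words) - n + 1):
            u = tuple(words[i:i + n])
            if u in posterior:                    # u occurs in the lattice, u in N_n
                score += theta[n] * posterior[u]  # each occurrence adds theta_u * p(u|E)
    return score

# The decoder returns the lattice path with the highest lmbr_score; the WFST
# implementation obtains the same argmax by best-path search over the
# composed machine.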
5 Lattice Generation for LMBR
Lattice MBR decoding performance and effi-
ciency are evaluated in the context of the NIST
        mt0205tune  mt0205test  mt08nw  mt08ng
ML            54.2        53.8    51.4    36.3
k = 0         52.6        52.3    49.8    34.5
k = 1         54.8        54.4    52.2    36.6
k = 2         54.9        54.5    52.4    36.8
k = 3         54.9        54.5    52.4    36.8
LMBR          55.0        54.6    52.4    36.8

Table 1: BLEU scores for Arabic→English maximum likelihood translation (ML), MBR decoding using the hybrid decision rule of Equation (5) at 0 ≤ k ≤ 3, and regular linearised lattice MBR (LMBR).
              mt0205tune  mt0205test  mt08nw  mt08ng
Posteriors
  sequential        3160        3306    2090    3791
  Ψ_n^L             6880        7387    4201    8796
  Ψ_n^R             1746        1789    1182    2787
Decoding
  sequential        4340        4530    2225    4104
  Ψ_n                284         319     118     197
Total
  sequential        7711        8065    4437    8085
  Ψ_n^L             7458        8075    4495    9199
  Ψ_n^R             2321        2348    1468    3149

Table 2: Time in seconds required for path posterior n-gram probability calculation and LMBR decoding using the sequential method and the left-most (Ψ_n^L) or right-most (Ψ_n^R) counting transducer implementations.
Arabic→English machine translation task (http://www.itl.nist.gov/iad/mig/tests/mt). The
development set mt0205tune is formed from the
odd numbered sentences of the NIST MT02–
MT05 testsets; the even numbered sentences form
the validation set mt0205test. Performance on
NIST MT08 newswire (mt08nw) and newsgroup
(mt08ng) data is also reported.
First-pass translation is performed using HiFST
(Iglesias et al., 2009), a hierarchical phrase-based
decoder. Word alignments are generated using
MTTK (Deng and Byrne, 2008) over 150M words
of parallel text for the constrained NIST MT08
Arabic→English track. In decoding, a Shallow-
1 grammar with a single level of rule nesting is
used and no pruning is performed in generating
first-pass lattices (Iglesias et al., 2009).
The first-pass language model is a modified
Kneser-Ney (Kneser and Ney, 1995) 4-gram esti-
mated over the English parallel text and an 881M
word subset of the GigaWord Third Edition (Graff
et al., 2007). Prior to LMBR, the lattices are
rescored with large stupid-backoff 5-gram lan-
guage models (Brants et al., 2007) estimated over
more than 6 billion words of English text.
The n-gram factors θ_0, . . . , θ_4 are set according to Tromble et al. (2008) using unigram precision p = 0.85 and average recall ratio r = 0.74. Our
translation decoder and MBR procedures are im-
plemented using OpenFst (Allauzen et al., 2007).
6 LMBR Speed and Performance
Lattice MBR decoding performance is shown in
Table 1. Compared to the maximum likelihood
translation hypotheses (row ML), LMBR gives
gains of +0.8 to +1.0 BLEU for newswire data and
+0.5 BLEU for newsgroup data (row LMBR).
The other rows of Table 1 show the performance
of LMBR decoding using the hybrid decision rule
of Equation (5) for 0 ≤ k ≤ 3. When the condi-
tional expected counts c(u|E) are used at all orders
(i.e. k = 0), the hybrid decoder BLEU scores are
considerably lower than even the ML scores. This
poor performance is because there are many un-
igrams u for which c(u|E) is much greater than
p(u|E). The consensus translation maximising the
conditional expected gain is then dominated by
unigram matches, significantly degrading LMBR
decoding performance. Table 1 shows that for
these lattices the hybrid decision rule is an ac-
curate approximation to Equation (1) only when
k ≥ 2 and the exact contribution to the gain func-
tion is computed using the path posterior probabil-
ities at orders n = 1 and n = 2.
We now analyse the efficiency of lattice MBR
decoding using the exact path posterior probabil-
ities of Equation (2) at all orders. We note that
the sequential method and both simultaneous implementations using path counting transducers Ψ_n^L and Ψ_n^R yield the same hypotheses (allowing for
numerical accuracy); they differ only in speed and
memory usage.
Posteriors Efficiency   Computation times for the steps in LMBR are given in Table 2. In calculating path posterior n-gram probabilities p(u|E), we find that the use of Ψ_n^L is more than twice as slow as the sequential method. This is due to the difficulty of counting higher-order n-grams in large lattices. Ψ_n^L is effective for counting unigrams, however, since there are far fewer of them. Using Ψ_n^R is almost twice as fast as the sequential method. This speed difference is due to the simple forward algorithm. We also observe that for higher-order n, the composition E_n ◦ Ψ_n^R requires less memory and produces a smaller machine than E_n ◦ Ψ_n^L. It is easier to count paths by the final occurrence of a symbol than by the first.
Decoding Efficiency   Decoding times are significantly faster using Ω_n than the sequential method; average decoding time is around 0.1 seconds per sentence. The total time required for lattice MBR is dominated by the calculation of the path posterior n-gram probabilities, and this is a function of the number of n-grams in the lattice |N|. For each sentence in mt0205tune, Figure 3 plots the total LMBR time for the sequential method (marked 'o') and for probabilities computed using Ψ_n^R (marked '+'). This compares the two techniques on a sentence-by-sentence basis. As |N| grows, the simultaneous path counting transducer is found to be much more efficient.
7 Conclusion
We have described an efficient and exact imple-
mentation of the linear approximation to LMBR
using general WFST operations. A simple trans-
ducer was used to map words to sequences of n-
grams in order to simplify the extraction of higher-
order statistics. We presented a counting transducer Ψ_n^R that extracts the statistics required for
all n-grams of order n in a single composition and
allows path posterior probabilities to be computed
efficiently using a modified forward procedure.
[Figure 3: Total LMBR time in seconds versus the number of lattice n-grams |N|, for the sequential method and the simultaneous Ψ_n^R method.]

We take the view that even approximate search criteria should be implemented exactly where pos-
sible, so that it is clear exactly what the system is
doing. For machine translation lattices, conflat-
ing the values of p(u|E) and c(u|E) for higher-
order n-grams might not be a serious problem, but
in other scenarios – especially where symbol se-
quences are repeated multiple times on the same
path – it may be a poor approximation.
We note that since much of the time in calcula-
tion is spent dealing with ǫ-arcs that are ultimately
removed, an optimised composition algorithm that
skips over such redundant structure may lead to
further improvements in time efficiency.
Acknowledgments
This work was supported in part under the
GALE program of the Defense Advanced Re-
search Projects Agency, Contract No. HR0011-
06-C-0022.
References
Cyril Allauzen, Mehryar Mohri, and Brian Roark.
2003. Generalized algorithms for constructing sta-
tistical language models. In Proceedings of the 41st
Meeting of the Association for Computational Lin-
guistics, pages 557–564.
Cyril Allauzen, Michael Riley, Johan Schalkwyk, Wo-
jciech Skut, and Mehryar Mohri. 2007. OpenFst: a
general and efficient weighted finite-state transducer
library. In Proceedings of the 9th International Con-
ference on Implementation and Application of Au-
tomata, pages 11–23. Springer.
Cyril Allauzen, Shankar Kumar, Wolfgang Macherey,
Mehryar Mohri, and Michael Riley. 2010. Expected
sequence similarity maximization. In Human Lan-
guage Technologies 2010: The 11th Annual Confer-
ence of the North American Chapter of the Associ-
ation for Computational Linguistics, Los Angeles,
California, June.
Thorsten Brants, Ashok C. Popat, Peng Xu, Franz J.
Och, and Jeffrey Dean. 2007. Large language
models in machine translation. In Proceedings of
the 2007 Joint Conference on Empirical Methods
in Natural Language Processing and Computational
Natural Language Learning, pages 858–867.
Yonggang Deng and William Byrne. 2008. HMM
word and phrase alignment for statistical machine
translation. IEEE Transactions on Audio, Speech,
and Language Processing, 16(3):494–507.
Jason Eisner. 2002. Parameter estimation for prob-
abilistic finite-state transducers. In Proceedings of
the 40th Annual Meeting of the Association for Com-
putational Linguistics (ACL), pages 1–8, Philadel-
phia, July.
David Graff, Junbo Kong, Ke Chen, and Kazuaki
Maeda. 2007. English Gigaword Third Edition.
Gonzalo Iglesias, Adrià de Gispert, Eduardo R. Banga,
and William Byrne. 2009. Hierarchical phrase-
based translation with weighted finite state trans-
ducers. In Proceedings of Human Language Tech-
nologies: The 2009 Annual Conference of the North
American Chapter of the Association for Compu-
tational Linguistics, pages 433–441, Boulder, Col-
orado, June. Association for Computational Linguis-
tics.
R. Kneser and H. Ney. 1995. Improved backing-off for
m-gram language modeling. In Acoustics, Speech,
and Signal Processing, pages 181–184.
Shankar Kumar and William Byrne. 2004. Minimum
Bayes-risk decoding for statistical machine trans-
lation. In Proceedings of Human Language Tech-
nologies: The 2004 Annual Conference of the North
American Chapter of the Association for Computa-
tional Linguistics, pages 169–176.
Shankar Kumar, Wolfgang Macherey, Chris Dyer, and
Franz Och. 2009. Efficient minimum error rate
training and minimum Bayes-risk decoding for trans-
lation hypergraphs and lattices. In Proceedings of
the Joint Conference of the 47th Annual Meeting of
the Association for Computational Linguistics and
the 4th International Joint Conference on Natural
Language Processing of the AFNLP, pages 163–
171, Suntec, Singapore, August. Association for
Computational Linguistics.
M. Mohri, F.C.N. Pereira, and M. Riley. 2008. Speech
recognition with weighted finite-state transducers.
Handbook on Speech Processing and Speech Com-
munication.
Michael Riley, Cyril Allauzen, and Martin Jansche.
2009. OpenFst: An Open-Source, Weighted Finite-
State Transducer Library and its Applications to
Speech and Language. In Proceedings of Human
Language Technologies: The 2009 Annual Confer-
ence of the North American Chapter of the Associa-
tion for Computational Linguistics, Companion Vol-
ume: Tutorial Abstracts, pages 9–10, Boulder, Col-
orado, May. Association for Computational Linguis-
tics.
Roy Tromble, Shankar Kumar, Franz Och, and Wolf-
gang Macherey. 2008. Lattice Minimum Bayes-
Risk decoding for statistical machine translation.
In Proceedings of the 2008 Conference on Empiri-
cal Methods in Natural Language Processing, pages
620–629, Honolulu, Hawaii, October. Association
for Computational Linguistics.