Proceedings of the 43rd Annual Meeting of the ACL, pages 66–74, Ann Arbor, June 2005. © 2005 Association for Computational Linguistics
Towards Developing Generation Algorithms for Text-to-Text Applications
Radu Soricut and Daniel Marcu
Information Sciences Institute
University of Southern California
4676 Admiralty Way, Suite 1001
Marina del Rey, CA 90292
{radu, marcu}@isi.edu
Abstract
We describe a new sentence realization
framework for text-to-text applications.
This framework uses IDL-expressions as
a representation formalism, and a gener-
ation mechanism based on algorithms for
intersecting IDL-expressions with proba-
bilistic language models. We present both
theoretical and empirical results concern-
ing the correctness and efficiency of these
algorithms.
1 Introduction
Many of today’s most popular natural language ap-
plications – Machine Translation, Summarization,
Question Answering – are text-to-text applications.
That is, they produce textual outputs from inputs that
are also textual. Because these applications need
to produce well-formed text, it would appear nat-
ural that they are the favorite testbed for generic
generation components developed within the Natu-
ral Language Generation (NLG) community. Over
the years, several proposals of generic NLG systems
have been made: Penman (Matthiessen and Bate-
man, 1991), FUF (Elhadad, 1991), Nitrogen (Knight
and Hatzivassiloglou, 1995), Fergus (Bangalore
and Rambow, 2000), HALogen (Langkilde-Geary,
2002), Amalgam (Corston-Oliver et al., 2002), etc.
Instead of relying on such generic NLG systems,
however, most of the current text-to-text applica-
tions use other means to address the generation need.
In Machine Translation, for example, sentences are
produced using application-specific “decoders”, in-
spired by work on speech recognition (Brown et
al., 1993), whereas in Summarization, summaries
are produced as either extracts or using task-specific
strategies (Barzilay, 2003). The main reason for
which text-to-text applications do not usually in-
volve generic NLG systems is that such applica-
tions do not have access to the kind of informa-
tion that the input representation formalisms of cur-
rent NLG systems require. A machine translation or
summarization system does not usually have access
to deep subject-verb or verb-object relations (such
as ACTOR, AGENT, PATIENT, POSSESSOR, etc.)
as needed by Penman or FUF, or even shallower
syntactic relations (such as subject, object,
premod, etc.) as needed by HALogen.
In this paper, following the recent proposal
made by Nederhof and Satta (2004), we argue
for the use of IDL-expressions as an application-
independent, information-slim representation lan-
guage for text-to-text natural language generation.
IDL-expressions are created from strings using four operators: concatenation (·), interleave (‖), disjunction (∨), and lock (×). We claim that the IDL formalism is appropriate for text-to-text generation,
as it encodes meaning only via words and phrases,
combined using a set of formally defined operators.
Appropriate words and phrases can be, and usually
are, produced by the applications mentioned above.
The IDL operators have been specifically designed
to handle natural constraints such as word choice
and precedence, constructions such as phrasal com-
bination, and underspecifications such as free word
order.
NLG System                | Representation (formalism)              | Representation (computational) | Generation (mechanism)                                        | Generation (computational)
FUF, PENMAN               | Semantic, few meanings (✗)              | Linear (✓)                     | Deterministic (✗)                                             | Efficient Run-time (✓)
Nitrogen, HALogen         | Syntactic dependencies (✗)              | Exponential (✗)                | Non-deterministic via intersection with probabilistic LMs (✓) | Optimal Solution, Efficient Run-time (✓)
Fergus, Amalgam           | Syntactically/Semantically grounded (✗) | Linear (✓)                     | Non-deterministic via intersection with probabilistic LMs (✓) | Optimal Solution, Efficient Run-time (✓)
IDL (Nederhof&Satta 2004) | Word/Phrase based (✓)                   | Linear (✓)                     | Deterministic via intersection with CFGs (✗)                  | All Solutions, Efficient Run-time (✓)
IDL (this paper)          | Word/Phrase based (✓)                   | Linear (✓)                     | Non-deterministic via intersection with probabilistic LMs (✓) | Optimal Solution, Efficient Run-time (✓)

Table 1: Comparison of the present proposal with current NLG systems.
In Table 1, we present a summary of the representation and generation characteristics of current NLG systems. We mark by ✓ characteristics that are needed/desirable in a generation component for text-to-text applications, and by ✗ characteristics that make the proposal inapplicable or problematic. For instance, as already argued, the representation formalism of all previous proposals except for IDL is problematic (✗) for text-to-text applications. The IDL formalism, while applicable to text-to-text applications, has the additional desirable property that it is a compact representation, while formalisms such as word-lattices and non-recursive CFGs can have exponential size in the number of words available for generation (Nederhof and Satta, 2004).
While the IDL representational properties are all
desirable, the generation mechanism proposed for
IDL by Nederhof and Satta (2004) is problematic
(✗), because it does not allow for scoring and
ranking of candidate realizations. Their genera-
tion mechanism, while computationally efficient, in-
volves intersection with context free grammars, and
therefore works by excluding all realizations that are
not accepted by a CFG and including (without rank-
ing) all realizations that are accepted.
The approach to generation taken in this paper is presented in the last row in Table 1, and can be summarized as a tiling of the desirable (✓) generation characteristics of previous proposals (see the shaded area in Table 1). Our goal is to provide an optimal generation framework for text-to-text applications, in which the representation formalism, the generation mechanism, and the computational properties are all needed and desirable (✓). Toward this goal, we
present a new generation mechanism that intersects
IDL-expressions with probabilistic language mod-
els. The generation mechanism implements new al-
gorithms, which cover a wide spectrum of run-time
behaviors (from linear to exponential), depending on
the complexity of the input. We also present theoretical results concerning the correctness and the efficiency (relative to the complexity of the input IDL-expression) of our algorithms.
We evaluate these algorithms by performing ex-
periments on a challenging word-ordering task.
These experiments are carried out under a high-
complexity generation scenario: find the most prob-
able sentence realization under an n-gram language
model for IDL-expressions encoding bags-of-words
of size up to 25 (up to 10^25 possible realizations!).
Our evaluation shows that the proposed algorithms
are able to cope well with such orders of complex-
ity, while maintaining high levels of accuracy.
2 The IDL Language for NLG
2.1 IDL-expressions
IDL-expressions have been proposed by Nederhof
& Satta (2004) (henceforth N&S) as a representa-
tion for finite languages, and are created from strings
using four operators: concatenation (
), interleave
( ), disjunction ( ), and lock ( ). The semantics of
IDL-expressions is given in terms of sets of strings.
The concatenation (·) operator takes two arguments, and uses the strings encoded by its argument expressions to obtain concatenated strings that respect the order of the arguments; e.g., ·(a, b) encodes the singleton set {ab}. The interleave (‖) operator interleaves the strings encoded by its argument expressions; e.g., ‖(·(a, b), c) encodes the set {cab, acb, abc}. The disjunction (∨) operator allows a choice among the strings encoded by its argument expressions; e.g., ∨(a, b) encodes the set {a, b}. The lock (×) operator takes only one argument, and "locks-in" the strings encoded by its argument expression, such that no additional material can be interleaved; e.g., ‖(×(·(a, b)), c) encodes the set {cab, abc}.
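To make these semantics concrete, the sketch below models the four operators as operations on finite sets of token tuples. This is a toy Python rendering of ours, for illustration only; the names conc, disj, interleave, lock, and W are not part of the IDL formalism.

```python
from itertools import combinations

def conc(*langs):
    """Concatenation (.): one string from each argument, in the given order."""
    result = {()}
    for lang in langs:
        result = {x + y for x in result for y in lang}
    return result

def disj(*langs):
    """Disjunction (v): a choice among the argument languages."""
    return set().union(*langs)

def shuffle(x, y):
    """All interleavings of two token tuples, preserving each tuple's order."""
    out = set()
    for pos in combinations(range(len(x) + len(y)), len(x)):
        s, xi, yi = [], iter(x), iter(y)
        for i in range(len(x) + len(y)):
            s.append(next(xi) if i in pos else next(yi))
        out.add(tuple(s))
    return out

def interleave(*langs):
    """Interleave (||): all shuffles of one string from each argument."""
    result = {()}
    for lang in langs:
        result = {z for x in result for y in lang for z in shuffle(x, y)}
    return result

def lock(lang):
    """Lock (x): fuse each string into one unbreakable token."""
    return {(" ".join(s),) for s in lang}

W = lambda w: {(w,)}  # a single word as a one-string language
```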
Consider the following IDL-expression:

‖(finally, ·(∨(×(·(the, prisoners)), ×(·(the, captives))), were, released))   (1)
The concatenation (·) operator captures precedence
constraints, such as the fact that a determiner like
the appears before the noun it determines. The lock (×) operator enforces phrase-encoding constraints, such as the fact that the captives is a phrase which should be used as a whole. The disjunction (∨) operator allows for multiple word/phrase choice (e.g., the prisoners versus the captives), and the interleave (‖) operator allows for word-order freedom, i.e., word order underspecification at meaning representation level. Among the strings encoded by IDL-
expression 1 are the following:
finally the prisoners were released
the captives finally were released
the prisoners were finally released
The following strings, however, are not part of the
language defined by IDL-expression 1:
the finally captives were released
the prisoners were released
finally the captives released were
The first string is disallowed because the × operator locks the phrase the captives. The second string is not allowed because the ‖ operator requires all its arguments to be represented. The last string violates the order imposed by the precedence operator · between were and released.
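Under the toy rendering above, these membership judgments can be verified mechanically (again an illustration of ours, reusing the helper functions from the earlier sketch):

```python
# Build IDL-expression 1 with the helpers defined above.
e1 = interleave(W("finally"),
                conc(disj(lock(conc(W("the"), W("prisoners"))),
                          lock(conc(W("the"), W("captives")))),
                     W("were"), W("released")))
sentences = {" ".join(s) for s in e1}

assert "finally the prisoners were released" in sentences
assert "the captives finally were released" in sentences
assert "the finally captives were released" not in sentences  # violates the lock
assert "the prisoners were released" not in sentences         # drops 'finally'
assert "finally the captives released were" not in sentences  # violates precedence
```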
2.2 IDL-graphs
IDL-expressions are a convenient way to com-
pactly represent finite languages. However, IDL-
expressions do not directly allow formulations of
algorithms to process them. For this purpose, an
equivalent representation is introduced by N&S,
called IDL-graphs. We refer the interested reader to
the formal definition provided by N&S, and provide
here only an intuitive description of IDL-graphs.
We illustrate in Figure 1 the IDL-graph corresponding to IDL-expression 1. In this graph, vertices vs and ve are called initial and final, respectively. Vertices v0, with in-going ε-labeled edges, and v1, with out-going ε-labeled edges, for example, result from the expansion of the ‖ operator, while vertices v9, with in-going ε-labeled edges, and v3, with out-going ε-labeled edges, result from the expansion of the ∨ operator. Vertices v5 to v8 and v10 to v13 result from the expansion of the two × operators, respectively. These latter vertices are also shown to have rank 1, as opposed to rank 0 (not shown) assigned to all other vertices.
The ranking of vertices in an IDL-graph is needed
to enforce a higher priority on the processing of the
higher-ranked vertices, such that the desired seman-
tics for the lock operator is preserved.
With each IDL-graph G(π) we can associate a finite language: the set of strings that can be generated by an IDL-specific traversal of G(π), starting from vs and ending in ve. An IDL-expression π and its corresponding IDL-graph G(π) are said to be equivalent because they generate the same finite language, denoted L(π).
2.3 IDL-graphs and Finite-State Acceptors
To make the connection with the formulation of our
algorithms, in this section we link the IDL formal-
ism with the more classical formalism of finite-state
acceptors (FSA) (Hopcroft and Ullman, 1979). The
FSA representation can naturally encode precedence
and multiple choice, but it lacks primitives corre-
sponding to the interleave (‖) and lock (×) opera-
tors. As such, an FSA representation must explic-
itly enumerate all possible interleavings, which are
implicitly captured in an IDL representation. This
correspondence between implicit and explicit inter-
leavings is naturally handled by the notion of a cut of an IDL-graph G(π).
Intuitively, a cut through G(π) is a set of vertices that can be reached simultaneously when traversing G(π) from the initial node to the final node, following the branches as prescribed by the encoded ·, ∨, and ‖ operators, in an attempt to produce a string in L(π). More precisely, the initial vertex vs is considered a cut (Figure 2 (a)). For each vertex in a given cut, we create a new cut by replacing the start vertex of some edge with the end vertex of that edge, observing the following rules:
• the vertex that is the start of several ε-labeled edges introduced by the expansion of a ‖ operator is replaced by a sequence of all the end vertices of these edges (for example, v0v2 is a cut derived from vs (Figure 2 (b))); a mirror rule handles the ε-labeled edges that close off a ‖ operator;
• the vertex that is the start of an edge labeled using vocabulary items or ε is replaced by the end vertex of that edge (for example, v1v2, v0v3, and v0v6 are cuts derived from v0v2, v0v2, and v0v5, respectively, see Figure 2 (c-d)), only if the end vertex is not lower ranked than any of the vertices already present in the cut (for example, v1v6 is not a cut that can be derived from v0v6, see Figure 2 (e)).
Figure 1: The IDL-graph corresponding to the IDL-expression ‖(finally, ·(∨(×(·(the, prisoners)), ×(·(the, captives))), were, released)).
Figure 2: Cuts of the IDL-graph in Figure 1 (a-d). A non-cut is presented in (e).
Note the last part of the second rule, which restricts the set of cuts by using the ranking mechanism. If one would allow v1v6 to be a cut, one would imply that finally may appear inserted between the words of the locked phrase the prisoners.
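The role cuts play in the explicit unfolding can be illustrated on the simplest case, a plain interleave of word sequences. The sketch below (ours; locks and ranks are deliberately ignored, and the name interleave_fsa is an assumption) enumerates acceptor states as cuts, one position per branch:

```python
from itertools import product

def interleave_fsa(*branches):
    """Unfold the interleaving of word sequences into an explicit acceptor.
    A state is a 'cut': how far we have advanced into each branch."""
    start = (0,) * len(branches)
    final = tuple(len(b) for b in branches)
    trans = {}  # (state, word) -> next state; assumes distinct next-words per state
    for state in product(*(range(len(b) + 1) for b in branches)):
        for i, branch in enumerate(branches):
            if state[i] < len(branch):
                nxt = state[:i] + (state[i] + 1,) + state[i + 1:]
                trans[(state, branch[state[i]])] = nxt
    return start, final, trans

start, final, trans = interleave_fsa(
    ["finally"], ["the", "prisoners", "were", "released"])
print(len({s for s, _ in trans} | {final}))  # 10 states: branch sizes multiply
```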
We now link the IDL formalism with the FSA formalism by providing a mapping from an IDL-graph G(π) to an acyclic finite-state acceptor A(π). Because both formalisms are used for representing finite languages, they have equivalent representational power. The IDL representation is much more compact, however, as one can observe by comparing the IDL-graph in Figure 1 with the equivalent finite-state acceptor in Figure 3. The set of states of A(π) is the set of cuts of G(π). The initial state of the finite-state acceptor is the state corresponding to cut vs, and the final states of the finite-state acceptor are the states corresponding to cuts that contain ve.
In what follows, we denote a state of A(π) by the name of the cut to which it corresponds.
Figure 3: The finite-state acceptor corresponding to the IDL-graph in Figure 1.
A transition labeled α between state c1 and state c2 of A(π) occurs if there is an edge labeled α in G(π) whose traversal derives cut c2 from cut c1. For the example in Figure 3, the transition labeled were between states v0v14 and v0v15 occurs because of the edge labeled were between nodes v14 and v15 (Figure 1), whereas the transition labeled finally between states v0v14 and v1v14 occurs because of the edge labeled finally between nodes v0 and v1 (Figure 1). The two representations G(π) and A(π) are equivalent in the sense that the language generated by IDL-graph G(π) is the same as the language accepted by FSA A(π).
It is not hard to see that the conversion from the
IDL representation to the FSA representation de-
stroys the compactness property of the IDL formal-
ism, because of the explicit enumeration of all possi-
ble interleavings, which causes certain labels to ap-
pear repeatedly in transitions. For example, a tran-
sition labeled finally appears 11 times in the finite-
state acceptor in Figure 3, whereas an edge labeled
finally appears only once in the IDL-graph in Fig-
ure 1.
3 Computational Properties of
IDL-expressions
3.1 IDL-graphs and Weighted Finite-State
Acceptors
As mentioned in Section 1, the generation mecha-
nism we propose performs an intersection of IDL-
expressions with n-gram language models. Follow-
ing (Mohri et al., 2002; Knight and Graehl, 1998),
we implement language models using weighted
finite-state acceptors (wFSA). In Section 2.3, we
presented a mapping from an IDL-graph G(π) to a finite-state acceptor A(π). From such a finite-state acceptor A(π), we arrive at a weighted finite-state acceptor W(π), by splitting the states of A(π) according to the information needed by the language model to assign weights to transitions. For example, under a bigram language model LM, state v1v14 in Figure 3 must be split into three different states, (prisoners, v1v14), (captives, v1v14), and (finally, v1v14), according to which (non-epsilon) transition was last used to reach this state. The transitions leaving these states have the same labels as those leaving state v1v14, and are now weighted using the language model probability distributions pLM(·|prisoners), pLM(·|captives), and pLM(·|finally), respectively.
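A minimal sketch of this state-splitting step, assuming an unweighted acceptor encoded as a dictionary of (state, word) → next-state transitions and a bigram log-probability callback (both representations are ours, for illustration):

```python
from collections import deque

def split_states_for_bigram(trans, start, logprob):
    """Refine acceptor states with the last word read, so that a bigram
    model can weight each transition; epsilon moves preserve the context."""
    new_trans = {}   # ((state, last), word) -> ((next, last'), cost)
    seen = {(start, "<s>")}
    queue = deque(seen)
    while queue:
        state, last = queue.popleft()
        for (src, word), dst in trans.items():
            if src != state:
                continue
            nxt_last = last if word == "<eps>" else word
            cost = 0.0 if word == "<eps>" else -logprob(word, last)
            new_trans[((state, last), word)] = ((dst, nxt_last), cost)
            if (dst, nxt_last) not in seen:
                seen.add((dst, nxt_last))
                queue.append((dst, nxt_last))
    return new_trans
```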
Note that, at this point, we already have a naïve algorithm for intersecting IDL-expressions with n-gram language models. From an IDL-expression π, following the mapping π → G(π) → A(π) → W(π), we arrive at a weighted finite-state acceptor, on which we can use a single-source shortest-path algorithm for directed acyclic graphs (Cormen et al., 2001) to extract the realization corresponding to the most probable path.
gorithm, however, is that the premature unfolding of
the IDL-graph into a finite-state acceptor destroys
the representation compactness of the IDL repre-
sentation. For this reason, we devise algorithms
that, although similar in spirit with the single-source
shortest-path algorithm for directed acyclic graphs,
perform on-the-fly unfolding of the IDL-graph, with
a mechanism to control the unfolding based on the
scores of the paths already unfolded. Such an ap-
proach has the advantage that prefixes that are ex-
tremely unlikely under the language model may be
regarded as not so promising, and parts of the IDL-
expression that contain them may not be unfolded,
leading to significant savings.
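For reference, the extraction step of the naïve algorithm is the standard single-source shortest-path dynamic program over a DAG. A sketch of ours, assuming edge lists of (word, cost, target) triples with costs equal to negative log probabilities:

```python
import math

def most_probable_path(edges, start, finals, topo_order):
    """Shortest path over an acyclic wFSA visited in topological order;
    minimizing summed -log probabilities maximizes path probability."""
    dist = {start: 0.0}
    back = {}
    for u in topo_order:
        if u not in dist:
            continue
        for word, cost, v in edges.get(u, []):
            if dist[u] + cost < dist.get(v, math.inf):
                dist[v] = dist[u] + cost
                back[v] = (u, word)
    best = min((f for f in finals if f in dist), key=dist.get)
    words, node = [], best
    while node in back:
        node, word = back[node]
        words.append(word)
    return " ".join(reversed(words))
```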
3.2 Generation via Intersection of
IDL-expressions with Language Models
Algorithm IDL-NGLM-BFS The first algorithm
that we propose is algorithm IDL-NGLM-BFS in
Figure 4. The algorithm builds a weighted finite-
state acceptor W(π) corresponding to an IDL-graph G(π) incrementally, by keeping track of a set of active states, called active. The incrementality comes from creating new transitions and states in W(π) originating in these active states, by unfolding the IDL-graph G(π); the set of newly unfolded states is called unfold. The new transitions in W(π) are weighted
IDL-NGLM-BFS(G, LM)
1  active ← [vs]
2  flag ← 1
3  while flag
4    do unfold ← UNFOLDIDLG(active, G)
5       EVALUATENGLM(unfold, LM)
6       if FINALIDLG(unfold, G)
7         then flag ← 0
8       active ← unfold
9  return active

Figure 4: Pseudo-code for intersecting an IDL-graph G with an n-gram language model LM using incremental unfolding and breadth-first search.
according to the language model. If a final state of W(π) is not yet reached, the while loop is closed by making the unfold set of states the next active set of states. Note that this is actually a breadth-first search (BFS) with incremental unfolding. This algorithm still unfolds the IDL-graph completely, and therefore suffers from the same drawback as the naïve algorithm.
The interesting contribution of algorithm IDL-NGLM-BFS, however, is the incremental unfolding. If, instead of line 8 in Figure 4, we introduce mechanisms to control which unfold states become part of the active state set for the next unfolding iteration, we obtain a series of more effective algorithms.
Algorithm IDL-NGLM-A We arrive at algo-
rithm IDL-NGLM-A by modifying line 8 in Fig-
ure 4, thus obtaining the algorithm in Figure 5. We
use as control mechanism a priority queue, ,
in which the states from are PUSH-ed, sorted
according to an admissible heuristic function (Rus-
sell and Norvig, 1995). In the next iteration,
is a singleton set containing the state POP-ed out
from the top of the priority queue.
Algorithm IDL-NGLM-BEAM We arrive at algorithm IDL-NGLM-BEAM by again modifying line 8 in Figure 4, thus obtaining the algorithm in Figure 6. We control the unfolding using a probabilistic beam b, which, via the BEAMSTATES function, selects as active states only the states in unfold
IDL-NGLM-A*(G, LM)
1  active ← [vs]
2  flag ← 1
3  while flag
4    do unfold ← UNFOLDIDLG(active, G)
5       EVALUATENGLM(unfold, LM)
6       if FINALIDLG(unfold, G)
7         then flag ← 0
8       for each state in unfold
           do PUSH(q, state)
         active ← [POP(q)]
9  return active

Figure 5: Pseudo-code for intersecting an IDL-graph G with an n-gram language model LM using incremental unfolding and A* search.
IDL-NGLM-BEAM(G, LM, b)
1  active ← [vs]
2  flag ← 1
3  while flag
4    do unfold ← UNFOLDIDLG(active, G)
5       EVALUATENGLM(unfold, LM)
6       if FINALIDLG(unfold, G)
7         then flag ← 0
8       active ← BEAMSTATES(unfold, b)
9  return active

Figure 6: Pseudo-code for intersecting an IDL-graph G with an n-gram language model LM using incremental unfolding and probabilistic beam search.
reachable with a probability higher than or equal to the current maximum probability times the probability beam b.
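The three algorithms differ only in the control step that replaces line 8. A sketch of this shared skeleton (ours, not the paper's implementation), assuming state objects that carry an accumulated negative-log-probability cost and treating UNFOLDIDLG, EVALUATENGLM, and FINALIDLG as callbacks:

```python
import heapq, math

def idl_nglm(unfold, evaluate, is_final, control, initial):
    """Shared skeleton: unfold the IDL-graph from the active states,
    LM-score the new transitions, stop when final states appear;
    'control' decides which unfolded states stay active."""
    active = [initial]
    while True:
        unfolded = unfold(active)     # UNFOLDIDLG: new wFSA states
        evaluate(unfolded)            # EVALUATENGLM: attach -log p costs
        finals = [s for s in unfolded if is_final(s)]
        if finals:
            return min(finals, key=lambda s: s.cost)
        active = control(unfolded)

def bfs_control(states):              # IDL-NGLM-BFS: keep everything
    return list(states)

def astar_control(heuristic):
    """IDL-NGLM-A*: keep only the most promising state, by cost + heuristic."""
    queue = []
    def control(states):
        for s in states:
            heapq.heappush(queue, (s.cost + heuristic(s), id(s), s))
        return [heapq.heappop(queue)[2]]
    return control

def beam_control(b):
    """IDL-NGLM-BEAM: keep states with probability >= b * best probability,
    i.e. cost <= best cost - log(b)."""
    def control(states):
        best = min(s.cost for s in states)
        return [s for s in states if s.cost <= best - math.log(b)]
    return control
```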
3.3 Computing Admissible Heuristics for
IDL-expressions
The IDL representation is ideally suited for com-
puting accurate admissible heuristics under lan-
guage models. These heuristics are needed by the
IDL-NGLM-A* algorithm, and are also employed
for pruning by the IDL-NGLM-BEAM algorithm.
For each state s in a weighted finite-state acceptor W(π) corresponding to an IDL-graph G(π), one can efficiently extract from G(π) – without further unfolding – the set¹ of all edge labels that can be used to reach the final states of W(π). This set of labels, denoted FE(s), is an overestimation of the set of future events reachable from s, because the labels under the ∨ operators are all considered. From FE(s) and the n−1 labels (when using an n-gram language model) recorded in state s we obtain the set of label sequences of length n−1. This set, denoted FCE(s), is an (over)estimated set of possible future conditioning events for state s, guaranteed to contain the most cost-efficient future conditioning events for state s.
Using FCE(s), one needs to extract from FE(s) the set of most cost-efficient future events from under each ∨ operator. We use this set, denoted FE∨(s), to arrive at an admissible heuristic h(s) for state s under a language model LM, using Equation 2:

h(s) = Σ_{e ∈ FE∨(s)} min_{c ∈ FCE(s)} (−log P_LM(e|c))   (2)

¹ Actually, these are multisets, as we treat multiply-occurring labels as separate items.
If h*(s) is the true future cost for state s, we guarantee that h(s) ≤ h*(s) from the way FE∨(s) and FCE(s) are constructed. Note that, as it usually happens with admissible heuristics, we can make h(s) come arbitrarily close to h*(s), by computing increasingly better approximations of FCE(s). Such approximations, however, require increasingly advanced unfoldings of the IDL-graph G(π) (a complete unfolding of G(π) for state s gives FCE(s) equal to the true set of conditioning events, and consequently h(s) = h*(s)). It follows that arbitrarily accurate admissible heuristics exist for IDL-expressions, but computing them on-the-fly requires finding a balance between the time and space requirements for computing better heuristics and the speed-up obtained by using them in the search algorithms.
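A direct transcription of Equation 2 (a sketch of ours, with fe_events standing for FE∨(s), fce_contexts for FCE(s), and logprob an assumed language model callback):

```python
def admissible_heuristic(fe_events, fce_contexts, logprob):
    """Equation 2: for every future event e in FE_v(s), charge the
    cheapest cost over all conditioning events c in FCE(s); since the
    true future cost can only be higher, the estimate is admissible."""
    return sum(min(-logprob(e, c) for c in fce_contexts) for e in fe_events)
```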
3.4 Formal Properties of IDL-NGLM
algorithms
The following theorem states the correctness of our
algorithms, in the sense that they find the maximum
probability path encoded by an IDL-graph under an
n-gram language model.
Theorem 1 Let π be an IDL-expression, G(π) its IDL-graph, and W(π) its wFSA under an n-gram language model LM. Algorithms IDL-NGLM-BFS and IDL-NGLM-A* find the path of maximum probability under LM. Algorithm IDL-NGLM-BEAM finds the path of maximum probability under LM, if all states in W(π) along this path are selected by its BEAMSTATES function.
The proof of the theorem follows directly from the correctness of the BFS and A* search, and from the condition imposed on the beam search.
The next theorem characterizes the run-time complexity of these algorithms, in terms of the complexity of the input IDL-expression and its corresponding IDL-graph. There are three factors that linearly influence the run-time complexity of our algorithms: a is the maximum number of nodes in G(π) needed to represent a state in A(π) – a depends solely on π; k is the maximum number of nodes in G(π) needed to represent a state in W(π) – k depends on π and n, the length of the context used by the n-gram language model; and K is the number of states of W(π) – K also depends on π and n. Of these three factors, K is by far the predominant one, and we simply call K the complexity of an IDL-expression.
Theorem 2 Let π be an IDL-expression, G(π) its IDL-graph, A(π) its FSA, and W(π) its wFSA under an n-gram language model. Let VA be the set of states of A(π), and VW the set of states of W(π). Let also a = max_{c∈VA} |c|, k = max_{c∈VW} |c|, and K = |VW|. Algorithms IDL-NGLM-BFS and IDL-NGLM-BEAM have run-time complexity O(akK). Algorithm IDL-NGLM-A* has run-time complexity O(akK log K).
We omit the proof here due to space constraints. The
fact that the run-time behavior of our algorithms is
linear in the complexity of the input IDL-expression
(with an additional log factor in the case of A*
search due to priority queue management) allows us
to say that our algorithms are efficient with respect
to the task they accomplish.
We note here, however, that depending on the
input IDL-expression, the task addressed can vary
in complexity from linear to exponential. That
is, for the intersection of an IDL-expression ‖(w1, w2, …, wn) (a bag of words) with a trigram language model, we have a = n, k = n, and K = Θ(n²2ⁿ), and therefore an exponential complexity. This exponential complexity comes as no surprise given that the problem of intersecting an n-gram language model with a bag of words is known to be NP-complete (Knight, 1999). On the other hand, for intersecting an IDL-expression ·(w1, w2, …, wn) (a sequence of words) with a trigram language model, we have a = 1, k = 1, and K = O(n), and therefore an O(n) generation algorithm.
In general, for IDL-expressions for which is
bounded, which we expect to be the case for most
practical problems, our algorithms perform in poly-
nomial time in the number of words available for
generation.
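To give a feel for the two extremes, the sketch below contrasts state-count estimates for the two cases discussed above; the formulas restate the K estimates given in the text and are illustrative assumptions, not derivations from the paper.

```python
def bag_wfsa_states(n, order=3):
    """Assumed estimate for a bag of n words under an m-gram LM: a state
    records the subset of words already used plus the last m-1 words."""
    return (2 ** n) * n ** (order - 1)

def seq_wfsa_states(n):
    """A fixed word sequence needs only one state per position."""
    return n + 1

for n in (5, 10, 25):
    print(n, seq_wfsa_states(n), bag_wfsa_states(n))  # linear vs. exponential
```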
4 Evaluation of IDL-NGLM Algorithms
In this section, we present results concerning
the performance of our algorithms on a word-
ordering task. This task can be easily defined as
follows: from a bag of words originating from
some sentence, reconstruct the original sentence as
faithfully as possible. In our case, from an original
sentence such as “the gifts are donated by amer-
ican companies”, we create the IDL-expression ·(<s>, ‖(the, gifts, are, donated, by, american, companies), </s>), from which some algorithm realizes a sentence such as “donated by the american companies are gifts”. Note the natural way we represent in an IDL-expression beginning- and end-of-sentence constraints, using the · operator. Since this is
generation from bag-of-words, the task is known to
be at the high-complexity extreme of the run-time
behavior of our algorithms. As such, we consider it
a good test for the ability of our algorithms to scale
up to increasingly complex inputs.
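A sketch of how a bag of words is turned into the textual form of such an IDL-expression (the ASCII spellings .(...) and ||(...) for the operators are ours, for illustration):

```python
def bag_to_idl(sentence):
    """Textual IDL-expression for the word-ordering task: a bag of words
    wrapped between sentence-boundary markers via concatenation."""
    bag = ", ".join(sentence.split())
    return f".(<s>, ||({bag}), </s>)"

print(bag_to_idl("the gifts are donated by american companies"))
# .(<s>, ||(the, gifts, are, donated, by, american, companies), </s>)
```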
We use a state-of-the-art, publicly available
toolkit² to train a trigram language model using
Kneser-Ney smoothing, on 10 million sentences
(170 million words) from the Wall Street Journal
(WSJ), lower case and no final punctuation. The test
data is also lower case (such that upper-case words
cannot be hypothesized as first words), with final
punctuation removed (such that periods cannot be
hypothesized as final words), and consists of 2000
unseen WSJ sentences of length 3-7, and 2000 un-
seen WSJ sentences of length 10-25.
The algorithms we tested in these experiments were
the ones presented in Section 3.2, plus two baseline
algorithms. The first baseline algorithm, L, uses an
² http://www.speech.sri.com/projects/srilm/
inverse-lexicographic order for the bag items as its
output, in order to get the word the on sentence ini-
tial position. The second baseline algorithm, G, is
a greedy algorithm that realizes sentences by maxi-
mizing the probability of joining any two word se-
quences until only one sequence is left.
For the A* algorithm, an admissible cost is computed for each state in a weighted finite-state automaton, as the sum (over all unused words) of the minimum language model cost (i.e., maximum probability) of each unused word when conditioning over all sequences of two words available at that particular state for future conditioning (see Equation 2, with n = 3). These estimates are also used by
the beam algorithm for deciding which IDL-graph
nodes are not unfolded. We also test a greedy ver-
sion of the A algorithm, denoted A , which con-
siders for unfolding only the nodes extracted from
the priority queue which already unfolded a path of
length greater than or equal to the maximum length
already unfolded minus
(in this notation, the A
algorithm would be denoted A ). For the beam al-
gorithms, we use the notation B to specify a proba-
bilistic beam of size , i.e., an algorithm that beams
out the states reachable with probability less than the
current maximum probability times .
Our first batch of experiments concerns bags-of-
words of size 3-7, for which exhaustive search is
possible. In Table 2, we present the results on the
word-ordering task achieved by various algorithms.
We evaluate accuracy performance using two auto-
matic metrics: an identity metric, ID, which mea-
sures the percent of sentences recreated exactly, and
BLEU (Papineni et al., 2002), which gives the ge-
ometric average of the number of uni-, bi-, tri-, and
four-grams recreated exactly. We evaluate the search
performance by the percent of Search Errors made
by our algorithms, as well as a percent figure of Es-
timated Search Errors, computed as the percent of
searches that result in a string with a lower proba-
bility than the probability of the original sentence.
To measure the impact of using IDL-expressions for
this task, we also measure the percent of unfolding
of an IDL graph with respect to a full unfolding. We
report speed results as the average number of sec-
onds per bag-of-words, when using a 3.0GHz CPU
machine under a Linux OS.
The first notable result in Table 2 is the savings
ALG    ID (%)  BLEU  Search Errors (Est.) (%)  Unfold (%)  Speed (sec./bag)
L        2.5    9.5  97.2 (95.8)               N/A          .000
G       30.9   51.0  67.5 (57.6)               N/A          .000
BFS     67.1   79.2   0.0 (0.0)                100.0        .072
A*      67.1   79.2   0.0 (0.0)                 12.0        .010
A*_1    60.5   74.8  21.1 (11.9)                 3.2        .004
A*_2    64.3   77.2   8.5 (4.0)                  5.3        .005
B_0.2   65.0   78.0   9.2 (5.0)                  7.2        .006
B_0.1   66.6   78.8   3.2 (1.7)                 13.2        .011

Table 2: Bags-of-words of size 3-7: accuracy (ID, BLEU), Search Errors (and Estimated Search Errors), space savings (Unfold), and speed results.
achieved by the A* algorithm under the IDL representation. At no cost in accuracy, it unfolds only 12% of the edges, and achieves a 7 times speed-up, compared to the BFS algorithm. The savings achieved by not unfolding are especially important, since the exponential complexity of the problem is hidden by the IDL representation via the folding mechanism of the ‖ operator. The algorithms that find sub-optimal solutions also perform well. While maintaining high accuracy, the A*_2 and B_0.2 algorithms unfold only about 5-7% of the edges, at 12-14 times speed-up.
Our second batch of experiments concerns bags-of-words of size 10-25, for which exhaustive search
is no longer possible (Table 3). Not only exhaustive
search, but also full A* search is too expensive in terms of memory (we were limited to 2GiB of RAM for our experiments) and speed. Only the greedy versions A*_1 and A*_2, and the beam search using tight probability beams (B_0.2 and B_0.1) scale up to these bag sizes. Because we no longer have access to the string of maximum probability, we report only the percent of Estimated Search Errors. Note that, in terms of accuracy, we get around 20% Estimated Search Errors for the best performing algorithms (A*_2 and B_0.1), which means that 80% of the time the algorithms are able to find sentences of equal or better probability than the original sentences.
5 Conclusions
In this paper, we advocate that IDL expressions
can provide an adequate framework for develop-
ALG    ID (%)  BLEU  Est. Search Errors (%)  Speed (sec./bag)
L        0.0    1.4  99.9                     0.0
G        1.2   31.6  83.6                     0.0
A*_1     5.8   47.7  34.0                     0.7
A*_2     7.4   51.2  21.4                     9.5
B_0.2    9.0   52.1  23.3                     7.1
B_0.1   12.2   52.6  19.9                    36.7

Table 3: Bags-of-words of size 10-25: accuracy (ID, BLEU), Estimated Search Errors, and speed results.
ing text-to-text generation capabilities. Our contri-
bution concerns a new generation mechanism that
implements intersection between an IDL expression
and a probabilistic language model. The IDL for-
malism is ideally suited for our approach, due to
its efficient representation and, as we show in this
paper, efficient algorithms for intersecting, scoring,
and ranking sentence realizations using probabilistic
language models.
We present theoretical results concerning the cor-
rectness and efficiency of the proposed algorithms,
and also present empirical results that show that
our algorithms scale up to handling IDL-expressions
of high complexity. Real-world text-to-text genera-
tion tasks, such as headline generation and machine
translation, are likely to be handled gracefully in this
framework, as the complexity of IDL-expressions
for these tasks tends to be lower than the complex-
ity of the IDL-expressions we worked with in our
experiments.
Acknowledgment
This work was supported by DARPA-ITO grant
NN66001-00-1-9814.
References
Srinivas Bangalore and Owen Rambow. 2000. Using
TAG, a tree model, and a language model for genera-
tion. In Proceedings of the 1st International Natural
Language Generation Conference.
Regina Barzilay. 2003. Information Fusion for Multi-
document Summarization: Paraphrasing and Genera-
tion. Ph.D. thesis, Columbia University.
Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della
Pietra, and Robert L. Mercer. 1993. The mathematics
of statistical machine translation: Parameter estima-
tion. Computational Linguistics, 19(2):263–311.
Thomas H. Cormen, Charles E. Leiserson, Ronald L.
Rivest, and Clifford Stein. 2001. Introduction to Al-
gorithms. The MIT Press and McGraw-Hill. Second
Edition.
Simon Corston-Oliver, Michael Gamon, Eric K. Ringger,
and Robert Moore. 2002. An overview of Amalgam:
A machine-learned generation module. In Proceed-
ings of the International Natural Language Genera-
tion Conference.
Michael Elhadad. 1991. FUF User manual — version
5.0. Technical Report CUCS-038-91, Department of
Computer Science, Columbia University.
John E. Hopcroft and Jeffrey D. Ullman. 1979. Introduc-
tion to automata theory, languages, and computation.
Addison-Wesley.
Kevin Knight and Jonathan Graehl. 1998. Machine
transliteration. Computational Linguistics, 24(4):599–
612.
Kevin Knight and Vasileios Hatzivassiloglou. 1995. Two
level, many-path generation. In Proceedings of the Association for Computational Linguistics.
Kevin Knight. 1999. Decoding complexity in word-
replacement translation models. Computational Lin-
guistics, 25(4):607–615.
Irene Langkilde-Geary. 2002. A foundation for general-
purpose natural language generation: sentence real-
ization using probabilistic models of language. Ph.D.
thesis, University of Southern California.
Christian Matthiessen and John Bateman. 1991. Text
Generation and Systemic-Functional Linguistics. Pin-
ter Publishers, London.
Mehryar Mohri, Fernando Pereira, and Michael Ri-
ley. 2002. Weighted finite-state transducers in
speech recognition. Computer Speech and Language,
16(1):69–88.
Mark-Jan Nederhof and Giorgio Satta. 2004. IDL-
expressions: a formalism for representing and parsing
finite languages in natural language processing. Jour-
nal of Artificial Intelligence Research, 21:287–317.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-
Jing Zhu. 2002. BLEU: a method for automatic evalu-
ation of machine translation. In Proceedings of the As-
sociation for Computational Linguistics (ACL-2002),
pages 311–318, Philadelphia, PA, July 7-12.
Stuart Russell and Peter Norvig. 1995. Artificial Intelli-
gence: A Modern Approach. Prentice Hall, Englewood
Cliffs, New Jersey.