Dialog Control in a Natural Language System¹

Michael Gerlach, Helmut Horacek
Universität Hamburg, Fachbereich Informatik, Projektgruppe WISBER
ABSTRACT
In this paper a method for controlling the dialog in a natural language (NL) system is presented. It provides a deep modeling of information processing based on time-dependent propositional attitudes of the interacting agents. Knowledge about the state of the dialog is represented in a dedicated language, and changes of this state are described by a compact set of rules. An appropriate organization of rule application is introduced, including the initiation of an adequate system reaction. Finally, the application of the method in an NL consultation system is outlined.
INTRODUCTION
The solution of complex problems frequently requires the cooperation of multiple agents. A great deal of interaction is needed to identify suitable tasks whose completion contributes to attaining a common goal, and to organize those tasks appropriately. In particular, this involves carrying out communicative subtasks including the transfer of knowledge, the adjustment of beliefs, expressing wants and pursuing their satisfaction, all of which is motivated by the intentions of the interacting agents [Werner 88]. An ambitious dialog system (be it an interface, a manipulation system, or a consultation system) which is intended to exhibit (some of) these capabilities should therefore consider these intentions in processing
1 The work described in this paper is part of the joint project WISBER, which is supported by the German Federal Ministry for Research and Technology under grant ITW-8502. The partners in the project are: Nixdorf Computer AG, SCS Orbit GmbH, Siemens AG, the University of Hamburg, and the University of Saarbrücken.
the dialog, at least to the extent that is required for the particular type of dialog and the domain of application.
A considerable amount of work in current AI research is concerned with inferring intentions from utterances (e.g., [Allen 83], [Carberry 83], [Grosz, Sidner 86]) or planning speech acts serving certain goals (e.g., [Appelt 85]), but only a few uniform approaches to both aspects have been presented.

Most approaches to dialog control described in the literature offer either rigid action schemata that enable the simulation of the desired behavior on the surface (but lack the necessary degree of flexibility, e.g., [Metzing 79]), or descriptive methods which may also include possible alternatives for the continuation of the dialog, but without expressing criteria to guide an adequate choice among them (e.g., [Meßing et al. 87], [Bergmann, Gerlach 87]).
Modeling of beliefs and intentions (i.e., of propositional attitudes) of the agents involved is found only in the ARGOT system [Litman, Allen 84]. This approach behaves sufficiently well in several isolated situations, but it fails to demonstrate a continuously adequate behavior in the course of a complete dialog. An elaborated theoretical framework is provided in [Cohen, Perrault 79], but they explicitly exclude the deletion of propositional attitudes. Hence, they cannot explain what happens when a want has been satisfied.

In our approach we have enhanced the propositional attitudes by associating them with time intervals expressing their time of validity. This enables us to represent the knowledge about the actual state of the dialog (and also about past states) seen from the point of view of a certain agent, and to express changes in the propositional attitudes
occurring in the course of the dialog, and to calculate their effect. This deep modeling is the essential resource for controlling the progress of the conversation in approaching its overall goal, and, in particular, for determining the next subgoal in the conversation which manifests itself in a system utterance.

We have applied our method in the NL consultation system WISBER ([Horacek et al. 88], [Sprenger, Gerlach 88]) which is able to participate in a dialog in the domain of financial investment.
REPRESENTING PROPOSITIONAL ATTITUDES
Knowledge about the state of the dialog is represented as a set of propositional attitudes. The following three types of propositional attitudes of an agent towards a proposition p form a basic repertoire:

KNOW: The agent is sure that p is true. This does not imply that p is really true, since the system has no means to find out the real state of the world. Assuming that the user of a dialog system obeys the sincerity condition (i.e., always telling the truth, cf. [Grice 75]), an assertion uttered by the user implies that the user knows the content of that assertion.

BELIEVE: The agent believes, but is not sure, that p is true, or he/she assumes p without sufficient evidence.

WANT: The agent wants p to be true.
Propositional attitudes are represented in our semantic representation language IRS, which is used by all system components involved in semantic-pragmatic processing. IRS is based on predicate calculus, and contains a rich collection of additional features required by NL processing (see [Bergmann et al. 87] for detailed information). A propositional attitude is written as (<type> <agent> <prop> <time>):

• <type> is an element of the set {KNOW, BELIEVE, WANT}.

• The two agents relevant in a dialog system are the USER and the SYSTEM. In addition, we use the notion 'mutual knowledge'. Informally, this means that both the user and the system know that <prop> is true, and that each knows that the other knows, recursively. We will use the notation (KNOW MUTUAL <prop> ...) to express that the proposition <prop> is mutually known by the user and the system.

• <prop> is an IRS formula denoting the proposition the attitude is about. It may again be a propositional attitude, as in (WANT USER (KNOW USER x ...)), which means that the user wants to know x. The proposition may also contain the meta-predicates RELATED and AUGMENT: (RELATED x) means 'something which is related to the individual x', i.e., it must be possible to establish a chain of links connecting the individual and the proposition. In this general form it is used to express assumptions about the user's competence. For a more intensive application, however, further conditions must be put on the connecting links. (AUGMENT f) means 'something more specific than the formula f', i.e., at least one of the variables must be quantified or categorized more precisely, or additional propositions must be associated. These meta-predicates are used by the dialog control rules as a very compact way of expressing general properties of propositions.

• Propositional attitudes, like any other states, hold during a period of time. In WISBER we use Allen's time logic [Allen 84] to represent such temporal information [Poesio 88]. <time> must be an individual of type TIME-INTERVAL. In this paper, however, for the sake of brevity we will use almost exclusively the special constants NOW, PAST and FUTURE, denoting time intervals which are asserted to be during, before or after the current time.
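The quadruple notation above can be sketched as a simple nested data structure. The Python names below (the Attitude class, the string constants for agents, and the interval stand-ins) are illustrative inventions, not the actual IRS syntax:

```python
from dataclasses import dataclass
from typing import Union

# Stand-ins for the basic repertoire and the agents; MUTUAL models the
# 'mutual knowledge' notation (KNOW MUTUAL <prop> ...).
TYPES = {"KNOW", "BELIEVE", "WANT"}
AGENTS = {"USER", "SYSTEM", "MUTUAL"}

@dataclass(frozen=True)
class Attitude:
    type: str                      # element of {KNOW, BELIEVE, WANT}
    agent: str                     # USER, SYSTEM, or MUTUAL
    prop: Union["Attitude", str]   # proposition; may itself be an attitude
    time: str                      # NOW, PAST, FUTURE (stand-ins for intervals)

    def __post_init__(self):
        assert self.type in TYPES and self.agent in AGENTS

# "The user wants to know x", i.e. (WANT USER (KNOW USER x FUTURE) NOW):
want = Attitude("WANT", "USER", Attitude("KNOW", "USER", "x", "FUTURE"), "NOW")
```

The nesting of `prop` mirrors the paper's point that the proposition of an attitude may again be a propositional attitude.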
INFERENCE RULES
A s n e w information is provided by the user and inferences are m a d e by the system, the set of propositional atti- tudes to be represented in the system will evolve While the semantic-prag- matic analysis of user utterances ex- ploits linguistic features to derive the
Trang 3attitudes expressed by the u t t e r a n c e s
(c.f [Gerlach, Sprenger 88]), the dialog
control c o m p o n e n t i n t e r p r e t s r u l e s
which embody knowledge about know-
ing and w a n t i n g as well as about the
domain of discourse These rules de-
scribe communicative as well as r/on-
c o m m u n i c a t i v e actions, a n d specify
how new propositional attitudes can be
derived Rules about the domain of dis-
course express state changes including
the involved action The related states
a n d the t r i g g e r i n g action are associated
with time-intervals so t h a t the correct
temporal sequence can be derived
Both classes of rules are represented in a uniform formalism based on the schema precondition - action - effect:

• The precondition consists of patterns of propositional attitudes or states in the domain of discourse. The patterns may contain temporal restrictions as well as the meta-predicates mentioned above. A precondition may also contain a rule description, e.g., to express that an agent knows a rule.

• The action of a rule may be located on the level of communication (in the case of speech act triggering rules) or on the level of the domain (actions the dialog is about). However, there are also pure inference rules in the dialog control module; their action part is void.

• The effect of a rule is a set of descriptions of states of the world and propositional attitudes which are instantiated when applying the rule, yielding new entries in the system's knowledge base. We do not delete propositional attitudes or other propositions, i.e., the system will not forget them, but we can mark the time interval associated with an entry as being 'finished'. Thus we can express that the entry is no longer valid, and it will no longer match a pattern with the time of validity restricted to NOW.
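A minimal sketch of this precondition-action-effect schema, assuming a flat list of knowledge-base entries and a boolean 'finished' flag as a stand-in for closing an Allen-style time interval (all names below are invented for illustration):

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Entry:
    fact: tuple            # e.g. ("BELIEVE", "USER", "p", "NOW")
    finished: bool = False # closed interval: entry is kept but no longer valid

@dataclass
class Rule:
    precondition: Callable[[List[tuple]], bool]  # patterns over open entries
    action: Callable[[], None]                   # communicative/domain act
    effect: List[tuple]                          # facts instantiated on firing

def current(kb):
    # Only entries whose interval is still open match NOW-restricted patterns.
    return [e.fact for e in kb if not e.finished]

def apply_rule(rule, kb):
    if rule.precondition(current(kb)):
        rule.action()
        kb.extend(Entry(f) for f in rule.effect)  # add entries, never delete
        return True
    return False

# A pure inference rule in the consistency-maintenance spirit: a confirmed
# BELIEVE is superseded by a KNOW with identical propositional content.
kb = [Entry(("BELIEVE", "USER", "p", "NOW"))]
inference = Rule(
    precondition=lambda facts: ("BELIEVE", "USER", "p", "NOW") in facts,
    action=lambda: None,  # void action part
    effect=[("KNOW", "USER", "p", "NOW")],
)
apply_rule(inference, kb)
kb[0].finished = True  # close the BELIEVE's interval; the entry is not forgotten
```

After the last line, `current(kb)` contains only the KNOW entry, while the finished BELIEVE entry remains in the knowledge base.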
CONTROL STRUCTURE
So far, we have only discussed how the actual state of the dialog (from the point of view of a certain agent) can be represented and how changes in this state can be described. We still need a method to determine and carry out the relevant changes, given a certain state of the dialog, after interpreting a user utterance (i.e., to decide which dialog rules may be tried and in which order). For reasons of simplicity we have divided the set of rules into three subsets, each of them being responsible for accomplishing a specific subtask, namely:

• gaining additional information inferable from the interrelation between recent information coming from the last user utterance and the actual dialog context. The combination of new and old information may, e.g., change the degree of certainty of some proposition, i.e., terminate an (uncertain) BELIEVE state and create a (certain) KNOW state with identical propositional content (the consistency maintenance rule package).

• pursuing a global (cognitive or manipulative) goal; this may be done either by trying to satisfy this goal directly, or indirectly by substituting a more adequate goal for it and pursuing this new goal. In particular, a goal substitution is urgently needed in case the original goal is unsatisfiable (for the system), but a promising alternative is available (the goal pursuit rule package).

• pursuing a communicative subgoal. If a goal cannot (yet) be accomplished due to lack of information, this leads to the creation of a WANT concerning knowledge about the missing information. When a goal has been accomplished or a significant difference in the beliefs of the user and the system has been discovered, the system WANTS the user to be informed about that. All this is done in the phase concerned with cognitive goals. Once such a WANT is created, it can be associated with an appropriate speech act, provided the competent dialog partner (be it the user or an external expert) is determined (the speech act triggering rule package).

There is a certain linear dependency between these subtasks. Therefore the respective rule packages are applied in a suitable (sequential) order, whereas those rules belonging to the same package may be applied in any order (there exist no interrelations within a single rule package). This simple forward inferencing works correctly and with an acceptable performance for the actual coverage and degree of complexity of the system.
A sequence consisting of these three subtasks forms a (cognitive) processing cycle of the system, from receiving a user message to initiating an adequate reply. This procedure is repeated until there is evidence that the goal of the conversation has been accomplished (as indicated by knowledge and assumptions about the user's WANTS) or that the user wants to finish the dialog. In either case the system closes the dialog.
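The cycle described above can be approximated by simple forward chaining, with the three rule packages applied in a fixed order and the rules within each package fired in any order until none applies. The dictionary-based rule format below is an invented simplification, not WISBER's rule language:

```python
def apply_once(rule, kb):
    # Fire only if the effect is not already present (reaches a fixed point).
    if rule["pre"](kb) and rule["eff"] not in kb:
        kb.append(rule["eff"])
        return True
    return False

def run_package(rules, kb):
    # Within a package, rules may fire in any order; repeat until quiescent.
    fired = True
    while fired:
        fired = any(apply_once(r, kb) for r in rules)

def processing_cycle(kb, consistency, goal_pursuit, speech_act):
    # The linear dependency between subtasks fixes the package order.
    for package in (consistency, goal_pursuit, speech_act):
        run_package(package, kb)

# Toy packages: confirm a belief, derive a communicative want, trigger a speech act.
consistency  = [{"pre": lambda kb: "believe-p" in kb, "eff": "know-p"}]
goal_pursuit = [{"pre": lambda kb: "know-p" in kb, "eff": "want-inform"}]
speech_act   = [{"pre": lambda kb: "want-inform" in kb, "eff": "inform-user"}]

kb = ["believe-p"]
processing_cycle(kb, consistency, goal_pursuit, speech_act)
```

One call to `processing_cycle` corresponds to one cognitive cycle from a received user message to an initiated reply.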
APPLICATION IN A CONSULTATION SYSTEM
In this section we present the application of our method in the NL consultation system WISBER, involving rather complex interaction with subdialogs, requests for explanation, recommendations, and adjustment of proposals. However, it is possible to introduce some simplifications typical for consultation dialogs. These are urgently needed in order to reduce the otherwise excessive amount of complexity. In particular, we assume that the user does not lie and take his/her assertions about real world events as true (the sincerity condition). Moreover, we take it for granted that the user is highly interested in a consultation dialog and, therefore, will pay attention to the conversation on the screen, so that it can be reasonably assumed that he/she is fully aware of all utterances occurring in the course of the dialog.
Based on these (implicit) expectations, the following (simplified) assumptions (1) and (2) represent the starting point for a consultation dialog:

(1) (BELIEVE SYSTEM
      (WANT USER
        ((EXIST X (STATE X))
         (HAS-EXPERIENCER X USER)) NOW) NOW)

(2) (BELIEVE SYSTEM
      (KNOW USER
        (RELATED
          ((EXIST Y (STATE Y))
           (HAS-EXPERIENCER Y USER))) NOW) NOW)
They express that the user knows something that 'has to do' (expressed by the meta-predicate RELATED) with states (STATE Y) concerning him/herself, and that he/she wants to achieve a state (STATE X). In assumption (1), (STATE X) is in fact specialized for a consultation system as a real world state (instead of a mental state, which is the general assumption in any dialog system). This state can still be made more concrete when the domain of application is taken into account: in WISBER, we assume that the user wants his/her money 'to be invested'. The second assumption expresses (a part of) the competence of the user. This is not of particular importance for many other types of dialog systems. In a consultation system, however, this is the basis for addressing the user in order to ask him/her to make his/her intentions more precise. In the course of the dialog these assumptions are supposed to be confirmed and, moreover, their content is expected to become more precise.

In the subsequent paragraphs we outline the processing behavior of the system by explaining the application and the effect of some of the most important dialog rules (at least one of each of the three packages introduced in the previous section), thus giving an impression of the system's coverage. In the rules presented below, variables are suitably quantified as they appear for the first time in the precondition. In subsequent appearances they are referred to like constants. The interpretation of the special constants denoting time-intervals depends on whether they occur on the left or on the right side of a rule: in the precondition the associated state/event must hold/occur during PAST or FUTURE, or overlap NOW; in the effect the state/event is associated with a time-interval that starts at the reference time-interval.
In a consultation dialog, the user's wants may not always express a direct request for information, but rather refer to events and states in the real world. From such user wants the system must derive requests for knowledge useful when attempting to satisfy them.² Consequently the task of inferring communicative goals is of central importance for the functionality of the system.

(KNOW MUTUAL
  (WANT USER
    (EXIST A (ACTION A)) NOW) NOW)
∧
(KNOW SYSTEM
  (UNIQUE R
    (AND (RULE R)
         (HAS-ACTION R A)
         (HAS-PRECONDITION R (EXIST S1 (STATE S1)))
         (HAS-EFFECT R (EXIST S2 (STATE S2))))) NOW)
⇒
(KNOW MUTUAL (WANT USER S1 NOW) NOW)
∧
(KNOW MUTUAL R NOW)
∧
(KNOW MUTUAL (WANT USER S2 NOW) NOW)

Rule 1: Inference drawn from a user want referring to an action with unambiguous consequences (pursuing a global goal)
There is, however, a fundamental distinction whether the content of a want refers to a state or to an event (to be more precise, to an action, mostly). In the latter case some important inferences can be drawn depending on the domain knowledge about the envisioned action and the degree of precision expressed in its specification. If, according to the system's domain model, the effect of the specified action is unambiguous, the user can be expected to be familiar with this relation, so he/she can be assumed to envision the resulting state and, possibly, the precondition as well, if it is not yet fulfilled. Thus, in principle, a plan consisting of a sequence of actions could be created by application of skillful rule chaining.

This is exactly what Rule 1 asserts: Given the mutual knowledge that the user wants a certain action to occur, and the system's knowledge (in form of a unique rule) about the associated precondition and effect, the system concludes that the user envisions the resulting state and that he/she is familiar with the connecting causal relation. If the uniqueness of the rule cannot be
2 Unlike other systems, e.g., UC [Wilensky et al. 84], which can directly perform some kinds of actions required by the user, WISBER is unable to affect any part of the real world in the domain of application.
established, sufficient evidence derived from the partner model might be an alternative basis to obtain a sufficient categorization of the desired event so that a unique rule is found. Otherwise the user has to be asked to make his/her intention more precise.
Let us suppose, to give an example, that the user has expressed a want to invest his/her money. According to WISBER's domain model, there is only one matching domain rule, expressing that the user has to possess the money before but not after investing it, and obtains, in exchange, an asset of an equivalent value. Hence Rule 1 fires. The want expressed by the second part of the conclusion can be immediately satisfied as a consequence of the user utterance 'I have inherited 40 000 DM' by applying Rule 5 (which will be explained later). The remaining part of the conclusion matches almost completely the precondition of Rule 2.
This rule states: If the user wants to achieve a goal state (G) and is informed about the way this can be done (he/she knows the specific RULE R and is capable of performing the relevant action), the system is right to assume that the user is lacking some information which inhibits him/her from actually doing it. Therefore, a want of the user indicating the intention to know more about this transaction is created (expressed by the meta-predicate AUGMENT). If the necessary capability cannot be attributed to the user, a consultation is impossible.
(KNOW MUTUAL
  (WANT USER
    (EXIST S (STATE S)) NOW) NOW)
∧
(KNOW MUTUAL
  (UNIQUE R
    (AND (RULE R)
         (HAS-EFFECT R S)
         (HAS-ACTION R (EXIST A (ACTION A))))) NOW)
∧
(KNOW MUTUAL
  (CAPABILITY USER A) NOW)
⇒
(BELIEVE SYSTEM (WANT USER (KNOW USER
  (AUGMENT S) FUTURE) NOW) NOW)

Rule 2: Inference drawn from a user want referring to a state, given his/her acquaintance with the associated causal relation (pursuing a global goal)

If, to discuss another example, the user has expressed a want aiming at a certain state (e.g., 'I want to have my money back'), the application of another rule almost identical to Rule 1 is attempted. When its successful application yields the association of a unique event, the required causal relation is established. Moreover, the user's familiarity with this relation must be derivable in order to follow the path indicated by Rule 2. Otherwise, a want of the user would be created whose content is to find out about suitable means to achieve the desired state (as expressed by Rule 3, leading to a system reaction like, e.g., 'you must dissolve your savings account').
It is very frequently the case that the satisfaction of a want cannot immediately be achieved because the precision of its specification is insufficient. When the domain-specific problem solving component indicates a clue about what information would be helpful in this respect, this triggers the creation of a system want to get acquainted with it. Whenever the user's uninformedness in a particular case is not yet proved, and this information falls into his/her competence area, control is passed to the generation component to address a suitable question to the user (as expressed in Rule 4).

Provided with new information hopefully obtained by the user's reply, the system tries again to satisfy the (more precisely specified) user want. This process is repeated until an adequate degree of specification is achieved at some stage.
(KNOW MUTUAL
  (WANT USER
    (EXIST G (STATE G)) NOW) NOW)
∧
(KNOW SYSTEM
  (EXIST R
    (AND (RULE R)
         (HAS-EFFECT R G)
         (HAS-PRECONDITION R (EXIST S (STATE S)))
         (HAS-ACTION R (EXIST A (ACTION A))))) NOW)
∧
¬(KNOW USER R NOW)
⇒
(BELIEVE SYSTEM (WANT USER (KNOW USER
  R FUTURE) NOW) NOW)

Rule 3: Inference drawn from a user want referring to a state, lacking his/her acquaintance with the associated causal relation (pursuing a global goal)
(WANT SYSTEM
  (KNOW SYSTEM
    X FUTURE) NOW)
∧
(BELIEVE SYSTEM
  (KNOW USER
    (RELATED X) NOW) NOW)
∧
¬(KNOW SYSTEM
   (¬(KNOW USER X NOW)) NOW)

(ASK SYSTEM USER X)

⇒
(KNOW MUTUAL (WANT SYSTEM (KNOW SYSTEM X FUTURE) NOW) NOW)
∧
(KNOW MUTUAL (BELIEVE SYSTEM (KNOW USER (RELATED X) NOW) NOW) NOW)

Rule 4: Inference drawn from the user's (assumed) competence and a system want in this area (triggering a speech act)
In the course of the dialog each utterance affects parts of the system's current model of the user (concerning assumptions or temporarily established knowledge). Therefore, these effects are checked in order to keep the data base consistent. Consider, for instance, a user want aiming at investing some money which, after a phase of parameter assembling, has led to the system proposal 'I recommend you to buy bonds', apparently accomplishing the (substituted) goal of obtaining enough information to perform the envisioned action. Consequently, the state of the associated user want is subject to change, which is expressed by Rule 5. Therefore, the mutual knowledge about the user want is modified (by closing the associated time-interval), and the user's want is marked as being 'finished' and added to the (new) mutual knowledge.
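Rule 5's bookkeeping (closing the interval of the satisfied want rather than deleting it, and adding the accomplished proposition to the mutual knowledge) can be sketched as follows; the dict-based entries are invented for illustration:

```python
def satisfy_want(kb, want_index, proposition):
    """Mark a satisfied user want as 'finished' and record the accomplished
    proposition as mutual knowledge; nothing is ever deleted."""
    kb[want_index]["finished"] = True  # close the want's time interval
    kb.append({"fact": ("KNOW", "MUTUAL", proposition, "NOW"),
               "finished": False})

# A mutually known user want, e.g. to have the money invested:
kb = [{"fact": ("KNOW", "MUTUAL", ("WANT", "USER", "invest", "NOW"), "NOW"),
       "finished": False}]
satisfy_want(kb, 0, "invest")
```

After the call, the original want entry is still in the knowledge base but no longer matches patterns restricted to NOW, and the new mutual-knowledge entry records its accomplishment.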
However, this simplified treatment of the satisfaction of a want includes the restrictive assumptions that the acceptance of the proposal is (implicitly) anticipated, and that modifications of a want or of a proposal are not manageable. In a more elaborated version, the goal accomplishment has to be marked as provisory. If the user either expresses his/her acceptance explicitly or changes the topic (thus implicitly agreeing to the proposal), the application of Rule 5 is fully justified.

Apart from the problem of the increasing complexity and the amount of necessary additional rules, the preliminary status of our solution has much to do with problems of interpreting the representation of a communicative goal according to the derivation by Rule 2: The system is satisfied by finding any additional information augmenting the user's knowledge, but it is not aware of the requirement that the information must be a suitable supplement (which is recognizable by the user's confirmation only).
(KNOW MUTUAL
  X NOW)

Rule 5: Inference drawn from a (mutually known) user want which the user knows to be accomplished (pursuing consistency maintenance)
FUTURE RESEARCH
The method described in this paper is fully implemented and integrated in the complete NL system WISBER. A relatively small set of rules has proved sufficient to guide basic consultation dialogs. Currently we are extending the set of dialog control rules to perform more complex dialogs. Our special interest lies in clarification dialogs to handle misconceptions and inconsistencies. The first steps towards handling inconsistent user goals will be an explicit representation of the interrelationships holding between propositional attitudes, e.g., goals being simultaneous or competing, or one goal being a refinement of another goal. A major question will be specifying the operations necessary to recognize those interrelationships working on the semantic representation of the propositional contents. As our set of rules grows, a more sophisticated control mechanism will become necessary, structuring the derivation process and employing both forward and backward reasoning.
REFERENCES

Allen 83
Allen, J.F.: Recognizing Intentions from Natural Language Utterances. In: Brady, M., Berwick, R.C. (eds.): Computational Models of Discourse, MIT Press, 1983, pp. 107-166.

Allen 84
Allen, J.F.: Towards a General Theory of Action and Time. Artificial Intelligence 23, 1984, pp. 123-154.

Appelt 85
Appelt, D.E.: Planning English Sentences. Cambridge University Press, 1985.

Bergmann, Gerlach 87
Bergmann, H., Gerlach, M.: Semantisch-pragmatische Verarbeitung von Äußerungen im natürlichsprachlichen Beratungssystem WISBER. In: Brauer, W., Wahlster, W. (eds.): Wissensbasierte Systeme - GI-Kongress 1987. Springer Verlag, Berlin, 1987, pp. 318-327.

Bergmann et al 87
Bergmann, H., Fliegner, M., Gerlach, M.: ... 14, Universität Hamburg, 1987.

Carberry 83
Carberry, S.: Tracking User Goals in an Information-Seeking Environment. In: Proc. of the AAAI-83, Washington, D.C., 1983, pp. 59-63.

Cohen, Perrault 79
Cohen, P.R., Perrault, C.R.: Elements of a Plan-Based Theory of Speech Acts. Cognitive Science 3, 1979, pp. 177-212.

Gerlach, Sprenger 88
Gerlach, M., Sprenger, M.: Semantic Interpretation of Pragmatic Clues: Connectives, Modal Verbs, and Indirect Speech Acts. In: Proc. of COLING-88, Budapest, 1988, pp. 191-195.

Grice 75
Grice, H.P.: Logic and Conversation. In: Cole, P., Morgan, J.L. (eds.): Syntax and Semantics, Vol. 3: Speech Acts, Academic Press, New York, 1975, pp. 41-58.

Grosz, Sidner 86
Grosz, B.J., Sidner, C.L.: Attention, Intentions, and the Structure of Discourse. Computational Linguistics 12 (3), 1986, pp. 175-204.

Horacek et al 88
Horacek, H., Bergmann, H., Block, R., Fliegner, M., et al.: From Meaning to Meaning - a Walk through WISBER's Semantic-Pragmatic Processing. In: Künstliche Intelligenz - GWAI-88, Springer Verlag, Berlin, 1988, pp. 118-129.

Litman, Allen 84
Litman, D.J., Allen, J.F.: A Plan Recognition Model for Clarification Subdialogues. In: Proc. of COLING-84, Stanford, 1984, pp. 302-311.

Meßing et al 87
Meßing, J., Liermann, I., Schachter-Radig, M.-J.: Handlungsschemata in Beratungsdialogen - Am Gesprächsgegenstand orientierte Dialog-..., Dezember 1987, SCS Organisationsberatung und Informationstechnik GmbH, Hamburg.

Metzing 79
Metzing, D. (ed.): Dialogmuster und Dialogprozesse. Helmut Buske Verlag, Hamburg, 1979.

Poesio 88
Poesio, M.: ..., 1988, pp. 247-252.

Sprenger, Gerlach 88
Sprenger, M., Gerlach, M.: Expectations and Propositional Attitudes - Pragmatic Issues in WISBER. In: Proc. of the International Computer Science Conference '88, Hong Kong, 1988.

Werner 88
Werner, E.: Toward a Theory of Communication and Cooperation for Multiagent Planning. In: Theoretical Aspects of Reasoning about Knowledge, Proceedings of the 1988 Conference, Morgan Kaufmann Publishers, Los Altos, 1988, pp. 129-143.

Wilensky et al 84
Wilensky, R., Arens, Y., Chin, D.: Talking to UNIX in English: An Overview of UC. Communications of the ACM, Vol. 27, No. 6, 1984, pp. 574-593.