... improve sentence-level
prediction, using only labeled sentences for training.
In a similar vein, Sauper et al. (2010) integrated gen-
erative content structure models with discriminative
models for ... both models dominate the supervised
approach for the full range of recall values.
As discussed earlier, and confirmed by Table 2,
the coarse-grained model only performs well on the
predominant sentence-level ... the
predominant sentence-level categories for each docu-
ment category. The supervised model handles nega-
tive and neutral sentences well, but performs poorly
on positive sentences even in positive documents.
The...
... goal sentence
length for summaries.
In the first experiment, the system was given
only a sentence and no sentence length informa-
tion. The sentence compression problem without
the length information ... sentence
pairs. Thirty-two of these sentences were used for
the human judgments in Knight and Marcu’s ex-
periment, and the same sentences were used for
our human judgments. The rest of the sentences
were ... Noisy-Channel Model for Sentence
Compression
Knight and Marcu proposed a sentence compres-
sion method using a noisy-channel model (Knight
and Marcu, 2000). This model assumes that a long
sentence was...
... “letter”. We look for all replacements of one
sentence by another and check whether one sentence
is a compression of the other.
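A subsequence test is one plausible reading of this compression check; the exact filtering criterion used on the corpus is an assumption here, sketched minimally:

```python
def is_compression(short_tokens, long_tokens):
    """True if short_tokens is a (not necessarily contiguous)
    subsequence of long_tokens, i.e. a word-dropping compression."""
    it = iter(long_tokens)
    # `w in it` advances the iterator, so word order must be preserved
    return all(w in it for w in short_tokens)

print(is_compression("the cat sat".split(), "the big cat sat down".split()))  # True
print(is_compression("cat the".split(), "the big cat sat down".split()))      # False
```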
³ We then run Collins'
parser (1997), using just the sentence pairs ... During
decoding, given a long sentence, we seek the most
likely short sentence that could have generated it.
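The decoding step amounts to Bayes' rule: choose the short sentence s maximizing p(s)·p(long|s), a source model times a channel model. A minimal sketch with toy stand-in models (the candidate set and both scoring functions below are invented for illustration, not Knight and Marcu's actual source/channel models):

```python
import math

def noisy_channel_decode(long_sentence, candidates, source_logprob, channel_logprob):
    """Pick the short sentence s maximizing log p(s) + log p(long | s)."""
    best, best_score = None, -math.inf
    for s in candidates:
        score = source_logprob(s) + channel_logprob(long_sentence, s)
        if score > best_score:
            best, best_score = s, score
    return best

# Toy stand-ins: prefer shorter candidates whose words all occur in the long sentence.
def toy_source_logprob(s):
    return -len(s.split())  # shorter strings are more probable a priori

def toy_channel_logprob(long_s, s):
    return 0.0 if set(s.split()) <= set(long_s.split()) else -math.inf

long_s = "the committee finally approved the new budget plan"
cands = ["the committee approved the plan",
         "the committee finally approved the new budget plan",
         "a totally different sentence"]
print(noisy_channel_decode(long_s, cands, toy_source_logprob, toy_channel_logprob))
# → the committee approved the plan
```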
¹ For instance, compressions are more likely to signal optional
information than expansions; ... sentences taken from the
Ziff-Davis corpus. We solicited judgments of im-
portance (the value of the retained information), and
grammaticality for our compression, the KM results,
and human compressions...
... Science & Technology Journal, Vol. 11, No. 03 - 2008
DEVELOPMENT OF AN AUTOMATIC DATA PROCESSING
FOR TRIAXIAL COMPRESSION TEST
Pham Hong Thom (1), Le Minh Son (2), Phan Tan Tung (1), Nguyen Tan ... exactly the results of the experiment and is convenient
for the examiner, but its cost is very high. The second one has a lower cost, but it is inconvenient
for the examiner to obtain the test results during the test ... sometimes lasts a week, so the examiner must spend a great deal of time
recording the parameters.
The examiner also spends time entering all the parameters and calculating the results on
paper...
... of models should be based on the 3D thermo-
hydrodynamical models for each particular environment that contains the matter of interest.
Development of a system of hydrodynamic-environmental models for ... need to build a system of models for each environment, related to
each other through the boundary conditions. In this case, we can introduce the system
of models for oil slick, oil-in-water ... (2003).
3. Model for suspended particulate matter transport and
bottom bathymetry evolution
The model for SPM and bottom bathymetry evolution includes two sub-models: 3D
SPM Model for water column...
... Chichester, West Sussex, PO19 8SQ, United Kingdom
For details of our global editorial offices, for customer services and for information about how to apply for permission
to reuse the copyright material ... When these models are used for design
purposes, typically within a commercial simulation package. Applications in the oil and gas and chemical
sectors are emphasized but models suitable for polymers ... general thermodynamic models which can describe equally
successfully all types of phase equilibria at all conditions. Suitable models for high- and low-pressure phase
equilibria for simple as well...
... subrule ADJ/
which defines the substructure of the input sentence
in which is contained the information needed ... specifier for the output sentence should be formed.
As a result, part of it is formed during the analysis of
the input sentence, another part during the actual pro-
duction of the output sentence ... indicative of the gender of its
[Mechanical Translation and Computational Linguistics, vol. 8, No. 2, February 1965]
Sentence-For-Sentence Translation: An Example*...
... consistent within a document.
Results Results for both settings are shown in Ta-
ble 2. GTM models the latent topics at the document
level, while LTM models each sentence as a separate
document. To evaluate ... unseen for a given domain s, we are already
performing an implicit form of smoothing (when
computing the expected counts), since each docu-
ment has a distribution over all topics, and therefore
we ... probability on how
relevant that translation table is to the sentence. This
allows us to bias the translation toward the topic of
the sentence. For example, if topic k is dominant in
T, p_k(e|f) may...
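One concrete reading of this weighting is a mixture of per-topic translation tables: p(e|f) = Σ_k γ_k · p_k(e|f), where γ is the sentence's topic distribution. A minimal sketch (the tables and weights below are invented for illustration; the paper's exact combination may differ):

```python
def mix_translation_prob(e, f, topic_tables, topic_weights):
    """p(e|f) as a topic-weighted mixture of per-topic tables p_k(e|f)."""
    return sum(w * table.get((e, f), 0.0)
               for w, table in zip(topic_weights, topic_tables))

# Two hypothetical topic tables for the source word "bank"
finance = {("bank_institution", "bank"): 0.9, ("riverbank", "bank"): 0.1}
nature  = {("bank_institution", "bank"): 0.2, ("riverbank", "bank"): 0.8}

# A sentence whose inferred topic distribution leans toward finance
gamma = [0.75, 0.25]
p = mix_translation_prob("riverbank", "bank", [finance, nature], gamma)
print(round(p, 3))  # 0.75*0.1 + 0.25*0.8 = 0.275
```

Biasing the table toward the dominant topic of the sentence falls out of the mixture automatically: the larger γ_k is, the more p_k dominates.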
... associative information between the
target and the opinion, we consider the document
set as a bag of sentences, and define a sentence
set. For each sentence, we ... indicates sentence C contains a relevant
opinion. Similarly, we map each sentence
into word pairs by the following rule, and express
the intra-sentence information using word pairs.
For each ... performance.
In this paper, we propose a sentence-based ap-
proach based on a new information representa-
tion, namely topic-sentiment word pair, to cap-
ture intra-sentence contextual information...
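The word-pair representation can be sketched as pairing candidate topic (target) words with sentiment (opinion) words inside each sentence; the word lists and the all-pairs rule below are illustrative assumptions, not the paper's exact construction:

```python
def sentence_word_pairs(sentence_tokens, target_words, opinion_words):
    """Represent a sentence by its (target, opinion) word pairs,
    capturing intra-sentence associations between topic and sentiment."""
    targets = [w for w in sentence_tokens if w in target_words]
    opinions = [w for w in sentence_tokens if w in opinion_words]
    return [(t, o) for t in targets for o in opinions]

tokens = ["the", "battery", "life", "is", "great", "but", "the", "screen", "is", "dim"]
pairs = sentence_word_pairs(tokens, {"battery", "screen"}, {"great", "dim"})
print(pairs)
# [('battery', 'great'), ('battery', 'dim'), ('screen', 'great'), ('screen', 'dim')]
```

Treating the document set as a bag of sentences then reduces to collecting these pairs per sentence rather than per document.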
... incremental syntactic language models are a straightforward and appro-

  Moses LM(s)     BLEU
  n-gram only     18.78
  HHMM + n-gram   19.78

Figure 9: Results for Ur-En devtest (only sentences
with 1-20 words) ... Association for Computational Linguistics, pages 620–631,
Portland, Oregon, June 19-24, 2011.
© 2011 Association for Computational Linguistics
Incremental Syntactic Language Models for Phrase-based ... syntactic language models score sen-
tences in a similar left-to-right fashion, and are
therefore a good mechanism for incorporat-
ing syntax into phrase-based translation. We
give a formal definition...
... of person-
alized models, we report the most significant fea-
tures, sorted by Information Gain (IG), for three
sample ASP Pers+Text models (Table 3). Interest-
ingly, whereas for Pers 1 and Pers ... answer are significant, for
Pers 3 non-textual features are most significant.
We also report the top 10 features with the high-
est information gain for the ASP and ASP Group
models (Table 4). Interestingly, ... hour 0.08572 CA ratio ans ques
Table 4: Top 10 features by information gain for ASP
(trained for all askers) and ASP Group (trained for the
group of askers with 20 to 29 questions)
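Information gain for a discrete feature X over labels Y is IG(Y; X) = H(Y) − H(Y|X); the features in Tables 3 and 4 are ranked by this quantity. A minimal sketch of how such a ranking could be computed (the toy feature is invented for illustration):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy H(Y) in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(labels, feature_values):
    """IG(Y; X) = H(Y) - sum_v p(X=v) * H(Y | X=v) for a discrete feature."""
    n = len(labels)
    cond = 0.0
    for v in set(feature_values):
        subset = [y for y, x in zip(labels, feature_values) if x == v]
        cond += (len(subset) / n) * entropy(subset)
    return entropy(labels) - cond

# Toy data: a feature that perfectly predicts the label yields IG = 1 bit
labels = [1, 1, 0, 0]        # e.g. was the answer satisfactory?
hour_bucket = [1, 1, 0, 0]   # hypothetical time-of-day feature
print(information_gain(labels, hour_bucket))  # 1.0
```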
4 Conclusions
We...
... target sentences t_1^I. Therefore, the
same techniques (translation models, decoder al-
gorithm, etc.) which have been developed for
SMT can be used in CAT.
Note that the statistical models ... framework, real-time is re-
quired.
4 Phrase-based models
The usual statistical translation models can be
classified as single-word based alignment models.
Models of this kind assume that an input word ... Association for Computational Linguistics
Statistical phrase-based models for interactive computer-assisted
translation
Jesús Tomás and Francisco Casacuberta
Instituto Tecnológico de Informática
Universidad...
... language.
Those lexicon models lack context infor-
mation that can be extracted from the same paral-
lel corpus. This additional information could be:
Simple context information: information of
the ... surrounding the word pair;
Syntactic information: part-of-speech in-
formation, syntactic constituent, sentence
mood;
Semantic information: disambiguation in-
formation (e.g. from WordNet), cur-
rent/previous ... fact that the
algorithm for computing the n-best lists is suboptimal.
Table 8: Preliminary translation results for the
Verbmobil Test-147 for different contextual infor-
mation and different...
...
of the sentence and the discourse are relevant to
the model. Most previous probabilistic models of
parsing assume the probabilities of sentences in a
discourse are independent of other sentences. ... tree that tree T* for which

    T* = argmax_{T ∈ τ(w̄)} p(T, w̄)    (1)

where τ(w̄) is the set of all parses produced by
the grammar for the sentence w̄. Many aspects of
the input sentence that might ... any-consistent rate, defined as the percent-
age of sentences for which the correct parse is
proposed among the many parses that the gram-
mar provides for a sentence. We also measure
the parse base,...
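The any-consistent rate defined here can be computed directly: count the sentences whose correct parse appears among the parses the grammar proposes. A minimal sketch (encoding parses as bracketed strings and using exact equality as the consistency test are simplifying assumptions):

```python
def any_consistent_rate(proposed_parses, gold_parses):
    """Percentage of sentences whose gold parse is among the grammar's proposals."""
    hits = sum(1 for proposals, gold in zip(proposed_parses, gold_parses)
               if gold in proposals)
    return 100.0 * hits / len(gold_parses)

# Toy example: two sentences, gold parse found for the first one only
proposed = [["(S (NP a) (VP b))", "(S (NP a b))"],  # gold present
            ["(S (VP x))"]]                          # gold absent
gold = ["(S (NP a) (VP b))", "(S (NP x))"]
print(any_consistent_rate(proposed, gold))  # 50.0
```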
... performance of
SPATTER as a function of sentence length. SPAT-
TER's performance degrades slowly for sentences up
to around 28 words, and performs more poorly and
more erratically as sentences ... some cases long-distance structural
information is also needed. Statistical models for
root - the node is the root of the tree.
For an n-word sentence, a parse tree has n leaf
nodes, ... Statistical Pattern Recognition. Doctoral
dissertation. Stanford University, Stanford, Cali-
fornia.
Statistical Decision-Tree Models for Parsing*
David M. Magerman
Bolt Beranek and Newman...