... AN AUTOMATIC SPEECH RECOGNITION SYSTEM FOR THE ITALIAN LANGUAGE Paolo D'Orta, Marco Ferretti, Alessandro ... Searei IBM Rome Scientific Center via Giorgione 159, ROME (Italy) ABSTRACT An automatic speech recognition system for the Italian language has been developed at the IBM Italy Scientific Center in ... dictionary of 6500 items, dictated by a speaker with short pauses between them. The system is speaker-dependent: before using it, the speaker has to perform the training stage by reading a predefined...
... Fiscus. 1997. A post-processing system to yield reduced word error rates: Recognizer output voting error reduction (ROVER). In Proc. IEEE Workshop on Automatic Speech Recognition and Understanding ... Stüker, S. Vogel, and A. Waibel. 2006. Open domain speech recognition & translation: Lectures and speeches. In Proc. IEEE Conf. on Acoustics, Speech, and Signal Processing (ICASSP), volume 1, pages ... complementary ASR systems, a technique first proposed in the context of NIST's ROVER system (Fiscus, 1997) with a 12% relative error reduction (RER), and subsequently widely employed in many ASR systems. This...
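ROVER combines the outputs of complementary recognizers by aligning them and taking a word-level vote. A minimal sketch of the voting step, assuming the hypotheses have already been aligned word-by-word (the real system builds this alignment with dynamic programming):

```python
from collections import Counter

def rover_vote(aligned_hypotheses):
    """Word-level majority vote over pre-aligned recognizer outputs.

    aligned_hypotheses: list of equal-length word lists, one per recognizer;
    None marks a deletion in that recognizer's alignment.
    """
    result = []
    for position in zip(*aligned_hypotheses):
        counts = Counter(w for w in position if w is not None)
        if counts:
            # Keep the word most recognizers agree on at this position.
            result.append(counts.most_common(1)[0][0])
    return result

hyps = [["the", "cat", "sat"],
        ["the", "mat", "sat"],
        ["the", "cat", "sad"]]
print(rover_vote(hyps))  # ['the', 'cat', 'sat']
```

Each single-system error is outvoted as long as the other two recognizers agree, which is the intuition behind the reported relative error reduction.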
... ROBUST SPEECH RECOGNITION 4.1 Automatic adaptation Ultimately, speech recognition systems should be capable of robust, speaker-independent or speaker-adaptive, continuous speech recognition ... spontaneous speech recognition One of the most important issues for speech recognition is how to create language models (rules) for spontaneous speech. When recognizing spontaneous speech in ... every new task is difficult and costly. 5.2 Message-Driven Speech Recognition State-of-the-art automatic speech recognition systems employ the criterion of maximizing P(W|X), where W...
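The maximization of P(W|X) is conventionally rewritten with Bayes' rule as argmax over W of P(X|W)P(W), since P(X) does not depend on the word sequence W. A toy sketch of that decision rule, with invented log-scores standing in for real acoustic and language models:

```python
# Hypothetical acoustic log-likelihoods log P(X|W) and language-model
# log-priors log P(W) for three candidate word sequences (all invented).
hypotheses = {
    "recognize speech": {"acoustic": -12.0, "lm": -2.3},
    "wreck a nice beach": {"acoustic": -11.5, "lm": -6.1},
    "recognise peach": {"acoustic": -13.2, "lm": -4.0},
}

def decode(hyps):
    # argmax over W of log P(X|W) + log P(W); log P(X) is constant, so dropped.
    return max(hyps, key=lambda w: hyps[w]["acoustic"] + hyps[w]["lm"])

print(decode(hypotheses))  # recognize speech
```

Note how the language model overrules the slightly better acoustic score of the second hypothesis, which is exactly the role P(W) plays in the criterion.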
... different Automatic Speech Recognition (ASR) systems, along with an Enhanced Majority Rules (EMR) software algorithm. Each of the three individual systems received the same input, performed speech ... 2 Background Automatic speech recognition systems convert a speech signal into a sequence of words, usually based on the Hidden Markov ... Young, 1990; Furui, 2002). Several systems have used the HMM along with multiple speech recognizers in an effort to improve speech recognition, as discussed next. 2.1 Enhanced Majority...
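In the Hidden Markov Model framework mentioned above, decoding means finding the most likely hidden-state sequence for an observation sequence. A minimal Viterbi sketch over a toy two-state model (all probabilities here are invented for illustration, not taken from any of the cited systems):

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden-state path for an observation sequence under an HMM."""
    # V[t][s] = (best probability of any path ending in s at time t, that path)
    V = [{s: (start_p[s] * emit_p[s][obs[0]], [s]) for s in states}]
    for o in obs[1:]:
        V.append({})
        for s in states:
            prob, path = max(
                (V[-2][prev][0] * trans_p[prev][s] * emit_p[s][o],
                 V[-2][prev][1] + [s])
                for prev in states)
            V[-1][s] = (prob, path)
    return max(V[-1].values())[1]

states = ("silence", "speech")
start_p = {"silence": 0.6, "speech": 0.4}
trans_p = {"silence": {"silence": 0.7, "speech": 0.3},
           "speech": {"silence": 0.2, "speech": 0.8}}
emit_p = {"silence": {"low": 0.9, "high": 0.1},
          "speech": {"low": 0.2, "high": 0.8}}
print(viterbi(["low", "high", "high"], states, start_p, trans_p, emit_p))
# ['silence', 'speech', 'speech']
```

Real ASR decoders work over phone-level HMMs with continuous observation densities, but the dynamic program is the same.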
... running an entire ASR system, i.e. both the language and acoustic models. We use the Sphinx system to train baseball-specific acoustic models using parallel acoustic/text data automatically mined ... our grounded language model on a speech recognition task using video highlights from Major League Baseball games. Results indicate improved performance using three metrics: perplexity, word ... improvement). More notably, though, Figure 4 shows that the system using the grounded language model performed better than the system using the hand-generated closed-captioning transcriptions...
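Perplexity, the first of the three metrics mentioned, is the exponentiated average negative log-probability that the language model assigns to the test words. A minimal sketch of the computation (the per-word probabilities here are toy values, not the grounded model's):

```python
import math

def perplexity(probs):
    """Perplexity from a model's per-word probabilities on a test sequence."""
    n = len(probs)
    return math.exp(-sum(math.log(p) for p in probs) / n)

# A model that assigns probability 0.25 to every word has perplexity 4,
# i.e. it is as uncertain as a uniform choice among 4 words.
print(round(perplexity([0.25, 0.25, 0.25, 0.25]), 6))  # 4.0
```

Lower perplexity on held-out text is the standard intrinsic sign that a language model fits the domain better, which is why it accompanies word-error-rate results here.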
... vocabulary speech recognition with multispan statistical language models. IEEE Transactions on Speech and Audio Processing, 8(1), January. G. Demetriou, E. Atwell, and C. Souter. 2000. Using lexical ... Linguistics WordNet-based Semantic Relatedness Measures in Automatic Speech Recognition for Meetings Michael Pucher Telecommunications Research Center Vienna Vienna, Austria Speech and Signal Processing Lab, TU Graz Graz, ... the rescoring of N-best lists. It was shown that speech recognition of multi-party meetings cannot be improved compared to a 4-gram baseline model when using WordNet models. One reason for the poor...
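The relatedness measures in question score how close two words are in a lexical taxonomy. A minimal sketch of a path-based measure over a tiny invented is-a hierarchy (real systems would query WordNet; the taxonomy and word choices below are made up for illustration):

```python
def path_similarity(a, b, parent):
    """1 / (1 + shortest is-a path length) between two nodes in a taxonomy tree."""
    def ancestors(n):
        chain = [n]
        while n in parent:
            n = parent[n]
            chain.append(n)
        return chain
    anc_a, anc_b = ancestors(a), ancestors(b)
    common = next(n for n in anc_a if n in anc_b)      # lowest common ancestor
    dist = anc_a.index(common) + anc_b.index(common)   # hops up from each side
    return 1.0 / (1.0 + dist)

# Toy meeting-domain taxonomy (invented): nodes map to their hypernym.
parent = {"agenda": "document", "minutes": "document",
          "document": "entity", "speaker": "entity"}
print(round(path_similarity("agenda", "minutes", parent), 4))  # 0.3333
```

Such scores can be folded into N-best rescoring by rewarding hypotheses whose content words are mutually related, which is the idea the paper evaluates against the 4-gram baseline.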
... handwriting recognition system. In this article I will highlight a method used to get the UNIPEN data to the input of a recognizer. A convolution network for capital letters and numbers recognition ... CreateFileMapping in a C++ DLL and access it in ... Highly efficient library for online handwriting recognition system using the UNIPEN database. By Vietdungiitb, 11 Apr 2012. This is an old version of the currently ... to study pattern recognition techniques in general and online handwriting recognition techniques in particular. Picture 1: convolution network for capital letters and digits recognition Background...
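The convolution network pictured above slides small learned filters over the input image. Its core operation can be sketched in a few lines, shown here as a valid-mode 2-D convolution on a plain list-of-lists grayscale image (a toy illustration of the operation, not the article's actual network or its C++ implementation):

```python
def conv2d(image, kernel):
    """Valid-mode 2-D correlation of a grayscale image with a small kernel."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [[sum(image[i + di][j + dj] * kernel[di][dj]
                 for di in range(kh) for dj in range(kw))
             for j in range(out_w)]
            for i in range(out_h)]

# A vertical-edge kernel on a tiny image with an edge down the middle.
image = [[0, 0, 1, 1]] * 3
kernel = [[-1, 1]] * 3   # responds where intensity jumps left-to-right
print(conv2d(image, kernel))  # [[0, 3, 0]]
```

In a full network, many such kernels are learned from labeled character images, and their response maps are pooled and fed into further layers.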
... in speech recognition and synthesis have been started in recent years. Together with the developing trend of human-computer interaction systems using speech, the optimization of speech recognition ... Chi Mai, 'HMM/ANN System for Vietnamese Continuous Digit Recognition' [2] Dang Ngoc Duc, Luong Chi Mai, 'Improve the Vietnamese Speech Recognition System Using Neural Network' ... demonstrate the use of speech in HCI, we have combined speech recognition with speech synthesis in our software running on T-Engine. This software allows users to use speech commands to...
... Results of Speech Recognition: We used 4806 recognition results, including errors, from the output of a speech recognition experiment (Masataki et al., 96; Shimizu et al., 96) using an ATR ... Dialogue Speech Recognition using Cross-word Context Constrained Word Graphs. ICASSP 96, pp. 145-148, 1996. Y. Wakita et al., 97. Correct parts extraction from speech recognition results using ... the errors in the results of speech recognition to increase the performance of a speech translation system. This paper proposes a method for correcting errors using the statistical features...
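Error correction of recognition output is conventionally measured with word error rate: the word-level edit distance between hypothesis and reference, divided by the reference length. A minimal sketch of that metric:

```python
def word_error_rate(reference, hypothesis):
    """Levenshtein distance over words, normalized by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j]: edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution / match
    return d[-1][-1] / len(ref)

print(word_error_rate("the cat sat down", "the cat sad down"))  # 0.25
```

A correction module succeeds when the WER of its corrected output is lower than the WER of the raw recognizer output on the same references.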
... communication. In this paper, we describe the Automatic Content Linking Device (ACLD), a system that analyzes spoken input from one or more speakers using automatic speech recognition (ASR), in order to retrieve ... Internet. The documents are found using keyword-based search or using a semantic similarity measure between documents and the words obtained from automatic speech recognition. Results are displayed ... ACL-HLT 2011 System Demonstrations, pages 80–85, Portland, Oregon, USA, 21 June 2011. © 2011 Association for Computational Linguistics. A Speech-based Just-in-Time Retrieval System using Semantic...
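Matching recognized words against candidate documents can be illustrated with a simple bag-of-words cosine measure, a toy stand-in for the ACLD's actual semantic similarity measure (the file names and word lists below are invented):

```python
import math
from collections import Counter

def cosine(a_words, b_words):
    """Cosine similarity between two bags of words."""
    a, b = Counter(a_words), Counter(b_words)
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# Words from the ASR output of the last stretch of conversation (invented).
asr_words = "project budget meeting schedule".split()
docs = {"budget.doc": "budget forecast project costs".split(),
        "party.doc": "office party music food".split()}
best = max(docs, key=lambda d: cosine(asr_words, docs[d]))
print(best)  # budget.doc
```

A just-in-time retrieval loop would recompute such scores over a sliding window of recognized words and refresh the displayed documents as the conversation moves on.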