Monday, September 10, 2007

What is Speech Recognition?



Speech recognition is the process of converting an acoustic signal, captured by a microphone or a telephone, into a set of words. The recognized words can be the final result, as in applications such as command and control, data entry, and document preparation. Research in this area has attracted a great deal of attention over the past five decades; many technologies have been applied, and sustained effort has pushed performance toward marketplace standards so that users can benefit in a variety of ways. Over this long research period several key technologies have been tried, among which the combination of hidden Markov models (HMMs) and stochastic language models has produced the best performance.

To convert speech to on-screen text or a computer command, a computer has to go through several complex steps. When we speak, we create vibrations in the air. An analog-to-digital converter (ADC) translates this analog wave into digital data that the computer can understand. To do this, it samples, or digitizes, the sound by taking precise measurements of the wave at frequent intervals. The system filters the digitized sound to remove unwanted noise, and sometimes separates it into different frequency bands (frequency is the number of wave cycles per second, heard by humans as differences in pitch). It also normalizes the sound, adjusting it to a constant volume level, and may need to align it temporally. In addition to these tasks, speech endpoint detection is necessary in order to extract the valid speech portion from the recorded signal. Together these tasks are called preprocessing of the speech signal. The next tasks are feature extraction and recognition. A significant amount of research has already been done in these areas, and continues, using a variety of different approaches.
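The preprocessing steps above can be sketched in a few lines of Python. This is a rough illustration only: the function name, frame length, and energy threshold are arbitrary choices for the example, not values from any real recognizer. It normalizes the signal to a constant peak level and then trims leading and trailing silence with a simple short-time-energy endpoint detector.

```python
import numpy as np

def preprocess(signal, frame_len=256, energy_thresh=0.01):
    # Normalize to a constant peak level (illustrative target of 1.0).
    peak = np.max(np.abs(signal))
    if peak > 0:
        signal = signal / peak
    # Simple endpoint detection: keep only the span of frames whose
    # short-time energy exceeds a threshold (i.e. trim silence).
    n_frames = len(signal) // frame_len
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
    energy = np.mean(frames ** 2, axis=1)
    voiced = np.where(energy > energy_thresh)[0]
    if len(voiced) == 0:
        return signal[:0]          # no speech detected
    start = voiced[0] * frame_len
    end = (voiced[-1] + 1) * frame_len
    return signal[start:end]

# Synthetic example: silence, then a 440 Hz tone, then silence.
fs = 8000
t = np.arange(fs) / fs
tone = 0.5 * np.sin(2 * np.pi * 440 * t)
signal = np.concatenate([np.zeros(4000), tone, np.zeros(4000)])
trimmed = preprocess(signal)       # silence at both ends is removed
```

A real front end would of course also apply the filtering and feature extraction the post describes; this sketch covers only normalization and endpoint detection.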

The area of Automatic Speech Recognition (ASR) is divided into isolated speech recognition (ISR) and continuous speech recognition (CSR). An isolated-word speech recognition system requires the speaker to pause briefly between words, whereas a continuous speech recognition system does not. For isolated words, the assumption is that the speech to be recognized comprises a single word or phrase and is recognized as a complete entity, with no explicit knowledge of, or regard for, the phonetic content of the word or phrase. Hence, for a vocabulary of V words (or phrases), the recognition algorithm consists of matching the measured sequence of spectral vectors of the unknown input against each of the stored spectral patterns for the V words, and selecting as the recognized word the pattern whose accumulated time-aligned spectral distance is smallest. The notion of isolated speech recognition can be extended to connected speech recognition if we consider a small vocabulary and solve the co-articulation problem that arises between words.

In continuous speech recognition, continuously uttered sentences are recognized. The standard approach to continuous speech recognition is to assume a simple probabilistic model of speech production whereby a specified word sequence, W, produces an acoustic observation sequence, and to choose the decoded string with the maximum a posteriori probability. In continuous speech recognition it is very important to use sophisticated linguistic knowledge. The most appropriate recognition units depend on the type of recognition and on the size of the vocabulary. Various units of reference templates/models, from phonemes to words, have been studied. When words are used as units, word recognition can be highly accurate; however, it requires more memory and more computation. Using phonemes as units keeps both memory requirements and computation modest.
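The isolated-word template matching described above can be sketched with a classic dynamic-programming (dynamic time warping) distance. The vocabulary, feature vectors, and function names below are toy examples invented for illustration, not part of any real system; the point is only to show "accumulated time-aligned spectral distance" as a concrete computation.

```python
import numpy as np

def dtw_distance(a, b):
    # Accumulated time-aligned distance between two feature sequences
    # (one row per frame), via the standard DTW recursion.
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j],      # insertion
                                 D[i, j - 1],      # deletion
                                 D[i - 1, j - 1])  # match
    return D[n, m]

def recognize(unknown, templates):
    # Pick the vocabulary word whose stored template has the smallest
    # accumulated time-aligned distance to the unknown utterance.
    return min(templates, key=lambda w: dtw_distance(unknown, templates[w]))

# Toy two-word vocabulary with 1-D "spectral" features per frame.
templates = {
    "yes": np.array([[1.0], [2.0], [3.0]]),
    "no":  np.array([[3.0], [2.0], [1.0]]),
}
# A time-stretched utterance of "yes": DTW absorbs the stretching.
unknown = np.array([[1.0], [1.0], [2.0], [3.0], [3.0]])
best = recognize(unknown, templates)   # "yes"
```

Because the warping path can repeat frames of either sequence, the stretched input still matches its template with zero distance, which is exactly why time alignment matters for isolated-word matching.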

Some speech recognition systems require speaker enrollment: a user must provide samples of his or her speech before using the system. Other systems are said to be speaker-independent, in that no enrollment is necessary. Other parameters depend on the specific task. Recognition is generally more difficult when vocabularies are large or contain many similar-sounding words. When speech is produced as a sequence of words, language models or artificial grammars are used to restrict the possible word combinations.
