ASR-complete
ASR-complete is, by analogy to "NP-completeness" in complexity theory, a term indicating that the difficulty of a computational problem is equivalent to solving the central automatic speech recognition problem, i.e. recognizing and understanding spoken language. Unlike "NP-completeness", the term is typically used informally. Such problems are hypothesised to include:

* Spoken natural language understanding
* Understanding speech from far-field microphones, i.e. handling the reverberation and background noise (a brief simulation sketch follows this entry)

These problems are easy for humans to do (in fact, they are described directly in terms of imitating humans). Some systems can solve very simple restricted versions of these problems, but none can solve them in their full generality.

See also
* AI-complete
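To make the far-field item above concrete: what a distant microphone captures can be approximated as the clean speech convolved with a room impulse response, plus additive background noise. Below is a minimal Python sketch of that degradation; the sample rate, the synthetic impulse response, and the noise level are illustrative stand-ins, not measured data.

    import numpy as np
    from scipy.signal import fftconvolve

    fs = 16000                                  # assumed sample rate in Hz
    clean = np.random.randn(fs)                 # 1 s stand-in for close-talk speech
    # Toy room impulse response: exponentially decaying random reflections.
    rir = np.exp(-np.linspace(0.0, 8.0, fs // 4)) * np.random.randn(fs // 4)

    # Far-field signal: reverberant speech plus background noise.
    reverberant = fftconvolve(clean, rir)[: len(clean)]
    farfield = reverberant + 0.05 * np.random.randn(len(clean))

A recognizer that performs well on the clean signal will typically degrade sharply on the far-field one, which is the gap this hypothesised problem refers to.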


AI-complete
In the field of artificial intelligence, the most difficult problems are informally known as AI-complete or AI-hard, implying that the difficulty of these computational problems, assuming intelligence is computational, is equivalent to that of solving the central artificial intelligence problem—making computers as intelligent as people, or strong AI (Shapiro, Stuart C. (1992). "Artificial Intelligence". In Stuart C. Shapiro (Ed.), ''Encyclopedia of Artificial Intelligence'', Second Edition, pp. 54–57. New York: John Wiley; Section 4 is on "AI-Complete Tasks"). To call a problem AI-complete reflects an attitude that it would not be solved by a simple specific algorithm. AI-complete problems are hypothesised to include computer vision, natural language understanding, and dealing with unexpected circumstances while solving any real-world problem. Currently, AI-complete problems cannot be solved with modern computer technology alone, but would also require human computation. ...



NP-completeness
In computational complexity theory, a problem is NP-complete when:

1. it is a problem for which the correctness of each solution can be verified quickly (namely, in polynomial time) and a brute-force search algorithm can find a solution by trying all possible solutions;
2. the problem can be used to simulate every other problem for which we can verify quickly that a solution is correct.

In this sense, NP-complete problems are the hardest of the problems to which solutions can be verified quickly. If we could find solutions of some NP-complete problem quickly, we could quickly find the solutions of every other problem to which a given solution can be easily verified. The name "NP-complete" is short for "nondeterministic polynomial-time complete". In this name, "nondeterministic" refers to nondeterministic Turing machines, a way of mathematically formalizing the idea of a brute-force search algorithm. Polynomial time refers to an amount of time that is considered "quick" for a deter ...
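The "verified quickly, found slowly" distinction can be illustrated with Boolean satisfiability, the canonical NP-complete problem. The Python sketch below uses a hypothetical three-variable CNF formula: checking one candidate assignment takes a single pass over the clauses, while finding a satisfying assignment by brute force enumerates all 2^n candidate assignments.

    from itertools import product

    # Hypothetical CNF formula: each clause lists signed variable indices,
    # e.g. 3 means "x3 is True", -3 means "x3 is False".
    formula = [[1, -2], [2, 3], [-1, -3]]

    def verify(formula, assignment):
        # Polynomial-time check: every clause must contain a satisfied literal.
        return all(
            any(assignment[abs(lit)] == (lit > 0) for lit in clause)
            for clause in formula
        )

    def brute_force(formula, num_vars):
        # Exhaustive search: try all 2**num_vars candidate assignments.
        for bits in product([False, True], repeat=num_vars):
            assignment = {i + 1: bits[i] for i in range(num_vars)}
            if verify(formula, assignment):
                return assignment
        return None

    print(brute_force(formula, 3))  # e.g. {1: False, 2: False, 3: True}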


Automatic Speech Recognition
Speech recognition is an interdisciplinary subfield of computer science and computational linguistics that develops methodologies and technologies enabling the recognition and translation of spoken language into text by computers, a main benefit being searchability. It is also known as automatic speech recognition (ASR), computer speech recognition, or speech to text (STT). It incorporates knowledge and research from the computer science, linguistics, and computer engineering fields. The reverse process is speech synthesis.

Some speech recognition systems require "training" (also called "enrollment"), where an individual speaker reads text or isolated vocabulary into the system. The system analyzes the person's specific voice and uses it to fine-tune the recognition of that person's speech, resulting in increased accuracy. Systems that do not use training are called "speaker-independent" systems; systems that use training are called "speaker-dependent". Speech recognition ...
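As a minimal speech-to-text sketch, the following assumes the third-party Python SpeechRecognition package and a local recording named example.wav; the file name and the choice of recognizer backend are illustrative, not prescriptive.

    import speech_recognition as sr

    recognizer = sr.Recognizer()
    with sr.AudioFile("example.wav") as source:   # hypothetical recording
        audio = recognizer.record(source)         # read the whole file into memory

    try:
        # Hand the audio to one of the package's recognizer backends.
        print(recognizer.recognize_google(audio))
    except sr.UnknownValueError:
        print("Speech was unintelligible")
    except sr.RequestError as err:
        print(f"Recognition service unavailable: {err}")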


Spoken Language
A spoken language is a language produced by articulate sounds or (depending on one's definition) manual gestures, as opposed to a written language. An oral language or vocal language is a language produced with the vocal tract, in contrast with a sign language, which is produced with the body and hands.

Definition
The term "spoken language" is sometimes used to mean only oral languages, especially by linguists, excluding sign languages and making the terms 'spoken', 'oral', and 'vocal language' synonymous. Others refer to sign language as "spoken", especially in contrast to written transcriptions of signs.

Context
In spoken language, much of a speaker's meaning is determined by the context. That contrasts with written language, in which more of the meaning is provided directly by the text. In spoken language, the truth of a proposition is determined by common-sense reference to experience, but in written language a greater emphasis is placed on logical and coherent argument. Similarly, ...




Natural-language Understanding
Natural-language understanding (NLU) or natural-language interpretation (NLI) is a subtopic of natural-language processing in artificial intelligence that deals with machine reading comprehension. Natural-language understanding is considered an AI-hard problem. There is considerable commercial interest in the field because of its application to automated reasoning, machine translation, question answering, news-gathering, text categorization, voice-activation, archiving, and large-scale content analysis.

History
The program STUDENT, written in 1964 by Daniel Bobrow for his PhD dissertation at MIT, is one of the earliest known attempts at natural-language understanding by a computer. Eight years after John McCarthy coined the term artificial intelligence, Bobrow's dissertation (titled ''Natural Language Input for a Computer Problem Solving System'') showed how a computer could understand simple natural language input to solve algebra word problems. A year later, in 1965, Joseph ...