Total speaking fluency refers to the ability of speakers to produce words about specific topics in English effortlessly and efficiently (automaticity) and with meaningful expression that enhances the meaning of those topics (prosody). Fluency takes phonics or word recognition
to the next level. While many speakers can decode words accurately, they may not be fluent or automatic in their word
recognition during simultaneous speech. These speakers tend to expend too much of their limited mental energy on working out the pronunciation
and meaning of words, energy that is taken away from the more important task of conveying intelligible ideas: getting to
the topic's overall meaning. Thus, a lack of fluency often results in poor communication.
Fluent speakers, on the other hand, are able to speak words accurately and effortlessly. They produce words and phrases instantly,
on the spot. A minimal amount of cognitive energy is expended on decoding the words. This means, then, that the maximum amount
of a speaker's cognitive energy can be directed to the all-important task of making sense of the ideas being expressed.
The second component of fluency is prosody, or speaking with expression. A key characteristic of fluent speakers
(or of fluent speech, for that matter) is the ability to embed appropriate expression into the speaking. Fluent speakers raise and lower
the volume and pitch of their voices, they speed up and slow down at appropriate places in the course of speech, they speak words in
meaningful groups or phrases, and they pause at appropriate places. All of these are elements of expression, or what
linguists have termed prosody. Prosody is essentially the melody of language as it is spoken. By embedding prosody in
our oral language (read aloud or spoken), we add meaning to the communication.
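
These prosodic cues can also be measured directly from a recording. The following is a minimal sketch, assuming the librosa library and a hypothetical file speech.wav; the sampling rate and the pause threshold are illustrative choices, not values taken from this work:

```python
import librosa
import numpy as np

# Load a (hypothetical) recording of the speaker.
y, sr = librosa.load("speech.wav", sr=16000)

# Pitch contour: rises and falls in f0 reflect changes in intonation.
f0, voiced_flag, voiced_prob = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr
)

# Volume contour: frame-level RMS energy tracks how loudness changes.
rms = librosa.feature.rms(y=y)[0]

# Pause-like frames: very low energy (threshold chosen for illustration).
quiet = rms < 0.05 * rms.max()

print(f"mean pitch of voiced frames: {np.nanmean(f0):.1f} Hz")
print(f"loudness range (RMS): {rms.min():.4f} to {rms.max():.4f}")
print(f"fraction of pause-like frames: {quiet.mean():.2f}")
```
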
Latent Semantic Analysis (LSA) is employed to
analyze speech and uncover the underlying meaning or concepts behind the words that are used. If each word meant only one
concept, and each concept were described by only one word, LSA would be easy, since there would be a simple mapping from words to concepts.
Because words, groups of words, and even spoken words produced with different intonation carry different meanings, semantic analysis becomes a difficult task to complete.
The same word or group of words can convey multiple meanings, which creates ambiguities in communication between people.
At this stage, we apply LSA to datasets represented as follows: (1) documents are treated as "bags of words", where the order of the words in a document
is not important and only how many times each word appears is considered; (2) concepts are represented as patterns of words that usually
appear together in documents; and (3) words are vectorized so that each holds only one meaning, determined by considering its neighbouring words.
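
A minimal sketch of these three steps is given below, assuming scikit-learn; the example documents and the number of concepts are placeholders for illustration, not data from this study:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import TruncatedSVD

# (1) Bag-of-words: word order is ignored, only per-document counts are kept.
docs = [
    "the speaker paused at the end of each phrase",
    "fluent speakers group words into meaningful phrases",
    "pitch and volume rise and fall across the phrase",
]
vectorizer = CountVectorizer()
counts = vectorizer.fit_transform(docs)      # documents x words count matrix

# (2) Concepts: the SVD step finds patterns of words that appear together.
svd = TruncatedSVD(n_components=2)           # number of concepts is illustrative
doc_vectors = svd.fit_transform(counts)      # documents placed in concept space

# (3) Word vectors: each word is positioned by the concepts it loads on,
# so its neighbours in this space help pin down a single meaning.
word_vectors = svd.components_.T             # words x concepts
for word, vec in zip(vectorizer.get_feature_names_out()[:5], word_vectors[:5]):
    print(word, vec.round(2))
```
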