"how to convert text string to speech sound" Code Answer


You can use the .NET library System.Speech.Synthesis.

According to Microsoft:

The System.Speech.Synthesis namespace contains classes that allow you to initialize and configure a speech synthesis engine, create prompts, generate speech, respond to events, and modify voice characteristics. Speech synthesis is often referred to as text-to-speech or TTS.

A speech synthesizer takes text as input and produces an audio stream as output. Speech synthesis is also referred to as text-to-speech (TTS).

A synthesizer must perform substantial analysis and processing to accurately convert a string of characters into an audio stream that sounds just as the words would be spoken. The easiest way to imagine how this works is to picture the front end and back end of a two-part system.

Text Analysis

The front end specializes in the analysis of text using natural language rules. It analyzes a string of characters to determine where the words are (which is easy to do in English, but not as easy in languages such as Chinese and Japanese). This front end also figures out grammatical details like functions and parts of speech. For instance, which words are proper nouns, numbers, and so forth; where sentences begin and end; whether a phrase is a question or a statement; and whether a statement is past, present, or future tense.

All of these elements are critical to the selection of appropriate pronunciations and intonations for words, phrases, and sentences. Consider that in English, a question usually ends with a rising pitch, or that the word "read" is pronounced very differently depending on its tense. Clearly, understanding how a word or phrase is being used is a critical aspect of interpreting text into sound. To further complicate matters, the rules are slightly different for each language. So, as you can imagine, the front end must do some very sophisticated analysis.

Sound Generation

The back end has quite a different task. It takes the analysis done by the front end and, through some non-trivial analysis of its own, generates the appropriate sounds for the input text. Older synthesizers (and today's synthesizers with the smallest footprints) generate the individual sounds algorithmically, resulting in a very robotic sound. Modern synthesizers, such as the one in Windows Vista and Windows 7, use a database of sound segments built from hours and hours of recorded speech. The effectiveness of the back end depends on how good it is at selecting the appropriate sound segments for any given input and smoothly splicing them together.

Ready to Use

The text-to-speech capabilities described above are built into the Windows Vista and Windows 7 operating systems, allowing applications to easily use this technology. This eliminates the need to create your own speech engines. You can invoke all of this processing with a single function call. See Speak the Contents of a String.

Try this code:

using System.Speech.Synthesis;

namespace ConsoleApplication5
{
    class Program
    {
        static void Main(string[] args)
        {
            SpeechSynthesizer synthesizer = new SpeechSynthesizer();
            synthesizer.Volume = 100;  // 0...100
            synthesizer.Rate = -2;     // -10...10

            // Synchronous: blocks until the phrase has been spoken
            synthesizer.Speak("Hello World");

            // Asynchronous: returns immediately; note that a console app
            // may exit before asynchronous speech finishes
            synthesizer.SpeakAsync("Hello World");
        }
    }
}
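
The synthesizer can also render speech to a WAV file instead of the default audio device, and you can choose among the voices installed on the machine. The following is a minimal sketch, assuming a Windows machine with a reference to the System.Speech assembly; the file name "hello.wav" and the voice hints are just example values:

```csharp
using System;
using System.Speech.Synthesis;

class WaveFileExample
{
    static void Main()
    {
        using (var synthesizer = new SpeechSynthesizer())
        {
            // List the voices installed on this machine
            foreach (InstalledVoice voice in synthesizer.GetInstalledVoices())
            {
                Console.WriteLine(voice.VoiceInfo.Name);
            }

            // Pick a voice by hint rather than by exact name
            synthesizer.SelectVoiceByHints(VoiceGender.Female, VoiceAge.Adult);

            // Route output to a WAV file ("hello.wav" is an example path)
            synthesizer.SetOutputToWaveFile("hello.wav");
            synthesizer.Speak("Hello World");

            // Restore output to the speakers
            synthesizer.SetOutputToDefaultAudioDevice();
        }
    }
}
```

Disposing the synthesizer (via `using`) flushes and closes the WAV file, so the file is complete once the block exits.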


By Mihai8 on April 19 2022