This is the fifth day of my participation in the November Gengwen Challenge. Check out the details: The Last Gengwen Challenge 2021

1. Register an account on the official website

iFlytek requires you to register an account and bind your app before using the SDK. Doing so generates an AppID for you, which is what lets your app use the iFlytek SDK.

Iflytek Open Platform

2. Download the SDK

After registering, add your app and your exclusive AppID will appear. Then go to the SDK download page, select the speech synthesis SDK package, the Android platform, and your app, and download it.

3. Add libs (note a pitfall with Android Studio)

  • The downloaded folder contains several directories:
    assets: native UI resources to help you create the official voice Dialog
    libs: the most important part; you need to move this into your project's libs directory
    sample: the official example
  • Note that for Eclipse development you can simply move the contents of libs into your project's libs directory. But for Android Studio (which I use), after moving these files into libs you also need to add the following to your gradle (app) file. You will then see the libs picked up as a jniLibs directory under your main source set, and you are done:
  sourceSets {
        main{
            jniLibs.srcDirs = ['libs']
        }
    }

This is the pitfall that kept me up all night!

Using the SDK in an Android app

1. Apply for permissions

The required permissions are as follows:

    <!-- Internet access, used for the cloud speech capability -->
    <uses-permission android:name="android.permission.INTERNET"/>
    <!-- Microphone access, required for dictation, recognition and semantic understanding -->
    <uses-permission android:name="android.permission.RECORD_AUDIO"/>
    <!-- Read the state of network information -->
    <uses-permission android:name="android.permission.ACCESS_NETWORK_STATE"/>
    <!-- Access the current Wi-Fi state -->
    <uses-permission android:name="android.permission.ACCESS_WIFI_STATE"/>
    <!-- Allow the app to change the network connection state -->
    <uses-permission android:name="android.permission.CHANGE_NETWORK_STATE"/>
    <!-- Read phone state -->
    <uses-permission android:name="android.permission.READ_PHONE_STATE"/>
    <!-- Read contacts -->
    <uses-permission android:name="android.permission.READ_CONTACTS"/>

Remember that since Android 6.0, some permissions must also be requested dynamically at runtime, mainly the phone state and contacts permissions.
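A minimal sketch of such a runtime request, using the standard AndroidX compatibility helpers inside an Activity (the method name and the request code REQ_PERMS are my own illustrative choices, not part of the iFlytek SDK):

```java
import android.Manifest;
import android.content.pm.PackageManager;
import java.util.ArrayList;
import java.util.List;
import androidx.core.app.ActivityCompat;
import androidx.core.content.ContextCompat;

private static final int REQ_PERMS = 100; // arbitrary request code

private void requestRuntimePermissions() {
    String[] perms = {
            Manifest.permission.RECORD_AUDIO,
            Manifest.permission.READ_PHONE_STATE,
            Manifest.permission.READ_CONTACTS
    };
    // Collect only the permissions that have not been granted yet
    List<String> missing = new ArrayList<>();
    for (String p : perms) {
        if (ContextCompat.checkSelfPermission(this, p) != PackageManager.PERMISSION_GRANTED) {
            missing.add(p);
        }
    }
    // Pop the system permission dialog for the missing ones
    if (!missing.isEmpty()) {
        ActivityCompat.requestPermissions(this, missing.toArray(new String[0]), REQ_PERMS);
    }
}
```

The result of the dialog arrives in your Activity's onRequestPermissionsResult callback, keyed by the same request code.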

2. Initialize the iFlytek SDK

Add the following in your Activity's onCreate:

    SpeechUtility.createUtility(context, SpeechConstant.APPID + "=12345678");

Change the AppID here to the one you applied for earlier!

3. Initialize speech synthesis functions and parameters

    // 1. Create the SpeechSynthesizer object.
    //    The second argument is an InitListener used for local synthesis; pass null here.
    SpeechSynthesizer mTts = SpeechSynthesizer.createSynthesizer(context, null);
    // 2. Set the synthesis parameters on the SpeechSynthesizer object.
    mTts.setParameter(SpeechConstant.VOICE_NAME, "xiaoyan"); // set the speaker
    mTts.setParameter(SpeechConstant.SPEED, "50");           // set the speed, range 0 to 100
    mTts.setParameter(SpeechConstant.VOLUME, "80");          // set the volume, range 0 to 100
    mTts.setParameter(SpeechConstant.ENGINE_TYPE, SpeechConstant.TYPE_CLOUD); // use the cloud engine
    // Set where the synthesized audio is saved (the location can be customized); here it is saved to "./sdcard/iflytek.pcm".
    // Saving to the SD card requires adding the SD-card write permission in AndroidManifest.xml.
    // If you do not need to save the synthesized audio, skip this parameter.
    mTts.setParameter(SpeechConstant.TTS_AUDIO_PATH, "./sdcard/iflytek.pcm");

You can adjust the parameters above yourself. For details, refer to the official SDK documentation.

4. Set the listener

    /** Synthesis listener */
    private SynthesizerListener mSynListener = new SynthesizerListener() {
        // Playback-complete callback; error is null on success
        public void onCompleted(SpeechError error) {
            if (error != null) {
                Log.d("mySynthesiezer complete code:", error.getErrorCode() + "");
            } else {
                Log.d("mySynthesiezer complete code:", "0");
            }
        }
        // Buffering progress callback:
        // percent is the buffering progress (0 to 100), beginPos and endPos are the start and end
        // positions of the buffered audio within the text, and info is additional information
        public void onBufferProgress(int percent, int beginPos, int endPos, String info) {}
        // Playback starts
        public void onSpeakBegin() {}
        // Playback paused
        public void onSpeakPaused() {}
        // Playback progress callback:
        // percent is the playback progress (0 to 100), beginPos and endPos are the start and end
        // positions within the text of the audio being played
        public void onSpeakProgress(int percent, int beginPos, int endPos) {}
        // Playback resumed
        public void onSpeakResumed() {}
        // Session event callback
        public void onEvent(int arg0, int arg1, int arg2, Bundle arg3) {}
    };

The listener is required. Refer to the documentation for details on each callback; for plain text-to-speech playback you do not need to change anything.

5. Start playing

 mTts.startSpeaking(content, mSynListener);
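startSpeaking also returns an int status code that is worth checking. A small sketch under the assumption that your SDK version ships the ErrorCode class with a SUCCESS constant (if it does not, compare against 0 directly, which is the success value):

```java
// startSpeaking returns an error code; SUCCESS (0) means the request was accepted
int code = mTts.startSpeaking(content, mSynListener);
if (code != ErrorCode.SUCCESS) {
    Log.e("TTS", "startSpeaking failed, error code: " + code);
}
```

Any failure here (for example a wrong AppID or missing network permission) shows up as a nonzero code before the listener callbacks ever fire, so this check saves a lot of silent-failure debugging.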