
Implementation of Pepper 2.9 Tablet Microphone Speech Recognition

1. Install Android Studio Development Software

Install the Android Studio development environment appropriate for your computer's operating system.

Note: For the tablet's current Android version, the latest installation package can be used, but for better compatibility with Pepper SDK development it is recommended to use the Bumblebee version of Android Studio.


2. SDK Integration Guide

2.1 SpeechDemo Running Steps

SoftBank Robotics provides SpeechDemo as a packaged Android project. Download and import this project; it contains a simple, runnable demo.

[Figure 1]

After downloading the SDK, unzip it to a suitable path. The following steps use the Android Studio IDE as an example; it is recommended to test directly on a real device.


Method 1 (import project):

Open Android Studio, choose File ---> New ---> Import Project from the menu bar, navigate to the path of the unzipped SDK, and select the SpeechDemo project (the online-service sample) to import, as shown below:

[Figure 2]

[Figure 3]

After the import succeeds, run the imported SpeechDemo directly in Android Studio. The generated APK can be installed directly on the corresponding Pepper robot, as shown in the following figure:

[Figure 4]

If the build fails with the error "ERROR: Plugin with id 'com.android.application' not found.", add the following code to your project-level build.gradle file:

"""

       buildscript {

                repositories {

                          google()

                          jcenter()

                }

         dependencies {

                  //Please change the version number according to your own gradle plugin version number

                   classpath 'com.android.tools.build:gradle:3.4.0'

                   // NOTE: Do not place your application dependencies here; they belong

                   // in the individual module build.gradle files

                }

}     

"""

 

Method 2 (module import):


Open Android Studio, choose File ---> New ---> Import Module from the menu bar, navigate to the path of the unzipped SDK, and select the SpeechDemo module (the online-service sample) to import. After the import succeeds, run a Gradle sync. If it compiles without errors, connect the device, enable USB debugging mode on the Pepper side, and run the imported SpeechDemo directly in Android Studio. The generated APK can then be installed directly on the corresponding device.
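
If you prefer to install the generated APK from a command line rather than through Android Studio's Run action, a typical adb session looks like the sketch below; the APK path shown is the default debug output path and may differ in your project.

"""
# Confirm that the Pepper tablet is visible over adb
adb devices
# Install (or reinstall with -r) the APK produced by the build
adb install -r app/build/outputs/apk/debug/app-debug.apk
"""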


2.2 Project integration steps

2.2.1 SDK package description

Android SDK directory structure overview:

  • manifests
    • Android application permission configuration file
  • sample
    • demos of the related online capabilities (e.g. the voice dictation demo IatDemo)
  • assets
    • SDK-related resource configuration files
  • libs
    • dynamic libraries (.so) and jar packages
  • res
    • UI files and related xml layout files
  • readme file (must read)
  • release notes

 

2.2.2 Import SDK

Copy all files under the libs directory of the Android SDK archive into the libs directory of the Android project, as shown in the figure below:

[Figure 5]

Note:

1. The armeabi version has been phased out; for the ARM architecture it is recommended to use armeabi-v7a.

2. If you need to push the application to the device, push the libmsc.so matching the instruction set of the device CPU to /system/lib.

3. To integrate into your own project, copy the files under Demo/src/main/ in the SDK to the project's main directory. Taking Android Studio as an example, create a jniLibs folder under the project's main folder and copy libmsc.so into it (see the Gradle sketch after this list).

4. Copy msc.jar into the project's libs directory, then right-click the jar and choose Add As Library.

5. The main/assets/ folder in the SDK contains the SDK's own UI pages (the iflytek folder) and other related service resource files (grammar files, audio samples, vocabularies). If you use the built-in UI, copy the assets/iflytek folder into the project.
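
For reference, the sketch below shows one way to make the copied libraries visible to the build from the module-level build.gradle; the directory names follow the layout described above (the jar in libs, the .so files in libs or src/main/jniLibs) and should be adjusted to your own project.

"""
// Module-level build.gradle (a minimal sketch; adjust paths to your own layout)
android {
    sourceSets {
        main {
            // If the .so files were copied into the module's libs directory (Figure 5),
            // point jniLibs there; files under src/main/jniLibs are picked up by default.
            jniLibs.srcDirs = ['libs']
        }
    }
}

dependencies {
    // msc.jar copied into the module's libs directory (see note 4 above)
    implementation files('libs/msc.jar')
}
"""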

 

2.2.3 Add user permissions

Add the following permissions to the project's AndroidManifest.xml file:


"""

<!--Connect to the network, used to execute cloud voice capabilities -->

<uses-permission android:name="android.permission.INTERNET"/>

<!--Get the permission to use the mobile phone recorder. This permission is required for dictation, recognition, and semantic understanding -->

<uses-permission android:name="android.permission.RECORD_AUDIO"/>

<!--Read network information status -->

<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE"/>

<!--Get the current wifi status -->

<uses-permission android:name="android.permission.ACCESS_WIFI_STATE"/>

<!--Allow the program to change the network connection status -->

<uses-permission android:name="android.permission.CHANGE_NETWORK_STATE"/>

<!--Read mobile phone information permission -->

<uses-permission android:name="android.permission.READ_PHONE_STATE"/>

<!--Read contacts permission, this permission is required to upload contacts -->

<uses-permission android:name="android.permission.READ_CONTACTS"/>

<!--External storage write permission, this permission is required to build the syntax -->

<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>

<!--External storage read permission, this permission is required to build the syntax -->

<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE"/>

<!--Configuration permissions, used to record application configuration information -->

<uses-permission android:name="android.permission.WRITE_SETTINGS"/>

<!--Mobile phone location information is used to provide location for semantic functions and provide more accurate services-->

<!--Location information is sensitive information, and you can turn off the location request through Setting.setLocationEnable(false) -->

<uses-permission android:name="android.permission.ACCESS_FINE_LOCATION"/>

<!--If you need to use face recognition, you also need to add: camera permissions, which are required for taking photos -->

 

<uses-permission android:name="android.permission.CAMERA" />

"""

Note: If obfuscation is enabled when packaging or generating the APK, add the following rules to proguard.cfg:


"""

-keep class com.iflytek.**{*;}

-keepattributes Signature

"""


2.2.4 Initialization

Initialization creates the voice configuration object; the various MSC services can be used only after initialization. It is recommended to place the initialization at the program entry point (for example, the onCreate method of your Application or Activity). The initialization code is as follows:

"""

// Replace "12345678" with the APPID you applied for, application address: http://www.xfyun.cn

// Do not add any empty characters or escape characters between "=" and appid

SpeechUtility.createUtility(context, SpeechConstant.APPID +"=12345678");

"""

