First, the implementation principle: the phone captures audio, encodes it with Speex, and publishes its own audio stream live to Red5 through JUV; the peer's live stream is likewise played back through JUV, decoded with Speex, and sent to the speaker. The overall flow is: microphone capture -> Speex encode -> JUV publish to Red5 -> JUV play the peer's stream -> Speex decode -> speaker.
For the Android-side Speex capture/encoding and decoding/playback, refer to android-recorder. As for the Red5 client it uses, I took a look but could not make sense of it.
The JUV library is easy to get started with. It is a paid product, but there is a 30-day trial that can be applied for repeatedly, so it is fine for short-term use. There are also cracked versions circulating in China, you know.
The core code is as follows:
public class AudioCenter extends AbstractMicrophone
First, the audio-processing class extends AbstractMicrophone from the JUV library, so it can publish audio data to the Red5 server through the fireOnAudioData method.
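The two threads below reference several class members that the post never declares (frameSize, processedData, speex, isEncoding, isPlaying). Here is a minimal sketch of how they might look; the Speex wrapper type, the buffer size, and the frame size are my assumptions (160 samples is one 20 ms frame of 8 kHz narrowband Speex, as in android-recorder), not code from the original project:

import java.util.Vector;
// JUV and Speex imports are omitted: the post does not give their package names,
// so AbstractMicrophone and the Speex JNI wrapper are assumed to be on the classpath.

public class AudioCenter extends AbstractMicrophone {
    private volatile boolean isEncoding;   // loop flag for the encoding/upload thread
    private volatile boolean isPlaying;    // loop flag for the decoding/playback thread

    // One frame of 8 kHz narrowband Speex: 160 samples = 20 ms (assumption).
    private final int frameSize = 160;
    // Output buffer that speex.encode() writes the compressed frame into (size is an assumption).
    private final byte[] processedData = new byte[1024];
    // JNI wrapper around the Speex codec, as used in android-recorder (assumed encode/decode API).
    private final Speex speex = new Speex();

    // One-byte FLV/RTMP audio-info header, explained below.
    private final byte[] SpeexRtmpHead = new byte[] { (byte) 0xB2 };
    // Buffer of Speex frames pulled down by JUV, consumed by the playback thread.
    private final Vector<byte[]> encData = new Vector<byte[]>();

    // encSpeexAudio() and playSpeexAudio() follow below.
}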
public void encSpeexAudio() {
    new Thread(new Runnable() {

        @Override
        public void run() {
            // Minimum buffer size for 8 kHz, mono, 16-bit PCM capture
            int bufferSize = AudioRecord.getMinBufferSize(8000, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
            short[] mAudioRecordBuffer = new short[bufferSize];
            AudioRecord mAudioRecord = new AudioRecord(MediaRecorder.AudioSource.MIC, 8000, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, bufferSize);
            mAudioRecord.startRecording();
            int bufferRead = 0;
            int len;
            isEncoding = true;

            while (isEncoding) {
                // Read one Speex frame worth of PCM samples (frameSize is a class field)
                bufferRead = mAudioRecord.read(mAudioRecordBuffer, 0, frameSize);

                if (bufferRead > 0) {
                    try {
                        // Encode the PCM frame into processedData, then prepend the one-byte RTMP audio header
                        len = speex.encode(mAudioRecordBuffer, 0, processedData, frameSize);
                        // LogHelper.d(subTAG, "EncSpeexAudio " + len);
                        byte[] speexData = new byte[len + 1];
                        System.arraycopy(SpeexRtmpHead, 0, speexData, 0, 1);
                        System.arraycopy(processedData, 0, speexData, 1, len);
                        // Publish the tagged Speex frame to Red5 (20 ms per frame)
                        fireOnAudioData(new MediaDataByteArray(20, new ByteArray(speexData)));
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }
            }
            mAudioRecord.stop();
            mAudioRecord.release();
            mAudioRecord = null;
        }
    }, "EncSpeexAudio Thread").start();
}
The above is the encoding/upload thread. One key technical point, SpeexRtmpHead, puzzled me for a long time, mainly because when I first got into the RTMP protocol my understanding stopped at the API-call level.
private byte[] SpeexRtmpHead = new byte[] { (byte) 0xB2 };
We first need to know that "the audio and video data carried in RTMP packets is encapsulated in the same way as audio and video data in FLV."
The FLV Audio tag data area is laid out as follows:

audio info (1 byte):
    upper 4 bits - SoundFormat:
        0  = Uncompressed
        1  = ADPCM
        2  = MP3
        3  = Linear PCM, little endian
        4  = Nellymoser 16 kHz mono
        5  = Nellymoser 8 kHz mono
        6  = Nellymoser
        7  = G.711 A-law logarithmic PCM
        8  = G.711 mu-law logarithmic PCM
        9  = reserved
        10 = AAC
        11 = Speex
        14 = MP3 8 kHz
        15 = Device-specific sound
    next 2 bits - SoundRate (sample rate):
        0 = 5.5 kHz
        1 = 11 kHz
        2 = 22 kHz
        3 = 44 kHz
    next 1 bit - SoundSize (sample size):
        0 = snd8Bit
        1 = snd16Bit
    next 1 bit - SoundType:
        0 = sndMono
        1 = sndStereo

audio data (variable length):
    if SoundFormat == 10: AACAUDIODATA
    otherwise: sound data, format-specific
From the FLV AudioTag layout we can see that one byte of audio info precedes the data area. We use Speex encoding, 8 kHz sampling, 16-bit samples, mono, so the header byte is: SoundFormat 11 (1011, Speex), SoundRate bits 00 (the FLV spec fixes this field to 0 for Speex), SoundSize 1 (16-bit), SoundType 0 (mono), giving 1011 0010, i.e. 0xB2. Prepend this byte to every data chunk and publish it through fireOnAudioData, and what goes up to the server is standard RTMP audio data. At this point you can already hear sound when testing with red5-publisher. If you are not sure how to use red5-publisher, see my previous post on Red5 configuration.
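To make the bit layout concrete, here is a small standalone illustration (class and method names are mine, not from the project) that assembles the audio-info byte from the four fields in the table above:

// Illustration only: assemble the one-byte FLV/RTMP audio-info header
// from the four bit fields described in the table above.
public class AudioInfoByte {

    static byte audioInfoByte(int soundFormat, int soundRate, int soundSize, int soundType) {
        return (byte) ((soundFormat << 4) | (soundRate << 2) | (soundSize << 1) | soundType);
    }

    public static void main(String[] args) {
        // Speex = 11 (1011), rate bits = 0 (the FLV spec fixes SoundRate to 0 for Speex),
        // 16-bit samples = 1, mono = 0  ->  1011 0010 = 0xB2
        byte speexHead = audioInfoByte(11, 0, 1, 0);
        System.out.printf("0x%02X%n", speexHead & 0xFF); // prints 0xB2
    }
}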
public void playSpeexAudio() {
    new Thread(new Runnable() {
        @Override
        public void run() {
            short[] decData = new short[256];
            AudioTrack audioTrack;
            // Minimum buffer size for 8 kHz, mono, 16-bit PCM playback
            int bufferSizeInBytes = AudioTrack.getMinBufferSize(8000, AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);
            audioTrack = new AudioTrack(AudioManager.STREAM_VOICE_CALL, 8000, AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT, 2 * bufferSizeInBytes, AudioTrack.MODE_STREAM);
            audioTrack.play();
            isPlaying = true;
            while (isPlaying) {
                // Drain the buffer of Speex frames pulled down by JUV
                while (encData.size() > 0) {
                    byte[] data = encData.elementAt(0);
                    encData.removeElementAt(0);
                    int dec;
                    // Decode one Speex frame into 16-bit PCM samples
                    dec = speex.decode(data, decData, data.length);
                    // LogHelper.d(subTAG, "playSpeexAudio " + dec);
                    if (dec > 0) {
                        audioTrack.write(decData, 0, dec);
                    }
                }
                try {
                    Thread.sleep(10);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
            audioTrack.stop();
            audioTrack.release();
            audioTrack = null;
        }
    }, "PlaySpeexAudio Thread").start();
}
The above is the audio decoding/playback thread. There is nothing difficult here; just note that I use a Vector to buffer the Speex audio data that JUV pulls down:
private Vector<byte[]> encData = new Vector<byte[]>();
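For completeness, here is a rough sketch of how the receive side might fill this buffer. The callback name is hypothetical (wire it to wherever your JUV stream listener hands you the audio payload); the only assumption about the payload is the one made above, namely that its first byte is the audio-info header:

// Sketch only: strip the one-byte audio-info header and queue the Speex frame
// for playSpeexAudio(). "onAudioData" is a hypothetical name, not a JUV callback.
private void onAudioData(byte[] rtmpAudioPayload) {
    if (rtmpAudioPayload == null || rtmpAudioPayload.length < 2) {
        return; // nothing to decode
    }
    if ((rtmpAudioPayload[0] & 0xFF) != 0xB2) {
        return; // not a Speex frame with the header we expect
    }
    byte[] speexFrame = new byte[rtmpAudioPayload.length - 1];
    System.arraycopy(rtmpAudioPayload, 1, speexFrame, 0, speexFrame.length);
    encData.add(speexFrame); // consumed by the playback thread
}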
That is it for capture, encoding, decoding, and playback. The next post will cover the JUV connection and data transfer.
If you have any questions, feel free to get in touch.
Comments:

#3  LLz.  2014-01-10 11:19
Hello, could you share a link to the code so I can study it?

Re: LLz.  2014-01-13 13:56
Reply to kongbaidepao: I don't know how to solve the echo problem with this.
#2  whhpc19891120  2013-04-16 10:34
while (!isRecording) {
    // Read from the recorder; returns the number of shorts read
    int tmp = record.read(buffer, 0, framesize);
    Log.d("tmp:", "" + tmp);
    if (tmp > 0) {
        final long ctime = System.currentTimeMillis();
        // Speex-encode the captured data
        int len = speex.encode(buffer, 0, data, framesize);
        Log.d("len:", len + "");
        byte[] speexData = new byte[len + 1];
        byte[] SpeexRtmpHead = new byte[] { (byte) 0xB6 };
        System.arraycopy(SpeexRtmpHead, 0, speexData, 0, 1);
        System.arraycopy(data, 0, speexData, 1, len);
        fireOnAudioData(new MediaDataByteArray(20, new ByteArray(speexData)));
        final int spent = (int) (System.currentTimeMillis() - ctime);
        Log.d("time spent", " " + spent);
    }
}
#1  whhpc19891120  2013-04-16 10:34
Hello, I have also been learning streaming media services recently. Following your approach I get no sound: Android publishes to an FMS server, which relays the stream to a Flex client for playback.
// Get a suitable buffer size from the predefined configuration
int bufferSize = AudioRecord.getMinBufferSize(frequence, channelConfig, audioEncoding);
Log.d("bufferSize", "" + bufferSize);
// Instantiate AudioRecord
AudioRecord record = new AudioRecord(source, frequence, channelConfig, audioEncoding, bufferSize);
// Allocate the buffers
buffer = new short[bufferSize];
data = new byte[size_bufferencode];
// Start recording
Log.d(AUDIO, "Start recording");
record.startRecording();