AEC Example

First, sorry for my English.

I have been testing the AEC example. I want to probe the effect of loudspeaker gain management. I left the left channel empty and put a voice in the right channel. I ran the example (with the SIMULATE_ROOM option) and obtained a wave very similar to the right channel. Is this correct? I think not.

Thanks, Le-Chuck



First of all, which IPP sample do you use? Is it the sample from IPP for PCA?


This is a quote from the Audio-Processing description:

Normally, the example expects the stereo input file to contain the loudspeaker signal on the left channel and the microphone signal in the right channel

This example removes feedback from loudspeaker to microphone.

For the current input the loudspeaker signal is empty, so the result is nearly the same signal as in the right channel.

Because the SIMULATE_ROOM option is enabled, there is probably a slight difference between the signal in the right channel and the output signal.

Another quote from Audio-Processing description:

When room simulation is enabled, the right channel of the input file is convolved with the impulse response of an actual room and added to the left channel of the input file to form a simulated microphone signal

So with this option, for the current input, the output signal is formed from the right channel itself plus the convolved right channel, and it will therefore be similar to the right channel.
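The quoted mixing can be sketched roughly as follows (an illustrative guess at the processing, not the sample's actual code; every identifier here is invented):

```c
/* Hypothetical sketch of the SIMULATE_ROOM step: the right (loudspeaker)
   channel is convolved with a room impulse response and added to the left
   channel to form the simulated microphone signal. */
void simulate_room(const short *left, const short *right, int nSamples,
                   const float *roomIr, int irLen, short *mic)
{
    for (int n = 0; n < nSamples; n++) {
        float echo = 0.0f;
        for (int k = 0; k < irLen && k <= n; k++)
            echo += roomIr[k] * (float)right[n - k];  /* convolution */
        mic[n] = (short)((float)left[n] + echo);      /* add echo to left */
    }
}
```

With the left channel zeroed, mic[] is just the convolved right channel, which is why the output resembles the right channel.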

Because of this, I think that the described behavior is correct.



Here is the answer from the developers:

1. You are right about the channels; there are errors in the documentation. The description inside the AEC_exmp file is correct: the left channel is the microphone and the right channel is the loudspeaker.

In the previous answer I quoted text from Audio-Processing.txt where there are mistakes with the channels. I'll fix this text ASAP.

2. Now, about your situation.

I tried running this example twice in SIMULATE_ROOM mode: first with the standard input AECtest_sim.pcm (I hope that this file is in the package) and then with the same input file but with the left channel zeroed.

In the first case I got a loud male voice (the left channel) and a slowly disappearing female voice (the right channel).

In the second case I got silence (the left channel) and, again, a slowly disappearing female voice (the right channel).

Yes, in the ideal case you would obtain a simply empty channel (silence), but in reality there is some residual noise from the right channel that slowly disappears with time and usually never reaches exactly zero.

So if in your case you didn't get an empty channel as the output, that is OK, because this example is not ideal.

In any case, for your input the output signal should become quieter than the original right channel over time, and you should be able to hear this difference.

If not, then possibly there are problems in this example or in the input file, and in that case it requires further inquiry.


Hi, currently we do not redistribute our media materials for legal reasons. The best way, I think, is to contact the technical support team; I hope they can help you.


Dear Vladimir,

Can you please post any WAV sample that you used on which you can hear that AEC is really being performed?

There is no need to send media materials with legal restrictions; your own voice is fine.

I have been working on this for more than a month, and it seems that I get exactly the same result as Mr. Le-Chuck does.

I put the speaker in the right channel and mic + speaker echo in the left, and the result is the left channel, echo included. No AEC was really performed. I tried putting the speaker in the left and mic + echo in the right: same thing. No AEC.

So, if you can, please post here an exe with a WAV to show that the Intel example really works.

We have worked on this for more than a month and tried every option; it just seems to be a plain bug at Intel: IPP AEC simply does not work. We run it on a P4 with Windows XP.

Thank you for your help.



I have the same problem too. I put the speaker in the right channel (reference channel) and mic + speaker echo in the left channel (mic channel), and the result still has echo, so the AEC algorithm does not seem to work.

If I invert the two channels, I have no sound at all in the output.

Has anybody succeeded in using the AEC algorithm?

Guillaume Tamisier

I have the same problem too. I put the speaker in the right channel (reference channel) and mic + speaker echo in the left channel (mic channel), and the result still has echo, so the AEC algorithm does not seem to work.

But if I change BYTES_PER_SAMPLE from 2 to 1, it works; it generates a file with no echo at all. I think that is unreasonable, though.

The file I used to test was an 8 kHz, 16-bit, stereo PCM.


Hi to all who are in the AEC topic here.
First, I definitely would not recommend that you touch the BYTES_PER_SAMPLE definition; leave it equal to 2, as it must be for linear PCM. The AEC sample supports only 2 bytes per sample.
Second, regarding the question of whether the Intel IPP 4.1 AEC sample actually removes an echo or not: yes, it does, but only small echo tails, less than 16 ms, and it definitely does not yet fully comply with either G.168 or G.167.
The Intel IPP team continues working on AEC, and the next version of IPP (coming soon) will enhance the AEC sample substantially.
And you may help us do this even better.
Could you please attach some signals on which you feel the Intel IPP AEC does not work properly?

Thank you in advance.

Yes, I think you are right; I can make it work without changing that constant.
From the code, we can see that changing that constant just made the app read less data than before.
I just made it read half as much data as before; the code I changed is below:
cBytesRead = fread(pReadBuf, 1, cStereoBufSize/4, pInputFile);
if (cBytesRead < cStereoBufSize/4)
    cBytesWritten = fwrite(tempError, 1, FRAME_BYTES/4, pOutputFile);

With these changes, the app can remove all the echo and generate a perfect file. I don't know why I have to do this.


Doing so, you also have to zero the second half of pReadBuf:


After your modification only the first 64 stereo samples are read, so the last 64 have to be zero-padded to make sure they are not set randomly.
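The zero-padding itself could be as simple as this (a sketch; pReadBuf and the size names follow the posts above, but the real sample's types may differ):

```c
#include <string.h>
#include <stddef.h>

/* After reading only the first half of the stereo buffer, clear the second
   half so the unread samples are silence rather than leftover memory. */
static void pad_second_half(char *pReadBuf, size_t cStereoBufSize, size_t cBytesRead)
{
    if (cBytesRead < cStereoBufSize)
        memset(pReadBuf + cBytesRead, 0, cStereoBufSize - cBytesRead);
}
```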

And OK, generally speaking, adaptation of a 128-tap NLMS (16 ms echo tail) on a half-zero-padded 128-sample frame is equivalent to adaptation of a 64-tap NLMS (8 ms echo tail) on a 64-sample frame without zero padding.

So, the code modified that way may work OK for echo tails < 8 ms.
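For readers unfamiliar with NLMS, the adaptation being discussed has roughly this shape (a generic textbook NLMS step, not the IPP implementation):

```c
/* Generic NLMS echo-canceller step, for illustration only.
   x: last `taps` reference (loudspeaker) samples; w: adaptive filter taps;
   d: microphone sample; mu: step size.
   Returns the error (echo-cancelled) sample. */
static float nlms_step(const float *x, float *w, int taps, float d, float mu)
{
    float y = 0.0f, power = 1e-6f;       /* small constant avoids divide by 0 */
    for (int k = 0; k < taps; k++) {
        y += w[k] * x[k];                /* estimated echo */
        power += x[k] * x[k];            /* reference power for normalization */
    }
    float e = d - y;                     /* echo-cancelled output */
    for (int k = 0; k < taps; k++)
        w[k] += mu * e * x[k] / power;   /* normalized tap update */
    return e;
}
```

At 8 kHz, 128 taps cover 128/8000 s = 16 ms of echo tail; zeroing the last 64 reference samples effectively halves that, hence the < 8 ms remark.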



I used the same input file for testing. The problem is: if I read 128 samples at a time, the output file still has echo in it and sounds like the input file, but if I read 64 samples at a time, the output file is a perfect one, with no echo at all. If the adaptation for the 128-tap NLMS worked, I think it should generate the same output file, but it doesn't. So my conclusion is that the adaptation for the 128-tap NLMS doesn't work; do you agree?

The IPP 4.1 AEC32f (floating-point) sample has been successfully tested on a variety of input files on IA-32. If you still think there is a problem with the sample, it would be better to address the issue to Intel Premier Support.

Thanks and regards

I noticed that there is a note in the sample that says:
/* Note: in a real-time streaming implementation, the loudspeaker signal would be routed to the D/A converter here. */

But I still don't know exactly what I should do in a real-time streaming implementation. Must I write a D/A converter? Vbaranni, can you explain in detail?

The note is mislocated: it should be moved one line down, right above the fwrite() statements. So it just marks the point where you can deliver the "echo free" 16-bit audio to a playout device (equipped with a D/A, of course) instead of writing it to a file.
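In other words, at the point the note marks you could hand the frame to a playback device instead of fwrite(). A minimal sketch, with the playback call stubbed out because the real API is platform-specific (waveOutWrite on Win32, snd_pcm_writei on ALSA, and so on):

```c
#include <stdio.h>
#include <stddef.h>

/* Stub standing in for the platform audio-out call; here it only counts
   samples so the sketch stays self-contained. */
static size_t g_played = 0;
static void play_to_dac(const short *samples, size_t count)
{
    (void)samples;
    g_played += count;
}

/* Where the AEC sample does fwrite(tempError, ...), a real-time build
   would route the same echo-cancelled frame to the D/A instead. */
static void emit_frame(const short *tempError, size_t frameSamples, FILE *pOutputFile)
{
    if (pOutputFile != NULL)
        fwrite(tempError, sizeof(short), frameSamples, pOutputFile); /* offline */
    else
        play_to_dac(tempError, frameSamples);                        /* real time */
}
```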

Is there any article on how to use this EC in a real-time streaming environment, or on how to synchronize the two input signals of this EC in a real-time streaming environment? I really need that.

I've been working with the Intel AEC code too, and I think that Intel's source code has a bug. Specifically, the problem is in the part that reads new data and builds a new input block from the new data and the previous input block.

I've fixed this bug by providing only the new input data to the AEC_Process function and making the new block inside the function. I don't have the code at hand, but I can provide more details if necessary.
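The poster did not share the code, but a fix of that shape might look something like this (purely illustrative; the block size and buffer handling are assumptions):

```c
#include <string.h>

#define BLOCK 128  /* assumed processing block size in samples */

/* Keep the overlap inside the processing function, so the caller passes
   only the newly captured samples each time; the function rebuilds the
   full block from the tail of the previous one plus the new data. */
static short g_block[BLOCK];

void aec_submit_new_data(const short *newData, int newSamples)
{
    memmove(g_block, g_block + newSamples,
            (BLOCK - newSamples) * sizeof(short));    /* shift old tail down */
    memcpy(g_block + (BLOCK - newSamples), newData,
           newSamples * sizeof(short));               /* append the new data */
    /* ... run the AEC on g_block here ... */
}
```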

Hope this helps,


>how to synchronize the two input signals of this EC in a real-time streaming environment?

If you're working on a non-realtime platform, good luck with that. I've had the exact same problem, and getting the two signals to be more or less synchronized was a real pain. :) I can't provide any details (my company wants to patent my way of doing this), but if you use callback functions to acquire and play data, you'll need to keep close track of the order of the events and use flags to force the order in certain cases.
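This is not the poster's (patented) approach, but one generic way to enforce callback ordering with a flag looks like this:

```c
#include <stdatomic.h>

/* A flag forces the playback side to run only after the capture callback
   has delivered a fresh frame, keeping the two streams in order. */
static atomic_int g_frame_ready = 0;

void on_capture(void)
{
    /* ... copy the mic + reference samples into a shared frame buffer ... */
    atomic_store(&g_frame_ready, 1);
}

int on_playback(void)
{
    if (!atomic_exchange(&g_frame_ready, 0))
        return 0;  /* capture hasn't fired yet: emit silence and wait */
    /* ... run the AEC on the shared frame and queue the result ... */
    return 1;      /* a frame was consumed in order */
}
```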

On Linux platforms using ALSA audio drivers there is a way to configure an AC'97-compatible mixer to deliver the output mix on the left input channel and the microphone input on the right. It's ideal for AEC... I haven't yet found a way to do this on the Win32 platform.

Hi all,

In the AEC example, I found the following:
#define DELAY_SAMPLES 0 /* number of samples of delay in reference channel */

So when is this value different from 0, and in which situation?
Thanks for your instruction.
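For context, DELAY_SAMPLES shifts the reference (loudspeaker) channel so it lines up with the echo path; a nonzero value would typically compensate for known playback/capture buffering latency. A delay line of that kind might be sketched as (illustrative, not the sample's code):

```c
#include <string.h>

#define DELAY_SAMPLES 32  /* example value; the shipped sample uses 0 */

static short g_dline[DELAY_SAMPLES];

/* Push one reference sample in, get the sample from DELAY_SAMPLES ago out. */
short delay_reference(short x)
{
    short y = g_dline[0];                                      /* oldest out */
    memmove(g_dline, g_dline + 1, (DELAY_SAMPLES - 1) * sizeof(short));
    g_dline[DELAY_SAMPLES - 1] = x;                            /* newest in */
    return y;
}
```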

Hi all,

I have been working with the AEC of IPP for 2 weeks.
It does not work at all.
So if any of you would like to get the source code to test,
feel free to contact me.

The AEC of IPP is dead.

Dear thales_oliver:

Could you provide the details about the bug you solved?

Thank you

