I'm trying to use 'ippsResamplePolyphaseFixed' based on the sample code I found here. Most of it makes sense and I have things almost working. The problem (in addition to the outdated API used in the sample code) is that I am working on sample frames processed in real time rather than file I/O. That is, I am not reading a chunk from a file and resampling it; I am reading a small chunk from a media stream (roughly 60 msec at 11 kHz) and converting it to an even smaller chunk (8 kHz). The example code uses a base buffer size of 4096 samples and a history of 128 samples . . .
Where do these values come from? If I am providing input chunks of 660 samples, should the buffer size / history values change, or do I need to accumulate more samples in the buffer before trying to extract the converted samples? The end result with my current implementation is that about 90% of each chunk is converted properly, but the remaining roughly 10% of each 60 msec chunk comes out as trailing silence.
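For reference, here is the back-of-envelope arithmetic behind those numbers. This is only a sketch of one possible explanation, not IPP's documented behavior: it assumes the polyphase filter withholds roughly half the history (as lookahead context for the next call) and that I'm resampling 11025 Hz to 8000 Hz. The sample counts and the half-history assumption are mine, not from the IPP docs.

```python
in_rate, out_rate = 11025, 8000   # stream rate and target rate
chunk = 660                        # ~60 ms of input at 11.025 kHz
history = 128                      # history size from the sample code

# Output samples a full chunk should yield after rate conversion.
full_out = chunk * out_rate // in_rate            # 478

# Hypothesis: the resampler holds back history/2 samples of lookahead,
# so only (chunk - history/2) input samples are consumed per call.
lookahead = history // 2                          # 64 (assumed symmetric filter)
consumable = chunk - lookahead
partial_out = consumable * out_rate // in_rate    # 432

# The shortfall is what would show up as trailing silence if the
# output buffer is sized for the full chunk.
shortfall = full_out - partial_out                # 46 samples
print(full_out, partial_out, round(100 * shortfall / full_out))  # ~10% short
```

A shortfall of about 46 of 478 samples is close to the 10% of trailing silence I'm seeing, which is why I suspect the history/lookahead handling rather than the conversion itself.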