clarification sought re. DenoiseCast

Hi,
I need some explanation regarding the interpretation of the contents of pSrcEdge pointer argument to ippiDenoiseCAST filter. Should it point to an edge image? The documentation says "edge detection filtered image", but I am not able to understand if it should point to an image that has just been filtered to make edge extraction easy (like smoothed version etc) or the actual edges have to be calculated and passed to the function?
Thanks,
Sid.


Good day. Sorry for the delay. You can use pSrcEdge to pass a filtered version of the source image to the function. The filtered image can decrease the blur effect on edges. However, you can use any image as the edge-filtered one; there is no specific edge detection algorithm you are required to use. You can set StrongEdgeThreshold = 255 in the parameters structure and the edge image will be ignored. You can also tune blur/denoise by tweaking the NonEdgePixelWeight and EdgePixelWeight parameters. Example (ABS(verSobel) + ABS(horSobel)):

                ippiFilterSobelVert_8u16s_C1R(pData, iDataStep, dx, dxStep, size, ippMskSize3x3);
                ippiFilterSobelHoriz_8u16s_C1R(pData, iDataStep, dy, dyStep, size, ippMskSize3x3);
                ippiAbs_16s_C1IR(dx, dxStep, size);
                ippiAbs_16s_C1IR(dy, dyStep, size);
                ippiAdd_16s_C1IRSfs(dx, dxStep, dy, dyStep, size, 0);
                ippiConvert_16s8u_C1R(dy, dyStep, pEdgeData, iEdgeDataStep, size);

                ippiFilterDenoiseCAST_8u_C1R(pData, NULL, iDataStep, 
                    pEdgeData, iEdgeDataStep,
                    size,
                    pDataOut, iDataOutStep, NULL, &param);
Have a nice day.

Quoting Pavel V.Vlasov (Intel)...
You can use pSrcEdge to pass a filtered version of the source image to the function. The filtered image can decrease the blur effect on edges. However, you can use any image as the edge-filtered one; there is no specific edge detection algorithm you are required to use. You can set StrongEdgeThreshold = 255 in the parameters structure and the edge image will be ignored.
...

Thank you, Pavel. Finally explained.

Hi Pavel,
Sorry for the late thanks, but I have asked this question so many times before without getting a reply, that I had almost lost hope. So, many thanks for the clarification. It really helps a lot.
Sid.

Hi Pavel,
Thanks. This really helped. Can you also clarify how the other fields in the params structure affect the whole filtering process? I have been able to reduce some blurring due to an apparent motion induced by features that change fast in the temporal direction. Can I control them further by playing around with some of the settings in the params structure?
In addition, if you could also point me to some relevant publications related to the method, that would help too.
Thanks,
Sid.

Hi Sid,

TemporalDifferenceThreshold and NumberOfMotionPixelsThreshold are used to determine whether a block is a static or a motion one: the block is considered a motion one if the number of pixels whose values differ from the value of the co-located pixel in the previous frame by more than TemporalDifferenceThreshold exceeds NumberOfMotionPixelsThreshold.
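The internal implementation of DenoiseCAST is not public, but the rule above can be sketched in plain C. The 8x8 block size and the strict comparisons are assumptions; only the classification rule itself comes from the explanation:

```c
#include <stdlib.h>

/* Hypothetical sketch of the static/motion block test: count the pixels
   whose value differs from the co-located pixel in the previous frame by
   more than TemporalDifferenceThreshold, and call the block a motion one
   if that count exceeds NumberOfMotionPixelsThreshold.
   BLOCK_W/BLOCK_H = 8 is an assumption. */
#define BLOCK_W 8
#define BLOCK_H 8

int is_motion_block(const unsigned char *cur, const unsigned char *prev,
                    int step, int tempDiffThr, int numMotionPixThr)
{
    int motionPixels = 0;
    for (int y = 0; y < BLOCK_H; ++y)
        for (int x = 0; x < BLOCK_W; ++x)
            if (abs(cur[y * step + x] - prev[y * step + x]) > tempDiffThr)
                ++motionPixels;
    return motionPixels > numMotionPixThr;
}
```

Lowering either threshold makes more blocks count as motion blocks, which (per the discussion below) shifts the filter from temporal toward spatial denoising.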

GaussianThreshold selects the spatially adjacent pixels involved in the smoothing of the current pixel: only the neighbours whose values differ from that of the current pixel by less than GaussianThreshold participate in the smoothing.
In ippiFilterDenoiseCAST_8u_C1R(), solely GaussianThresholdY is employed, while in ippiFilterDenoiseCASTYUV422_8u_C2R(), GaussianThresholdY is used for luma and GaussianThresholdUV for chroma.
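The neighbour-selection rule can be illustrated with a minimal sketch. The equal-weight 3x3 average is an assumption made for brevity; the real filter presumably applies Gaussian weights to the selected neighbours:

```c
#include <stdlib.h>

/* Hypothetical sketch of GaussianThreshold gating for one interior pixel:
   average the 3x3 neighbourhood, but let a neighbour participate only if
   it differs from the centre pixel by less than the threshold.
   Equal weighting (instead of Gaussian) is a simplification. */
unsigned char smooth_pixel(const unsigned char *img, int step,
                           int x, int y, int gaussianThr)
{
    int centre = img[y * step + x];
    int sum = centre, count = 1;
    for (int dy = -1; dy <= 1; ++dy)
        for (int dx = -1; dx <= 1; ++dx) {
            if (dx == 0 && dy == 0) continue;
            int v = img[(y + dy) * step + (x + dx)];
            if (abs(v - centre) < gaussianThr) { sum += v; ++count; }
        }
    return (unsigned char)((sum + count / 2) / count);  /* rounded mean */
}
```

A small threshold keeps an outlier neighbour (e.g. an edge pixel) out of the average, which is exactly what protects edges from being smeared.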

HistoryWeight is the weight of the previous frame in the temporal denoising applied to the pixels of static blocks. If the function is called with pSrcPrev == NULL and pHistoryWeight != NULL, the per-block weights pointed to by pHistoryWeight are initialized to HistoryWeight (and further updated at calls with pSrcPrev != NULL).
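As a rough illustration, temporal denoising of a static-block pixel can be written as a weighted blend of the current pixel and the co-located pixel of the previous (denoised) frame. The 0..255 fixed-point scale and the rounding are assumptions; the actual arithmetic inside the function is not documented:

```c
/* Hypothetical sketch: blend the current pixel with the previous frame's
   pixel using HistoryWeight as a fixed-point factor in [0..255].
   out = (w * prev + (255 - w) * curr) / 255, rounded. */
unsigned char temporal_blend(unsigned char curr, unsigned char prevOut,
                             int historyWeight /* 0..255 */)
{
    return (unsigned char)((historyWeight * prevOut +
                            (255 - historyWeight) * curr + 127) / 255);
}
```

With historyWeight = 0 the output is just the current pixel (no temporal smoothing); the closer it gets to the maximum, the more the output follows the previous frame.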

As for the publications, the algorithm was developed at Intel, and I don't think that the detailed description is publicly available.

Thanks,
Timofei

Hi Timofei,
Thanks. Your explanation did iron out some kinks in my understanding of the function. Could I also ask, since you mentioned that the method was developed at Intel, if there are any IP restrictions on using this function in commercial software?
Thanks again,
Sid.

Also, should the TemporalDifferenceThreshold be interpreted as an abs(difference) to account for both advancing and receding features in the temporal direction?

Could you, or possibly someone from Intel, complete the picture by explaining
1) StrongEdgeThreshold
2) EdgePixelWeight
3) NonEdgePixelWeight.

Does the StrongEdgeThreshold divide the pixels into edge and non-edge pixels? Also, I am not very clear on the "weight" in the function: does it somehow control how much of the previous frame will be used to construct the current frame? With all parameters fixed, I have tried playing around with the edge and non-edge weights, but I do not see any difference in the output.

Thanks,
Sid.

To further clarify:

if I am processing a sequence of frames with this filter, of which ..... F(n-1), F(n), F(n+1) ... is a current context, is the following correct:
1) pSrcCurr = F(n),
2) pSrcPrev = CAST(F(n-1))
3) pSrcEdge = convolve(G, F(n)), where G is a gaussian kernel or pSrcEdge = sobel(F(n))

Thanks,
Sid

Hi Sid,

To the best of my belief, there are no restrictions on using IPP functions.

Thanks,
Timofei

Yes, it is an absolute value, to account for both an increase and a decrease in luminance/chrominance.
Timofei

1) StrongEdgeThreshold: pixel i is considered an edge one if pSrcEdge[i] > StrongEdgeThreshold.
2-3) EdgePixelWeight/NonEdgePixelWeight define the share of the current "edge"/"non-edge" pixel value in the formation of the output for that pixel: the higher the weight, the more contribution is made by the current pixel and the less by its neighbourhood.
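Putting the two points together, one plausible sketch of the spatial path is: classify the pixel via StrongEdgeThreshold, then mix its own value with a neighbourhood average using the weight for its class. The 0..255 fixed-point mixing and the function shape are assumptions about undocumented internals:

```c
/* Hypothetical sketch: pick EdgePixelWeight or NonEdgePixelWeight based on
   the edge image, then blend the pixel with its neighbourhood average.
   Higher weight -> more of the original pixel survives (less smoothing). */
unsigned char spatial_mix(unsigned char pixel, unsigned char neighbourhoodAvg,
                          unsigned char edgeValue, int strongEdgeThr,
                          int edgePixelWeight, int nonEdgePixelWeight)
{
    int w = (edgeValue > strongEdgeThr) ? edgePixelWeight : nonEdgePixelWeight;
    return (unsigned char)((w * pixel + (255 - w) * neighbourhoodAvg + 127) / 255);
}
```

This also shows why StrongEdgeThreshold = 255 effectively disables the edge image: no 8-bit edge value can exceed 255, so every pixel takes the non-edge path.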

And yes, you are right: the higher the HistoryWeight, the more is taken from the previous frame to form the output.

As for your experiments with the edge weights: perhaps, with the parameters/content you used, all the blocks turned out static and thus were denoised purely temporally.
You can try decreasing TemporalDifferenceThreshold and NumberOfMotionPixelsThreshold.

Thanks,
Timofei

Yes, everything is correct except that the convolution with a gaussian kernel will hardly give you the desired edge information.

Thanks,
Timofei
