UIC precision Question

In many places where the range is being set, there is code like this:

    SetRange16u((1 << (image.Precision() - 1)) - 1)

Doesn't this look wrong? Should it not be:

    SetRange16u((1 << image.Precision()) - 1)


Hello,

Could you point me to the file containing such code? I did a search, and most of the occurrences look like this:

    SetAsRange16u(1 << (image.Precision() - 1))

Thanks,
Chao

application/picnic/src/jpeg.cpp:318:      imageCn.ColorSpec().DataRange()[i].SetAsRange16u((1 << (image.Precision()-1)) - 1);

application/picnic/src/jpeg2k.cpp:642:      imagePn.ColorSpec().DataRange()[i].SetAsRange16u(1 << (image.Precision()-1));

application/uic_transcoder_con/src/jpeg.cpp:336:      imageCn.ColorSpec().DataRange()[i].SetAsRange16u(1 << (image.Precision()-1));

application/uic_transcoder_con/src/jpeg2k.cpp:642:      imagePn.ColorSpec().DataRange()[i].SetAsRange16u(1 << (image.Precision()-1));

application/wic_uic_codec/src/jpeg.cpp:336:      imageCn.ColorSpec().DataRange()[i].SetAsRange16u(1 << (image.Precision()-1));

This is my grep search (I am using these files as the base of my JPEG 2000 and JPEG compression/decompression).

Another thing I did not see a response from you on is the setting of the min/max signed range in uic_image.cpp:

    void ImageDataRange::SetAsRangeInt(Ipp64s min, Ipp64s max)
    {
      m_min.v64s = min;
      m_max.v64s = max;
      if(min < 0)
      {
        m_isSigned = true;
        m_bitDepth = ::BitDepth64(::Max(-min + 1, max)); // was ::Max(-(min+1), max)
      }
      else
      {
        m_isSigned = false;
        m_bitDepth = ::BitDepth64(::Max(min, max));
      }
    }

I think the original code would return a bit depth that is 1 bit too small: for (-32768, 32767) it gives 15, instead of the 16 bits you get with the modified code.

It seems that Depth is 7 for 8-bit channels, which is kind of weird.

-- Regards, Igor Levicki. If you find my post helpful, please rate it and/or select it as a best answer where it applies. Thank you.
