Auto Contrast in Grey Images

Hello All,

I just implemented the histogram equalization technique using Intel IPP functions. Since histogram equalization only redistributes values between the two extreme values, I was thinking of implementing other methods for auto contrast.

Is there any way that I can perform auto contrast without histogram equalization?

Thanks,
Sharath


I'm also interested in discussing automatic contrast algorithms.
Histogram Equalization, Histogram Stretching, Gamma Correction, etc.

The problem is: you start with a bad grayscale image, and how do you present it well, automatically?
The bad image is full of problems: under- or over-exposure, artifacts, etc.

>>...Is there any way that I can perform auto contrast and not histogram equalization?

I would try a simple algorithm that calculates the Center of Gravity of a histogram represented as a small image. I know that in point-and-shoot digital cameras and webcams it is done in a very simple way. Which way exactly? I do not know.

Center of gravity is an area (with a fixed width) in the histogram, where the Percentile is highest.

I learned that the "Center of Gravity" of a histogram is the mean of the histogram values. I am not sure whether the mean would be a good factor for performing auto contrast. Any thoughts on this?

>>...I learned that "Center of Gravity" of a histogram is the mean of the histogram values...

Take a look at a generic description of an algorithm...

Imagine the histogram of some Source Image ( SI ) rendered as a Histogram Image ( HI ), for example with dimensions 128x128 pixels ( _int8 data type ). Three cases are possible:

- If the histogram is "shifted" to the left part of the HI, then the SI is underexposed
- If the "summit" of the histogram is in the "middle" of the HI, then the SI is properly exposed
- If the histogram is "shifted" to the right part of the HI, then the SI is overexposed

A simple calculation of the Center of Gravity for HI could tell you how you should adjust parameters of a camera in order to take a properly exposed image.

The same rules could be applied for Auto-Contrast processing.
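These three cases can also be sketched numerically on a plain 1-D histogram, without building a histogram image: the center of gravity is just the count-weighted mean bin index. The 1/3 and 2/3 cut points below are illustrative assumptions, not fixed rules:

```c
#include <assert.h>
#include <string.h>

/* Center of gravity of a 1-D histogram: the bin-index mean weighted
   by bin counts. Returns -1 for an empty histogram. */
int HistCenterOfGravity(const unsigned int *pHist, int nBins)
{
    double sum = 0.0, weighted = 0.0;
    int b;
    for (b = 0; b < nBins; b++) {
        sum      += pHist[b];
        weighted += (double)pHist[b] * b;
    }
    return (sum > 0.0) ? (int)(weighted / sum) : -1;
}

/* Illustrative classification; the 1/3 and 2/3 cut points are assumed. */
const char *ClassifyExposure(int cog, int nBins)
{
    if (cog < nBins / 3)     return "underexposed";
    if (cog > 2 * nBins / 3) return "overexposed";
    return "properly exposed";
}
```

The same result then tells you in which direction a brightness/contrast correction (or a camera exposure adjustment) should go.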

Thanks, Sergey, for the quick response. Taking the center of gravity makes complete sense now. I will try it out and let you know how it goes.

I was also curious whether there are any IPP functions to calculate the "Center of Gravity". If yes, kindly let me know.

Thanks,

Sharath

The Center of Gravity is a very simple algorithm; here it is from my test cases:

RTvoid CalcCenterOfGravityF( RTfloat *pfMatrix, RTint iR, RTint iC, RTint *piCoGX, RTint *piCoGY );

RTvoid CalcCenterOfGravityF( RTfloat *pfMatrix, RTint iR, RTint iC, RTint *piCoGX, RTint *piCoGY )
{
    // Intensity-weighted sums of column ( X ) and row ( Y ) indices
    RTfloat fCoGX = 0.0f;
    RTfloat fCoGY = 0.0f;
    RTfloat fTotalI = 0.0f;     // Total intensity of the matrix
    RTfloat fPxValue = 0.0f;

    for( RTint r = 0; r < iR; r++ )
    {
        RTint iRTemp = ( r * iC );      // Offset of row r in the flat buffer
        for( RTint c = 0; c < iC; c++ )
        {
            fPxValue = *( pfMatrix + ( iRTemp + c ) );

            fCoGX += ( fPxValue * c );
            fCoGY += ( fPxValue * r );
            fTotalI += fPxValue;
        }
    }

    // Normalize by total intensity; guard against an all-zero matrix
    if( fTotalI != 0.0f )
    {
        *piCoGX = ( RTint )( fCoGX / fTotalI );
        *piCoGY = ( RTint )( fCoGY / fTotalI );
    }
    else
    {
        *piCoGX = 0;
        *piCoGY = 0;
    }
}

Please do the remaining cleanups yourself.

Note: a template-based implementation of the function is more flexible.
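For a self-contained build, the function can be reproduced with plain C types standing in for the RT* typedefs (an assumption, since the original typedef header is not shown), plus a small usage sketch:

```c
#include <assert.h>

typedef float RTfloat;   /* assumed typedefs; the RT* header is not shown */
typedef int   RTint;

/* Same algorithm as above: intensity-weighted centroid of a 2-D matrix. */
void CalcCenterOfGravityF( RTfloat *pfMatrix, RTint iR, RTint iC,
                           RTint *piCoGX, RTint *piCoGY )
{
    RTfloat fCoGX = 0.0f, fCoGY = 0.0f, fTotalI = 0.0f;
    for( RTint r = 0; r < iR; r++ )
        for( RTint c = 0; c < iC; c++ )
        {
            RTfloat fPxValue = pfMatrix[ r * iC + c ];
            fCoGX   += fPxValue * c;
            fCoGY   += fPxValue * r;
            fTotalI += fPxValue;
        }
    if( fTotalI != 0.0f )
    {
        *piCoGX = ( RTint )( fCoGX / fTotalI );
        *piCoGY = ( RTint )( fCoGY / fTotalI );
    }
    else
    {
        *piCoGX = 0;
        *piCoGY = 0;
    }
}
```

Feeding it a 2x2 matrix whose only non-zero value sits at row 1, column 1 returns ( 1, 1 ), which is an easy sanity check.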

Sergey, thanks for posting the code snippet from your test cases. Since I am fairly new to image processing, it took me some time to understand your code. It would have been even more helpful with comments to walk me through it. Nevertheless, it is very helpful.

Hi everybody,

I'll post a complete test-case.

>>...I'm also interested in discussing automatic contrast algorithms.

Thomas, I looked at it and some technical details will be provided. By the way, do you have any issues / problems with Auto Contrast algorithm?

Sergey, I have lots of grayscale images that just cannot be displayed "perfectly" automatically.

The problem is always that the histogram of the grayscale image has large parts that must be avoided before doing any automatic processing.
I call those parts artifacts. One artifact can be a wide, tall peak in the white part; another, a narrow, high peak in the medium-gray part. These must be removed before applying automatic algorithms. One way of "removing" them is to mask the image before the histogram is computed; another way is to patch the histogram, suppressing an easy-to-detect artifact.

I already use "stretching", wherein I move the black point up from zero (or very low) and the white point down from the top (or very high), using percentiles to determine when to stop. To find the grayscale range of greatest interest, I use what I described above: sliding a wide band over the histogram and choosing the location with the highest percentile.

Suggestions are welcome.
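The percentile-based stretching described above can be sketched as two steps: pick robust black/white points from the cumulative histogram, then stretch linearly between them. This is a minimal illustrative version (function names and interfaces are my own, not the production code):

```c
#include <assert.h>

/* Percentile-based black/white point selection: find the bin below which
   `pct` percent of all pixels fall. Using e.g. 1% and 99% instead of the
   absolute min/max makes the stretch robust against outlier bins. */
int PercentileCut(const unsigned int *pHist, int nBins,
                  unsigned long total, double pct)
{
    unsigned long target = (unsigned long)(total * pct / 100.0);
    unsigned long cum = 0;
    int b;
    for (b = 0; b < nBins; b++) {
        cum += pHist[b];
        if (cum >= target) return b;
    }
    return nBins - 1;
}

/* Linear stretch of an 8-bit image between the chosen black/white points;
   values outside [lo, hi] are clipped to 0 and 255. */
void StretchBetween8u(unsigned char *pPix, int nPix, int lo, int hi)
{
    int i;
    if (hi <= lo) return;                 /* degenerate range: leave as-is */
    for (i = 0; i < nPix; i++) {
        int v = (pPix[i] - lo) * 255 / (hi - lo);
        if (v < 0)   v = 0;
        if (v > 255) v = 255;
        pPix[i] = (unsigned char)v;
    }
}
```

Patching the histogram (suppressing a known artifact bin) before calling PercentileCut fits naturally into this pipeline.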

Could you attach an example of one of these grayscale images with artifacts? A description alone is not enough. Also, I was going to talk about a generic algorithm for such a correction. Thanks.

>>I have lots of grayscale images that just cannot be displayed "perfectly" automatically,
>>
>>The problem is always that the histogram of the grayscale image has large parts that must be avoided before
>>doing automatic this-and-that...

Thomas, How do you calculate contrast? What method or formula do you use?

Note: I wonder if you use the classic one widely used in Machine Vision applications for greyscale ( monochrome ) images:

%Contrast = ( Imax - Imin ) / MaxValueOfDynamicRange

For example, for 8-bit images MaxValueOfDynamicRange is equal to 255.
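In code, this measure (expressed as a percentage, i.e. multiplied by 100) looks like the following minimal sketch for 8-bit data; the function name is my own:

```c
#include <assert.h>

/* Classic Machine Vision contrast measure for a greyscale image:
   %Contrast = 100 * (Imax - Imin) / MaxValueOfDynamicRange,
   where MaxValueOfDynamicRange is 255 for 8-bit data. */
double PercentContrast8u(const unsigned char *pPix, int nPix)
{
    unsigned char lo = 255, hi = 0;
    int i;
    for (i = 0; i < nPix; i++) {
        if (pPix[i] < lo) lo = pPix[i];
        if (pPix[i] > hi) hi = pPix[i];
    }
    return 100.0 * (hi - lo) / 255.0;
}
```

Note that a single stray black and a single stray white pixel already drive this measure to 100%, which is exactly why it fails on images with histogram artifacts.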

Contrast is not a property of the image but a controlling parameter of an image-manipulation algorithm. For instance, you can increase the contrast of an image by adjusting pixel values such that the difference between light and dark is bigger.

In the real world there are images which need adjustment before they can be displayed properly, and I'm looking for automatic methods for this problem.

Attachments:


Thomas, you don't need to explain what Image Processing is, and I didn't ask for an explanation of widely known image-processing concepts, right? I have very good experience with X-Ray imaging and MRI ( 2-D & 3-D / Volume Rendering ).

My simple question was:

>>...How do you calculate contrast? What method or formula do you use?

Sorry for a small deviation because this is Not related directly to IPP.

Thomas, thank you for uploading a set of images which demonstrate some post-processing. I see now that an attempt to use the classic formula:

%Contrast = ( Imax - Imin ) / MaxValueOfDynamicRange

will fail ( actually, I already did a quick test ).

Note: the source images look like MRI images, not X-Ray ( however, I could be wrong... ), with quality issues: they are too overexposed ( they captured the structure of bones and, at the same time, soft tissues (!) / very low doses of radiation are needed to capture soft tissues ).

I think a more complex filtering technique needs to be used, based on IIR or FIR filters ( with some Windowing function ), some Convolution-based filtering, or something else. It is clear that some software performed a very good correction of the source images.

I will try a Wallis Statistical Filter, but I'm not sure that it will improve the overall quality of the source image. I'll let you know as soon as my tests are completed.

My sample images all have lightness problems, since that is what we discussed. Applying filtering (and by that I mean structure/sharpness filtering) will not "fix" the lighting problem, unless you can show me a nice idea of how.

Further, my images were reduced for presentation here. I could upload the real 16- or 12-bpp grayscale originals, if you want.

I did not know about Wallis Statistical Filters, but I googled them, and they seem to also have uses in my field, and indeed they can possibly help with simultaneous dark and bright areas.
Can IPP perform a Wallis filter?

I also spotted noise reduction using Wavelets, do you have an IPP code sample that demonstrates this?

>>...My sample images all have lightness problems since we discussed that. Applying filtering (and with that I understand
>>structure/sharpness filtering) will not "fix" the lighting problem...

Did you do a Manual correction to get '_wanted.png' images? These images look significantly better. Please provide details.

Yes, my sample images come in original and manually adjusted lighting versions. No filtering was done (structure/sharpening), only contrast/brightness/gamma.

I'm very curious about a Wallis implementation to test with... it can be outside IPP...

By the way, my sample images are not MRI, just simple digital 2D x-ray.

>>...I'm very curious for a Wallis implementation, to test with... can be outside IPP...

Just checked IPL ( version 2.5 ) and IPP ( versions from 3.x to 7.x ) and they don't have it.

Attached is a screenshot with four corrected images using different Windowing Functions ( WF ). At the top is the original image and from left to right are images processed by:

1. 2-D IIR ( 32x32 kernel ) High-Pass Butterworth filter with Hanning WF
2. 2-D IIR ( 32x32 kernel ) High-Pass Butterworth filter with Hamming WF
3. 2-D IIR ( 32x32 kernel ) High-Pass Butterworth filter with Blackman WF
4. 2-D IIR ( 32x32 kernel ) High-Pass Butterworth filter with Kaiser WF Alpha 7

Attachments:


The corrected images all seem to be negative?

And what would be the purpose of this exercise?

>>The corrected images all seems to be Negative?..

No, they are simply dark because it was a really quick set of tests.

>>And what would be the purpose of this exercise?

That wasn't manual processing ( or editing in some Image Editor ). All corrections are done in the frequency domain with a DSP software subsystem that supports FIR, IIR, FFT/IFFT, lots of WFs ( Window Functions ), etc., for processing 1-D / 2-D data sets.

I still think the images are "Negative", or "Inverted". Simply look at a few spots:
1. The ears: the original has light ears on a darker background; the filtered image has the reverse.
2. The head: the original is dark; the filtered image is bright.
3. The corners: the original has lighter gray corners; the filtered image has darker gray corners.

I understand your filters; however, in this discussion we are talking about "fixing" the problem that some parts are very bright and other parts are very dark, and how you can display both types properly at the same time. In my opinion, filtering is more about enhancing the edges, or structure, to better see detail, but that is an altogether different topic.

Anyway, I'm happy to see that you are looking at all this :)

>>...I still think the images are "Negative", or "Inverted"...

The IIR processing did not do that. Another thing: your uncorrected and corrected images have 32-bit depth. I completed quick IIR tests on the image with 8-bit depth ( converted to a raw 8-bit format ), that is, with a significantly reduced dynamic range. However, after the IIR processing some bones are visible in the middle of the head.

If the IIR processing did not do that, then what did?
I think that, for this histogram/contrast/brightness discussion, we should work with images that are proper.

I was not aware that my sample images were 32-bit; I intended them to be 8-bit grayscale, but I could redo them if required.

That the bone structure is more visible is simply because its edges are enhanced by the filtering. However, in this discussion we are actually talking about contrast/brightness and histograms...

In essence, negation is a mapping process. That is, in the case of an image with a dynamic range from 0 to 255, a pixel with value 0 is changed to 255, a pixel with value 1 is changed to 254, and so on. Once again, IIR is a more complex kind of processing: it is done in the frequency domain, and lots of different input parameters are used.

If you consider these images as negatives that is OK with me.
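For completeness, the 0..255 mapping described above written out as code:

```c
#include <assert.h>

/* Negation as a mapping: pixel value v in [0, 255] becomes 255 - v. */
void Negate8u(unsigned char *pPix, int nPix)
{
    int i;
    for (i = 0; i < nPix; i++)
        pPix[i] = (unsigned char)(255 - pPix[i]);
}
```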

I agree with your definition of Negative.

It's just that I want to ensure we are talking about the same thing, since automatic optimization is a complex matter.