
 

The original image / Extraction by RGB / Extraction by HLS


In the case of delicate hues like the image below, the extraction
results differ between RGB and HLS. It is best to use whichever suits
your needs.


To select the colors of the three examples above by specifying a range in RGB,
you would have to allow R and G from 0 to 108 and B from 147 to 255, which
cannot isolate just these colors. If you specify the range in HLS instead,
you only need to fix H at 160 and L at 120 and vary S from 36 to 240.

H: 160  L: 120  S: 36   →  R: 108  G: 108  B: 147
H: 160  L: 120  S: 125  →  R: 61   G: 61   B: 194
H: 160  L: 120  S: 240  →  R: 0    G: 0    B: 255
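
As an illustration, such an HLS range could be applied in Python with OpenCV. This is only a sketch under stated assumptions: the article names no library, the file names are placeholders, and OpenCV scales H to 0-179 and L/S to 0-255 rather than the Windows-style 0-240, so the values H 160, L 120, S 36-240 are rescaled (roughly H 120, L 128, S 38-255) with a small tolerance added on H and L.

    import cv2
    import numpy as np

    img = cv2.imread("input.png")               # BGR color image (placeholder file name)
    hls = cv2.cvtColor(img, cv2.COLOR_BGR2HLS)  # convert BGR to HLS

    # Rescaled Windows-style values: H 160 -> 120, L 120 -> 128, S 36-240 -> 38-255,
    # with a small tolerance on H and L so the range is not a single value.
    lower = np.array([118, 123, 38])
    upper = np.array([122, 133, 255])
    mask = cv2.inRange(hls, lower, upper)       # 255 where the pixel falls inside the range

    extracted = cv2.bitwise_and(img, img, mask=mask)  # keep only the selected color
    cv2.imwrite("extracted.png", extracted)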

For example, if you pick blue (R: 0, G: 0, B: 255) in the color settings dialog
and then move the selection point straight down within the color selection area
on the right side of the dialog, the hue and luminance values do not change,
yet the appearance of the colors and their RGB values become completely different.

RGB is suitable for extracting a reasonably uniform color, but choosing the
extraction range is very difficult when, for example, you want to extract the
same color even where its brightness differs.

Similar HLS values can be seen when you open the color settings dialog in
Windows. In the lower-right corner of the dialog there are items corresponding
to hue ("Tint"), saturation ("Vividness"), and brightness; the hue item is the
same as H (hue).

On the "Digital Image" page, we explained the Red / Green / Blue pixel values
as the way color images are represented. When handling color images, you may
also use H (hue) / L (brightness) / S (saturation) in addition to RGB.

 Range specification in HLS


Original image / Image in which only the red color is extracted

So far, image processing has mostly been performed on gray
images, but as industrial color cameras have become more common,
processing of color images is now frequently seen. Extracting
the range of a certain color from a color image, instead of first
converting it to grayscale as before, is one such process.

By labeling, you can check the area and length of each connected region and
make selections based on the results.

Original image / After labeling

When a binarized image is labeled, the result is as follows: the same
number is assigned to every pixel of a connected part.

Assigning a label (number) to each connected cluster in an image
is called labeling. Labeling is useful for identifying objects and
counting them.
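
A minimal labeling sketch in Python, assuming OpenCV and an already binarized image ("binary.png" is a placeholder name). It assigns a number to each connected white region and prints its area and bounding-box size, the kind of per-region check described above.

    import cv2

    binary = cv2.imread("binary.png", cv2.IMREAD_GRAYSCALE)  # assumed already binarized (0 / 255)

    # Label every connected white region and gather per-label statistics.
    num_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)

    for i in range(1, num_labels):              # label 0 is the background
        area = stats[i, cv2.CC_STAT_AREA]
        width = stats[i, cv2.CC_STAT_WIDTH]
        height = stats[i, cv2.CC_STAT_HEIGHT]
        print(f"label {i}: area={area}, width={width}, height={height}")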

Original image / After expansion processing / After contraction processing

In the example below, figures that were broken apart are reconnected.

Performing expansion N times and then contraction N times is called closing.
Closing has effects such as filling holes in figures and joining broken parts.
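
A possible closing sketch in Python with OpenCV (an assumption; the article names no library, and the file names are placeholders). The iterations argument plays the role of N: the image is expanded N times and then contracted N times.

    import cv2
    import numpy as np

    binary = cv2.imread("binary.png", cv2.IMREAD_GRAYSCALE)
    kernel = np.ones((3, 3), np.uint8)                        # 3x3 structuring element

    # MORPH_CLOSE = dilate (expand) N times, then erode (contract) N times.
    closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel, iterations=2)
    cv2.imwrite("closed.png", closed)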

Closing


Original image / After contraction processing / After expansion processing

In the example below, small stray pixels around the figure are removed, leaving
only the shape we want to extract.

Performing contraction N times and then expansion N times is called opening.
Opening removes small protrusions from a figure and separates weakly joined parts.
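
A matching opening sketch under the same assumptions (OpenCV, placeholder file names); here the image is contracted N times and then expanded N times.

    import cv2
    import numpy as np

    binary = cv2.imread("binary.png", cv2.IMREAD_GRAYSCALE)
    kernel = np.ones((3, 3), np.uint8)

    # MORPH_OPEN = erode (contract) N times, then dilate (expand) N times.
    opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel, iterations=2)
    cv2.imwrite("opened.png", opened)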

Opening


Original image / Image after contraction processing

In the example below, in contrast to the expansion process, each figure is
contracted by one pixel at the top, bottom, left, and right.
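
A one-pixel contraction could be sketched as follows, again assuming OpenCV and placeholder file names; a 3x3 cross-shaped structuring element shrinks each figure by one pixel at the top, bottom, left, and right.

    import cv2

    binary = cv2.imread("binary.png", cv2.IMREAD_GRAYSCALE)
    kernel = cv2.getStructuringElement(cv2.MORPH_CROSS, (3, 3))  # 4-neighborhood
    eroded = cv2.erode(binary, kernel, iterations=1)             # contract by one pixel
    cv2.imwrite("eroded.png", eroded)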

Contraction processing


Original image / Image after expansion processing

There are several methods of expansion processing; in the example below, each
pixel is expanded by one pixel up, down, left, and right. One square in the
figure below represents one pixel.
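
The same one-pixel expansion up, down, left, and right could be sketched in Python with OpenCV (an assumption; file names are placeholders) using a cross-shaped structuring element.

    import cv2

    binary = cv2.imread("binary.png", cv2.IMREAD_GRAYSCALE)
    kernel = cv2.getStructuringElement(cv2.MORPH_CROSS, (3, 3))  # up/down/left/right neighbors
    dilated = cv2.dilate(binary, kernel, iterations=1)           # expand by one pixel
    cv2.imwrite("dilated.png", dilated)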

Expansion process

The process of combining expansion and contraction several times is called
morphological processing. It is effective for smoothing binarized images
(reducing unevenness), removing isolated points, filling holes, and so on.

Processing that inflates a figure in a binary black-and-white image by one
pixel is called expansion, and processing that conversely shrinks it by one
pixel is called contraction.

Morphology

Simple binarization is performed with a single threshold as described above,
but you can also specify two thresholds and extract only the range of
luminance between them.
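
A sketch of this two-threshold variant with NumPy (the threshold values in the usage line are arbitrary examples): only pixels whose luminance lies between the two thresholds become white.

    import numpy as np

    def binarize_band(gray, t_low, t_high):
        # White (255) where t_low <= pixel <= t_high, black (0) elsewhere.
        out = np.zeros_like(gray)
        out[(gray >= t_low) & (gray <= t_high)] = 255
        return out

    # Example: keep the luminance range 100-200.
    # band = binarize_band(gray_image, 100, 200)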

  
Original image / Binarized image

Binarizing an image makes it easy to extract the detection target from the
image. In addition, subsequent judgment processing can be executed at
high speed.

  
Image binarized with a threshold value of 100 applied to each pixel

Binarization is the process of converting a shaded image into the two tones of
white and black. We define a threshold value and replace each pixel with
white if its value is at or above the threshold and black if it is below.
One square in the figure below represents one pixel.
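
A minimal NumPy sketch of this single-threshold binarization, using the threshold of 100 from the figure; pixels at or above the threshold become white (255), the rest black (0).

    import numpy as np

    def binarize(gray, threshold=100):
        # gray: 8-bit grayscale image as a NumPy array.
        return np.where(gray >= threshold, 255, 0).astype(np.uint8)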

Binarization


The median filter replaces the value of each pixel with the median of the
surrounding pixels. Compared with the moving average filter, this produces an
image in which the edges of the input image are less damaged.
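
A median filter sketch with OpenCV (an assumption; the file names are placeholders), using a 3x3 window.

    import cv2

    gray = cv2.imread("noisy.png", cv2.IMREAD_GRAYSCALE)
    denoised = cv2.medianBlur(gray, 3)   # each pixel becomes the median of its 3x3 neighborhood
    cv2.imwrite("median.png", denoised)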

Median filtering


Original image / After applying the moving average filter

The moving average filter replaces the value of each pixel with the average of
its neighboring pixels. This produces an image in which edges are blurred
overall.
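
A moving average (box) filter sketch with OpenCV, under the same assumptions as the other examples.

    import cv2

    gray = cv2.imread("noisy.png", cv2.IMREAD_GRAYSCALE)
    smoothed = cv2.blur(gray, (3, 3))    # each pixel becomes the mean of its 3x3 neighborhood
    cv2.imwrite("average.png", smoothed)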

Moving average filter processing


Typical noise removal methods include moving average filtering and median
filtering. Choose the noise removal method according to what you want to
extract in the image processing.

To extract target information from an image efficiently, it is necessary to
remove as much noise as possible. This preprocessing is called noise removal
or smoothing.

Noisy image example

An image may contain noise caused by defects in the camera's image sensor and
similar factors. Noise is a fine random fluctuation component; it is not
useful for image processing and can interfere with it.

Which element to extract for image processing is decided by criteria such as
which element makes the features of the object to be inspected stand out most.
Alternatively, the image may be processed using all of the RGB element values
as color data, or a monochrome camera may be used instead of a color camera at
the time of shooting.

You can see that the dark red part and the dark blue part have the same
brightness. Doesn't this image feel more natural than the images above in
which only a single RGB element was extracted? Old black-and-white televisions
displayed colors in much this way.

Conversion by the NTSC weighted average method

To produce a conversion that looks natural to the human eye, a fixed weight is
assigned to each value (that is, the proportion of the three values is
decided), and the weighted sum is used as the grayscale value. These weighting
coefficients (NTSC coefficients) are the same as the standard used for
television broadcasting in Japan and the United States.
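
A NumPy sketch of the weighted average; the coefficients 0.299 (R), 0.587 (G), and 0.114 (B) are the commonly cited NTSC values, not figures taken from this article.

    import numpy as np

    def ntsc_grayscale(rgb):
        # rgb: H x W x 3 array in R, G, B order, 8 bits per element.
        weights = np.array([0.299, 0.587, 0.114])
        return (rgb @ weights).astype(np.uint8)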

Instead of extracting only one of the R, G, or B element values, there is also
a method of taking the average of the three values. However, an image
converted simply as (R + G + B) / 3 feels unnatural compared to the original
color image. This is because perceived brightness depends on the hue: the
human eye notices changes in the brightness of green well but is insensitive
to changes in the brightness of blue.

NTSC weighted average method


Extract only the Green element / Extract only the Blue element

Similarly, extracting only the G element or only the B element produces the
following images, whose brightness differs from the image above. The whitish
portions of the original image remain whitish (close to 255) after conversion
because all of their RGB values are high (close to 255).

Original image / Extract only the Red element

Below, the left is the original image and the right is the image in which only
the R element value was extracted. The red parts become whitish because their
R values are high (close to 255). Conversely, the blue parts become darker
because their R values are low (close to 0).

One conversion method takes only the R element value from each pixel of the
color image and uses it as the 8-bit grayscale value. For example, if a pixel
of the color image is (R 255, G 0, B 0), the pixel value at that position
becomes 255; if it is (R 128, G 0, B 255), the pixel value becomes 128, and so
on, extracting only the R value.
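
A sketch of this R-element extraction with OpenCV (an assumption; the file names are placeholders). Note that OpenCV loads images in B, G, R order, so the R element is channel index 2.

    import cv2

    img = cv2.imread("color.png")        # loaded in B, G, R order
    r_only = img[:, :, 2]                # take only the R element as an 8-bit grayscale image
    cv2.imwrite("r_element.png", r_only)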

Extract R element


Converting a 24-bit RGB color image to an 8-bit grayscale image allows faster
processing. Grayscale conversion methods include extracting a single RGB
element value and the NTSC (television broadcasting standard) weighted average
method, which applies a fixed weight to each RGB element and averages them.

In image processing, grayscale images are used more often than color images in
order to perform calculations efficiently.

As some readers may know, image data contains header information recording the
size (width and height) of the whole image and the number of bits per pixel,
in addition to the data representing the value of each pixel. Because this
header holds the information needed for display and printing, such as whether
the image is 8-bit or 24-bit and what its dimensions are, the computer
displays and prints the image based on that information.

A color image uses 24 bits per pixel, and a grayscale image uses 8 bits per
pixel. Have you ever wondered why both are displayed and printed correctly on
a PC even though the size (number of bits) of one pixel differs?

How to distinguish between color and grayscale images?


256 gradations of a grayscale image

In contrast to RGB color images, images that express only black-and-white
shading are called grayscale images. A grayscale image represents one pixel
with 8 bits and contains only brightness information, no color information. An
8-bit image can express 2^8 = 256 tones of shading. Pixel value 0 is black,
and pixel value 255 is white.

 Grayscale image


Incidentally, color printing uses "subtractive color mixing", which uses the
three primary colors C (cyan), M (magenta), and Y (yellow) and becomes darker
as the colors are mixed.

RGB elements are combined by "additive color mixing" to produce colors; where
all three overlap, the result is white. The white part is (R 255, G 255, B 255).

For example, the red part of the figure below is (R 255, G 0, B 0), the blue
part is (R 0, G 0, B 255), and the green part is (R 0, G 255, B 0).

In a color image, the color of one pixel is represented by the three primary
colors R (red), G (green), and B (blue). A 24-bit image, in which each RGB
element of a pixel is represented by 8 bits, is commonly used. That is, in a
24-bit image, one pixel consists of 24 bits (8 bits × 3 colors).

 Color image


Color image and grayscale image

The value of each pixel is called a pixel value, and images are classified
into color images, grayscale images, and so on, depending on the size and
properties of their pixel values.

The smallest elements that make up the image (the grains in the image above),
arranged in this lattice form, are called pixels (picture elements). Each
pixel expresses light intensity and color according to its numerical value.

Grains of various colors are lined up like this …

A digital image is composed of elements arranged in a lattice pattern.
For example, if we enlarge the portion surrounded by the yellow rectangle in
the image below …

Selecting suitable cameras, lenses, lighting conditions, and so on is important
in order to extract objects efficiently with simple image processing.

The image processing introduced here is only a sample; there are many other
kinds of processing. Although a goal can also be accomplished through more
complicated image processing, to increase processing speed, the image
processing in a program should be kept as simple as possible and performed
efficiently.

Digital image
Color image and grayscale image
Grayscale conversion
Noise removal
Binarization
Morphology
Labeling
Color extraction

Various image processing

We develop systems that capture images with industrial area cameras and line
sensor cameras and perform various inspections and measurements by image
processing.

By analyzing an image taken with a camera, you can extract the outline of an
object and check its length and area.

Shot image and processed image

High-resolution cameras perform inspection and measurement in place of the
human eye. They also make it possible to inspect fine parts and other features
that are difficult to examine with the human eye.

“Surpassing the human eye, replacing the human eye”

A method of recognizing and measuring objects by processing digital image data
is called image processing. With image processing, defects can be detected and
colors judged for industrial products and the like.

Image Processing
