A smoothed (blurred) image can be obtained by averaging the pixel values around each pixel of interest, using the following filter.
\left(
\begin{matrix}
1/9 & 1/9 & 1/9 \\
1/9 & 1/9 & 1/9 \\
1/9 & 1/9 & 1/9
\end{matrix}
\right)
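As a minimal sketch of this averaging filter, the 3x3 kernel above can be applied by hand with NumPy (the tiny synthetic image here is an illustrative assumption, not from the text):

```python
import numpy as np

# Hypothetical input: a small image with a single bright spike.
img = np.zeros((5, 5), dtype=np.float64)
img[2, 2] = 9.0

def mean_filter3(image):
    """Replace each interior pixel with the mean of its 3x3 neighborhood
    (borders are left untouched for simplicity)."""
    out = image.copy()
    for i in range(1, image.shape[0] - 1):
        for j in range(1, image.shape[1] - 1):
            out[i, j] = image[i - 1:i + 2, j - 1:j + 2].mean()
    return out

smoothed = mean_filter3(img)
# The spike's value is spread evenly over the 3x3 window: 9 * (1/9) = 1.
print(smoothed[2, 2])  # 1.0
```

In OpenCV the same operation is available directly as `cv2.blur(img, (3, 3))`.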
However, this blurs the whole image uniformly, so edges become just as blurry as the noise. If you want to keep the edges while still reducing noise, you need a different filter.
Weighted averaging does not simply average the pixel values around the pixel of interest; it gives more weight to pixels close to the pixel of interest. As for how to distribute those weights, the Gaussian filter assigns them according to the normal distribution. The Gaussian distribution with mean 0 and variance σ² is expressed as follows.
\frac{1}{\sqrt{2 \pi \sigma^2}} \exp \left( - \frac{x^2}{2 \sigma^2} \right)
This is the one-dimensional form; extending it to two dimensions gives
\frac{1}{2 \pi \sigma^2} \exp \left( - \frac{x^2 + y^2}{2 \sigma^2} \right)
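The two-dimensional formula above can be turned into a discrete kernel by sampling it on a grid and normalizing (the kernel size and sigma below are illustrative choices):

```python
import numpy as np

sigma = 1.0
r = 2  # kernel radius, giving a 5x5 kernel

# Sample the 2D Gaussian formula on an integer grid centered at (0, 0).
y, x = np.mgrid[-r:r + 1, -r:r + 1]
kernel = np.exp(-(x**2 + y**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)

# The continuous function integrates to 1, but the sampled, truncated kernel
# only approximates that, so normalize the weights to sum to exactly 1.
kernel /= kernel.sum()

print(kernel[r, r])  # the center weight is the largest
```

OpenCV performs this whole process internally in `cv2.GaussianBlur`.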
The Gaussian filter weights peripheral pixels by their distance from the pixel of interest using the Gaussian function; the bilateral filter additionally weights them by the difference between their pixel values and that of the pixel of interest, again using a Gaussian. If a peripheral pixel's value is close to that of the pixel of interest (= similar hue and brightness), its weight is large; if the difference is large, its weight is small. With input image f(i, j) and output g(i, j),
g(i,j) =
\frac{ \sum_{n=-w}^{w} \sum_{m=-w}^{w} w(i,j,m,n) f(i+m, j+n) }
{\sum_{n=-w}^{w} \sum_{m=-w}^{w} w(i,j,m,n)}
w(i,j,m,n) = \exp \left( - \frac{m^2 + n^2}{2 \sigma_{1}^2} \right) \exp \left( - \frac{(f(i,j) - f(i+m, j+n))^2}{2 \sigma_{2}^2} \right)
The formula is rather involved. The first exp in w represents the weighting by distance between the pixel of interest and a peripheral pixel, and the second exp represents the weighting by the difference of their pixel values. Since this is an averaging filter but w is not constructed to sum to 1, the denominator normalizes by the sum of all kernel weights.
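The two formulas above can be coded directly as a (slow but readable) sketch. The small edge image and the values of sigma1 and sigma2 are illustrative assumptions:

```python
import numpy as np

def bilateral(f, w=2, sigma1=2.0, sigma2=30.0):
    """Naive bilateral filter following g and w above.
    sigma1: spatial standard deviation, sigma2: pixel-value (range) one."""
    H, W = f.shape
    g = np.zeros_like(f, dtype=np.float64)
    for i in range(H):
        for j in range(W):
            num = den = 0.0
            for n in range(-w, w + 1):
                for m in range(-w, w + 1):
                    ii, jj = i + m, j + n
                    if 0 <= ii < H and 0 <= jj < W:
                        # spatial weight * pixel-difference weight
                        wt = np.exp(-(m * m + n * n) / (2 * sigma1**2)) \
                           * np.exp(-(f[i, j] - f[ii, jj])**2 / (2 * sigma2**2))
                        num += wt * f[ii, jj]
                        den += wt
            g[i, j] = num / den  # normalize so the weights sum to 1
    return g

# A step edge: the range weight kills contributions from across the edge
# (difference 190 with sigma2=30), so the edge survives the smoothing.
f = np.array([[10.0] * 4 + [200.0] * 4] * 8)
g = bilateral(f)
```

Real code should of course use `cv2.bilateralFilter`, which is shown below.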
Reference: http://imagingsolution.net/imaging/bilateralfilter/
cv2.bilateralFilter(src, d, sigmaColor, sigmaSpace[, dst[, borderType]]) → dst
http://docs.opencv.org/3.0-last-rst/modules/imgproc/doc/filtering.html?highlight=laplacian#bilateralfilter
- src: Input image
- d: Diameter of the neighborhood used to blur the pixel of interest
- sigmaColor: Standard deviation for color. The larger it is, the more weight is given even to pixels whose values differ greatly.
- sigmaSpace: Standard deviation for distance. The larger it is, the more weight is given even to pixels that are far away.
import cv2
from matplotlib import pyplot as plt

img = cv2.imread('images.jpg', cv2.IMREAD_COLOR)

# Apply the bilateral filter repeatedly to see the cumulative effect
bi = cv2.bilateralFilter(img, 15, 20, 20)
bi2 = cv2.bilateralFilter(bi, 15, 20, 20)
bi3 = cv2.bilateralFilter(bi2, 15, 20, 20)
plt.subplot(2,2,1),plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
plt.title("original")
plt.xticks([]),plt.yticks([])
plt.subplot(2,2,2),plt.imshow(cv2.cvtColor(bi, cv2.COLOR_BGR2RGB))
plt.title("bi")
plt.xticks([]),plt.yticks([])
plt.subplot(2,2,3),plt.imshow(cv2.cvtColor(bi2, cv2.COLOR_BGR2RGB))
plt.title("bi2")
plt.xticks([]),plt.yticks([])
plt.subplot(2,2,4),plt.imshow(cv2.cvtColor(bi3, cv2.COLOR_BGR2RGB))
plt.title("bi3")
plt.xticks([]),plt.yticks([])
plt.show()
These are the results of applying the filter multiple times. After three passes the noise-like speckles disappear, but false-contour-like artifacts appear. The source image may simply have been of poor quality...

This is the result of filtering an image with almost no noise. As the number of passes increases, the false contours are emphasized and the image becomes more like an illustration.
The bilateral filter weights by the difference between the pixel value of the pixel of interest and that of each peripheral pixel. The non-local means filter instead determines the weight by how similar the region around each peripheral pixel is to the region around the pixel of interest, as in template matching.
The explanation with concrete images here is easy to follow: http://opencv.jp/opencv2-x-samples/non-local-means-filter
The formula
w(i,j,m,n) = \exp \left( - \frac{ \sum_{t=-w}^{w} \sum_{s=-w}^{w} (f(i+s,j+t) - f(i+m+s, j+n+t))^2 }{2 \sigma^2} \right)
can be substituted for w in the formula for g(i, j) above. The idea seems to be that the weight is large when the region around the pixel of interest and the region around a peripheral pixel are similar, but honestly the meaning of the formula is not entirely clear to me...
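The core of the formula is a patch-to-patch comparison, which can be sketched for a single weight. The patch radius, sigma, and the synthetic edge image are illustrative assumptions, and image boundaries are not handled:

```python
import numpy as np

def nlm_weight(f, i, j, m, n, w=1, sigma=10.0):
    """Non-local means weight for offset (m, n): the similarity between the
    (2w+1)x(2w+1) patch around (i, j) and the one around (i+m, j+n)."""
    p = f[i - w:i + w + 1, j - w:j + w + 1]          # patch around the pixel of interest
    q = f[i + m - w:i + m + w + 1, j + n - w:j + n + w + 1]  # patch around the candidate
    ssd = np.sum((p - q) ** 2)  # patch-wise sum of squared differences
    return np.exp(-ssd / (2 * sigma**2))

f = np.tile(np.array([10.0, 10.0, 200.0, 200.0]), (6, 1))
# Identical patches -> weight exactly 1; patches straddling the edge differ
# greatly -> weight near 0, so they barely contribute to the average.
print(nlm_weight(f, 3, 1, 0, 0))  # 1.0
```

This is why the filter preserves structure: only pixels whose whole neighborhood looks similar get averaged together.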
cv2.fastNlMeansDenoisingColored(src[, dst[, h[, hColor[, templateWindowSize[, searchWindowSize]]]]])
http://docs.opencv.org/3.0-beta/modules/photo/doc/denoising.html
- src: Input image (color)
- templateWindowSize: Size of the template (patch) around each pixel
- searchWindowSize: Size of the area searched for similar patches
- h: Degree of smoothing for the luminance component; larger values reduce more noise but also affect edges
- hColor: Degree of smoothing for the color components; 10 is usually sufficient
Reference: http://ishidate.my.coocan.jp/opencv310_6/opencv310_6.htm
import cv2
from matplotlib import pyplot as plt

img = cv2.imread('images.jpg', cv2.IMREAD_COLOR)

# One pass of non-local means denoising for a color image
dst = cv2.fastNlMeansDenoisingColored(img, None, 10, 10, 7, 21)

plt.subplot(2,1,1),plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
plt.title("original")
plt.xticks([]),plt.yticks([])
plt.subplot(2,1,2),plt.imshow(cv2.cvtColor(dst, cv2.COLOR_BGR2RGB))
plt.title("NLMeans")
plt.xticks([]),plt.yticks([])
plt.show()
Noise is reduced considerably even with a single pass.