Hi,
I want to get the difference between the first image and the second;
I want to cut the numbers from the image.
I am getting the difference between the pixels but the result is:
But what I want is:
Is it possible to cut the image like this?
Here is what I did:
import cv2
import numpy as np
from PIL import Image
import pytesseract
import os
import sys
img = Image.open("recherche.png").convert("RGBA")
pattern = Image.open("pattern.png").convert("RGBA")
pixels = img.load()
pixelsPattern = pattern.load()
new = Image.open("new.png").convert("RGBA")
pixelNew = new.load()
for i in range(img.size[0]):
    for j in range(img.size[1]):
        if pixels[i, j] != pixelsPattern[i, j]:
            pixelNew[i, j] = pixels[i, j]
I am directly taking the per-pixel difference, but it is not giving me what I want. I tried medianBlur and similar operations to get something like the 4th image, but I am not able to make it as sharp as the 4th image. (I created the 4th image manually in Paint.)
It's a challenging problem, because the pattern was deliberately designed to make it difficult to solve by software.
I suggest the following steps:
Convert img and pattern to binary images (gray levels are not part of the number).
Compute absolute difference of img and pattern.
Apply a morphological closing operation to close small gaps.
Here is the code:
import cv2
import numpy as np
# Read image and pattern as grayscale images (the output of cv2.imread is a numpy array).
img = cv2.imread("recherche.png", cv2.IMREAD_GRAYSCALE)
pattern = cv2.imread("pattern.png", cv2.IMREAD_GRAYSCALE)
# Convert img and pattern to binary images (all values above 1 go to 255)
_, img = cv2.threshold(img, 1, 255, cv2.THRESH_BINARY)
_, pattern = cv2.threshold(pattern, 1, 255, cv2.THRESH_BINARY)
# Compute absolute difference of img and pattern (result is 0 where equal and 255 when not equal)
dif = cv2.absdiff(img, pattern)
# Apply closing morphological operation
dif = cv2.morphologyEx(dif, cv2.MORPH_CLOSE, cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5,5)))
dif = 255 - dif # Inverse polarity
# Display result
cv2.imshow('dif', dif)
cv2.waitKey(0)
cv2.destroyAllWindows()
Result:
As you can see, the solution is not perfect, but getting a perfect result is very challenging...
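Since the question imports pytesseract, here is a small follow-up sketch for reading the digits out of the cleaned dif image (the psm value and the digit whitelist are assumptions about the content):

import pytesseract

# dif is the cleaned, inverted difference image from the code above
# (black digits on a white background).
text = pytesseract.image_to_string(
    dif, config="--psm 7 -c tessedit_char_whitelist=0123456789")
print(text.strip())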
import numpy as np
from imageio import imread, imwrite
im1 = imread('https://api.sofascore.app/api/v1/team/2697/image')[...,:3]
im2 = imread('https://api.sofascore.app/api/v1/team/2692/image')[...,:3]
result = np.hstack((im1,im2))
imwrite('result.jpg', result)
The original images, opened directly from the URLs (I'm trying to concatenate the two images into one and keep the background white):
As can be seen, both have no background, but when joining the two via Python, the resulting background becomes this moss green:
I tried modifying the channel selection:
im1 = imread('https://api.sofascore.app/api/v1/team/2697/image')[...,:1]
im2 = imread('https://api.sofascore.app/api/v1/team/2692/image')[...,:1]
But the result is black & white, with the background still looking like it was converted from the previous green, even though the PNGs don't have such a background color.
How should I proceed to solve my need?
There is a 4th channel in your images - transparency. You are discarding that channel with [...,:1]. This is a mistake.
If you retain the alpha channel this will work fine:
import numpy as np
from imageio import imread, imwrite
im1 = imread('https://api.sofascore.app/api/v1/team/2697/image')
im2 = imread('https://api.sofascore.app/api/v1/team/2692/image')
result = np.hstack((im1,im2))
imwrite('result.png', result)
However, if you try to make a jpg, you will have a problem:
>>> imwrite('test.jpg', result)
OSError: JPEG does not support alpha channel.
This is correct, as JPGs do not do transparency. If you would like to use transparency and also have your output be a JPG, I suggest a priest.
You can replace the transparent pixels by using np.where and looking for places that the alpha channel is 0:
result = np.hstack((im1,im2))
result[np.where(result[...,3] == 0)] = [255, 255, 255, 255]
imwrite('result.png', result)
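If a JPG output is still needed, one option (a small sketch) is to whiten the transparent pixels as above and then drop the now fully opaque alpha channel before saving:

# The transparent pixels are white now and alpha is no longer needed,
# so keep only the RGB channels and write a JPEG.
imwrite('result.jpg', result[..., :3])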
If you want to improve image quality, here is a solution. #Brondy
# External libraries used for
# Image IO
from PIL import Image
# Morphological filtering
from skimage.morphology import opening
from skimage.morphology import disk
# Data handling
import numpy as np
# Connected component filtering
import cv2
black = 0
white = 255
threshold = 160
# Open input image in grayscale mode and get its pixels.
img = Image.open("image.jpg").convert("LA")
pixels = np.array(img)[:,:,0]
# Threshold: pixels above the threshold become white, pixels below become black
pixels[pixels > threshold] = white
pixels[pixels < threshold] = black
# Morphological opening
blobSize = 1 # Select the maximum radius of the blobs you would like to remove
structureElement = disk(blobSize) # you can define different shapes, here we take a disk shape
# We need to invert the image such that black is background and white foreground to perform the opening
pixels = np.invert(opening(np.invert(pixels), structureElement))
# Create and save new image.
newImg = Image.fromarray(pixels).convert('RGB')
newImg.save("newImage1.PNG")
# Find the connected components (black objects in your image)
# Because the function searches for white connected components on a black background, we need to invert the image
nb_components, output, stats, centroids = cv2.connectedComponentsWithStats(np.invert(pixels), connectivity=8)
# For every connected component in your image, you can obtain the number of pixels from the stats variable in the last
# column. We remove the first entry from sizes, because this is the entry of the background connected component
sizes = stats[1:,-1]
nb_components -= 1
# Define the minimum size (number of pixels) a component should consist of
minimum_size = 100
# Create a new image
newPixels = np.ones(pixels.shape, dtype=np.uint8) * 255
# Iterate over all components in the image, only keep the components larger than minimum size
for i in range(0, nb_components):  # sizes[i] corresponds to label i+1 in output
    if sizes[i] > minimum_size:
        newPixels[output == i+1] = 0
# Create and save new image.
newImg = Image.fromarray(newPixels).convert('RGB')
newImg.save("new_img.PNG")
If you want to change the background of an image, pixellib is a good option; it seemed the most reasonable and easiest library to use.
import pixellib
from pixellib.tune_bg import alter_bg
change_bg = alter_bg()
change_bg.load_pascalvoc_model("deeplabv3_xception_tf_dim_ordering_tf_kernels.h5")
change_bg.color_bg("sample.png", colors=(255,255,255), output_image_name="colored_bg.png")
This code requires pixellib version 0.6.1 or higher.
I'm trying the following to get the mask out of this image, but unfortunately I fail.
import numpy as np
import skimage.color
import skimage.filters
import skimage.io
# get filename, sigma, and threshold value from command line
filename = 'pathToImage'
# read and display the original image
image = skimage.io.imread(fname=filename)
skimage.io.imshow(image)
# blur and grayscale before thresholding
blur = skimage.color.rgb2gray(image)
blur = skimage.filters.gaussian(blur, sigma=2)
# perform inverse binary thresholding
mask = blur < 0.8
# use the mask to select the "interesting" part of the image
sel = np.ones_like(image)
sel[mask] = image[mask]
# display the result
skimage.io.imshow(sel)
How can I obtain the mask?
Is there a general approach that would work for this image as well, without custom fine-tuning and changing parameters?
Apply high contrast (maximum possible value)
Convert to a black & white image using a high threshold (I've used 250)
Apply a min filter (value = 8)
Apply a max filter (value = 8)
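A minimal sketch of those four steps with PIL (the input file name and the contrast factor are assumptions; PIL's rank filters want an odd kernel size, so 9 stands in for the suggested 8):

from PIL import Image, ImageEnhance, ImageFilter

img = Image.open("input.png").convert("L")           # grayscale
img = ImageEnhance.Contrast(img).enhance(10)         # push contrast up hard
img = img.point(lambda p: 255 if p >= 250 else 0)    # black & white, high threshold
img = img.filter(ImageFilter.MinFilter(9))           # min filter
img = img.filter(ImageFilter.MaxFilter(9))           # max filter
img.save("mask.png")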
Here is how you can get a rough mask using only the skimage library methods:
import numpy as np
from skimage.io import imread, imsave
from skimage.feature import canny
from skimage.color import rgb2gray
from skimage.filters import gaussian
from skimage.morphology import dilation, erosion, selem
from skimage.measure import find_contours
from skimage.draw import polygon
def get_mask(img):
    kernel = selem.rectangle(7, 6)
    dilate = dilation(canny(rgb2gray(img), 0), kernel)
    dilate = dilation(dilate, kernel)
    dilate = dilation(dilate, kernel)
    erode = erosion(dilate, kernel)
    mask = np.zeros_like(erode)
    rr, cc = polygon(*find_contours(erode)[0].T)
    mask[rr, cc] = 1
    return gaussian(mask, 7) > 0.74

def save_img_masked(file):
    img = imread(file)[..., :3]
    mask = get_mask(img)
    result = np.zeros_like(img)
    result[mask] = img[mask]
    imsave("masked_" + file, result)

save_img_masked('belt.png')
save_img_masked('bottle.jpg')
Resulting masked_belt.png:
Resulting masked_bottle.jpg:
One approach uses the fact that the background changes color only very slowly. Here I apply the gradient magnitude to each of the channels and compute the norm of the result, giving me an image highlighting the quicker changes in color. The watershed of this (with sufficient tolerance) should have one or more regions covering the background and touching the image edge. After identifying those regions, and doing a bit of cleanup we get these results (red line is the edge of the mask, overlaid on the input image):
I did have to adjust the tolerance; with a lower tolerance in the first case, more of the shadow is seen as object. I think it should be possible to find a way to set the tolerance based on the statistics of the gradient image, but I have not tried.
There are no other parameters to tweak here; the minimum object area, 300, is quite safe. An alternative would be to keep only the one largest object.
This is the code, using DIPlib (disclaimer: I'm an author). out is the mask image, not the outline as displayed above.
import diplib as dip
import numpy as np
# Case 1:
img = dip.ImageRead('Pa9DO.png')
img = img[362:915, 45:877] # cut out actual image
img = img(slice(0,2)) # remove alpha channel
tol = 7
# Case 2:
#img = dip.ImageRead('jTnVr.jpg')
#tol = 1
# Compute gradient
gm = dip.Norm(dip.GradientMagnitude(img))
# Compute watershed with tolerance
lab = dip.Watershed(gm, connectivity=1, maxDepth=tol, flags={'correct','labels'})
# Identify regions touching the image edge
ll = np.unique(np.concatenate((
    np.unique(lab[:,0]),
    np.unique(lab[:,-1]),
    np.unique(lab[0,:]),
    np.unique(lab[-1,:]))))
# Remove regions touching the image edge
out = dip.Image(lab.Sizes(), dt='BIN')
out.Fill(1)
for l in ll:
    if l != 0:  # label zero is for the watershed lines
        out = out - (lab == l)
# Remove watershed lines
out = dip.Opening(out, dip.SE(3, 'rectangular'))
# Remove small regions
out = dip.AreaOpening(out, filterSize=300)
# Display
dip.Overlay(img, dip.Dilation(out, 3) - out).Show()
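As a side note, the "keep only the one largest object" alternative mentioned above could be sketched as follows (an illustration only, using skimage rather than DIPlib; it assumes the binary mask out has been converted to a NumPy array):

import numpy as np
from skimage.measure import label

def keep_largest(mask):
    """Return a boolean mask containing only the largest connected component."""
    lab = label(mask)                    # label connected components (0 = background)
    if lab.max() == 0:                   # nothing found, return the mask unchanged
        return mask
    sizes = np.bincount(lab.ravel())     # pixel count per label
    largest = sizes[1:].argmax() + 1     # label of the largest non-background component
    return lab == largest

# Example: out_np = keep_largest(np.asarray(out) > 0)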
I have 2 images like the ones below. I want to get the differences between the two of them.
I tried some code with thresholding, but there is no difference between the thresholds; the threshold images of both are completely black. How can I make the differences show up in white on the threshold image?
(The difference is just at the top-left, like a small dark spot.)
before = cv2.imread('1.png')
after = cv2.imread('2.png')
threshb = cv2.threshold(before, 0, 255, cv2.THRESH_BINARY_INV)[1]
thresha = cv2.threshold(after, 0, 255, cv2.THRESH_BINARY_INV)[1]
cv2.imshow("before", threshb)
cv2.imshow("after", thresha)
Note: I used "structural_similarity" (link here) for finding differences, but it finds a lot of differences :(
I don't need small pixel differences. I need differences that are visible to the human eye.
The way to handle that is to do absdiff followed by some gain in Python/OpenCV. When you do the absdiff with these images, the difference is small, so it will not be much above black. So you have to increase the gain to see the difference. (Or you could set an appropriate threshold.)
Input 1:
Input 2:
import cv2
import numpy as np
from skimage import exposure as exposure
# read image 1
img1 = cv2.imread('gray1.png', cv2.IMREAD_GRAYSCALE)
# read image 2
img2 = cv2.imread('gray2.png', cv2.IMREAD_GRAYSCALE)
# do absdiff
diff = cv2.absdiff(img1,img2)
# apply gain
result1 = cv2.multiply(diff, 5)
# or do threshold
result2 = cv2.threshold(diff, 10, 255, cv2.THRESH_BINARY)[1]
# save result
cv2.imwrite('gray1_gray2_diff1.png', result1)
cv2.imwrite('gray1_gray2_diff2.png', result2)
# display result
cv2.imshow('diff', diff)
cv2.imshow('result1', result1)
cv2.imshow('result2', result2)
cv2.waitKey(0)
cv2.destroyAllWindows()
Result 1 (gain):
Result 2 (threshold):
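An alternative to the fixed gain, hinted at by the skimage exposure import in the code above, is to stretch the small difference values to the full intensity range (a sketch, reusing the diff array and imports from the code above):

from skimage import exposure

# Stretch diff so that its own min/max map to 0..255; even faint
# differences then become clearly visible.
result3 = exposure.rescale_intensity(diff, in_range='image', out_range=(0, 255)).astype('uint8')
cv2.imwrite('gray1_gray2_diff3.png', result3)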
Does anyone know how I can make these results better?
Total Kills: 15,230,550
Kill Details: (recorded after 2019/10,/Z3]
993,151 331,129
1,330,450 33,265,533
5,031,168
This is what it returns; however, it is meant to be the same as the image posted below. I am new to Python, so are there any parameters that I can add to make it read the image better?
img = cv2.imread("kills.jpeg")
text = pytesseract.image_to_string(img)
print(text)
This is my code to read the image. Is there anything I can add to make it read better? Also, the black boxes cover images that were interfering with the reading. I added the 2 black boxes to see if the images behind them were causing the issue, but I still get the same result.
The missing knowledge is the page segmentation mode (psm). You need to set it when you can't get the desired result.
If we look at your image, the only artifacts are the black columns. Other than that, the image looks like a binary image, suitable for tesseract to recognize the characters and the digits.
Let's try reading the image by setting the psm to 6.
6 Assume a single uniform block of text.
print(pytesseract.image_to_string(img, config="--psm 6"))
The result will be:
Total Kills: 75,230,550
Kill Details: (recorded after 2019/10/23)
993,161 331,129
1,380,450 33,265,533
5,031,168
Update
The second way to solve the problem is getting a binary mask and applying OCR to the mask features.
Binary-mask
Features of the binary-mask
As we can see, the result is slightly different from the input image. Now when we apply OCR, the result will be:
Total Kills: 75,230,550
Kill Details: (recorded after 2019/10/23)
993,161 331,129
1,380,450 33,265,533
5,031,168
Code:
import cv2
import numpy as np
import pytesseract
# Load the image
img = cv2.imread("LuKz3.jpg")
# Convert to hsv
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
# Get the binary mask
msk = cv2.inRange(hsv, np.array([0, 0, 0]), np.array([179, 255, 154]))
# Extract
krn = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 3))
dlt = cv2.dilate(msk, krn, iterations=5)
res = 255 - cv2.bitwise_and(dlt, msk)
# OCR
txt = pytesseract.image_to_string(res, config="--psm 6")
print(txt)
# Display
cv2.imshow("res", res)
cv2.waitKey(0)
I wrote a little script to transform pictures of chalkboards into a form that I can print off and mark up.
I take an image like this:
Auto-crop it, and binarize it. Here's the output of the script:
I would like to remove the largest connected black regions from the image. Is there a simple way to do this?
I was thinking of eroding the image to eliminate the text and then subtracting the eroded image from the original binarized image, but I can't help thinking that there's a more appropriate method.
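For illustration, the erode-and-subtract idea just described might be sketched roughly like this with OpenCV (the file name and kernel size are assumptions; a closing plus a bitwise OR stands in for a literal subtraction, since the image is white-background binary):

import cv2

# Assumed input: the binarized board image (white background, black text and blobs).
img = cv2.imread("binarized.png", cv2.IMREAD_GRAYSCALE)

# "Erode the text away": a closing with a kernel wider than the stroke width removes
# thin dark strokes but keeps the large dark regions.
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (15, 15))
blobs_only = cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel)

# "Subtract" the blobs: wherever blobs_only is black, force the result to white.
result = cv2.bitwise_or(img, cv2.bitwise_not(blobs_only))
cv2.imwrite("no_blobs.png", result)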
Sure, you can just get connected components (of a certain size) with findContours or floodFill and erase them, leaving some smear. However, if you'd like to do it right, you would think about why you have the black area in the first place.
You did not use adaptive thresholding (locally adaptive) and this made your output sensitive to shading. Try not to get the black region in the first place by running something like this:
Mat img = imread("desk.jpg", 0);
Mat img2, dst;
pyrDown(img, img2);
adaptiveThreshold(255 - img2, dst, 255, ADAPTIVE_THRESH_MEAN_C,
                  THRESH_BINARY, 9, 10);
imwrite("adaptiveT.png", dst);
imshow("dst", dst);
waitKey(-1);
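For reference, a rough Python/OpenCV equivalent of the snippet above (a sketch; the file name is assumed):

import cv2

img = cv2.imread("desk.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.pyrDown(img)  # downsample: faster and less noise
dst = cv2.adaptiveThreshold(255 - img2, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                            cv2.THRESH_BINARY, 9, 10)
cv2.imwrite("adaptiveT.png", dst)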
In the future, you may want to read about adaptive thresholds and how to sample colors locally. I personally found it useful to sample binary colors orthogonally to the image gradient (that is, on both sides of it). This way the samples of white and black are of equal size, which is a big deal, since there are typically more background pixels, which biases the estimation. Using SWT and MSER may give you even more ideas about text segmentation.
I tried this:
import numpy as np
import cv2
im = cv2.imread('image.png')
gray = cv2.cvtColor(im,cv2.COLOR_BGR2GRAY)
grayout = 255*np.ones((im.shape[0],im.shape[1],1), np.uint8)
blur = cv2.GaussianBlur(gray,(5,5),1)
thresh = cv2.adaptiveThreshold(blur,255,1,1,11,2)
contours,hierarchy = cv2.findContours(thresh,cv2.RETR_LIST,cv2.CHAIN_APPROX_SIMPLE)
wcnt = 0
for item in contours:
    area = cv2.contourArea(item)
    print(wcnt, area)
    [x, y, w, h] = cv2.boundingRect(item)
    if area > 10 and area < 200:
        roi = gray[y:y+h, x:x+w]
        cntd = 0
        for i in range(x, x+w):
            for j in range(y, y+h):
                if gray[j, i] == 0:
                    cntd = cntd + 1
        density = cntd / float(h * w)
        if density < 0.5:
            for i in range(x, x+w):
                for j in range(y, y+h):
                    grayout[j, i] = gray[j, i]
            wcnt = wcnt + 1
cv2.imwrite('result.png', grayout)
You have to balance two things: removing the black spots without losing the contents of what is on the board. The output I got is this:
Here is a Python numpy implementation (using my own mahotas package) of the method from the top answer (almost the same, I think):
import mahotas as mh
import numpy as np
Import mahotas & numpy with standard abbreviations.
im = mh.imread('7Esco.jpg', as_grey=1)
Load the image & convert to gray
im2 = im[::2,::2]
im2 = mh.gaussian_filter(im2, 1.4)
Downsample and blur (for speed and noise removal).
im2 = 255 - im2
Invert the image
mean_filtered = mh.convolve(im2.astype(float), np.ones((9,9))/81.)
Mean filtering is implemented "by hand" with a convolution.
imc = im2 > mean_filtered - 4
You might need to adjust the number 4 here, but it worked well for this image.
mh.imsave('binarized.png', (imc*255).astype(np.uint8))
Convert to 8 bits and save in PNG format.