PIL to OpenCV MAT causes color shift - python

When loading a PNG image with PIL and then displaying it with OpenCV, there is a color shift: black and white stay the same, but brown gets changed to blue.
I can't post the image because this site does not allow new users to post images.
The code is written as below rather than using cv.LoadImageM, because in the real case the raw image is received over TCP.
Here is the code:
#! /usr/bin/env python
import sys
import cv
import cv2
import numpy as np
import Image
from cStringIO import StringIO
if __name__ == "__main__":
    # load raw image from file
    f = open('frame_in.png', "rb")
    rawImage = f.read()
    f.close()
    # convert to mat
    pilImage = Image.open(StringIO(rawImage))
    npImage = np.array(pilImage)
    cvImage = cv.fromarray(npImage)
    # show it
    cv.NamedWindow('display')
    cv.MoveWindow('display', 10, 10)
    cv.ShowImage('display', cvImage)
    cv.WaitKey(0)
    cv.SaveImage('frame_out.png', cvImage)
How can the color shift be fixed?

OpenCV images have their color channels arranged in BGR order, whereas PIL uses RGB. You will need to swap the channels, like so:
import PIL.Image
import cv2
import numpy as np
...
image = np.array(pilImage)  # convert PIL Image to numpy/OpenCV image representation
image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)
# Use cv2.COLOR_RGBA2BGRA if you are sure you have an alpha channel.
# You will only have an alpha channel if your image format supports transparency.
...
#Krish: Thanks for pointing out the bug. I didn't have time to test the code the last time.
Hope this helps.
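Putting it together, a minimal end-to-end sketch of the same pipeline using only the cv2 API (my reconstruction, assuming Python 3 and the frame_in.png from the question):
import numpy as np
import cv2
from PIL import Image
from io import BytesIO
# read the raw bytes, as if they had arrived over TCP
with open('frame_in.png', 'rb') as f:
    rawImage = f.read()
# decode with PIL, force 3-channel RGB, then reorder to OpenCV's BGR
pilImage = Image.open(BytesIO(rawImage)).convert('RGB')
image = cv2.cvtColor(np.array(pilImage), cv2.COLOR_RGB2BGR)
cv2.imshow('display', image)
cv2.waitKey(0)
cv2.imwrite('frame_out.png', image)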

Change
pilImage = Image.open(StringIO(rawImage))
to
pilImage = Image.open(StringIO(rawImage)).convert("RGB")
This forces PIL to decode the PNG into plain 3-channel RGB, regardless of whether the file is palette-based or carries an alpha channel.

Light alchemist's answer did not work, but it did explain the issue. Wouldn't a plain reversal be thrown off by the alpha channel, i.e. it would turn RGBA into ABGR? I would have thought Froyo's answer would solve it, but it did not change the displayed image at all. What did work was reversing the colors in OpenCV. I'm too much of a newbie to know why; the two approaches seem equivalent to me. Reversing the colors in numpy would be preferred, since additional processing is planned in numpy. But thanks for the help; the answers steered me in the right direction.
pilImage = Image.open(StringIO(rawImage))
bgrImage = np.array(pilImage)
cvBgrImage = cv.fromarray(bgrImage)
# reverse BGR
cvRgbImage = cv.CreateImage(cv.GetSize(cvBgrImage), 8, 3)
cv.CvtColor(cvBgrImage, cvRgbImage, cv.CV_BGR2RGB)
# show it
cv.ShowImage('display', cvRgbImage)
cv.WaitKey(30) # ms to allow display
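For reference, a minimal sketch of the numpy-side reversal preferred above (my addition, assuming a plain 3-channel RGB array from PIL):
pilImage = Image.open(StringIO(rawImage)).convert('RGB')
npImage = np.array(pilImage)[:, :, ::-1] # reverse RGB to BGR in numpy
cvImage = cv.fromarray(npImage.copy()) # copy so the array is contiguous again
cv.ShowImage('display', cvImage)
cv.WaitKey(30)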

Related

Why does tesseract fail to read text off this simple image?

I have read mountains of posts on pytesseract, but I cannot get it to read text off a dead simple image; it returns an empty string.
Here is the image: https://i.stack.imgur.com/J2ojU.png
I have tried scaling it, grayscaling it, adjusting the contrast, thresholding, and blurring it, everything other posts suggest, but my problem is that I don't know what the OCR wants in order to work better. Does it want blurry text? High contrast?
Code to try:
import pytesseract
from PIL import Image
print(pytesseract.image_to_string(Image.open(IMAGE_FILE)))
As you can see in my code, the image is stored locally on my computer, hence Image.open()
Trying something along the lines of
import pytesseract
from PIL import Image
import requests
import io
response = requests.get('https://i.stack.imgur.com/J2ojU.png')
img = Image.open(io.BytesIO(response.content))
text = pytesseract.image_to_string(img, lang='eng', config='--psm 7')
print(text)
with --psm values equal to or larger than 6 did yield "Gm" for me.
If the image is stored locally (and in your working directory), just drop the response variable and change the definition of text to the lines
image_name = "J2ojU.png" # or whatever appropriate
text = pytesseract.image_to_string(Image.open(image_name), lang='eng', config='--psm 7')
There are several reasons:
Edges are not sharp and continuous (by sharp I mean smooth, not jagged)
The image is too small; you need to resize it
The font is missing (not mandatory, but a trained font dramatically improves the chance of recognition)
Based on points 1) and 2) I was able to recognize the text:
1) I resized the image 3x and 2) I blurred it to make the edges smooth
import pytesseract
import cv2
import numpy as np
import urllib.request
pytesseract.pytesseract.tesseract_cmd = 'C:/Program Files (x86)/Tesseract-OCR/tesseract'
def url_to_image(url):
    # download the image and decode it into an OpenCV array
    resp = urllib.request.urlopen(url)
    image = np.asarray(bytearray(resp.read()), dtype="uint8")
    image = cv2.imdecode(image, cv2.IMREAD_COLOR)
    return image
url = 'https://i.stack.imgur.com/J2ojU.png'
img = url_to_image(url)
# threshold, upscale 3x, then smooth the edges
retval, img = cv2.threshold(img, 200, 255, cv2.THRESH_BINARY)
img = cv2.resize(img, (0,0), fx=3, fy=3)
img = cv2.GaussianBlur(img, (11,11), 0)
img = cv2.medianBlur(img, 9)
cv2.imshow('asd', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
txt = pytesseract.image_to_string(img)
print('recognition:', txt)
>> recognition: Gm
Note: this script is good for testing any image on the web.
Note 2: all the processing is based on your posted image.
Note 3: text recognition is not easy. Every recognition task requires special processing. If you try these steps on a different image, it may not work at all. The important thing is to try recognition on a lot of images, so you understand what tesseract wants.

Error when overlaying two images in OpenCV and or PIL

I've tried overlaying two images, both in OpenCV and in PIL, but to no avail. I'm using a 1000x1000x3 array of np.zeros for the background (i.e. a black background) and this random image of my monitor, but I really can't get it to work, for reasons unbeknownst to me.
Trying with OpenCV only (in the result, if you pay attention, you can see a couple of weird lines and dots in the middle):
base_temp = np.zeros((1000,1000,3))
foreground_temp = cv2.imread('exampleImageThatILinkedAbove.png')
base_temp[offset_y:offset_y+foreground_temp.shape[0], offset_x:offset_x+foreground_temp.shape[1]] = foreground_temp
Trying with PIL (the result is literally the same as the OpenCV version):
base_temp = cv2.convertScaleAbs(self.base) #Convert to uint8 for cvtColor
base_temp = cv2.cvtColor(base_temp, cv2.COLOR_BGR2RGB) #PIL uses RGB and OpenCV uses BGR
base_temp = Image.fromarray(base_temp) #Convert to PIL Image
foreground_temp = cv2.cvtColor(cv2.convertScaleAbs(self.last_img), cv2.COLOR_BGR2RGB)
foreground_temp = Image.fromarray(foreground_temp)
base_temp.paste(foreground_temp, offset)
I'm using Python 3.5 and OpenCV 3.4 on Windows 10, if that's any help.
I'd like to avoid any solutions that require saving the cv2 images and then reloading them in another module to convert them but if it's unavoidable that's okay too. Any help would be appreciated!
If you check the type of base_temp, you will see it is float64, and that is going to cause you problems when you try to save it as a JPEG, which expects unsigned 8-bit values.
So the solution is to create your base_temp image with the correct type:
base_temp = np.zeros((1000,1000,3), dtype=np.uint8)
The complete code and result look like this:
import cv2
import numpy as np
from PIL import Image
# Make black background - not square, so it shows up problems with swapped dimensions
base_temp=np.zeros((768,1024,3),dtype=np.uint8)
foreground_temp=cv2.imread('monitor.png')
# Paste with different x and y offsets so it is clear when indices are swapped
offset_y=80
offset_x=40
base_temp[offset_y:offset_y+foreground_temp.shape[0], offset_x:offset_x+foreground_temp.shape[1]] = foreground_temp
Image.fromarray(base_temp).save('result.png')

Skimage SimilarityTransform spoils image

I have code which recognizes faces in images (a dlib implementation, 68 points).
I want to rotate some images a bit, but then I run into the following trouble: the image somehow gets spoiled:
import numpy as np
from skimage import io
from skimage import transform as tf
img = io.imread(f)
tform = tf.SimilarityTransform(rotation=np.deg2rad(10), translation=(10,12))
img = tf.warp(img, tform)
I plot the image in two ways:
plt.imshow(img) # the right picture (matplotlib)
win = dlib.image_window() # the left picture (dlib)
win.set_image(img) # the left picture
As you can see, the dlib image is broken, and the algorithm that finds facial keypoints stops working. Without the SimilarityTransform, dlib works correctly.
Help me please! I want to rotate an image and pass it to dlib.
I found the solution.
I simply needed to convert the image with img_as_ubyte; tf.warp returns a float64 image in the range [0, 1], while dlib expects 8-bit pixels:
from skimage import img_as_ubyte
img = img_as_ubyte(tf.warp(img,tform))
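A minimal end-to-end sketch of that fix (my reconstruction, assuming a local file face.jpg):
import numpy as np
import dlib
from skimage import io, img_as_ubyte
from skimage import transform as tf
img = io.imread('face.jpg')
tform = tf.SimilarityTransform(rotation=np.deg2rad(10), translation=(10, 12))
# warp returns float64 in [0, 1]; img_as_ubyte rescales back to uint8
img = img_as_ubyte(tf.warp(img, tform))
win = dlib.image_window()
win.set_image(img) # dlib now renders the image correctly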

Using python to save a JPG image that was edited in the script

Referring to the answer to this question, I tried to save my own JPG image files, after some basic image processing. I've only applied a rotation and a shear. This is my code:
import numpy as np
import sys
from skimage import data, io, color, exposure
import skimage.transform as tf
from skimage.transform import rotate, AffineTransform
from PIL import Image
mypath = PATH_TO_FILENAME
readfile = FILENAME
img = color.rgb2gray(io.imread(mypath + readfile))
# rotate, then shear
myimg = rotate(img, angle=10, order=2)
afine_tf = tf.AffineTransform(shear=0.1)
editedimg = tf.warp(myimg, afine_tf)
# IF I UNCOMMENT THE TWO LINES BELOW, I CAN SEE THE EDITED IMAGE AS EXPECTED
#io.imshow(editedimg)
#io.show()
saveimg = np.array(editedimg)
result = Image.fromarray(saveimg.astype(np.uint8))
newfile = "edited_" + readfile
result.save(mypath + newfile)
I know the image processing was fine, because if I display the image before saving, it's just the original with a bit of rotation and shearing, as expected. But I'm doing something wrong while saving it. I tried without the astype(np.uint8) part, but got an error. Then I removed some of the code from the link mentioned above, because I guessed it was specific to Fourier transforms; with some of their code included I got an image that was all gray, with white lines in the direction of the shear I'd applied. But now the image that gets saved is just 2KB of nothing but blackness.
And when I tried something as simple as:
result = Image.fromarray(editedimg)
result.save(path+newfile)
then I got this error:
raise IOError("cannot write mode %s as JPEG" % im.mode)
IOError: cannot write mode F as JPEG
I don't really need to use PIL, if there's another way to simply save my image, I'm fine with that.
Look into the PIL fork, Pillow; it is not as outdated and is probably what you should be using for this.
Also, depending on your operating system, you may need a few other libraries to compile PIL with JPEG support properly; see here.
This may also help: it says you need to convert your image to RGB mode before saving.
Image.open('old.jpeg').convert('RGB').save('new.jpeg')
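For completeness, a likely cause of the black output (my note, not from the original answers): skimage's rotate and warp return float64 images in the range [0, 1], so astype(np.uint8) truncates almost every pixel to 0 or 1, and Image.fromarray on a raw float array produces a mode 'F' image, which also explains the "cannot write mode F as JPEG" error. Rescaling before converting should fix both:
result = Image.fromarray((editedimg * 255).astype(np.uint8))
result.save(mypath + newfile)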

convert openCV image into PIL Image in Python (for use with Zbar library)

I'm trying to use the Zbar library's QR code detection methods on images I extract with OpenCV's camera methods. Normally the QR code detection methods work with image files (jpg, png, etc.) on my computer, but I guess the frames captured by OpenCV are different.
Is there a way of turning a captured frame into a PIL Image?
Thank you.
from PIL import Image
import zbar
import cv2.cv as cv
capture = cv.CaptureFromCAM(1)
imgSize = cv.GetSize(cv.QueryFrame(capture))
img = cv.QueryFrame(capture)
#SOMETHING GOES HERE TO TURN FRAME INTO IMAGE
img = img.convert('L')
width, height = img.size
scanner = zbar.ImageScanner()
scanner.parse_config('enable')
zbar_img = zbar.Image(width, height, 'Y800', img.tostring())
# scan the image for barcodes
scanner.scan(zbar_img)
for symbol in zbar_img:
    print symbol.data
With the Python cv2 module, you can also do this:
import Image, cv2
cap = cv2.VideoCapture(0) # capture an image from the webcam
_,cv2_im = cap.read()
cv2_im = cv2.cvtColor(cv2_im,cv2.COLOR_BGR2RGB)
pil_im = Image.fromarray(cv2_im)
pil_im.show()
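To then feed that frame to zbar as in the question, a sketch along these lines should work (my addition, reusing the zbar calls from the question code above):
pil_gray = pil_im.convert('L') # zbar's Y800 format expects 8-bit grayscale
width, height = pil_gray.size
scanner = zbar.ImageScanner()
scanner.parse_config('enable')
zbar_img = zbar.Image(width, height, 'Y800', pil_gray.tostring())
scanner.scan(zbar_img)
for symbol in zbar_img:
    print symbol.data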
I think I may have found the answer. I'll edit later with results.
OpenCV to PIL Image
import Image, cv
cv_im = cv.CreateImage((320,200), cv.IPL_DEPTH_8U, 1)
pi = Image.fromstring("L", cv.GetSize(cv_im), cv_im.tostring())
Source: http://opencv.willowgarage.com/documentation/python/cookbook.html
Are you trying to obtain an RGB image? If that is the case, you need to change your parameters from this:
cv_im = cv.CreateImage((320,200), cv.IPL_DEPTH_8U, 1)
pi = Image.fromstring("L", cv.GetSize(cv_im), cv_im.tostring())
to that:
cv_im = cv.CreateImage((320,200), cv.IPL_DEPTH_8U, 3)
pi = Image.fromstring("RGB", cv.GetSize(cv_im), cv_im.tostring())
It is documented almost nowhere, but the 'L' parameter of Image.fromstring is for 8-bit black-and-white images. Besides, you need to change the last argument of your cv.CreateImage call from 1 (single-channel image) to 3 (3 channels = RGB).
Hope it works for you.
Cheers
A simple way is to directly swap the channels. Suppose you are trying to convert a 3-channel image between OpenCV format and PIL format. You can just use:
img[..., [0, 2]] = img[..., [2, 0]]
This way you won't be bothered with cv2.cvtColor, as that function only works on images of certain depths. Note that the swap is its own inverse, so the same line converts in either direction.
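A quick self-contained check (my addition) that the swap matches cv2.cvtColor on a 3-channel uint8 image:
import numpy as np
import cv2
img = np.random.randint(0, 256, (4, 4, 3), dtype=np.uint8) # dummy BGR image
swapped = img.copy()
swapped[..., [0, 2]] = swapped[..., [2, 0]] # swap channels 0 and 2 in place
assert np.array_equal(swapped, cv2.cvtColor(img, cv2.COLOR_BGR2RGB))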
