I have a function which returns a pointer to the beginning of a sequence of bytes corresponding to 8-bit greyscale pixels. I'm trying to use PIL's frombuffer() function to create an image out of this. Looking here, I did the following:
image_data = (1280*960*ctypes.c_ubyte)()
image_data = Frameptr
im = Image.frombuffer("L", (1280, 960), image_data, "raw", "L", 0, 1)
However, I still get this error message:
Traceback (most recent call last):
File "_ctypes/callbacks.c", line 314, in 'calling callback function'
File "C:\Desktop\Program_2013\camera\framegrab.py", line 44, in FrameDataCallBack
im = Image.frombuffer("L", (1280, 960), image_data, "raw", "L", 0, 1)
File "C:\Python27\lib\site-packages\PIL\Image.py", line 1853, in frombuffer
core.map_buffer(data, size, decoder_name, None, 0, args)
ValueError: buffer is not large enough
Any help would be appreciated!
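For what it's worth, one likely cause is that the second line rebinds image_data from the allocated buffer to the pointer itself, so frombuffer() only sees a pointer-sized object rather than 1280*960 bytes. Below is a minimal sketch of a workaround, assuming Frameptr is a ctypes pointer to the start of the raw frame:

import ctypes
from PIL import Image

# Sketch only: Frameptr is assumed to be a ctypes pointer to 1280*960 bytes
# of 8-bit greyscale data. string_at() copies that many bytes into a Python
# bytes object, whose size frombuffer() can measure correctly.
buf = ctypes.string_at(Frameptr, 1280 * 960)
im = Image.frombuffer("L", (1280, 960), buf, "raw", "L", 0, 1)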
I'm following the tutorial here:
in order to create a Python program that will create a deep-dream style image and save it to disk. I thought that changing the following lines should do the trick:
img = run_deep_dream_with_octaves(img=original_img, step_size=0.01)
display.clear_output(wait=True)
img = tf.image.resize(img, base_shape)
img = tf.image.convert_image_dtype(img/255.0, dtype=tf.uint8)
tf.compat.v1.enable_eager_execution()
fname = '2.jpg'
with tf.compat.v1.Session() as sess:
    enc = tf.io.encode_jpeg(img)
    fwrite = tf.io.write_file(tf.constant(fname), enc)
    result = sess.run(fwrite)
The key line is encode_jpeg; however, this gives me the following error:
Traceback (most recent call last):
File "main.py", line 246, in <module>
enc = tf.io.encode_jpeg(img)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-
packages/tensorflow/python/ops/gen_image_ops.py", line 1496, in encode_jpeg
name=name)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-
packages/tensorflow/python/framework/op_def_library.py", line 470, in
_apply_op_helper
preferred_dtype=default_dtype)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-
packages/tensorflow/python/framework/ops.py", line 1465, in convert_to_tensor
raise RuntimeError("Attempting to capture an EagerTensor without "
RuntimeError: Attempting to capture an EagerTensor without building a function.
You can simply convert the "img" tensor into a NumPy array and then save it, since you have eager execution enabled (it's enabled by default in TF 2.0).
So, the modified code for saving the image will be:
img = run_deep_dream_with_octaves(img=original_img, step_size=0.01)
display.clear_output(wait=True)
img = tf.image.resize(img, base_shape)
img = tf.image.convert_image_dtype(img/255.0, dtype=tf.uint8)
fname = '2.jpg'
PIL.Image.fromarray(np.array(img)).save(fname)
You don't have to use sessions in TF 2.0 to get the values from a tensor.
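For reference, here is a tiny standalone sketch (using a dummy tensor, not the tutorial's image) showing that with eager execution an EagerTensor converts straight to NumPy and can be saved with PIL:

import PIL.Image
import tensorflow as tf

# Dummy uint8 image tensor; .numpy() works because eager execution is the
# default in TF 2.x, so no Session is needed.
img = tf.cast(tf.random.uniform((64, 64, 3), maxval=255), tf.uint8)
PIL.Image.fromarray(img.numpy()).save('example.jpg')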
I'm trying to use Python and PIL to add some text to an image, but I am failing to save the resulting image as a JPG.
I've based it on the example given on
https://pillow.readthedocs.io/en/5.2.x/reference/ImageDraw.html#example-draw-partial-opacity-text
from PIL import Image, ImageDraw, ImageFont
def example():
    base = Image.open('test.jpg').convert('RGBA')
    txt = Image.new('RGBA', base.size, (255,255,255,0))
    fnt = ImageFont.truetype('/Library/Fonts/Chalkduster.ttf', 40)
    drw = ImageDraw.Draw(txt)
    drw.text((10,10), "HELLO", font=fnt, fill=(255,0,0,128))
    result = Image.alpha_composite(base, txt)
    result.convert('RGB')
    print('mode after convert = %s' % result.mode)
    result.save('test1.jpg', 'JPEG')

example()
Running this prints mode after convert = RGBA
which is then followed by
Traceback (most recent call last):
File "/Users/carl/miniconda3/envs/env0/lib/python3.7/site-packages/PIL/JpegImagePlugin.py", line 620, in _save
rawmode = RAWMODE[im.mode]
KeyError: 'RGBA'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "example.py", line 14, in <module>
example()
File "example.py", line 12, in example
result.save('test1.jpg','JPEG')
File "/Users/carl/miniconda3/envs/env0/lib/python3.7/site-packages/PIL/Image.py", line 2007, in save
save_handler(self, fp, filename)
File "/Users/carl/miniconda3/envs/env0/lib/python3.7/site-packages/PIL/JpegImagePlugin.py", line 622, in _save
raise IOError("cannot write mode %s as JPEG" % im.mode)
OSError: cannot write mode RGBA as JPEG
The image is still RGBA after calling convert('RGB').
What am I doing wrong?
You forgot to assign the output back to result. Change the code below:
old:
result.convert('RGB')
new:
result = result.convert('RGB')
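With that one change, the tail of the question's example() looks like this (a sketch based on the code above):

    result = Image.alpha_composite(base, txt)
    result = result.convert('RGB')   # reassign: convert() returns a new image
    print('mode after convert = %s' % result.mode)
    result.save('test1.jpg', 'JPEG')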
How do I read an image inside a function with PIL? In this scenario I'm passing an image to the paste_image function, but it won't work with PIL.
def paste_image(image):
    for i in range(epoches):
        im2 = Image.open('/home/navaneeth/work/oneon/1.png')
        x, y = im2.size
        image.paste(im2, (0, 0, x, y))
        image.save("test_" + str(i) + ".jpg", "JPEG")
and I'm getting this error:
Traceback (most recent call last):
File "main.py", line 109, in <module>
paste_image(image)
File "main.py", line 98, in paste_image
image.paste(im2, (0, 0, x, y))
AttributeError: 'numpy.ndarray' object has no attribute 'paste'
You can use this code to get the result you want:
from PIL import Image
im2 = Image.open("/home/navaneeth/work/oneon/1.png")
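That said, the traceback shows that image arrives as a numpy.ndarray, which has no paste() method. One possible fix, sketched under that assumption, is to convert the array to a PIL image before pasting:

import numpy as np
from PIL import Image

def paste_image(image):
    # Assumption: image is a NumPy array; convert it so .paste() is available.
    if isinstance(image, np.ndarray):
        image = Image.fromarray(image)
    im2 = Image.open('/home/navaneeth/work/oneon/1.png')
    x, y = im2.size
    image.paste(im2, (0, 0, x, y))
    image.save("test_0.jpg", "JPEG")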
So I am trying to create a Python program to detect similar details in two images using Python's OpenCV. I have the two images in my current directory, and they exist (see the code in lines 6-17), but I am getting the following error when I try running it.
import numpy as np
import matplotlib.pyplot as plt
import cv2
import os
path1 = "WIN_20171207_13_51_33_Pro.jpg"
path2 = "WIN_20171207_13_51_43_Pro.jpg"
if os.path.isfile(path1):
    img1 = cv2.imread('WIN_20171207_13_51_33_Pro.jpeg',0)
else:
    print ("The file " + path1 + " does not exist.")
if os.path.isfile(path2):
    img2 = cv2.imread('WIN_20171207_13_51_43_Pro.jpeg',0)
else:
    print ("The file " + path2 + " does not exist.")
orb = cv2.ORB_create()
kpl1, des1 = orb.detectAndCompute(img1,None)
kpl2, des2 = orb.detectAndCompute(img2,None)
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = bf.match(des1, des2)
matches = sorted(matches, key=lambda x:x.distance)
img3 = cv2.drawMatches(img1,kpl1,img2,kpl2,matches[:10],None, flags=2)
plt.imshow (img3)
plt.show()
Here is the error I keep on getting...
Traceback (most recent call last):
File "C:\Users\jweir\source\repos\BruteForceFeatureDetection\BruteForceFeatureDetection\BruteForceFeatureDetection.py", line 31, in <module>
plt.imshow (img3)
File "C:\Program Files\Python36\lib\site-packages\matplotlib\pyplot.py", line 3080, in imshow
**kwargs)
File "C:\Program Files\Python36\lib\site-packages\matplotlib\__init__.py", line 1710, in inner
return func(ax, *args, **kwargs)
File "C:\Program Files\Python36\lib\site-packages\matplotlib\axes\_axes.py", line 5194, in imshow
im.set_data(X)
File "C:\Program Files\Python36\lib\site-packages\matplotlib\image.py", line 600, in set_data
raise TypeError("Image data cannot be converted to float")
TypeError: Image data cannot be converted to float
Can someone please explain to me why I am getting this error, what it means, and how to fix it?
You're not actually reading in an image.
Check out what happens if you try to display None in matplotlib:
plt.imshow(None)
Traceback (most recent call last):
File ".../example.py", line 16, in <module>
plt.imshow(None)
File ".../matplotlib/pyplot.py", line 3157, in imshow
**kwargs)
File ".../matplotlib/__init__.py", line 1898, in inner
return func(ax, *args, **kwargs)
File ".../matplotlib/axes/_axes.py", line 5124, in imshow
im.set_data(X)
File ".../matplotlib/image.py", line 596, in set_data
raise TypeError("Image data can not convert to float")
TypeError: Image data can not convert to float
You're reading WIN_20171207_13_51_33_Pro.jpeg but you're checking whether WIN_20171207_13_51_33_Pro.jpg exists. Note the different extensions. Why do you have the filename written twice (and differently)? Just write:
if os.path.isfile(path1):
    img1 = cv2.imread(path1, 0)
else:
    print("The file " + path1 + " does not exist.")
Note that even if you put a bogus file into cv2.imread(), the resulting image will just be None, which doesn't error in any of the subsequent function calls until matplotlib tries to draw it. If you print(img1) after reading, you'll see it's None and not reading properly.
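As a quick guard (a sketch, not part of the original code), you can fail fast when imread() returns None:

img1 = cv2.imread(path1, 0)
if img1 is None:
    raise FileNotFoundError("cv2.imread could not read " + path1)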
I do not know whether it is relevant to your case, but since we pass the file path as a string, like cv2.imread("filepathHere"), sequences such as "\b" or "\r" occurring in the file path can cause the program to raise an error like this.
When I encountered such an error before, I changed the file name from brick.png to ibrick.png and the problem was resolved.
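For reference, a raw string (or forward slashes) keeps sequences like "\b" from being interpreted as escape characters; a small sketch with a hypothetical path:

import cv2

# r"..." is a raw string, so the backslash before 'b' stays a literal backslash.
img = cv2.imread(r"C:\files\brick.png", 0)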
I am trying to display an image in python and I am not 100% sure why imshow() is throwing an error.
The error trace is:
Traceback (most recent call last):
File "knn.py", line 65, in <module>
digit_axes.imshow(paths[0],cmap = cm.Greys_r)
File "/usr/local/lib/python2.7/site-packages/matplotlib/__init__.py", line 1892, in inner
return func(ax, *args, **kwargs)
File "/usr/local/lib/python2.7/site-packages/matplotlib/axes/_axes.py", line 5118, in imshow
im.set_data(X)
File "/usr/local/lib/python2.7/site-packages/matplotlib/image.py", line 545, in set_data
raise TypeError("Image data can not convert to float")
TypeError: Image data can not convert to float
The code is as follows:
paths = []
paths.append('./images/image1.png')
digit_axes = main_figure.add_subplot(211)
digit_axes.get_xaxis().set_visible(False)
digit_axes.get_yaxis().set_visible(False)
digit_axes.set_title('Image')
digit_axes.imshow(paths[0],cmap = cm.Greys_r)
I think I found the solution by using imread():
img = imread(paths[0])
digit_axes.imshow(img,cmap = cm.Greys_r)
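For completeness, a minimal self-contained sketch, assuming imread() comes from matplotlib (the original snippet does not show its import):

import matplotlib.pyplot as plt
import matplotlib.cm as cm

img = plt.imread('./images/image1.png')   # load pixel data instead of passing the path string
main_figure = plt.figure()
digit_axes = main_figure.add_subplot(211)
digit_axes.imshow(img, cmap=cm.Greys_r)
plt.show()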