Dash is a great Python package for interactive visualization. My feeling about this library is that it is great for structured data analysis, but not so great for unstructured data like images and videos.
As a workaround, when I need to show images in a Dash app, I use the matplotlib library. The following example should make my point clear:
@app.callback([Output("id_badframe_video", "children")],
              [Input("id_generate_badframe_video_button", "n_clicks")],
              [State("id_dataset_name_list", "value"),
               State("id_video_name_list", "value"),
               State("id_video_index_list", "value")])
def generate_bad_video(nclick, dataset_name, video_name, frame_index):
    if nclick:
        img = read_image(dataset_name, video_name, frame_index)
        import matplotlib.pyplot as plt
        plt.close()
        fig, ax = plt.subplots(1)
        ax.imshow(img, cmap='gray')
In the above code, I want to display an image based on the user's inputs: dataset name, video name, and video index. When I press the Show Image button, the image is shown in a separate window (not within http://127.0.0.1:8050/). At first I thought this was a great idea, but then I found that after showing several images the program crashes with the following error messages:
Error on request:
Traceback (most recent call last):
File "/home/lib/python2.7/site-packages/werkzeug/serving.py", line 303, in run_wsgi
execute(self.server.app)
File "/home/lib/python2.7/site-packages/werkzeug/serving.py", line 294, in execute
write(data)
File "/home/lib/python2.7/site-packages/werkzeug/serving.py", line 257, in write
self.send_header(key, value)
File "/home/lib/python2.7/BaseHTTPServer.py", line 412, in send_header
self.wfile.write("%s: %s\r\n" % (keyword, value))
IOError: [Errno 32] Broken pipe
Tcl_AsyncDelete: cannot find async handler
Any ideas on how to solve this problem? Thanks.
You can't call matplotlib's imshow inside a Dash callback. Even if it didn't throw that error, it still wouldn't get the image where you need it. Remember that your callback needs to return some data that (in the case of your code) will be injected into the DOM element with ID id_badframe_video. So you need to somehow get the image data and return it from the callback.
The matplotlib docs have a recipe for base64 encoding the image data and creating an img element with the image data in the src attribute. I would suggest adapting this recipe so that your callback returns something like the following (assuming you have the base64-encoded image in a variable img_data):
html.Img(src=f"data:image/png;base64,{img_data}")
(It's possible you don't actually need matplotlib for any of this)
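For concreteness, here is a minimal sketch of that recipe. The `read_image` call and component IDs are from the question; everything else is my assumption. Rendering the figure off-screen with the Agg backend into an in-memory PNG also sidesteps the Tcl_AsyncDelete crash, which comes from opening Tk GUI windows inside server threads:

```python
import base64
import io

import matplotlib
matplotlib.use("Agg")  # off-screen rendering: no GUI window, no Tcl/Tk thread issues
import matplotlib.pyplot as plt
import numpy as np

img = np.random.rand(8, 8)  # stand-in for read_image(dataset_name, video_name, frame_index)

fig, ax = plt.subplots(1)
ax.imshow(img, cmap="gray")

buf = io.BytesIO()
fig.savefig(buf, format="png")  # write the PNG into memory instead of popping a window
plt.close(fig)

img_data = base64.b64encode(buf.getvalue()).decode("ascii")
src = f"data:image/png;base64,{img_data}"
# inside the Dash callback you would then: return html.Img(src=src)
print(src[:22])  # data:image/png;base64,
```

The callback then returns the html.Img component so Dash injects it into id_badframe_video instead of opening a window.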
I am writing some code to remove seams from an image. Currently, my code lets me find, highlight, and remove seams from a greyscale image. I need to remove seams from color images, but my code will not allow me to do that. Can anyone help me modify my code to do this?
My code:
import numpy as np
import cv2
import math
import time
def getEdgeImage(img, margin=10):
    kernel = np.float64([[-1, 0, 1]])
    Ix = cv2.filter2D(img, cv2.CV_64F, kernel)
    Iy = cv2.filter2D(img, cv2.CV_64F, kernel)
    I = np.hypot(Ix, Iy)
    m = I.max()
    I[:, :margin] = m
    I[:, -margin:] = m
    return I
def getEnergyMap(img, repulseMask=None, attractMask=None):
    edges = getEdgeImage(img)
    if attractMask is not None:
        edges[attractMask==1] = -10
    if repulseMask is not None:
        edges[repulseMask==1] = 235
    kernel = np.ones(3, np.float64)
    for i in range(1, len(edges)):
        minAbove = cv2.erode(edges[i-1], kernel).T[0]
        edges[i] += minAbove
    return edges
def getSeam(img, repulseMask=None, attractMask=None):
    energyMap = getEnergyMap(img, repulseMask, attractMask)
    y = len(energyMap)-1
    x = np.argmin(energyMap[y])
    seam = [(x, y)]
    while len(seam) < len(energyMap):
        x, y = seam[-1]
        newY = y-1
        newX = x + np.argmin(energyMap[newY, x-1:x+2]) - 1
        seam.append((newX, newY))
    return seam
img1=cv2.imread("image.jpg") #[::2,::2]
# attractMask=img1*0
# repulseMask=img1*0
seam=getSeam(img1)
The attract and repulse masks are unimportant to the code at the moment; they just let me manually plug in pixel coordinates to increase or decrease the number of seams passing through those coordinates.
The error I get when I run this code:
Traceback (most recent call last):
File "Program.py", line 110, in <module>
seam=getSeam(img1)
File "Program.py", line 62, in getSeam
energyMap=getEnergyMap(img,repulseMask,attractMask)
File "Program.py", line 58, in getEnergyMap
edges[i]+=minAbove
ValueError: operands could not be broadcast together with shapes (960,3) (960,) (960,3)
Is there any way that I can get this to work with my code? I'll modify the functions if that is what I need to do.
Then try this: these are the separate channels, given to your functions individually.
r=img1[:,:,0]
seam_r=getSeam(r)
g=img1[:,:,1]
seam_g=getSeam(g)
b=img1[:,:,2]
seam_b=getSeam(b)
After this, pass the results to your post function individually.
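One caveat worth noting: the three channels can produce three different seams. If you want a single seam for the whole color image, an alternative (my suggestion, not part of the answer above) is to collapse the channels into one 2-D array before computing the energy map, e.g. by averaging them; the energy-map rows then have shape (W,) and the broadcast in edges[i] += minAbove succeeds:

```python
import numpy as np

# Stand-in for cv2.imread("image.jpg"): any (H, W, 3) color array.
img1 = np.random.randint(0, 255, (60, 80, 3), np.uint8)

# Averaging the channels gives a single 2-D (H, W) array, so each row of
# the energy map is (W,) and the row-wise += no longer mismatches (W, 3).
gray = img1.mean(axis=2)
print(gray.shape)  # (60, 80)
```

You would then call getSeam(gray) and remove that one seam from all three channels of the original image.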
Basically, I followed this tutorial to stream processed video (not just retrieving frames and broadcasting), and it works for me (I'm new to HTML and Flask). But I want to save some computation here:
I wonder if it's possible to avoid saving the OpenCV image object to a JPEG file and then reading it again? Isn't that a waste of computation?
It would be even better if the Flask/HTML template could render the image directly from the raw three RGB data channels of the image.
Any ideas? Thanks!
P.S.: I actually tried the following code:
_, encoded_img = cv2.imencode('.jpg', img, [int(cv2.IMWRITE_JPEG_QUALITY), 95])
But it gives the following error:
Debugging middleware caught exception in streamed response at a point where response headers were already sent.
Traceback (most recent call last):
File "/home/trungnb/virtual_envs/tf_cpu/lib/python3.5/site-packages/werkzeug/wsgi.py", line 704, in next
return self._next()
File "/home/trungnb/virtual_envs/tf_cpu/lib/python3.5/site-packages/werkzeug/wrappers.py", line 81, in _iter_encoded
for item in iterable:
File "/home/trungnb/workspace/coding/Mask_RCNN/web.py", line 25, in gen
if frame == None:
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
You would want to compress it to JPEG anyway, as sending the raw RGB data would be slower due to its size.
You could try using cv::imencode to compress the image in memory. Then you may be able to send the image in a similar way to flask return image created from database.
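As an aside, the ValueError in your traceback is unrelated to encoding: it comes from `if frame == None:`, which compares a NumPy array element-wise. A small sketch of the difference (the array here is just a placeholder frame):

```python
import numpy as np

frame = np.zeros((2, 2), np.uint8)  # placeholder for a decoded video frame

# `frame == None` is evaluated element-wise and returns a boolean array;
# using that array in an `if` raises "truth value ... is ambiguous".
elementwise = (frame == None)
print(elementwise.shape)  # (2, 2)

# Identity testing avoids the element-wise comparison entirely:
print(frame is None)  # False
```

So in your gen() loop, write `if frame is None:` instead of `if frame == None:`.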
Now I'm trying to mix 100+ pictures into one picture (e.g. a .png) with Pillow (the PIL fork).
I know PIL.Image.blend(im1, im2, alpha) can fix all the pictures; however, the resulting picture's color was too light.
I want to fix the 100+ pictures using the color, not the transparency (alpha).
I know another API for fixing is PIL.Image.composite(image1, image2, mask),
but when I use it, it reports an error:
im = Image.composite(p, im, "RGBA")  # Am I using it right?
p and im are two Image objects opened with PIL.Image.open(fp, mode='r').
File "GA_engine.py", line 187, in test_create
im = Image.composite(p, im, "RGBA")
File "/Library/Python/2.7/site-packages/PIL/Image.py", line 2313, in composite
image.paste(image1, None, mask)
File "/Library/Python/2.7/site-packages/PIL/Image.py", line 1313, in paste
mask.load()
AttributeError: 'str' object has no attribute 'load'
It's very unclear what you mean by "fix" the images, but it looks like you're trying to combine them in some way. I can't say if you're using the right tool, since I don't know what you're trying to accomplish, but I can say that you're not using the tool correctly:
If you read the docs for Pillow, you'll see that Image.composite requires, as the third argument, another image to use as a transparency mask.
So in place of "RGBA", you need to supply another Image object of the same size as the other two. Does that answer your question?
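A minimal sketch of what that looks like (the two solid-color images here are just stand-ins for your p and im):

```python
from PIL import Image

p = Image.new("RGBA", (4, 4), (255, 0, 0, 255))   # stand-in for your first image
im = Image.new("RGBA", (4, 4), (0, 0, 255, 255))  # stand-in for your second image

# The mask must be an Image of mode "1", "L", or "RGBA" -- not the
# string "RGBA". A uniform 50% grey mask blends the two images evenly:
mask = Image.new("L", (4, 4), 128)
out = Image.composite(p, im, mask)
print(out.mode, out.size)
```

For your 100+ images you would choose a mask per image that expresses how much of it should show through, rather than a uniform grey.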
I use uvccapture to take pictures and want to process them with Python and the Python Imaging Library (PIL).
The problem is that PIL cannot open those images. It throws the following error message:
Traceback (most recent call last):
File "process.py", line 6, in <module>
im = Image.open(infile)
File "/usr/lib/python2.7/dist-packages/PIL/Image.py", line 1980, in open
raise IOError("cannot identify image file")
IOError: cannot identify image file
My python code looks like this:
import Image
infile = "snap.jpg"
im = Image.open(infile)
I tried saving the images in different formats before processing them, but this does not help. Changing file permissions and owners does not help either.
The only thing that helps is to open the images with, for example, jpegoptim, and overwrite the old image with the optimized one. After this process, PIL can deal with these images.
What is the problem here? Are the files generated by uvccapture corrupt?
//EDIT: I also found that it is not possible to open the images generated with uvccapture in scipy either. Running the command
im = scipy.misc.imread("snap.jpg")
produces the same error.
IOError: cannot identify image file
I only found a workaround for this problem: I processed the captured picture with jpegoptim, and afterwards PIL could deal with the optimized image.
I'm trying to take an FFT of an image in Python, alter the transformed image, and take an inverse FFT. Specifically, I have a picture of a grid that I'd like to transform, then black out all but a central, narrow vertical slit of the transform, then take an inverse FFT.
The code I'm working with now, with no alteration to the transform plane:
import os
os.chdir('/Users/terra/Desktop')
import Image, numpy
i = Image.open('grid.png')
i = i.convert('L') #convert to grayscale
a = numpy.asarray(i) # a is readonly
b = abs(numpy.fft.rfft2(a))
j = Image.fromarray(b)
j.save('grid2.png')
As of now, I'm getting an error message:
Traceback (most recent call last):
File "/Users/terra/Documents/pic2.py", line 11, in
j.save('grid2.png')
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/PIL/Image.py", line 1439, in save
save_handler(self, fp, filename)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/PIL/PngImagePlugin.py", line 506, in _save
raise IOError, "cannot write mode %s as PNG" % mode
IOError: cannot write mode F as PNG
I'm very new to programming and Fourier transforms, so most related threads I've found online are over my head. Very specific help is greatly appreciated. Thanks!
The main problem is that the array contains floats after the FFT, but for it to be useful for PNG output, you need to have uint8s.
The simplest thing is to convert it to uint8 directly:
b = abs(numpy.fft.rfft2(a)).astype(numpy.uint8)
This probably will not produce the image you want, so you'll have to normalize the values in the array somehow before converting them to integers.
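One common way to do that normalization (a sketch, under the assumption that a log scale is acceptable: FFT magnitudes span many orders of magnitude, so a linear rescale tends to leave everything but the DC peak black):

```python
import numpy as np

# Stand-in for the grayscale grid image; b is the float magnitude spectrum.
a = np.random.rand(32, 32)
b = np.abs(np.fft.rfft2(a))

# Compress the dynamic range with log1p, then stretch to 0..255 and
# convert to uint8 so the array can be saved as a PNG.
c = np.log1p(b)
c = (255 * (c - c.min()) / (c.max() - c.min())).astype(np.uint8)
print(c.dtype, c.min(), c.max())  # uint8 0 255
```

Image.fromarray(c) will then produce a mode "L" image that saves as PNG without the "cannot write mode F" error.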