In my Qt application I have image data as a numpy.ndarray. Usually that comes from cv2.imread(), which I then convert to a QImage as follows:
height, width, channel = cvImg.shape
bytesPerLine = 3 * width
qImg = QImage(cvImg.data, width, height, bytesPerLine, QImage.Format_RGB888)
This works fine: the QImage can be converted to a pixmap and painted onto a label. Now in some cases I don't get the image data from a file via imread(), but directly from a camera. This data is also a numpy.ndarray, and I can save it via cv2.imwrite() (and then open it in an image viewer). However, using the code above I cannot convert that image data directly to a QImage: the result is a red-ish image without any details, just some vertical lines.
Now since I can save that camera image data, it seems to be valid; I just need to find the correct image format to pass to the QImage constructor (I guess). I tried several of them, but none worked. So how can I determine what format this image data is in?
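One way to narrow this down is to interrogate the array itself before constructing the QImage; a mismatched dtype, channel count, or row stride is the usual culprit for skewed, tinted output. A sketch (assuming PyQt5; the helper name and the 8-bit rescaling are illustrative, not taken from the question):
import numpy as np
from PyQt5.QtGui import QImage

def ndarray_to_qimage(img):
    print(img.shape, img.dtype, img.strides)   # e.g. (480, 640, 3) uint8
    if img.dtype != np.uint8:                  # cameras often deliver 10/12/16-bit data
        img = (img.astype(np.float64) / img.max() * 255).astype(np.uint8)
    img = np.ascontiguousarray(img)            # QImage needs contiguous memory
    if img.ndim == 2:                          # single channel -> grayscale
        h, w = img.shape
        return QImage(img.data, w, h, img.strides[0], QImage.Format_Grayscale8)
    h, w, ch = img.shape
    if ch == 3:
        return QImage(img.data, w, h, img.strides[0], QImage.Format_RGB888)
    if ch == 4:
        return QImage(img.data, w, h, img.strides[0], QImage.Format_RGBA8888)
    raise ValueError("unhandled shape %s" % (img.shape,))
Passing img.strides[0] as bytesPerLine also covers arrays with padded rows, which hard-coding 3 * width does not.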
I have pictures that I want to resize, as they are currently quite big. The pictures are going into Power BI, which has a maximum limit of roughly 32,000 characters for a base64 string. I created a function to resize the image, but the image has become blurry and barely legible after resizing. The base64 string for one picture was around 150,000 characters long, which came down to around 7,000.
import base64
import io

from PIL import Image

# Converting into base64 (img2 is an existing PIL image)
outputBuffer = io.BytesIO()
img2.save(outputBuffer, format='JPEG')
bgBase64Data = outputBuffer.getvalue()

# Scale the dimensions so the base64 string comes in under ~30,000 characters
resize_factor = 30000 / len(base64.b64encode(bgBase64Data))
im = Image.open(io.BytesIO(bgBase64Data))
out = im.resize([int(resize_factor * s) for s in im.size])

output_byte_io = io.BytesIO()
out.save(output_byte_io, 'JPEG')
final = output_byte_io.getvalue()
image_base64_highlighted = base64.b64encode(final).decode()
I think it is shrinking the image too much. Is there any way I can improve the visibility of the image? I want to be able to at least read the text in the image. I cannot post the images due to PII. Any ideas?
Encoding with base64 adds around 33% to your image size, so you should aim for a JPEG of about 24 kB to ensure it stays under 32 kB once encoded.
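The overhead is easy to verify with plain Python (nothing Power BI-specific here):
import base64

raw = b"\x00" * 24_000                        # stand-in for a 24 kB JPEG
encoded = base64.b64encode(raw)
print(len(encoded), len(encoded) / len(raw))  # 32000 1.333...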
You can reduce an image to a target size of 24kB using my answer here.
You can also use wand to reduce the quality of a JPEG till it reaches a certain size:
from wand.image import Image

# Create a canvas filled with random noise so it is hard to compress
with Image(width=640, height=480, pseudo='xc:') as canvas:
    canvas.noise('random')
    # This is the critical line that enforces a max size for your JPEG
    canvas.options['jpeg:extent'] = '24kb'
    jpeg = canvas.make_blob('jpeg')

print(f'JPEG size: {len(jpeg)}')
You can do the same thing in the command-line by shelling out to ImageMagick:
magick INPUT.JPG -define jpeg:extent=24kb OUTPUT.JPG
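If you would rather stay in Pillow, the same idea can be sketched by walking the JPEG quality down until the blob fits; the 24 kB target and the step size here are assumptions, not Power BI requirements:
import io
from PIL import Image

def jpeg_under_size(im, max_bytes=24_000):
    # Re-encode at decreasing quality until the JPEG fits in max_bytes
    for quality in range(95, 10, -5):
        buf = io.BytesIO()
        im.save(buf, format='JPEG', quality=quality)
        if buf.tell() <= max_bytes:
            return buf.getvalue()
    raise ValueError('cannot reach target size; resize the image first')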
I think you can do that with pygame itself, but I'd recommend trying OpenCV-Python instead and using cv2.resize(). Its parameters are:
src : input image array
dsize : size of the output image, given as (width, height)
dst : optional output array (same size and type as the input)
fx : scale factor along the horizontal axis
fy : scale factor along the vertical axis
interpolation : one of OpenCV's interpolation flags, e.g. cv2.INTER_AREA or cv2.INTER_LINEAR
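For example (a small sketch; the file name and scale factors are arbitrary):
import cv2

img = cv2.imread('input.png')   # BGR uint8 array

# Shrink by half using fx/fy; INTER_AREA is the usual choice for downscaling
half = cv2.resize(img, None, fx=0.5, fy=0.5, interpolation=cv2.INTER_AREA)

# Or resize to an explicit size; note dsize is (width, height), not (height, width)
fixed = cv2.resize(img, (640, 480), interpolation=cv2.INTER_LINEAR)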
We are using the tkinter library and the PIL Image class to display images from a file. In the program we change the pixels (so we have an array of new pixels) and want to display the result in the tkinter window as well. (We can't use plt.show() or something like that; we need to change the pixels in the Image, because our display code works only with it.)
image = Image.open(files_name)
img = ImageTk.PhotoImage(image)
disp_img.config(image=img)
disp_img.image = img
The best option we've found is .putpixel(), but 1) changing each pixel separately is far too slow, and 2) its parameters are confusing and we are not sure how to use it.
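If the new pixels live in a NumPy array, the usual fast path is Image.fromarray() rather than a per-pixel putpixel() loop. A minimal sketch, assuming a uint8 RGB array called pixels and the disp_img label from the snippet above:
import numpy as np
from PIL import Image, ImageTk

# pixels: ndarray of shape (height, width, 3), dtype uint8
image = Image.fromarray(pixels)   # converts the whole array in one call
img = ImageTk.PhotoImage(image)
disp_img.config(image=img)
disp_img.image = img              # keep a reference so it isn't garbage-collected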
I need to save an image with specific x and y dimensions. I am using Pillow to do so, but the image keeps being saved at its default dimensions, in my case 16x16. I tried using resize like this:
new_image = image.resize((40, 40))
but I still get the same result; the only difference is that the image preview gets smaller, yet the saved file stays 16x16. Does anyone have any ideas?
import base64
import io

from PIL import Image

image_byte = b"image_bytes"        # placeholder for the real base64 payload
b = base64.b64decode(image_byte)
image = Image.open(io.BytesIO(b))
new_image = image.resize((40, 40))
new_image.save(icon_path)          # icon_path points to the .ico being written
Based on the discussion in the comments:
When saving ICO files, you will need to specify the sizes to save as (since ICOs can contain multiple sizes and formats of the same (or different!) image):
new_image.save('icon.ico', sizes=[(256, 256), (128, 128)])
If you don't need an ICO file, just use e.g. PNG (which contains a single format and size):
new_image.save('icon.png')
After searching for a few hours, I ended up at this link. A little background information follows.
I'm capturing live frames of a running embedded device via a hardware debugger. The captured frames are stored as raw binary files, without headers or any format information. After looking at the above link and understanding, albeit perfunctorily, NumPy and Matplotlib, I was able to convert the raw binary data to an image successfully. I mention this because I'm not sure a link to the raw binary file by itself would help anyone.
I use this code:
import matplotlib.pyplot as plt
import numpy as np

iFile = "FramebufferL0_0.bin"   # Layer-A capture
oFile = "FramebufferL0_0.png"
shape = (430, 430)              # height and width of the image
dtype = np.dtype('<u2')         # unsigned 16-bit, little-endian

data = np.fromfile(iFile, dtype)
image = data.reshape(shape)

plt.imshow(image, cmap="gray")
plt.savefig(oFile)
plt.show()
Now, the image I'm showing is black and white because the color map is gray-scale (right?). The actual captured frame is NOT black and white. That is, the image I see on my embedded device is "colorful".
My question is: how can I calculate the actual colour of each pixel from the raw binary file? Is there a way to recover the image's actual colour map from the raw binary? I looked into this example and I'm sure that if I can calculate the R, G and B channels (and alpha too), I'll be able to recreate the exact image. Example code would be of much help.
An RGBA image has 4 channels, one for each color and one for the alpha value. The binary file seems to have a single channel, as you don't report an error when performing the data.reshape(shape) operation (the shape for the corresponding RGBA image would be (430, 430, 4)).
I see two potential reasons:
The image actually does have colour information, but when grabbing the data you are only grabbing one of the four channels.
The image is actually a gray-scale image, but the embedded device displays it with a pseudocolour map, creating the illusion of colour information. Without knowing which colourmap is being used, it is hard to help you further, other than to point you towards matplotlib.pyplot.colormaps(), which lists all the colour maps already available in matplotlib.
Could you
a) explain the exact source / type of imaging modality, and
b) show a photo of the output of the embedded device?
PS: Also, at least in my hands, the pasted binary file seems to have a size of 122629, which is incongruent with an image shape of (430,430).
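If it turns out the device stores packed 16-bit colour such as RGB565 (only a guess, based on the '<u2' dtype; it is a common layout for embedded LCD framebuffers), the channels can be unpacked with a few bit operations:
import matplotlib.pyplot as plt
import numpy as np

raw = np.fromfile('FramebufferL0_0.bin', dtype='<u2').reshape(430, 430)

# RGB565 packs one pixel per 16-bit word: rrrrrggg gggbbbbb
r = ((raw >> 11) & 0x1F).astype(np.uint8) << 3   # 5 bits -> 8 bits
g = ((raw >> 5) & 0x3F).astype(np.uint8) << 2    # 6 bits -> 8 bits
b = (raw & 0x1F).astype(np.uint8) << 3           # 5 bits -> 8 bits

plt.imshow(np.dstack([r, g, b]))                 # shape (430, 430, 3)
plt.show()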
Here is my current code (language is Python):
newFrameImage = cv.QueryFrame(webcam)
newFrameImageFile = cv.SaveImage("temp.jpg",newFrameImage)
wxImage = wx.Image("temp.jpg", wx.BITMAP_TYPE_ANY).ConvertToBitmap()
wx.StaticBitmap(self, -1, wxImage, (0,0), (wxImage.GetWidth(), wxImage.GetHeight()))
I'm trying to display an iplimage captured from my webcam in a wxPython window. The problem is I don't want to store the image on hard disk first. Is there any way to convert an iplimage into another image format in memory? Any other solution?
I found a few "solutions" to this problem in other languages, but I'm still having trouble with this issue.
Thanks.
What you have to do is:
frame = cv.QueryFrame(self.cam)           # get the frame from the camera
cv.CvtColor(frame, frame, cv.CV_BGR2RGB)  # colour correction -- without this
                                          # your image will look greenish
# if your camera doesn't report the stream size,
# you might have to hard-code (640, 480)
wxImage = wx.EmptyImage(frame.width, frame.height)
wxImage.SetData(frame.tostring())         # convert from cv.iplimage to wx.Image
wx.StaticBitmap(self, -1, wxImage, (0, 0),
                (wxImage.GetWidth(), wxImage.GetHeight()))
I figured out how to do this by looking at the Python OpenCV cookbook and at the wxPython wiki.
Yes, this question is old, but I came here like everybody else searching for the answer. Several versions of wx, numpy, and opencv after the above solutions, I figured I'd share a fast solution using cv2 and numpy images.
This is how to convert a NumPy array style image as used in OpenCV2 into a bitmap you can then set to a display element in wxPython (as of today):
import wx, cv2
import numpy as np
# Start with a numpy array style image I'll call "source"
# convert the colorspace to RGB from cv2 standard BGR, ensure input is uint8
img = cv2.cvtColor(np.uint8(source), cv2.COLOR_BGR2RGB)
# get the height and width of the source image for buffer construction
h, w = img.shape[:2]
# make a wx style bitmap using the buffer converter
wxbmp = wx.BitmapFromBuffer(w, h, img)
# Example of how to use this to set a static bitmap element called "bitmap_1"
self.bitmap_1.SetBitmap(wxbmp)
Tested 10 minutes ago :)
This uses the built in wx function BitmapFromBuffer and takes advantage of the NumPy buffer interface so that all we have to do is swap the colors to get those in the expected order.
You could do it with StringIO:
import cStringIO
import wx

stream = cStringIO.StringIO(data)     # `data` holds the encoded image bytes
wxImage = wx.ImageFromStream(stream)
You can find more detail in \wx\lib\embeddedimage.py.
just my 2 cents.
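On Python 3 with wxPython Phoenix, where cStringIO no longer exists, the equivalent would be (an untested sketch):
import io
import wx

stream = io.BytesIO(data)   # same `data` as above
wxImage = wx.Image(stream)  # Phoenix's replacement for wx.ImageFromStream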