I'm new to Python and PIL. I am trying to follow code samples on how to load an image into Python through PIL and then draw its pixels using OpenGL. Here are some lines of the code:
from Image import *
im = open("gloves200.bmp")
pBits = im.convert('RGBA').tostring()
.....
glDrawPixels(200, 200, GL_RGBA, GL_UNSIGNED_BYTE, pBits)
This will draw a 200 x 200 patch of pixels on the canvas. However, it is not the intended image -- it looks like it is drawing pixels from random memory. The random-memory hypothesis is supported by the fact that I get the same pattern even when I attempt to draw entirely different images. Can someone help me? I'm using Python 2.7 and the Python 2.7 versions of PyOpenGL and PIL on Windows XP.
I think you were close. Try:
pBits = im.convert("RGBA").tostring("raw", "RGBA")
The image first has to be converted to RGBA mode in order for the RGBA rawmode packer to be available (see Pack.c in libimaging). You can check that len(pBits) == im.size[0]*im.size[1]*4, which is 200x200x4 = 160,000 bytes for your gloves200 image.
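As a quick sanity check, a minimal sketch assuming the 200 x 200 gloves200.bmp from the question:
import Image  # PIL 1.x import style, as in the question
im = Image.open("gloves200.bmp")
pBits = im.convert("RGBA").tostring("raw", "RGBA")
assert len(pBits) == im.size[0] * im.size[1] * 4  # 200 * 200 * 4 == 160000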
Have you tried using the conversion inside the tostring function directly?
im = open("test.bmp")
imdata = im.tostring("raw", "RGBA", 0, -1)
w, h = im.size[0], im.size[1]
glDrawPixels(w, h, GL_RGBA, GL_UNSIGNED_BYTE, imdata)
Alternatively, use this compatibility version:
try:
    data = im.tostring("raw", "BGRA")
except SystemError:
    # workaround for earlier versions: swap the channels manually,
    # then pack with the plain RGBA rawmode
    r, g, b, a = im.split()
    im = Image.merge("RGBA", (b, g, r, a))
    data = im.tostring("raw", "RGBA")
Thank you for the help. Thanks to mikebabcock for updating the sample code on the Web. Thanks to eryksun for the code snippet -- I used it in my code.
I did find my error, and it was a Python newbie mistake. Ouch. I declared some variables outside the scope of any function in the module and naively thought I was modifying their values inside a function. Of course, that doesn't work, and so my glDrawPixels call was in fact drawing random memory.
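For anyone hitting the same trap, a minimal illustration (the function name is hypothetical) of why the assignment was lost and how the global keyword fixes it:
import Image  # PIL 1.x import style, as in the question

pBits = None  # module-level variable

def load_texture():
    # Without the global statement, the assignment below would bind a
    # new local pBits and leave the module-level one untouched --
    # exactly the bug described above.
    global pBits
    im = Image.open("gloves200.bmp")
    pBits = im.convert("RGBA").tostring("raw", "RGBA")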
I am following these instructions: https://www.geeksforgeeks.org/python-pil-imagedraw-draw-line/ .
from PIL import Image, ImageDraw
w, h = 220, 190
shape = [(40, 40), (w - 10, h - 10)]
# creating new Image object
img = Image.new("RGB", (w, h))
# create a drawing object and draw the line
img1 = ImageDraw.Draw(img)
img1.line(shape, fill="red", width=0)
img.show()
I then tried to add this line immediately before img.show():
img1 = img1.resize((1024, 1024), Image.BOX)
Which is what I usually do to resize Image objects (I know, if this worked it would distort the image since it's a square, but I don't care about that right now).
When I run the code I get the AttributeError: 'ImageDraw' object has no attribute 'resize'.
So, either there is a different method to resize ImageDraw objects or I need to convert the ImageDraw object back into an Image object. In both cases I couldn't find a solution, can you help me out?
Using
img1 = img1.resize((1024, 1024), Image.BOX)
you're trying to call a resize method on an ImageDraw object. And the error message tells you that ImageDraw objects don't have such a method.
Let's have a look at the different modules, classes, and objects involved:
Pillow has an Image module providing an Image class, which "represents an image object". Instances of that class, i.e. Image objects, have a resize method.
Also, there is an ImageDraw module, which "provides simple 2D graphics for Image objects". From the documentation on ImageDraw.Draw:
Creates an object that can be used to draw in the given image.
Note that the image will be modified in place.
The first sentence tells you that the created ImageDraw object is linked to your actual Image object, and that any draw operation is performed in that image (Image object). The second sentence tells you that any modification is performed instantly. There's no need to explicitly "update" the Image object, or to somehow "convert" the ImageDraw object (back) to some Image object. (That's also simply not possible.)
So, fixing your code is very easy now. Simply call resize on your actual Image object img, and not on your ImageDraw object img1:
img = img.resize((1024, 1024), Image.BOX)
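Putting it together with the code from the question (renaming img1 to draw for clarity; only the resize target changes):
from PIL import Image, ImageDraw

w, h = 220, 190
shape = [(40, 40), (w - 10, h - 10)]

img = Image.new("RGB", (w, h))
draw = ImageDraw.Draw(img)  # draws directly into img
draw.line(shape, fill="red", width=0)

img = img.resize((1024, 1024), Image.BOX)  # resize the Image, not the ImageDraw
img.show()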
Okay, I'm trying to extract my local binary pattern image from a processed normal face image and then show it in my Qt GUI. The code to do this is:
def extractFace(self):
    try:
        self.lbpface = self.model.lbpface(self.face)
        height, width = self.lbpface.shape[:2]
        # plt.imshow(self.lbpface, cmap='gray')
        # plt.show()
        img = QtGui.QImage(self.lbpface,
                           width,
                           height,
                           QtGui.QImage.Format_Indexed8)
        return QtGui.QPixmap.fromImage(img)
    except:
        return QtGui.QPixmap("nosignal.jpg")
But this results in a garbled image (screenshot not included).
Now, if I uncomment the plt.imshow lines, I get the result I actually want to show in my GUI (screenshot not included).
I've tried various stuff and got the best result when adding:
self.lbpface = np.asarray(self.model.lbpface(self.face), dtype=np.uint8)
but the result is still distorted (screenshot not included).
Any ideas how to fix this? It shows fine in the matplotlib figure, but somehow gets badly distorted once turned into a QImage.
I will also add that I'm totally new to Qt4.
After trying various stuff I ended up doing this. It's not very optimal and probably a tad slow, but it works and shows me a face image:
def extractFace(self):
    try:
        self.lbpface = self.model.lbpface(self.face)
        cv2.imwrite("TEST.JPG", self.lbpface)
        return QtGui.QPixmap('TEST.JPG')
    except:
        return QtGui.QPixmap("nosignal.jpg")
I'm trying to blur an image with Pillow, using the ImageFilter as follows:
from PIL import ImageFilter
blurred_image = im.filter(ImageFilter.BLUR)
This works fine, except that it has a fixed radius which is way too small for me. I want to blur the image so much that it can barely be recognised anymore. In the docs I see that the radius is set to 2 by default, but I don't really understand how I can set it to a larger value.
Does anybody have any idea how I could increase the blur radius with Pillow? All tips are welcome!
Image.filter() takes an ImageFilter, so you can create an ImageFilter.GaussianBlur instance with whatever radius you want, passed in as a named argument:
blurred_image = im.filter(ImageFilter.GaussianBlur(radius=50))
You can even make it more concise like so:
blurred_image = im.filter(ImageFilter.GaussianBlur(50))
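For completeness, a self-contained version (the file names photo.jpg and photo_blurred.jpg are placeholders):
from PIL import Image, ImageFilter

im = Image.open("photo.jpg")  # placeholder input file
blurred_image = im.filter(ImageFilter.GaussianBlur(50))
blurred_image.save("photo_blurred.jpg")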
I need to take a single snapshot from a webcam. I chose SimpleCV for this task.
Now I try to get a single image and show it:
from SimpleCV import Camera
cam = Camera()
img = cam.getImage()
img.show()
But I see only a black image. I think the camera is not ready at this moment, because if I call time.sleep(10) before cam.getImage() everything works fine.
What is the right way to do this? Thank you!
Have you installed PIL? I had similar problems, but after installing PIL everything worked fine. Hope this helps.
You can download PIL from Pythonware
I ran into this same issue and came up with the following work-around. Basically it grabs an image and tests the middle pixel to see if it's black (0, 0, 0). If it is, it waits 1 second and tries again.
import time
from SimpleCV import Camera

cam = Camera()
r = g = b = 0
while r + g + b < 0.01:
    img = cam.getImage()
    r, g, b = img[img.width / 2, img.height / 2]
    print("r: {} g: {} b: {}".format(r, g, b))
    time.sleep(1)

file_name = "snapshot.jpg"  # placeholder output path
img.save(file_name)
Your problem may be that when the script ends, the camera object doesn't release the webcam for other programs. When you wait before re-running the script, Windows frees it up (maybe?) so it works again. If you use it while the old script still thinks it owns the webcam, it shows up as black. Anyway, the solution here seems to work: add del cam to the end of your script, which makes the camera object go away and lets you use the camera again.
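Applied to the script from the question:
from SimpleCV import Camera

cam = Camera()
img = cam.getImage()
img.show()
del cam  # release the webcam so other programs (and re-runs) can use it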
Try this:
from SimpleCV import Camera

cam = Camera(0)
while True:
    img = cam.getImage()
    img.show()
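Another common workaround is to discard a few warm-up frames instead of sleeping for a fixed time, so the sensor can adjust its exposure (a sketch using the same SimpleCV API):
from SimpleCV import Camera

cam = Camera()
for _ in range(10):
    cam.getImage()  # throw away frames while the camera warms up
img = cam.getImage()
img.show()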
I am creating a Beta Testers reporting module so they can send in their comments on my software, but I would like to have the option to include a screenshot with the report. How do I take a screenshot of the screen with Python on Windows? I have found several examples on Linux, but haven't had much luck on Windows.
Another approach that is really fast is the MSS module. It differs from other solutions in that it relies only on the standard ctypes module, so it does not pull in big dependencies. It is OS independent and easy to use:
from mss import mss
with mss() as sct:
    sct.shot()
Then just find the screenshot.png file containing the screenshot of the first monitor. There are a lot of possible customizations; you can play with ScreenShot objects and OpenCV/NumPy/PIL, etc.
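For example, a grab can be fed straight into NumPy/OpenCV (a sketch, assuming mss and numpy are installed):
from mss import mss
import numpy as np

with mss() as sct:
    # monitors[1] is the first physical monitor (monitors[0] spans all of them).
    shot = sct.grab(sct.monitors[1])
    frame = np.array(shot)  # BGRA ndarray, ready for OpenCV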
Worth noting that ImageGrab only works on MS Windows.
For cross-platform compatibility, you may be best off using the wxPython library.
http://wiki.wxpython.org/WorkingWithImages#A_Flexible_Screen_Capture_App
import wx
app = wx.App() # Need to create an App instance before doing anything
screen = wx.ScreenDC()
size = screen.GetSize()
bmp = wx.Bitmap(size[0], size[1])
mem = wx.MemoryDC(bmp)
mem.Blit(0, 0, size[0], size[1], screen, 0, 0)
del mem # Release bitmap
bmp.SaveFile('screenshot.png', wx.BITMAP_TYPE_PNG)
This can be done with PIL. First, install it, then you can take a full screenshot like this:
import PIL.ImageGrab
im = PIL.ImageGrab.grab()
im.show()
You can use the ImageGrab module. ImageGrab works on Windows and macOS, and you need PIL (Pillow) to use it. Here is a little example:
from PIL import ImageGrab
snapshot = ImageGrab.grab()
save_path = "C:\\Users\\YourUser\\Desktop\\MySnapshot.jpg"
snapshot.save(save_path)
For pyautogui users:
import pyautogui
screenshot = pyautogui.screenshot()
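The returned object is a regular PIL Image, so it can be saved directly; pyautogui.screenshot() also accepts an output filename (screenshot.png is a placeholder):
import pyautogui

screenshot = pyautogui.screenshot()
screenshot.save("screenshot.png")
# or in one step:
pyautogui.screenshot("screenshot.png")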
A simple way to take a screenshot is through Pygame.
pygame.image.save(Surface, filename)
Where 'Surface' is the surface you are taking a screenshot of, and 'filename' is the file path and name under which you save the image.
You can export as BMP, TGA, PNG, or JPEG; PNG and JPEG support was added in Pygame 1.8.
If no file extension is specified, it will default to a .TGA file.
You can even use the 'os' library for saving to specific file directories.
An example:
import os
import pygame
surface = pygame.display.set_mode((100, 100), 0, 32)
surface.fill((255, 255, 255))
pygame.draw.circle(surface, (0, 0, 0), (10, 10), 15, 0)
pygame.display.update()
pygame.image.save(surface, os.path.expanduser("~/Desktop/pic.png"))
This saves anything on the 'surface' Surface to the user's desktop as pic.png.
If you want to snap a particular running Windows app, you'll have to acquire a window handle by looping over all open windows in your system.
It's easier if you can open the app from your Python script.
Then you can convert the process PID into a window handle, as sketched below.
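A sketch of that PID-to-handle lookup, assuming the pywin32 package (hwnd_for_pid is a hypothetical helper):
import win32gui
import win32process

def hwnd_for_pid(pid):
    # Collect visible top-level windows owned by the given process id.
    found = []

    def callback(hwnd, _):
        _, owner_pid = win32process.GetWindowThreadProcessId(hwnd)
        if owner_pid == pid and win32gui.IsWindowVisible(hwnd):
            found.append(hwnd)

    win32gui.EnumWindows(callback, None)
    return found[0] if found else None

# win32gui.GetWindowRect(hwnd) then yields (left, top, right, bottom).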
Another challenge is to snap an app that runs on a particular monitor. I have a 3-monitor system, and I had to figure out how to snap displays 2 and 3.
This example will take multiple application snapshots and save them into JPEG files.
import os
from time import sleep

import wx

print(wx.version())

app = wx.App()  # Need to create an App instance before doing anything

# List every attached display and its size.
displays = (wx.Display(i) for i in range(wx.Display.GetCount()))
sizes = [display.GetGeometry().GetSize() for display in displays]
for i, s in enumerate(sizes):
    print("Monitor{} size is {}".format(i, s))

screen = wx.ScreenDC()
size = screen.GetSize()
width = size[0]
height = size[1]
print("Width = {}".format(width))
print("Height = {}".format(height))

home = os.path.expanduser("~")
# putty_rect is the target window's rectangle (x, y, w, h), obtained
# earlier from its window handle (not shown here).
x, y, w, h = putty_rect

bmp = wx.Bitmap(w, h)
mem = wx.MemoryDC(bmp)

for i in range(98):
    # 1st display: copy the target window's region out of the screen DC.
    mem.Blit(-x, -y, w + x, h + y, screen, 0, 0)
    # 2nd display would be: mem.Blit(0, 0, x, y, screen, width, 0)
    # 3rd display would be: mem.Blit(0, 0, width, height, screen, width * 2, 0)
    bmp.SaveFile(os.path.join(home, "image_%s.jpg" % i), wx.BITMAP_TYPE_JPEG)
    print(i)
    sleep(0.2)

del mem  # Release bitmap
Details are here
import pyautogui

s = pyautogui.screenshot()
s.save(r'C:\Users\NAME\Pictures\s.png')  # raw string: single backslashes
First of all, install the PrtSc library using pip3.
import PrtSc.PrtSc as Screen

screenshot = Screen.PrtSc(True, 'filename.png')  # the module was imported as Screen above