I am currently working on a project to capture and process photos on a Raspberry Pi.
The photos are 6000x4000, about 2 MB each, from a Nikon D5200 camera.
Everything is working fine; I have made a proof of concept in Java and want to port it to Python or C, depending on which language is faster on the Raspberry Pi.
Now the problem is that the images need to be cropped and resized, and this takes a very long time on the Raspberry Pi. In Java, the whole process of reading the image, cropping it and writing the new image takes about 2 minutes.
I have also tried ImageMagick, but on the command line this takes up to 3 minutes.
With a small Python script I made, this is reduced to 20 seconds, but that is still a bit too long for my project.
Currently I am installing OpenCV to check whether it is faster; since that build takes around 4 hours, I thought I would ask a question here in the meantime.
Does anybody have any good ideas or libraries to speed up cropping and resizing the images?
The following is the Python code I used:
from PIL import Image

def crop_image(input_image, output_image, start_x, start_y, width, height):
    """Pass input image name, output image name, x and y coordinates to
    start cropping at, and the width and height to crop."""
    input_img = Image.open(input_image)
    box = (start_x, start_y, start_x + width, start_y + height)
    output_img = input_img.crop(box)
    output_img.save(output_image + ".jpg")

def main():
    crop_image("test.jpg", "output", 1000, 0, 4000, 4000)

if __name__ == '__main__':
    main()
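One speed-up worth noting for the resize step is Pillow's JPEG "draft mode": the decoder can be asked to produce a 1/2, 1/4 or 1/8 scale image directly, which is far cheaper than decoding at full resolution and resizing afterwards. A minimal sketch of the idea, assuming the final output is smaller than the original (the function name and sizes here are illustrative, not from the original code):

from PIL import Image

def fast_downscale(input_image, output_image, target_size):
    img = Image.open(input_image)
    # Ask the JPEG decoder for a reduced-scale decode; this only takes
    # effect for formats that support it (JPEG does) and only shrinks
    # by powers of two, so the result is the smallest scale >= target_size.
    img.draft('RGB', target_size)
    # Finish with an exact resize from the already-reduced image.
    img = img.resize(target_size)
    img.save(output_image + ".jpg")

fast_downscale("test.jpg", "output_small", (1500, 1000))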
First approach (without sprites)
import pyglet
# from pyglet.gl import *

image = pyglet.resource.image('test.jpg')
texture = image.get_texture()
# -- In case you plan on rendering the image, use the following gl setting:
# gl.glTexParameteri(gl.GL_TEXTURE_2D, gl.GL_TEXTURE_MAG_FILTER, gl.GL_NEAREST)
texture.width = 1024
texture.height = 768
texture = texture.get_region(256, 192, 771, 576)  # keep the cropped region
texture.save('wham.png')  # <- to save as JPG again, install PIL
Second attempt (with sprites, unfinished)
import pyglet, time

start = time.time()  # DEBUG
texture = pyglet.image.load('test.jpg')
print('Loaded image in', time.time() - start, 'sec')  # DEBUG
sprite = pyglet.sprite.Sprite(texture)
print('Converted to sprite in', time.time() - start, 'sec')  # DEBUG
print(sprite.width)  # DEBUG
# Gives: 6000
sprite.scale = 0.5
print('Rescaled image in', time.time() - start, 'sec')  # DEBUG
print(sprite.width)  # DEBUG
# Gives: 3000
Both solutions end up at around 3-5 seconds on an extremely slow PC with a worn mechanical disk running Windows XP, with more applications running than I can count, including active virus scans. Note, though, that I can't remember how to save a sprite to disk; you need to access the AbstractImage data container within the sprite to get it out.
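If memory serves, something along these lines should work (an untested sketch, assuming pyglet exposes the underlying image through the sprite's image attribute):

# Untested sketch: the sprite keeps a reference to its AbstractImage,
# which has a save() method. Note that sprite.scale only affects
# rendering, so this writes the image at its original size.
sprite.image.save('cropped.png')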
You will be heavily limited by your disk/memory-card I/O.
My image was 16 MB at 6000x4000 pixels, so I was surprised it loaded in as little as 3 seconds.
Have you tried jpegtran? It provides lossless cropping of JPEGs and should be in the libjpeg-progs package. I suspect that decoding the image to crop it and then re-encoding it is too much for the SD card to take.
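Invoked from Python, that could look something like the sketch below. The -crop geometry (WxH+X+Y) mirrors the crop in the question; note that lossless cropping requires a jpegtran built with the crop extension (libjpeg 7+ or libjpeg-turbo), and that the crop origin gets rounded to JPEG block boundaries:

import subprocess

# Lossless 4000x4000 crop starting at (1000, 0), mirroring the PIL crop
# in the question. jpegtran rounds the crop origin to the nearest iMCU
# (block) boundary, so the region may shift by a few pixels.
subprocess.run([
    "jpegtran",
    "-crop", "4000x4000+1000+0",
    "-copy", "none",          # drop metadata; use "all" to keep EXIF
    "-outfile", "output.jpg",
    "test.jpg",
], check=True)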
Related
I'm capturing my screen using OpenCV on Windows. It works fine, but when I play back the captured video it plays too fast: I record the screen for 60 seconds, but capturing actually takes longer than that, so the playback squeezes the extra content into 60 seconds of video, i.e. it looks sped up.
import cv2
import numpy as np
import pyautogui

time = 10
# display screen resolution; get it using pyautogui itself
SCREEN_SIZE = tuple(pyautogui.size())
# define the codec
fourcc = cv2.VideoWriter_fourcc(*"XVID")
# frames per second
fps = 30.0
# create the video writer object
out = cv2.VideoWriter("output.avi", fourcc, fps, SCREEN_SIZE)

for i in range(int(time * fps)):
    # take a screenshot
    img = pyautogui.screenshot()
    # convert the PIL image to a proper numpy array for OpenCV
    frame = np.array(img)
    # pyautogui returns RGB, but OpenCV expects BGR, so swap the channels
    frame = cv2.cvtColor(frame, cv2.COLOR_RGB2BGR)
    # write the frame
    out.write(frame)

# make sure everything is closed when exited
cv2.destroyAllWindows()
out.release()
I tried different fps values, but that did not change much. Please let me know why this happens. Any answers and help are welcome.
You need to wait in between taking screenshots. There may be a more elegant solution, but this should suffice:
from time import time, sleep

record_time = 10  # don't overwrite the time function we just imported
start_time = time()

for i in range(int(record_time * fps)):
    # wait until it is time for the next frame
    next_shot = start_time + i / fps
    wait_time = next_shot - time()
    if wait_time > 0:
        sleep(wait_time)
    # make a screenshot
    img = pyautogui.screenshot()
    ...
Notes:
time.time's resolution depends on the OS. Make sure you get a number with fractional seconds; otherwise you may need to use something like time.perf_counter or time.time_ns.
This cannot make the loop run faster, only slower. If you can't acquire frames fast enough, the recording will last longer than record_time seconds and the playback will seem "sped up". The only way to fix that is to find a faster way to acquire screenshots (perhaps by lowering the resolution).
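For instance, the pacing loop above could use time.perf_counter, which always has fractional-second resolution (a sketch of the same idea):

from time import perf_counter, sleep

start_time = perf_counter()
for i in range(int(record_time * fps)):
    # sleep until the scheduled time of frame i
    wait_time = start_time + i / fps - perf_counter()
    if wait_time > 0:
        sleep(wait_time)
    # ... take the screenshot and write the frame as before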
Probably an unusual question, but I am currently looking for a way to make PIL display image files more slowly.
Ideally, you would see how the image builds up, pixel by pixel, from left to right.
Does anyone have an idea how to implement something like this?
It is a purely visual thing, so it is not essential.
Here is an example:
from PIL import Image
im = Image.open("sample-image.png")
im.show()
Is there a way to "slow down" im.show()?
AFAIK, you cannot do this directly with PIL's Image.show(), because it actually saves your image as a file to /var/tmp/XXX and then passes that file to your OS's standard image viewer to display on the screen; there is no further interaction with the viewer process after that. So if you draw in another pixel, the viewer will not be aware of it, and if you call Image.show() again, it will save a new copy of your image and invoke another viewer, which will give you a second window rather than updating the first!
There are several possibilities to get around it:
use OpenCV's cv2.imshow() which does allow updates
use tkinter to display the changing image
create an animated GIF and start a new process to display that
I chose the first, using OpenCV, as the path of least resistance:
#!/usr/bin/env python3

import cv2
import numpy as np
from PIL import Image

# Open image
im = Image.open('paddington.png')

# Make BGR Numpy version for OpenCV
BGR = np.array(im)[:, :, ::-1]
h, w = BGR.shape[:2]

# Make empty image to fill in slowly and display
d = np.zeros_like(BGR)

# Copy the image across pixel by pixel, but only redraw the window
# every 400 pixels to avoid waiting on imshow() for every single one
for y in range(h):
    for x in range(w):
        d[y, x] = BGR[y, x]
        if x % 400 == 0:
            cv2.imshow("SlowLoader", d)
            cv2.waitKey(1)

# Wait for one final keypress to exit
cv2.waitKey(0)
Increase the 400 near the end to update the screen after a greater number of pixels and make it run faster, or decrease it to update after fewer pixels and make them appear more slowly.
As I cannot share a movie on Stack Overflow, I made an animated GIF to show how that looks.
I decided to try doing it with tkinter as well. I am no expert on tkinter, but the following works just the same as the code above. If anyone knows tkinter better, please feel free to point out my inadequacies; I am happy to learn! Thank you.
#!/usr/bin/env python3

import numpy as np
from tkinter import *
from PIL import Image, ImageTk

# Create Tkinter window and label
root = Tk()
video = Label(root)
video.pack()

# Open image
im = Image.open('paddington.png')

# Make Numpy version for simpler pixel access
RGB = np.array(im)
h, w = RGB.shape[:2]

# Make empty image to fill in slowly and display
d = np.zeros_like(RGB)

# Copy the image across pixel by pixel, redrawing every 400 pixels
for y in range(h):
    for x in range(w):
        d[y, x] = RGB[y, x]
        if x % 400 == 0:
            # Convert the partial image for Tkinter
            img = Image.fromarray(d)
            imgtk = ImageTk.PhotoImage(image=img)
            # Set the image on the label
            video.config(image=imgtk)
            # Update the window
            root.update()
I am new to PsychoPy, having previously worked with Pygame for several months (I switched to enable stimuli to be presented on multiple screens).
I am trying to figure out how to use PsychoPy to display an animation created from a sequence of images. I previously achieved this in Pygame by saving the entire sequence of images in a single large PNG file (a spritesheet) and then blitting only a fraction of that image (e.g. 480 x 480 pixels) per frame, moving on to the next equally sized section of the image on the next frame. This is roughly what my code looked like in Pygame. I would be really keen to hear whether there is an equivalent way of generating animations in PsychoPy by selecting only part of an image to display on each frame. So far, googling this has not provided any answers!
gameDisplay = pygame.display.set_mode((800, 480))
sequence = pygame.image.load(r'C:\Users\...\image_sequence.png')
# This image contains 10 images in a row which I cycle through to get an animation
image_width = 480
image_height = 480
start = time.time()
frame_count = 0
refresh = 0

while time.time() <= start + 15:
    gameDisplay.blit(sequence, (160, 0),
                     (frame_count * image_width, 0, image_width, image_height))
    if time.time() >= start + (refresh * 0.25):  # flip a new image, say, every 250 ms
        pygame.display.update()
        frame_count += 1
        refresh += 1
        if frame_count == 10:
            frame_count = 0
You could use a square aperture to restrict what's visible and then move the image behind it. Something like this (untested, but it could give you some ideas):
from psychopy import visual

win = visual.Window(units='pix')  # easiest to use pixels as the unit
aperture = visual.Aperture(win, shape='rect', size=(480, 480))
image = visual.ImageStim(win, r'C:\Users\...\image_sequence.png')

# Move through the x positions
for x in range(10):
    # not sure this is right, but it should step through the x positions
    image.pos = [(-10.0 / 2 + 0.5 + x) * 480, 0]
    image.draw()
    win.flip()
If you have the original images, I think it would be simpler to just display them in sequence.
import glob
from psychopy import visual

image_names = glob.glob(r'C:\Users\...\*.png')

# Create psychopy objects
win = visual.Window()
image_stims = [visual.ImageStim(win, image) for image in image_names]

# Display images one by one
for image in image_stims:
    image.draw()
    win.flip()
    # add more flips here if you want a lower frame rate
Perhaps it is even fast enough to load them during runtime without dropping frames, which would simplify the code and use less memory:
# Imports, glob, and win here

# Create an ImageStim and update the image each frame
stim = visual.ImageStim(win)
for name in image_names:
    stim.image = name
    stim.draw()
    win.flip()
Actually, given a spritesheet, you might be able to do something funky and more efficient using GratingStim. It loads an image as a texture and then lets you set the spatial frequency (sf) and phase of that texture. If 1.0/sf (in both dimensions) is less than the width of the stimulus (in both dimensions), only a fraction of the texture will be shown, and the phase determines which fraction. It isn't designed for this purpose (it is usually used to create more than one cycle of a texture, not less than one), but I think it will work.
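A rough, untested sketch of that idea, assuming a 4800x480 spritesheet of ten 480x480 frames and pixel units (the file name and timing are taken from the Pygame example above):

from psychopy import visual, core

win = visual.Window(units='pix')

# One texture cycle spans the whole 4800-pixel-wide sheet. Showing 1/10
# of a cycle across a 480 px wide stimulus means sf = 1/4800 cycles per
# pixel horizontally, and 1/480 vertically (the full height).
stim = visual.GratingStim(win, tex='image_sequence.png',
                          size=(480, 480), sf=(1.0 / 4800, 1.0 / 480))

for frame in range(10):
    stim.phase = (frame / 10.0, 0)  # shift the texture by one sprite per step
    stim.draw()
    win.flip()
    core.wait(0.25)  # ~250 ms per frame, as in the Pygame version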
I have an mp4 video of 720x1280 and I want it at different scales, namely 0.66, 0.5 and 0.33 of the original size.
For each of these sizes I use:
clip = mp.VideoFileClip(file)
clip_resized = clip.resize(height=int(clip.h * 0.66666))
clip_resized.write_videofile(name + '-2x' + ext)
I do this for each of the sizes, but some work and some do not: the 0.66 version fails, as does the 0.33, while the 0.5 works just fine.
It creates a file for every size, but the failing ones are corrupt and cannot be opened (except the 0.5 one, as I said, which works fine).
Any clue about this? Any better solution for resizing video in Python?
I believe the issue is that most video players cannot play an mp4 if one of the dimensions of the clip is an odd number. For instance, 720x1280 works in all players, but 721x1280 will only play in some video players, like VLC.
So make sure that clip.h and clip.w are both even before writing to a video file. There are several ways you can do that: either specify the new dimensions of the clip yourself, like clip.resize((844, 476)), or resize the clip to 66% and add a 1 px black margin at the top, like clip.resize(0.66).margin(top=1).
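A sketch of the first option, rounding both dimensions down to the nearest even number before resizing (file, name and ext are the same variables as in the question; resize() accepting a (width, height) tuple is standard moviepy 1.x behaviour):

import moviepy.editor as mp

clip = mp.VideoFileClip(file)
factor = 0.66
# Round both dimensions down to even numbers so the encoder and
# ordinary video players accept the output.
new_w = int(clip.w * factor) // 2 * 2
new_h = int(clip.h * factor) // 2 * 2
clip_resized = clip.resize((new_w, new_h))
clip_resized.write_videofile(name + '-066x' + ext)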
I am working on a stream generator for my video-mapping set, but I am not able to get the image steady.
I open a v4l2loopback device with python-v4l2 and generate a video stream through it based on a PNG, so I can generate live videos in my VJ set and still video-map them and apply effects.
Test case:
1) load the v4l2loopback module
2) run this Python:
import fcntl, numpy
from v4l2 import *
from PIL import Image

height = 600
width = 634

device = open('/dev/video4', 'wr')
print(device)

capability = v4l2_capability()
print(fcntl.ioctl(device, VIDIOC_QUERYCAP, capability))
print("v4l2 driver: " + capability.driver)

format = v4l2_format()
format.type = V4L2_BUF_TYPE_VIDEO_OUTPUT
format.fmt.pix.pixelformat = V4L2_PIX_FMT_RGB32
format.fmt.pix.width = width
format.fmt.pix.height = height
format.fmt.pix.field = V4L2_FIELD_NONE
format.fmt.pix.bytesperline = format.fmt.pix.width * 4
format.fmt.pix.sizeimage = format.fmt.pix.width * format.fmt.pix.height * 4
format.fmt.pix.colorspace = V4L2_COLORSPACE_SRGB
print(fcntl.ioctl(device, VIDIOC_S_FMT, format))

img = Image.open('img/0.png')
img = img.convert('RGBA')

while True:
    device.write(numpy.array(img))
3) run Cheese or another v4l2 stream viewer.
The result is a properly colored and sized image, but it jumps every frame from left to right, each time a little further to the left, so you get a sliding, jumpy video.
What am I doing wrong?
Best regards,
Harriebo
P.S.: if you would like to see the results, check: link. So far the LiVES, Pure Data, GEM video-mapping setup is working great with the v4l2 streams.
So I got it sort of working, though I am not sure it is the right way. For a stable video stream I need to:
1) not use custom resolutions; they get messy.
2) send every frame twice (see the sketch below). I think this has to do with interlacing / top / bottom frames.
3) for 640x480, shift all pixels 260 places to the left in the array, otherwise the image is not straight; oddly, this is not needed for 1024x768, and I am not sure why.
4) play it at a slightly lower frame rate than the program can generate.
After all that it is 99% stable; every 10 seconds or so there is one buggy frame. I think that is because the frame rate the program generates is not 100% stable.
Suggestions on why this happens, or how I can do this better, are still welcome.
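For reference, point 2 boils down to a small change in the write loop of the code above (a sketch; the double write is an empirical workaround I found, not documented v4l2loopback behaviour):

while True:
    frame = numpy.array(img)
    # Workaround 2: write every frame twice; writing it once per frame
    # produced the jumping image described in the question.
    device.write(frame)
    device.write(frame)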
For updates see: https://github.com/umlaeute/v4l2loopback/issues/32