In the Python PIL library, I'm using
>>> draw.ellipse((x - r, y - r, x + r, y + r))
to draw circles as nodes. However, since there are thousands of nodes to draw, this takes too long.
Is there a faster way to draw all my nodes?
For background: I'm drawing a tree-like graph, with circles as nodes.
If you draw that many ellipses every cycle it will be slow.
Is there a specific reason you need to use PIL? From your question details, I am not sure PIL is suited for your task: you need a graphics library, not an image manipulation library. There is a difference.
PyGame is an SDL wrapper suited for high-performance drawing routines.
Some tricks include drawing to a surface once, and then only redrawing dirty regions. A tutorial puts the technique like this:
Most people new to graphics programming use the first option - they update the whole screen every frame. The problem is that this is unacceptably slow for most people.
The solution is called 'dirty rect animation'. Instead of updating the whole screen every frame, only the parts that changed since the last frame are updated.
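Here is a minimal, illustrative pygame sketch of dirty rect animation (my own sketch, not code from the tutorial); the window size, colors and node positions are placeholders:

import pygame

pygame.init()
screen = pygame.display.set_mode((800, 600))
background = pygame.Surface(screen.get_size())
background.fill((0, 0, 0))
# ... draw all the static nodes onto `background` once here ...
screen.blit(background, (0, 0))
pygame.display.flip()  # one full-screen update, done only once

# per frame: erase and redraw only what moved, then update just those rects
dirty = []
old_rect = pygame.Rect(100, 100, 20, 20)                    # where the node was
dirty.append(screen.blit(background, old_rect, old_rect))   # erase old position
dirty.append(pygame.draw.circle(screen, (255, 0, 0), (130, 110), 10))  # new position
pygame.display.update(dirty)  # updates only the dirty rects, not the whole screen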
I found that image.paste() improved speed by about 5x. There is a transparency problem when circles overlap, so the mask needed to avoid it makes it a little slower.
from PIL import Image, ImageDraw
from PIL.ImageOps import invert

def init_circle(r):
    # IMG_MODE, FG_COLOR, BG_COLOR and NODE_COLOR are module-level constants
    im = Image.new(IMG_MODE, [r * 2] * 2, FG_COLOR)
    draw = ImageDraw.Draw(im)
    # draw.setink() is long deprecated; pass the outline color to ellipse() instead
    draw.ellipse((1, 1, r * 2 - 1, r * 2 - 1), fill=NODE_COLOR, outline=BG_COLOR)
    # inverted greyscale: a light FG_COLOR background becomes 0 (transparent) in the mask
    mask = invert(im.convert('L'))
    return im, mask
to use it:
im.paste(circle, (x, y, x + 2 * r, y + 2 * r), mask=mask)
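For context, a hypothetical driver for the snippet above; the constants and coordinates are assumptions, since the original post doesn't show them:

IMG_MODE = "RGB"     # assumed
FG_COLOR = "white"   # assumed light, so the inverted mask is transparent outside the circle
BG_COLOR = "black"   # assumed
NODE_COLOR = "red"   # assumed

r = 10
im = Image.new(IMG_MODE, (800, 600), FG_COLOR)
circle, mask = init_circle(r)
for (x, y) in [(100, 100), (200, 150), (300, 200)]:  # thousands of nodes in practice
    im.paste(circle, (x, y, x + 2 * r, y + 2 * r), mask=mask)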
I am building a video game overlay that sends data back to the player to create a custom HUD, just for fun.
I am trying to read an image of a video game compass and determine the exact orientation of the compass to be a part of my HUD.
Example photo which shows the compass at the top of the screen:
(The compass is currently facing ~170°. Note: the position of the compass on screen is also fixed.)
Obviously, when I do the image processing I will only be looking at the compass, not the whole screen.
This has been more challenging for me compared to previous computer vision aspects of my HUD. I have been trying to process the image using cv2 and from there use some object detection to find the "needle" of the compass.
I am struggling to get a triangle shape detection on either needle that will help me know my orientation.
The solution could be lower-tech and hackier, perhaps just searching for the pixel on the edge of the compass and determining that is the end of the needle.
One solution I do not think is viable is using object detection to find a picture of a compass facing true north and then calculating the rotation of the current compass. This is because the background of the compass does not rotate; only the needle does.
So far I have applied Hough Circle Transform as seen here:
https://opencv24-python-tutorials.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_houghcircles/py_houghcircles.html#hough-circles
That has helped me get a circle around my compass as well as the middle of my compass. However, I cannot find a good solution for finding the facing of the needle relative to the middle of the compass.
I understand this is a pretty open-ended question but I am looking for any theoretical solutions that would help me implement a solution. Anything would help as this is a strange problem for me and I am struggling to think how to go about solving it.
In general I would suggest looking at a thin ring just beneath the border of your compass (this will give you the lowest error). You could either work on a polar transform of this ring or directly on the ring itself, looking for the center of gravity of the color red. That center of gravity relative to the center of your compass gives you the angle. Most likely you don't even need the polar transform.
import cv2 as cv

im = cv.imread("RPc9Q.png")
(x, y, w, h) = (406, 14, 29, 29)  # bounding box of the compass in the screenshot
warped = cv.warpPolar(
    src=im,
    dsize=(512, 512),
    center=(x + (w-1)/2, y + (h-1)/2),
    maxRadius=(w-1)/2,
    flags=cv.WARP_POLAR_LINEAR | cv.INTER_LINEAR
)
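To illustrate the "you may not even need the polar transform" remark, here is a minimal sketch (my own, not from the answer) that takes the centroid of redness directly in the cropped image and converts it to an angle; it assumes the same crop rectangle and that some red is visible:

import numpy as np
import cv2 as cv

im = cv.imread("RPc9Q.png")
(x, y, w, h) = (406, 14, 29, 29)
crop = im[y:y+h, x:x+w].astype(np.float32)
# "redness" = red channel minus the average of blue and green (BGR order)
redness = np.clip(crop[:, :, 2] - crop[:, :, :2].mean(axis=2), 0, None)
ys, xs = np.mgrid[0:h, 0:w]
total = redness.sum()
mx = (xs * redness).sum() / total   # centroid of redness
my = (ys * redness).sum() / total
cx, cy = (w - 1) / 2, (h - 1) / 2   # compass center within the crop
# y grows downward, so atan2 is clockwise from east; +90 makes 0° = north
angle = (np.degrees(np.arctan2(my - cy, mx - cx)) + 90) % 360
print("needle angle:", angle)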
Here's some more elaboration on the polar warp approach:
- polar warp
- take a column of pixels, which corresponds to a circle in the source picture
- plot to see what's there
- argmax to find the red bits of the arrow
import numpy as np
import cv2 as cv
import matplotlib.pyplot as plt
imshow = plt.imshow  # assumed display helper used below; any image viewer works

im = cv.imread("RPc9Q.png") * np.float32(1/255)
(x, y, w, h) = (406, 14, 29, 29)

# polar warp...
steps_angle = 360 * 2
steps_radius = 512
warped = cv.warpPolar(
    src=im,
    dsize=(steps_radius, steps_angle),
    center=(x + (w-1)/2, y + (h-1)/2),
    maxRadius=(w-1)/2,
    flags=cv.WARP_POLAR_LINEAR | cv.INTER_LANCZOS4
)
# goes 360 degrees, starting from 90 degrees (east) clockwise
# sample at 85% of "full radius", picked manually
col = int(0.85 * steps_radius)
# for illustration
imshow(cv.rotate(cv.line(warped.copy(), (col, 0), (col, warped.shape[0]), (0, 0, 255), 1), rotateCode=cv.ROTATE_90_COUNTERCLOCKWISE))
signal = warped[:,col,2] # red channel, that column
# polar warp coordinate system:
# first row of pixels is sampled at exactly 90 degrees (east)
samplepoints = np.arange(steps_angle) / steps_angle * 360 + 90
imax = np.argmax(signal) # peak
def vertex_parabola(y1, y2, y3):
    # sub-sample peak: offset of the vertex of a parabola through 3 samples
    return 0.5 * (y1 - y3) / (y3 - 2*y2 + y1)
# print("samples around maximum:", signal[imax-1:imax+2] * 255)
imax += vertex_parabola(*signal[imax-1:imax+2].astype(np.float32))
# that slice will blow up in your face if the index gets close to the edges
# either use np.roll() or drop the correction entirely
angle = imax / steps_angle * 360 + 90 # ~= samplepoints[imax]
print("angle:", angle) # 176.2
plt.figure(figsize=(16,4))
plt.xlim(90, 360+90)
plt.xticks(np.arange(90, 360+90, 45))
plt.plot(
    samplepoints, signal, 'k-',
    samplepoints, signal, 'k.')
plt.axvline(x=angle, color='r', linestyle='-')
plt.show()
I have been able to solve my question with the feedback provided.
First I grab the image of the compass:
step_1
After that I process the image and crop out the middle and edges of the compass, as seen here:
step_2
Now I have a cropped compass with only a little bit of red showing where the compass needle points. From there I masked out the red part of the image.
step_3
From there it is a simple operation to find the center of the blob, which roughly indicates where the needle is pointing. Although this is not perfectly accurate, I believe it will work for my purposes.
step_4
Now that I know where the end of the needle is, it should be easy to calculate the direction based on that.
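A rough sketch of those steps (mine, not the OP's exact code), using an HSV red mask and image moments in OpenCV; the file name and thresholds are placeholders:

import numpy as np
import cv2 as cv

compass = cv.imread("compass_crop.png")  # steps 1-2: the cropped compass image
hsv = cv.cvtColor(compass, cv.COLOR_BGR2HSV)
# step 3: mask the red needle tip (red wraps around hue 0, hence two ranges)
mask = (cv.inRange(hsv, (0, 120, 70), (10, 255, 255))
        | cv.inRange(hsv, (170, 120, 70), (180, 255, 255)))
# step 4: centroid of the red blob via image moments
M = cv.moments(mask, binaryImage=True)
needle = (M["m10"] / M["m00"], M["m01"] / M["m00"])
center = ((compass.shape[1] - 1) / 2, (compass.shape[0] - 1) / 2)
# y grows downward, so atan2 is clockwise from east; +90 makes 0° = north
angle = (np.degrees(np.arctan2(needle[1] - center[1], needle[0] - center[0])) + 90) % 360
print("needle heading:", angle)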
Some references:
Finding red color in image using Python & OpenCV
https://www.geeksforgeeks.org/python-opencv-find-center-of-contour/
I'm writing an interactive (zoom/pan) Mandelbrot set viewer in Python, and I'm having some performance issues. I'm currently using pyglet and PyOpenGL to render the pixels since I like how it handles mouse events. I generate the pixel values using numpy, and after some searching on stack exchange/docs/other places, I'm currently using glDrawPixels to draw the pixels. The application is horribly slow, taking ~1.5s to draw.
I've heard that using textures is much faster, but I have no experience with them, and learning that much OpenGL seems like it should be unnecessary. Another approach I have considered is using vertex lists and batched rendering with pyglet, but it seems wrong to create a new GL_POINT at every single pixel on the screen.
Am I going about this all wrong? Is there a better way to render something to the screen when pixels change so frequently? Code below:
# this code is all in a class that subclasses pyglet.window.Window

# called every 1/10.0 seconds, update 10 frames
def update_region(self):
    # this code just computes new mandelbrot detail
    if self.i < self.NUM_IT:
        for _ in range(10):  # do 10 iterations every update, can be customizable
            self.z = np.where(np.absolute(self.z) < self.THRESHOLD,
                              self.z ** 2 + self.reg, self.z)
            self.pixels = np.where(
                (self.pixels == self.NUM_IT) & (np.absolute(self.z) >
                                                self.THRESHOLD), self.i, self.pixels)
            self.i = self.i + 1

def update_frame(self, x, y):
    self.update_region()
    # color_pixels is what will actually be rendered
    self.color_pixels = self.cmap.to_rgba(self.pixels).flatten()

def on_draw(self):  # draw method called every update (set to .1s)
    start = time.time()
    glClear(GL_COLOR_BUFFER_BIT)
    glDrawPixels(2 * self.W, 2 * self.H, GL_RGBA, GL_FLOAT,
                 (GLfloat * len(self.color_pixels))(*self.color_pixels))
    glutSwapBuffers()
    print('on_draw took {0} seconds'.format(time.time() - start))
Are you sure it's the glDrawPixels slowing you down? In your code for update_frame there's a cmap.to_rgba(), which I assume maps the single value calculated per Mandelbrot pixel into an RGBA tuple, and then a .flatten() on top of that. Copying the entire image, twice, won't help.
For drawing raster images that don't need 3D scaling, pyglet has the image module and .blit()
You are right that a vertex list of points would not help.
Loading the image into a texture would be a bit more OpenGL code, but not too much. You could then zoom in OpenGL, and do the Mandelbrot -> RGB conversion on the GPU as it is drawn in a fragment shader.
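As a rough illustration of the pyglet route (my own sketch, not the answerer's code), assuming `rgba` is a numpy uint8 array of shape (H, W, 4) built from the Mandelbrot data, and W, H are the window size:

import numpy as np
import pyglet

H, W = 280, 500                       # assumed window size
rgba = np.zeros((H, W, 4), np.uint8)  # placeholder for the Mandelbrot colors

window = pyglet.window.Window(W, H)

@window.event
def on_draw():
    window.clear()
    # pyglet uploads this as a texture internally; negative pitch = top row first
    img = pyglet.image.ImageData(W, H, "RGBA", rgba.tobytes(), pitch=-W * 4)
    img.blit(0, 0)

pyglet.app.run()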
I am writing a Python program to do graphic animations, using the Python cairo/rsvg libraries. I have hit a speed issue and want to accelerate some parts of the code by moving some of the rendering code into a Cython library.
Ideally I would like to augment the cairo library methods to add a few that are optimized for my needs.
For example, I have a function that draws an SVG on screen, centered at a point, with a bounding box that forces its size. Usually this function is called by an external loop which draws tens of SVGs, and it is one of the most expensive functions in my code:
def svg(ctx, path, pos, angle, width, height):
    """Draws an svg file at coordinates pos, at a specific angle, with a
    maximum bounding box of size width x height"""
    if width == 0 or height == 0:
        return
    svg = rsvg.Handle(file=path)  # any svg file
    ctx.save()
    # put in position
    ctx.translate(pos.x, pos.y)
    # rotate
    ctx.rotate(angle)
    # resize
    if svg.props.width != width or svg.props.height != height:
        ratiow = (width * 1.0) / (svg.props.width * 1.0)
        ratioh = (height * 1.0) / (svg.props.height * 1.0)
        ctx.scale(ratiow, ratioh)
    # move center to 0,0 of image
    ctx.translate(-svg.props.width / 2, -svg.props.height / 2)
    # draw image
    svg.render_cairo(ctx)
    ctx.restore()
What I would like to do is write a Cython function that, given a list of SVGs, will draw them all at once on the screen.
Could this kind of optimization be done with Cython?
From what I understand it would be extremely beneficial to define a type for the cairo context object (ctx), but what is the correct way to do that?
I'm using PyGame's graphics library to display graphics on an LED display.
My LED display is 25x7. For testing without hardware, I've been able to use PyGame's display to view a 40x amplified version:
screen = pygame.display.set_mode((1000,280))
I'd like to actually draw boxes, rects, etc on a 25x7 grid and then project it onto a larger 1000x280 window on my computer, which will look like a 25x7 grid of 40x40 "pixels". That way, if I start working with hardware, I can skip the large projection, and just write 25x7=175 pixels worth of data out to my LEDs.
I came up with a solution, but it's prohibitively slow:
xDim = 25
yDim = 7
pixelMultiplier = 40

surf = pygame.Surface((xDim, yDim))
bigSurf = pygame.Surface((pixelMultiplier * xDim, pixelMultiplier * yDim))

# ... manipulate surf ...

# map surf to bigSurf
for x in range(xDim):
    for y in range(yDim):
        # grab color of pixel
        tmpColor = surf.get_at((x, y))
        # write to all pixels in larger surface
        for i in range(pixelMultiplier):
            for j in range(pixelMultiplier):
                bigSurf.set_at((pixelMultiplier * x + i, pixelMultiplier * y + j), tmpColor)
Is there some built-in PyGame function that does this efficiently?
Right after writing the post, this dawned on me:
Create a surface to represent the pixel, and just blit that to the larger surface. Instead of running through 1600 set_at() functions for all 175 pixels, do it with a built-in function that already does that.
# map surf to larger version
for x in range(xDim):
    for y in range(yDim):
        # grab color of pixel
        tmpColor = surf.get_at((x, y))
        # create square surface to represent pixel
        pixelSurf = pygame.Surface((pixelMultiplier, pixelMultiplier))
        pixelSurf.fill(tmpColor)
        # blit big pixel onto bigSurf
        bigSurf.blit(pixelSurf, (pixelMultiplier * x, pixelMultiplier * y))
This runs much faster, but I'm still not sure it's the optimal solution.
Edit - Best Method
PyGame does have a built-in function!
pygame.transform.scale(surface,(newX,newY))
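Applied to the surfaces above, that one call replaces the whole loop:

bigSurf = pygame.transform.scale(surf, (pixelMultiplier * xDim, pixelMultiplier * yDim))
# transform.scale() does unsmoothed nearest-neighbour scaling, so each LED
# stays a crisp 40x40 block; smoothscale() would blur them instead.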
I'm trying to use PIL for a task, but the result is very dirty.
What I'm doing is trying to fill part of a slice of a circle, as you can see in the image.
Here is my code:
def gen_image(values):
    side = 568
    margin = 47
    image = Image.open(settings.MEDIA_ROOT + "/i/promo_circle.jpg")
    draw = ImageDraw.Draw(image)
    draw.ellipse((margin, margin, side - margin, side - margin), outline="white")
    center = side / 2
    r = side / 2 - margin
    cnt = len(values)
    for n in xrange(cnt):
        angle = n * (360.0 / cnt) - 90
        next_angle = (n + 1) * (360.0 / cnt) - 90
        nr = (r * values[n] / 5)
        max_r = r
        min_r = nr
        for cr in xrange(min_r * 10, max_r * 10):
            cr = cr / 10.0
            draw.arc((side/2 - cr, side/2 - cr, side/2 + cr, side/2 + cr),
                     angle, next_angle, fill="white")
    return image
It's been a while since I used PIL, but in various other graphics libraries, there's often an aliasing problem when drawing arcs.
Have you tried enabling anti-aliasing or drawing with thicker lines?
[Edit] Having a quick look over the PIL library, I think you're right about line width etc.
Sounds like the easiest thing to do here is to build up a polygon which covers each area: a pair of points at each end, and then a load of points around the arcs to stop the edges looking jagged (see the sketch below). Does that make sense?
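A minimal sketch of that polygon idea (mine, not the answerer's); it assumes the same center/radius/angle values as the question's code:

import math
from PIL import ImageDraw

def fill_ring_segment(draw, cx, cy, r_inner, r_outer, a0, a1, steps=64):
    # sample points along the outer arc, then back along the inner arc
    pts = []
    for i in range(steps + 1):
        a = math.radians(a0 + (a1 - a0) * i / steps)
        pts.append((cx + r_outer * math.cos(a), cy + r_outer * math.sin(a)))
    for i in range(steps + 1):
        a = math.radians(a1 + (a0 - a1) * i / steps)
        pts.append((cx + r_inner * math.cos(a), cy + r_inner * math.sin(a)))
    draw.polygon(pts, fill="white")

# e.g. instead of the arc loop in gen_image:
#     fill_ring_segment(draw, side/2, side/2, nr, r, angle, next_angle)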
Instead of erasing with white, consider drawing a mask of just the areas you want to show. Here's an example of this for a circular mask.
How do I generate circular thumbnails with PIL?
You draw a circle on a separate image, then remove the slice of the circle you don't want by drawing a triangle over that area and cutting it from the image (making it transparent). Then you copy this circle segment into the image where you want it.
If you want a partial ring, draw a circle, then draw a smaller circle to cut out of the first, and use radial lines to make a triangle that removes the parts you don't need, in the same way.
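A rough sketch of that masking approach, assuming we want the partial ring as an 'L'-mode mask (white = keep), and using PIL's pieslice in place of the triangle trick:

from PIL import Image, ImageDraw

def ring_segment_mask(size, r_outer, r_inner, a0, a1):
    mask = Image.new("L", (size, size), 0)
    d = ImageDraw.Draw(mask)
    c = size // 2
    # filled wedge of the outer circle between the two angles
    d.pieslice((c - r_outer, c - r_outer, c + r_outer, c + r_outer), a0, a1, fill=255)
    # cut the smaller circle back out, leaving only the ring segment
    d.ellipse((c - r_inner, c - r_inner, c + r_inner, c + r_inner), fill=0)
    return mask

# paste white through the mask onto the base image, e.g.:
#     image.paste("white", mask=ring_segment_mask(side, int(r), int(nr), angle, next_angle))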