I recently started getting into pyglet and rabbyt, coming from pygame, but I have hit something of a brick wall.
I created a basic example where one Sprite (of the type found in pyglet.sprite.Sprite) is displayed at 60 frames per second. The problem is that this simple program is using up 50% of the CPU time somehow. I repeated the experiment with the sprite type found in the rabbyt library with the same result.
I decided to render 1000, then 10,000 sprites at 60 frames per second, and to my surprise the CPU usage stays at 50%. The only difference is that moving or animating a sprite results in slight stuttering.
Lastly, I tried running at 360 frames per second. Same result: 50% usage.
Here is the sample code:
import pyglet
import rabbyt

def on_draw(dt):
    window.clear()
    spr.render()

window = pyglet.window.Window(800, 600)
spr = rabbyt.Sprite('ship.png')
spr.x = 100
spr.y = 100

pyglet.clock.schedule_interval(on_draw, 1.0/60.0)

if __name__ == '__main__':
    pyglet.app.run()
I am using a Core 2 Duo with an ATI HD 3500 card.
Any advice/ideas are appreciated.
Be aware that the default pyglet event handler will fire an 'on_draw' event every time it clears the event queue.
http://www.pyglet.org/doc/programming_guide/the_application_event_loop.html
The pyglet application event loop dispatches window events (such as for mouse and keyboard input) as they occur and dispatches the on_draw event to each window after every iteration through the loop.
This means that any event can trigger a redraw.
So if you're moving the mouse around or doing anything that fires events, you will get a massive slowdown as it begins to trigger render calls.
This also caused problems because I was doing my own render calls, so the two buffers would fight, which created a "ghost" effect on the screen. It took me a while to realise this was the cause.
I monkey patched the event loop to not do this.
https://github.com/adamlwgriffiths/PyGLy/blob/master/pygly/monkey_patch.py
Be aware that this patched event loop will no longer render on its own; you must manually flip the buffers or trigger an "on_draw" event.
It might be that, although you've hooked in at 60 fps, the internal render loop is ticking at the maximum possible rate.
I dislike code that takes away control, hence my patch lets me decide when render events occur.
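For pyglet 1.x, the linked patch boils down to replacing EventLoop.idle so that it stops dispatching on_draw to every window, while still ticking the clock for your own scheduled callbacks. A minimal sketch of that idea (not the exact PyGLy code):

import pyglet

def patched_idle(self):
    # tick the clock so scheduled callbacks (like your own draw
    # function) still fire, but skip the per-window on_draw dispatch
    # that the default idle() performs on every iteration
    pyglet.clock.tick(poll=True)
    return pyglet.clock.get_sleep_time(sleep_idle=True)

pyglet.app.EventLoop.idle = patched_idle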
Hmm.. You might want to know the fps at which the game runs, if it helps:
cldis = pyglet.clock.ClockDisplay()
Then add this to your on_draw function:
cldis.draw()
It draws the current FPS at the bottom-left corner of the screen in a semi-transparent colour.
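For context, here is a complete minimal sketch (ClockDisplay exists in pyglet 1.x; newer releases replace it with pyglet.window.FPSDisplay):

import pyglet

window = pyglet.window.Window(800, 600)
cldis = pyglet.clock.ClockDisplay()  # semi-transparent FPS counter

@window.event
def on_draw():
    window.clear()
    cldis.draw()  # draws the current FPS at the bottom-left corner

pyglet.app.run()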
I know that in Pygame there is a built-in class called Clock. You can limit how many times the game loops per second using its tick method. In my example I have put a limit of 30 FPS, which prevents your CPU from being in constant demand.
clock = pygame.time.Clock()
while True:
    clock.tick(30)  # puts a limit of 30 frames per second on the loop
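In context, a minimal runnable loop built around tick might look like this (window size and fill colour are arbitrary):

import pygame

pygame.init()
screen = pygame.display.set_mode((640, 480))
clock = pygame.time.Clock()

running = True
while running:
    clock.tick(30)  # cap the loop at 30 iterations per second
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
    screen.fill((0, 0, 0))
    pygame.display.flip()

pygame.quit()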
In pyglet there appears to be something similar:
pyglet.clock.schedule_interval(on_draw, 1.0/60.0)
clock.set_fps_limit(60)
Hope that helps!
edit: documentation on fps limit: http://pyglet.org/doc/api/pyglet.clock-module.html#set_fps_limit
I am working on a basic simulation in pyglet, which I need to re-run and visualise multiple times. The simulation adjusts the focus "f" of a lens until it reaches the desired location (see the GIF here: https://imgur.com/Ui06B1c). I have put an instance of the simulation inside a for loop, with pyglet.app.run() running inside it as well, so that it can be reset and run for 10 episodes.
num_episodes = 10
for i in range(num_episodes):
    update_rate = 1/59
    simulation = PivotPointSimulation()
    pyglet.clock.schedule_interval(update, update_rate)
    pyglet.app.run()
I need to make an instance of the "simulation" so that the draw and update methods can be called by pyglet.app.run(). By making a new instance each loop, I can also reset the variables when the simulation starts again. I terminate the simulation when the focus "f" of the lens reaches the desired position with pyglet.app.exit().
The problem is that each time the for loop iterates, the framerate of my pyglet application speeds up (see the GIF for an example: https://imgur.com/Ui06B1c). I have checked whether the update_rate is changing, and it stays the same. Peculiarly enough, when I use Spyder's stop button to stop the code and then run the simulation again with "play", the framerate also gets faster and faster each time I hit stop and play.
I am wondering if there is a way to properly terminate an app, so that each time it runs again it is reset fully? I am unsure if pyglet.app.exit() is resetting everything properly. But I am not sure how the app is speeding up, since pyglet.clock.schedule_interval(update, update_rate) should update at a constant update rate.
I have tried playing around with Vsync settings (setting from false to true) and it hasn't helped. I have also tried using pyglet.clock.set_fps_limit(60) to no avail. So I am not sure what is going on here.
Has anyone encountered this problem before?
Every time you call pyglet.clock.schedule_interval(update, update_rate), it adds another copy of the callback to be called at that rate.
So at the end of your loop of 10, you have the same callback scheduled ten times.
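You can see the stacking with a tiny sketch (the tick callback is a hypothetical stand-in for your update function):

import pyglet

def tick(dt):
    print("tick", dt)

# scheduled twice, so it fires twice per second instead of once
pyglet.clock.schedule_interval(tick, 1.0)
pyglet.clock.schedule_interval(tick, 1.0)

pyglet.app.run()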
One way to prevent this is to unschedule the function first, so that it isn't scheduled a second time:
num_episodes = 10
for i in range(num_episodes):
    update_rate = 1/59
    simulation = PivotPointSimulation()
    pyglet.clock.unschedule(update)
    pyglet.clock.schedule_interval(update, update_rate)
    pyglet.app.run()
I'm writing a real-time interactive graphics application using SDL2 and OpenGL in Python (pysdl2, PyOpenGL). The application continuously produces frames, which may change in response to keyboard / mouse input or based on time.
My main event loop, copied from some obscure source on the web that I can't find again, looks (simplified) like this:
event = sdl2.SDL_Event()
running = True
while running:
    # process events
    while sdl2.SDL_PollEvent(ctypes.byref(event)) != 0:
        if event.type == sdl2.SDL_QUIT:
            running = False
    # render frame
    <some OpenGL commands>
    sdl2.SDL_GL_SwapWindow(window)
    sdl2.SDL_Delay(10)
From what I understand, the delay of 10 ms is intended to give the CPU some breathing room, and indeed, when I remove it the CPU usage doubles.
What I don't understand is why it is necessary. The graphics display is double buffered and buffer swaps are synchronized with the vertical retrace (60 Hz = 16 ms). Naively one would expect that if <some OpenGL commands> take less than 16 ms, then SDL_GL_SwapWindow will introduce a delay anyway, so SDL_Delay is not necessary. And if they use more than that, then the program is struggling to keep up with the display framerate, and introducing a delay would hurt.
Now from what I've been told in response to another question, the buffer swap and therefore the retrace synchronization doesn't happen when the SDL_GL_SwapWindow is executed, but this only puts a "sync & swap" instruction into an OpenGL queue, and this instruction gets executed when everything before has finished. The same holds for <some OpenGL commands>. But this instruction queue is finite, and therefore at some point my program, instead of waiting for the retrace, will wait for there to be space in the instruction queue. The end effect should be the same: If one execution of my event loop needs on average less than 16 ms, then the program will on average delay long enough to make it 16 ms per loop execution. So again, why is the explicit delay necessary?
As a second question: Considering the delay might hurt the framerate, is there a better way to let the CPU rest?
You don't need sdl2.SDL_Delay(10): SDL_Delay is just a timer utility in the SDL2 framework, and if you use an event loop you don't need it.
My code for Python 3.10 looks like this, for example, with a key press and closing the window by mouse click:
event = sdl2.SDL_Event()
running = True
while running:
    while sdl2.SDL_PollEvent(ctypes.byref(event)) != 0:
        if event.type == sdl2.SDL_QUIT:
            running = False
        if event.type == sdl2.SDL_KEYDOWN:
            if event.key.keysym.sym == sdl2.SDLK_ESCAPE:
                running = False
    # ... GL functions ...
    sdl2.SDL_GL_SwapWindow(window)
Remember that the event loop should look like it does in C/C++, Java, C# and other programming languages. Without an event loop you would need SDL_Delay(), otherwise the game window closes immediately.
I hope that makes sense; many of the Python examples I have found look weird and wrong, and I would like to improve on them for readers who are more used to other programming languages.
PS: I will keep working on my GitHub with clear examples :)
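As an aside on the CPU question: assuming a window and GL context already exist, you can also ask the driver to pace the loop for you by requesting vsync explicitly. A minimal sketch:

import sdl2

# With vsync requested, buffer swaps are synchronized with the
# vertical retrace, which typically paces the loop without an
# explicit SDL_Delay (some drivers still busy-wait, so measure).
sdl2.SDL_GL_SetSwapInterval(1)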
I'm working with pygame on a graphical application involving some video computation and mouse event listening. I'm using a Raspberry Pi 3, Raspbian Jessie and Python 2.7.
As the title says: I'm losing some mouse events, especially when the CPU load is high. I managed to reproduce this behaviour in this small example:
import pygame
import time

pygame.init()
pygame.display.set_caption('Crash!')
window = pygame.display.set_mode((300, 300))
running = True
Rectplace = pygame.draw.rect(window, (255, 0, 0), (100, 100, 100, 100))
pygame.display.update()
while running:
    time.sleep(0.1)
    for event in pygame.event.get():
        print(repr(event))
        if event.type == pygame.QUIT:
            running = False
When running this script, most of the mouse wheel events (buttons 4 and 5) are discarded on a fast scroll. Removing the time.sleep(0.1), which simulates CPU load, makes the event listener perfectly reliable.
As I can't remove the slow computation part, nor optimise it further, what should I do to get these events back?
Thank you for sharing your brains!
My guess is that pygame uses a limited-size circular event queue. When it is full, each new event replaces the oldest one. If you get more events than you can ever handle, then let them go, as you would have to discard them anyway.
If you have sporadic slow computations, so that catching up may be feasible, then you must break the computation into pieces short enough that you can fetch events before the default queue fills up. When you get them, either process them immediately or put them into a larger catch-up queue. The best way to do that depends on the details of the code.
Or investigate the suggested thread solution.
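A minimal sketch of the thread approach (Python 3 names; do_heavy_work is a stand-in for the slow video computation):

import threading
import queue
import time
import pygame

def do_heavy_work():
    # stand-in for the slow computation
    time.sleep(0.1)
    return 42

pygame.init()
window = pygame.display.set_mode((300, 300))
clock = pygame.time.Clock()
results = queue.Queue()

def worker():
    while True:
        results.put(do_heavy_work())

threading.Thread(target=worker, daemon=True).start()

running = True
while running:
    clock.tick(60)  # keep the main loop responsive but not spinning
    # the event queue is drained every frame, so wheel events survive
    for event in pygame.event.get():
        print(repr(event))
        if event.type == pygame.QUIT:
            running = False
    try:
        result = results.get_nowait()  # pick up finished work, if any
    except queue.Empty:
        pass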
I'm making a game using pygame where the user controls a player with the mouse and left-clicks to shoot. I need a way of creating a basic sprite and making it drop from the top of the screen at an increasing rate.
I have tried doing this, but when I spawn a sprite in a loop, it just spawns so rapidly that the user would not have time to shoot all of them.
I have tried inserting a time.sleep call, but that just slows the whole game down.
Any ideas on how to go about this would be much appreciated.
Thanks
I'm not familiar with pygame, but I'm assuming there is some sort of game loop or tick that happens every frame. If so, you could use a persistent variable to keep track of the next spawn time, and write an updateSpawn() function that spawns the sprite when it is due. Something like this:
def updateSpawn():
    global nextSpawnTime, timeBetweenSpawns
    if currentTime > nextSpawnTime:
        # Make the spawn rate faster
        timeBetweenSpawns -= SPAWNRATE_SPEEDUP
        # Update the next spawn time
        nextSpawnTime = currentTime + timeBetweenSpawns
        # Spawn the sprite
        spawnSprite()
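In pygame specifically, a minimal sketch of this idea can use pygame.time.get_ticks() as the clock (the interval constants and the print stand-in are assumptions):

import pygame

pygame.init()
screen = pygame.display.set_mode((640, 480))
clock = pygame.time.Clock()

SPAWNRATE_SPEEDUP = 50       # shave 50 ms off the interval per spawn
MIN_INTERVAL = 200           # never spawn faster than every 200 ms
time_between_spawns = 2000   # start with one spawn every 2 seconds
next_spawn_time = pygame.time.get_ticks() + time_between_spawns

running = True
while running:
    clock.tick(60)
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
    now = pygame.time.get_ticks()
    if now >= next_spawn_time:
        time_between_spawns = max(MIN_INTERVAL, time_between_spawns - SPAWNRATE_SPEEDUP)
        next_spawn_time = now + time_between_spawns
        print("spawn!")  # stand-in for creating the falling sprite
    screen.fill((0, 0, 0))
    pygame.display.flip()

pygame.quit()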
I'm trying to make a day/night feature for my game where the background changes every 10 minutes, but when I run it, it just crashes on startup. Here's the buggy code:
bg = pygame.image.load("bg.png")
bbg = pygame.image.load("bbg.png")

def bg1():
    screen.blit(bg, (0, 0))

def bbg1():
    screen.blit(bbg, (0, 0))

def fbg():
    bg1()
    pygame.time.wait(10000)
    bbg1()

screen.fill((0, 0, 0))
fbg()
I have the screen.fill((0,0,0)) because there is a rect there as well which moves around.
Your problem is that the pygame.time.wait call just stops the execution of the current thread for 10,000 milliseconds. You need to have another thread that actually runs the game.
The documentation states that:
pygame.time.wait()
Will pause for a given number of milliseconds. This function sleeps the process to share the processor with other programs. A program that waits for even a few milliseconds will consume very little processor time. It is slightly less accurate than the pygame.time.delay() function.
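Rather than a second thread, a common non-blocking pattern is to check elapsed time inside the main loop and pick the background from that. A minimal sketch (the 60 fps cap and window size are assumptions; the image names are from the question):

import pygame

pygame.init()
screen = pygame.display.set_mode((640, 480))
clock = pygame.time.Clock()

bg = pygame.image.load("bg.png")
bbg = pygame.image.load("bbg.png")

SWITCH_MS = 10 * 60 * 1000   # ten minutes per background

running = True
while running:
    clock.tick(60)
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
    # alternate backgrounds based on elapsed time instead of
    # blocking the whole loop with pygame.time.wait()
    day = (pygame.time.get_ticks() // SWITCH_MS) % 2 == 0
    screen.blit(bg if day else bbg, (0, 0))
    pygame.display.flip()

pygame.quit()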