I'm writing a real-time interactive graphics application using SDL2 and OpenGL in Python (pysdl2, PyOpenGL). The application continuously produces frames, which may change in response to keyboard / mouse input or based on time.
My main event loop, copied from some obscure source on the web that I can't find again, looks (simplified) like this:
event = sdl2.SDL_Event()
running = True
while running:
    # process events
    while sdl2.SDL_PollEvent(ctypes.byref(event)) != 0:
        if event.type == sdl2.SDL_QUIT:
            running = False
    # render frame
    <some OpenGL commands>
    sdl2.SDL_GL_SwapWindow(window)
    sdl2.SDL_Delay(10)
From what I understand, the delay of 10 ms is intended to give the CPU some breathing room, and indeed, when I remove it the CPU usage doubles.
What I don't understand is why it is necessary. The graphics display is double buffered and buffer swaps are synchronized with the vertical retrace (60 Hz = 16 ms). Naively one would expect that if <some OpenGL commands> take less than 16 ms, then SDL_GL_SwapWindow will introduce a delay anyway, so SDL_Delay is not necessary. And if they use more than that, then the program is struggling to keep up with the display framerate, and introducing a delay would hurt.
Now, from what I've been told in response to another question, the buffer swap (and therefore the retrace synchronization) doesn't happen when SDL_GL_SwapWindow is executed; the call merely puts a "sync & swap" instruction into an OpenGL command queue, and that instruction is executed once everything before it has finished. The same holds for <some OpenGL commands>. But this command queue is finite, so at some point my program, instead of waiting for the retrace, will wait for there to be space in the queue. The end effect should be the same: if one iteration of my event loop needs on average less than 16 ms, the program will on average block long enough to make it 16 ms per iteration. So again, why is the explicit delay necessary?
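For reference, whether the swap blocks on the retrace at all depends on the swap interval. A minimal pysdl2 sketch to request vsync explicitly (assuming a window with a current GL context; this snippet is illustrative, not part of the original loop):

import sdl2

# Ask for vsync: with a swap interval of 1, buffer swaps are tied to the
# vertical retrace, so SDL_GL_SwapWindow eventually throttles the loop.
# Assumes an OpenGL context has already been created and made current.
if sdl2.SDL_GL_SetSwapInterval(1) != 0:
    print("vsync not supported:", sdl2.SDL_GetError().decode())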
As a second question: Considering the delay might hurt the framerate, is there a better way to let the CPU rest?
You don't need sdl2.SDL_Delay(10). SDL_Delay is just a general-purpose timer in the SDL2 framework; if your loop is driven by the event queue, the explicit delay isn't required.
Here is example code for Python 3.10 that handles a key press and lets the window be closed with the mouse:
event = sdl2.SDL_Event()
running = True
while running:
    while sdl2.SDL_PollEvent(ctypes.byref(event)) != 0:
        if event.type == sdl2.SDL_QUIT:
            running = False
        if event.type == sdl2.SDL_KEYDOWN:
            if event.key.keysym.sym == sdl2.SDLK_ESCAPE:
                running = False
    ... GL Functions ...
    sdl2.SDL_GL_SwapWindow(window)
Remember that the event loop should look like its counterparts in C/C++, Java, C#, and other programming languages.
Without an event loop, you would have to fall back on SDL_Delay(), and the game window would just close automatically once the delay expires.
I hope you understand; the Python examples I found look weird and wrong.
I would like to write examples that read more like those in other programming languages.
PS: I will keep working on my GitHub to provide clear examples :)
Related
I'm learning OpenCV and I decided to make a snake game using it. It's almost done but there is a slight problem that seems simple but I couldn't find a solution.
while True:
    move()
    cv2.imshow('Snake Game', frame)
    cv2.waitKey(250)
It's supposed to wait 250 milliseconds before the next frame, but key presses break the waiting, so the game speeds up when I hold down a key. How can I make it ignore keyboard events and rely on time alone?
I would be very surprised if waitKey didn't stop waiting after a key press; the name itself suggests that. It's basically like calling a function named max and expecting the minimum.
From your code and what you've described, you're using waitKey for two reasons:
waiting for some fixed time. That means you're using it to synchronize your game loop.
using it (maybe) to handle key presses for user interaction with the game.
In my opinion, the first thing to do is to stop waiting and just keep showing frames as soon as they are ready. For synchronization, record the time at which each frame is displayed, and use that time to decide when to apply user input or process the next frame. A good place to start is to look at how game loops are implemented. Take a look here: https://gamedev.stackexchange.com/questions/651/how-should-i-write-a-main-game-loop
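To make that concrete, here is a minimal sketch of a fixed-timestep loop that held-down keys cannot speed up; move() and frame stand in for the game logic and board from the question, and the placeholder values below are illustrative only:

import time
import cv2
import numpy as np

frame = np.zeros((300, 300, 3), dtype=np.uint8)  # placeholder board

def move():
    pass  # stand-in for the game's update logic

STEP_MS = 250  # fixed time per game step

while True:
    step_start = time.monotonic()
    move()
    cv2.imshow('Snake Game', frame)
    # Pump the GUI event queue in 1 ms slices until the full step time
    # has elapsed; a held-down key can no longer shorten the step.
    while (time.monotonic() - step_start) * 1000 < STEP_MS:
        cv2.waitKey(1)  # returns the pressed key; inspect it for input handling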
I have a psychopy script for a psychophysical experiment in which participants see different shapes and have to react in a specific way. I would like to take physiological measures (ECG) on a different computer and implement event points in my measures. So whenever the participant is shown a stimulus, I would like this to show up on the ECG.
Basically, I would like to add commands for parallel port i/o.
I don't even know where to start, to be honest. I have read quite a bit about this, but I'm still lost.
Here is probably the shortest fully working code to show a shape and send a trigger:
from psychopy import visual, parallel, core
win = visual.Window()
shape = visual.ShapeStim(win)
# Show it
shape.draw()
win.flip()
# Immediately send trigger
parallel.setData(10) # Trigger code, here 10
core.wait(0.020) # Number of seconds to send this trigger code (enough for the hardware to send and the receiver to recognize it)
parallel.setData(0) # Stop sending trigger.
You could, of course, extend this by putting the stimulus presentation and trigger in a loop (running trials), and do various other things alongside it, e.g. collecting responses and saving data. This is just a minimal example of stimulus presentation and sending a trigger.
It is very much on purpose that the trigger code is located immediately after flipping the window. On most hardware, sending a trigger is very fast (the voltage on the port changes within 1 ms of the line being run in the script), whereas the monitor only updates its image around every 16.7 ms, and win.flip() waits for the latter. I've made some examples of good and bad practices for timing triggers here.
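As a sketch of that extension, here is the same stimulus-plus-trigger sequence inside a trial loop; the trial count, inter-trial interval, and port address are illustrative assumptions, not part of the original answer:

from psychopy import visual, parallel, core

win = visual.Window()
shape = visual.ShapeStim(win)
parallel.setPortAddress(0x0378)  # common LPT1 address; adjust for your hardware

for trial in range(10):     # hypothetical number of trials
    shape.draw()
    win.flip()              # stimulus appears on the next retrace
    parallel.setData(10)    # send the trigger immediately after the flip
    core.wait(0.020)        # hold the trigger long enough to be registered
    parallel.setData(0)     # reset the port
    core.wait(1.0)          # hypothetical inter-trial interval

win.close()
core.quit()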
For parallel port I/O, you should use pyparallel. Make continuous ECG measurements, store those, and store the timestamp and whatever metadata you want per stimulus. Pseudocode to get you started:

while True:
    store_ecg()
    if time_to_show():
        stimulus()
    time.sleep(0.1)  # 10 Hz
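For the port side itself, a minimal pyparallel sketch; the trigger code and hold time below are illustrative, and Parallel() opens the first parallel port on the machine:

import time
import parallel  # pyparallel

port = parallel.Parallel()  # open the first parallel port
port.setData(10)            # put the trigger code on the data pins
time.sleep(0.020)           # hold it long enough for the receiver to register
port.setData(0)             # clear the pins again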
I'm building a Python GUI app using tkinter.
Basically I'm starting and communicating with a different thread, using input and output queues.
In the GUI side (the main thread where tkinter's mainloop() runs) I want to add a function which will be called on every iteration of the mainloop (I'm processing and displaying information in real time).
So my function does something like that:
def loop(self):
    try:
        output_type, data = wlbt.output_q.get_nowait()
        pass  # if got something out of the queue, display it!
    except Queue.Empty:
        pass
    self.loop_id = self.after(1, self.loop)
When starting the program, I just call self.loop_id = self.after(1, self.loop).
So two things that bother me:
The loop function raises the CPU usage by 30%-50%. If I disable it, everything is fine.
I want to be able to use after_idle() to maximize the refresh rate, but I wasn't able to simply replace after() with it: I got an error.
I'm sensing there's something I don't fully understand. What can be done to address these issues?
When you call self.after(1, self.loop) you are asking for a function to be run roughly once per millisecond. It's not at all surprising that the CPU usage goes up since you are making 1000 function calls per second.
Given that humans cannot perceive that many changes, if all you're doing is updating the display then there's no reason to do that more than 20-30 times per second.
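Here is a sketch of the same loop throttled to roughly 30 updates per second, draining the whole queue on each pass; the class and attribute names are illustrative, adapted from the question:

import queue
import tkinter as tk

POLL_MS = 33  # ~30 polls per second is plenty for a display

class App(tk.Frame):
    def __init__(self, master, output_q):
        super().__init__(master)
        self.output_q = output_q
        self.loop_id = self.after(POLL_MS, self.loop)

    def loop(self):
        # Drain everything currently queued in one pass, so the lower
        # poll rate doesn't let messages pile up.
        try:
            while True:
                output_type, data = self.output_q.get_nowait()
                # ... display output_type/data here ...
        except queue.Empty:
            pass
        self.loop_id = self.after(POLL_MS, self.loop)

root = tk.Tk()
app = App(root, output_q=queue.Queue())  # wire in the worker's queue here
app.pack()
root.mainloop()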
I'm new to Python as well as event-driven/GUI programming in general. As far as I can tell, all the event choices are things like mouse clicks and key presses.
I've written a set of functions in a separate library that read from an I2C device (on Raspberry Pi). The functions return -1 if nothing is read. So basically, I want to loop, calling the read function each time, until something besides -1 is returned.
My first instinct was to write something like:
readResult = -1
while (readResult == -1):
    readResult = IO.read()
changeGUI()
This doesn't seem to work though in the tkinter structure. I get how to make a function get called on a button press, but I don't know how to do a custom event.
There are a few ways to go with this -- you could give up using Tkinter's mainloop(), and build your own event loop that polled for both types of events. Or, you could spawn a separate thread to monitor IO. Or, you could use the after() method from Tkinter.
For the first two cases, if IO.read() returns immediately, whether or not there's a result, then you probably want to throw a time.sleep() call in the loop, to avoid hogging the CPU.
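For the threaded option, here is a sketch that polls in a background thread and hands results back to the Tk main thread through a queue; IO.read() and changeGUI() are from the question, and the poll and drain intervals are illustrative:

import queue
import threading
import time

io_queue = queue.Queue()

def poll_io():
    # Background thread: IO.read() returns -1 when nothing was read.
    while True:
        result = IO.read()
        if result != -1:
            io_queue.put(result)
        time.sleep(0.01)  # rest the CPU between polls

def drain_queue():
    # Runs on the Tk main thread via after(), so touching widgets is safe.
    try:
        while True:
            io_queue.get_nowait()
            changeGUI()
    except queue.Empty:
        pass
    root.after(50, drain_queue)

threading.Thread(target=poll_io, daemon=True).start()
drain_queue()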
If your call to IO.read() doesn't block, and doesn't take very long, it's very easy to set up a loop to poll the device every few milliseconds. All you need to do is something like this:
def read_one_result():
    readResult = IO.read()
    if readResult != -1:
        changeGUI()
    root.after(100, read_one_result)
This will read one result, update the GUI if anything was read, and then schedule itself to run again in 100 ms.
I recently started getting into pyglet and rabbyt, coming from pygame, but I have hit something of a brick wall.
I created a basic example where one Sprite (of the type found in pyglet.sprite.Sprite) is displayed at 60 frames per second. The problem is that this simple program is using up 50% of the CPU time somehow. I repeated the experiment with the sprite type found in the rabbyt library with the same result.
I decided to render 1000, then 10 000 sprites at 60 frames per second, and to my surprise the CPU usage stays at 50%. The only thing is that moving or animating a sprite results in slight stuttering.
Lastly, I tried running at 360 frames per second. Same result, 50% usage.
Here is the sample code:
import pyglet
import rabbyt

def on_draw(dt):
    window.clear()
    spr.render()

window = pyglet.window.Window(800, 600)
spr = rabbyt.Sprite('ship.png')
spr.x = 100
spr.y = 100
pyglet.clock.schedule_interval(on_draw, 1.0/60.0)

if __name__ == '__main__':
    pyglet.app.run()
I am using a Core 2 Duo with an ATI HD 3500 card.
Any advice/ideas are appreciated.
Be aware that the default pyglet event handler will fire an 'on_draw' event every time it clears the event queue.
http://www.pyglet.org/doc/programming_guide/the_application_event_loop.html
The pyglet application event loop dispatches window events (such as for mouse and keyboard input) as they occur and dispatches the on_draw event to each window after every iteration through the loop.
This means that any event can trigger a redraw.
So if you're moving the mouse around or doing anything else that fires events, you will get a massive slowdown as it begins to trigger render calls.
This also caused problems because I was doing my own render calls, so the two buffers would fight each other, which created a 'ghost' effect on the screen. It took me a while to realise this was the cause.
I monkey patched the event loop to not do this.
https://github.com/adamlwgriffiths/PyGLy/blob/master/pygly/monkey_patch.py
Be aware that this patched event loop will no longer render on its own; you must manually flip the buffers or trigger an 'on_draw' event.
It might be the case that, although you've hooked in at 60 fps, the internal render loop is ticking at the maximum possible rate.
I dislike code that takes away control, hence my patch lets me decide when render events occur.
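For reference, the conventional pyglet structure keeps all drawing in the window's on_draw handler and updates in a scheduled function, so the event loop's automatic on_draw dispatch doesn't fight separate render calls. A minimal sketch, using the image name from the question and pyglet's own sprite class:

import pyglet

window = pyglet.window.Window(800, 600)
image = pyglet.image.load('ship.png')  # image file from the question
sprite = pyglet.sprite.Sprite(image, x=100, y=100)

@window.event
def on_draw():
    # pyglet dispatches on_draw after each pass through the event loop,
    # so this is the only place that should draw.
    window.clear()
    sprite.draw()

def update(dt):
    pass  # move/animate sprites here; no drawing

pyglet.clock.schedule_interval(update, 1 / 60.0)
pyglet.app.run()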
Hmm.. You might want to know the fps at which the game runs, if it helps:
cldis = pyglet.clock.ClockDisplay()
Then add this to your on_draw function:
cldis.draw()
It draws the current FPS in the bottom-left corner of the screen in a semi-transparent color.
I know that in Pygame there is a built-in class called "Clock". You can put a limit on how many times the game loops per second using the tick method. In my example I have put a limit of 30 FPS. This prevents your CPU being in constant demand.

clock = pygame.time.Clock()
while 1:
    clock.tick(30)  # Puts a limit of 30 frames per second on the loop
In pyglet there appears to be something similar:
pyglet.clock.schedule_interval(on_draw, 1.0/60.0)
clock.set_fps_limit(60)
Hope that helps!
edit: documentation on fps limit: http://pyglet.org/doc/api/pyglet.clock-module.html#set_fps_limit