I apologize in advance for my noob-ness; I'm just getting into programming.
Can you set me down the right path for a GUI framework? Looking at this list of GUI frameworks is pretty daunting, considering my general lack of expertise.
Summary:
I'm trying to write a GUI in Python that actively updates a second monitor with images that are mathematically generated using numpy. The GUI will have parameters that can be adjusted in real time, changing the image (an interference pattern of light) accordingly.
Important criteria:
parameters adjusted on screen change the interference pattern in real time
compatibility with numpy, matplotlib (or easy graphing)
Secondary criteria:
a framework that is useful/flexible for a beginner who's interested in industry programming
dual monitor support (if push comes to shove I can just update the image in a window and move the window to the second monitor)
as a side project I'd like to write a stock trading interface (with graphs, commands, etc... maybe with PyAlgoTrade?), so, once again, flexibility would be nice
Right now I'm leaning towards wxPython, since I've heard that it works well with matplotlib (for stock trading GUIs). Before I head down this path (and likely overwhelm myself with new documentation), I'd like to make sure I'm not heading down an unnecessarily winding road.
Any useful links are much appreciated! Your 'keyword relevance' knowledge is likely much better than mine.
Thank you!
Tkinter GUI for Python
For a quick idea of how matplotlib can be embedded directly into a Tkinter-based GUI, complete with a fully working Model-Visual-Controller setup integrated with a Tkinter real-time control loop, have a look at this recipe: https://stackoverflow.com/a/25769600/3666197
Both the <<Important>> and <<Secondary>> criteria are met.
Fast updates
Numpy is a lingua franca here, so listing it as a requirement doesn't really narrow anything down.
Good real-time UI / event-handling design is cardinal. A poor MVC/control-loop design can kill an otherwise smart system (as seen in recent updates of some professionally distributed trading systems, where UI responsiveness fell far below an acceptable interaction latency and the UI sometimes even froze for tens of seconds).
There are techniques for constructing matplotlib objects (with pre-baked data structures) so that real-time updates propagate faster to the GUI visual layer.
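To make that concrete, here is a minimal sketch of the general pattern (not the full recipe from the link above; the class name, slider range and update interval are just illustrative): build the matplotlib figure and AxesImage once, embed it with FigureCanvasTkAgg, and run a Tkinter after()-driven control loop that only swaps the image data and requests a redraw.

import numpy as np
import tkinter as tk
from matplotlib.figure import Figure
from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg

class InterferenceViewer:
    """Tkinter + matplotlib viewer that re-renders a numpy image in a timer loop."""

    def __init__(self, root, size=256, update_interval_ms=50):
        self.root = root
        self.size = size
        self.update_interval_ms = update_interval_ms

        # On-screen parameter that feeds the image generator in real time.
        self.freq = tk.DoubleVar(value=10.0)
        tk.Scale(root, from_=1.0, to=50.0, resolution=0.5, orient="horizontal",
                 label="spatial frequency", variable=self.freq).pack(fill="x")

        # Pre-bake the figure and the AxesImage once; later ticks only swap its data.
        fig = Figure(figsize=(4, 4))
        ax = fig.add_subplot(111)
        self.image = ax.imshow(self._render(), cmap="gray", vmin=0.0, vmax=1.0)
        self.canvas = FigureCanvasTkAgg(fig, master=root)
        self.canvas.get_tk_widget().pack(fill="both", expand=True)

        self._tick()  # start the real-time control loop

    def _render(self):
        # Toy two-frequency cos^2 pattern; replace with the real interference model.
        y, x = np.mgrid[0:self.size, 0:self.size] / self.size
        k = self.freq.get()
        return np.cos(np.pi * k * x) ** 2 * np.cos(np.pi * k * y * 0.5) ** 2

    def _tick(self):
        # Update only the image data, not the whole figure -> much cheaper redraws.
        self.image.set_data(self._render())
        self.canvas.draw_idle()
        self.root.after(self.update_interval_ms, self._tick)

if __name__ == "__main__":
    root = tk.Tk()
    root.title("Interference pattern")
    InterferenceViewer(root)
    root.mainloop()

For the dual-monitor point, one simple option is to place this window on the second screen with root.geometry(), e.g. root.geometry("+1920+0") if the primary monitor is 1920 px wide.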
Related
TL;DR: Is there a Python library that lets me grab an application's window frame as an image and write a modified frame back to that application?
So the whole story is that I want to write an application in Python that does something similar to Lossless Scaling and Magpie. I want to grab an application window (a video game window, for example), get the current frame as an image, use some machine learning/deep learning algorithm (like FSR or DLSS) to upscale that image, and then replace the application's current frame with the upscaled one.
So far, I have been playing around with some upscaling algorithms like the one from Real-ESRGAN, but now my main problem is how to upscale the video game images in real time. The only thing I found that does something related to what I need is PyAutoGUI. But that package only lets you take screenshots of an application; it can't rewrite the application's graphics.
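For concreteness, this is roughly the kind of capture loop I can already build with PyAutoGUI (the upscale() function below is just a placeholder for a Real-ESRGAN-style model); it can read frames, but there is no call that writes the result back into the game window, which is exactly the missing piece.

import time
import numpy as np
import pyautogui  # can read the screen, but cannot write into another app's window

def capture_region(left, top, width, height):
    # Grab a screen region and return it as an H x W x 3 uint8 numpy array.
    shot = pyautogui.screenshot(region=(left, top, width, height))  # PIL Image
    return np.asarray(shot)

def upscale(frame):
    # Placeholder: this is where a Real-ESRGAN (or similar) model call would go.
    return frame

while True:
    frame = capture_region(0, 0, 1280, 720)
    upscaled = upscale(frame)
    # ...and here I would need to push `upscaled` back into the game window,
    # but PyAutoGUI has nothing for that.
    time.sleep(1 / 30)  # roughly 30 captures per second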
I hope I have clarified my problem; feel free to comment if you still have any questions.
Thank you for reading this post, and have a good day.
Doing this with Python is going to be very difficult. A lot of the performance involved in this sort of thing is in avoiding as many memory copies as possible, and Python's idiom for string and bytes processing unfortunately makes quite a few additional copies in the course of any idiomatic program. I say this as a die-hard Python fan who is constantly trying to cram Python in everywhere it doesn't belong: you'd be better off doing this in Rust.
Update: After receiving some feedback from folks with more direct experience in this sort of thing, I may have overstated the difficulty here. Many ML tools in Python provide zero-copy access; you can easily access and manipulate memory-mapped data from numpy, and there is even a CUDA protocol for doing the same with data in GPU memory. So, while it's not exactly easy, as long as your operations are implemented as numpy operations and not as pure-Python pixel-by-pixel logic, it shouldn't be much harder than other Python machine-learning applications that need native APIs to reach their source data.
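To illustrate the point about numpy operations versus per-pixel logic, here is a rough sketch (the raw BGRA buffer is just synthesized here; in practice it would come from whatever capture API you end up binding):

import numpy as np

# Stand-in for the bytes-like BGRA buffer a capture API would hand back.
height, width = 720, 1280
raw = bytearray(height * width * 4)

# Zero-copy: frombuffer wraps the existing memory instead of copying it.
frame = np.frombuffer(raw, dtype=np.uint8).reshape(height, width, 4)

# Vectorized work on the whole frame in one C-level pass...
luma = 0.114 * frame[:, :, 0] + 0.587 * frame[:, :, 1] + 0.299 * frame[:, :, 2]

# ...versus the pure-Python per-pixel version, which is orders of magnitude slower:
# for y in range(height):
#     for x in range(width):
#         luma[y][x] = 0.114 * frame[y, x, 0] + 0.587 * frame[y, x, 1] + 0.299 * frame[y, x, 2]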
However, there's no way to access framebuffer data directly from Python, so step 1 is going to be writing your own bindings over the relevant DirectX APIs. Since Magpie is open source, you can see which APIs it uses in its various C++ "Frame Source" backends; this one, for example, looks relevant: https://github.com/Blinue/Magpie/blob/42cfcba1222b07e4cec282eaff639aead229f123/Runtime/GraphicsCaptureFrameSource.cpp#L87
You can then look those APIs up on MSDN; that one, for example, is here: https://learn.microsoft.com/en-us/uwp/api/windows.graphics.capture.direct3d11captureframepool.createfreethreaded?view=winrt-22621
CFFI is a good choice for writing native wrappers: https://cffi.readthedocs.io/en/latest/
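As a rough illustration of the shape such a wrapper takes (the DLL name, functions and struct below are entirely made up; the real Windows.Graphics.Capture / D3D11 surface is far larger):

from cffi import FFI

ffi = FFI()

# Declarations for a *hypothetical* native capture library, purely to show the mechanics.
ffi.cdef("""
    typedef struct { int width; int height; unsigned char *pixels; } Frame;
    int  capture_open(const char *window_title);
    int  capture_frame(int handle, Frame *out);
    void capture_close(int handle);
""")

lib = ffi.dlopen("framecapture.dll")  # placeholder name for your own native layer

handle = lib.capture_open(b"My Game Window")
frame = ffi.new("Frame *")
if lib.capture_frame(handle, frame) == 0:
    # Wrap the native pixel buffer without copying, ready for numpy / the ML model.
    buf = ffi.buffer(frame.pixels, frame.width * frame.height * 4)
lib.capture_close(handle)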
Gluing these together appropriately is left as an exercise for the reader :).
I'll try to be as concise as possible:
Quite some time ago, I started using Arduinos and other microcontrollers as custom MIDI devices for my PC.
But now, since I'm into filmmaking, I wanted to create a grading panel for use in Blackmagic's DaVinci Resolve. What I want, exactly, is to read signals from knobs and sliders with a microcontroller and send the data to my PC so that Resolve can use it to apply different adjustments.
Unfortunately, since there are many different adjustments, there are no keyboard shortcuts for them, so I can't just emulate them with the microcontroller.
I have tried using Cheat Engine to see if those adjustment values were stored somewhere in memory, but I can't seem to find them.
Resolve does have a Python and Lua API, but it's extremely limited and doesn't offer ANY control over color adjustments anyway.
I know that a few people who had the same idea used software that maps MIDI input to mouse movement, but I hate that solution: not only must the UI be visible at all times, but you also can't move UI elements around without having to 're-calibrate' the mapping software.
To clarify: I'm not asking for code, I'm asking for 'directions'. How can I control third-party software with custom-made hardware? (I'm on Windows 10, btw.)
I believe the title itself is sufficient.
I'm currently learning GUI programming using Tkinter in Python.
I actually wanted to post this as a comment (but it seems I don't have enough reputation), so here is a simple breakdown.
tkinter is a really nice library (it's beginner-friendly and fairly customizable). I would highly recommend sticking with tk, since you are already learning it and it will comfortably meet your requirement of low resource consumption.
I would suggest giving Kivy a try only if you want to target mobile devices - it allows native development there. It does use a fair amount of resources, though, which in some cases may strain your system.
NOTE: Having said that, if you need apps with a more powerful GUI experience, PyQt is definitely the better choice, but from what I know it may not satisfy your priority of low resource consumption.
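To give a feel for how little code a working tkinter GUI needs (a toy sketch, standard library only): a slider whose value is mirrored live in a label.

import tkinter as tk

root = tk.Tk()
root.title("tkinter in a dozen lines")

value = tk.IntVar(value=50)

# The label automatically mirrors the slider because both share the same variable.
tk.Label(root, textvariable=value, font=("Helvetica", 24)).pack(pady=10)
tk.Scale(root, from_=0, to=100, orient="horizontal", variable=value).pack(fill="x", padx=10)

root.mainloop()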
I have been thoroughly searching this forum, the internet, and pretty much everything else. I want to be able to create a new graphics window in Python without using anything else, for a project where I want something specialized and optimized for exactly what I need, without any extra mumbo-jumbo, and where I fully understand how it works. In short, I want to access a new window pixel by pixel without tkinter/turtle, installations, or anything of the sort. Just basic, simple pixel access in a new window. If this isn't possible, just let me know. I imagine the program could look something like this:
import NOTHING_AT_ALL_GRAPHICS_WISE
window.pixel(1,2,True)
I program in Python 3.6.3
What you seek to accomplish isn't as simple as you might believe. Applications draw graphics on the screen largely through calls to the graphics card's API. Those APIs can be complicated, as they should be, because all the graphics you see on your screen - from your browser GUI, to video playback, to StarCraft II - are produced by combinations of these calls (and graphics would NOT look good without the GPU).
However, not all manipulation of graphics has to be so complicated, and many languages/frameworks write wrappers around some of the graphics libraries that expose a simpler subset of the total functionality. Tkinter is a good example.
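For instance (a quick sketch - and Tkinter ships with CPython, so there is nothing extra to install), a PhotoImage lets you set individual pixels in a window without touching the GPU APIs yourself:

import tkinter as tk

WIDTH, HEIGHT = 320, 200

root = tk.Tk()
root.title("Per-pixel drawing via Tkinter")

# PhotoImage is a plain in-memory bitmap; put() sets individual pixels.
image = tk.PhotoImage(width=WIDTH, height=HEIGHT)
tk.Label(root, image=image).pack()

for x in range(WIDTH):
    for y in range(HEIGHT):
        # Simple gradient so something visible appears. Each put() is a separate
        # Tcl call, so this is slow - which is exactly why richer graphics
        # libraries exist.
        image.put(f"#{x % 256:02x}{y % 256:02x}80", (x, y))

root.mainloop()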
If you don't want to import ANYTHING, you will have to make the calls to the API yourself, and you're not in for a good time. Nobody does this nowadays for personal projects because it really is quite complicated. Also, Python is a very high-level language, and although it might expose the OS system calls you need, you're probably better off doing it in C. Before you proceed, I suggest you read up on the fundamentals of how computer graphics work - a diagram of how OpenGL, a very popular graphics library, fits into the stack is a good starting point.
There is no way to do what you want. Python has no way of changing a single pixel on the screen. This is the whole reason why such "mumbo jumbo" libraries exist: it's a complex problem that requires specialized libraries that know how each operating system works.
I want to learn about graphical libraries by myself and toy with them a bit. I built a small program that defines lines and shapes as lists of pixels, but I cannot find a way to access the screen directly so that I can display the points on the screen without any intermediate.
What I mean is that I do not want to use any prebuilt graphical library such as gnome, cocoa, etc. I usually use Python to code and my program uses Python, but I can also code and integrate C modules with it.
I am aware that accessing the screen hardware directly takes away the multiplatform side of Python, but I disregard it for the sake of learning. So: is there any way to access the hardware directly in Python, and if so, what is it?
No, Python isn't the best choice for this type of raw hardware access to the video card. I would recommend writing C in DOS. Well, actually, I don't recommend it. It's a horrible thing to do. But, it's how I learned to do it, and it's probably about as friendly as you are going to get for accessing hardware directly without any intermediate.
I say DOS rather than Linux or NT because neither of those will give you direct access to the video hardware without writing a driver. That means having to learn the whole driver API and invoking a lot of "magic" that won't be very obvious, because writing a video driver for Windows NT is fairly complicated.
I say C rather than Python because it gives you real pointers, and the ability to do stupid things with them. Under DOS, you can write to arbitrary physical memory addresses in C, which seems to be what you want. Trying to get Python working at all under an OS terrible enough to allow you direct hardware access would be a frustration in itself, even if you only wanted to do simple stuff that Python is good at.
And, as others have said, don't expect to use anything that you learn with this project in the real world. It may be interesting, but if you tried to write a real application in this way, you'd promptly be shot by whoever would have to maintain your code.
This seems like a great self-learning path. I'd add my two cents' worth and suggest you consider looking at the GObject, Cairo and Pygame modules at some point.
The Python GObject module may be at a higher level than your current interest, but it enables pixel-level drawing with Cairo (see the home page), as well as providing a general base for portable GUI apps in Python.
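For example, here is a quick pycairo sketch (pycairo is the drawing layer PyGObject applications use; the gradient is only there to produce something visible) that writes pixels straight into a surface's buffer through numpy and saves the result as a PNG:

import numpy as np
import cairo

WIDTH, HEIGHT = 256, 256

surface = cairo.ImageSurface(cairo.FORMAT_ARGB32, WIDTH, HEIGHT)
stride = surface.get_stride()  # bytes per row, possibly padded

# Wrap the surface's raw pixel buffer with numpy (no copy) and write pixels directly.
pixels = np.ndarray(shape=(HEIGHT, stride // 4, 4), dtype=np.uint8,
                    buffer=surface.get_data())
pixels[:, :WIDTH, 2] = np.arange(WIDTH, dtype=np.uint8)  # red channel ramp (BGRA byte order)
pixels[:, :WIDTH, 3] = 255                                # fully opaque

surface.mark_dirty()
surface.write_to_png("gradient.png")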
Pygame also has pixel-level methods, as well as access methods to the graphics drivers (at a higher level) - here is a quick code example.
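(A minimal sketch using Surface.set_at() for per-pixel writes; the gradient values are only there to make something visible.)

import pygame

WIDTH, HEIGHT = 320, 200

pygame.init()
screen = pygame.display.set_mode((WIDTH, HEIGHT))
pygame.display.set_caption("Per-pixel drawing with pygame")

# Write individual pixels directly onto the window surface.
for x in range(WIDTH):
    for y in range(HEIGHT):
        screen.set_at((x, y), (x % 256, y % 256, 128))
pygame.display.flip()

# Keep the window open until it is closed.
running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
pygame.quit()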
This is rather an old thread now, but I stumbled upon it while musing the same question.
I used to program in assembly language. In my day, drawing on screen was simply(?) a matter of poking a value into a memory location. The value turned a pixel on or off and defined its colour.
The term 'poke' comes from BASIC, by the way, not assembler. In assembler, you had to write a value into a data register, then tell the processor where to put the data using another command and an address register, usually specified in hexadecimal! And each different processor had its own assembly language. But heck, was the code fast!
As hardware progressed, I found that graphics hardware programming became more and more complex. There's much more to it now than simply setting a pixel. The graphics subsystem has its own processor - or processors - and that's what you've got to learn to talk to. The processor doesn't just plonk stuff in memory locations. (I believe that what was for a while the fastest supercomputer in the world ran on graphics chips!) 'Plonk' is not a BASIC command, by the way.
Sorry; I digress. In answer to the original poster's query, I believe the goal of understanding the graphics-drawing process would best be achieved by experimenting with a Raspberry Pi. It's Python-compatible and hence perfect for the job. Its hardware is well documented, and it's cheap and easy to use.
Hope this helps someone, Cheers, M