Python-based shell embedded in C++ with select()

I have an event-based application where I select() on various things and, depending on what gets selected, the application handles it. On top of this, I want to add an interactive menu implemented in Python. A typical update() (called from within the while(!done) loop) looks like:
while (!done) {
    select() with timeout
    give control to the Python menu with timeout
}
Also, I don't want to run the Python menu on a separate thread, since that brings with it the risk of race conditions, and due to the nature of the code/application/customers I cannot put locks in the part of the code the menu interacts with. So it has to be done in a single-threaded environment.
I have been reading online but couldn't find a way to do this in a single-threaded environment with a timeout. Can anyone point me in the right direction or give any pointers on how to do this?
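In Python terms, a minimal single-threaded sketch of this interleaving could look like the following; the socketpair stands in for whatever descriptors the application really select()s on, the 0.1 s timeout is illustrative, and an embedded C++/CPython version would follow the same shape:

import code
import select
import socket
import sys

app_rx, app_tx = socket.socketpair()     # stand-in for the real event source
interp = code.InteractiveConsole()       # the interactive "menu"
done = False

while not done:
    # One select() watches both the application fd and the menu's stdin,
    # so no second thread is needed.
    readable, _, _ = select.select([app_rx, sys.stdin], [], [], 0.1)
    if app_rx in readable:
        data = app_rx.recv(4096)         # existing event handling goes here
    if sys.stdin in readable:
        line = sys.stdin.readline()
        if not line:
            done = True                  # EOF ends the loop
        else:
            interp.push(line.rstrip("\n"))   # feed one line to the embedded shell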

Related

Uncomplicated window interface for displaying status in python

I am trying to create a window in Python where I will be displaying the status of a large system: a bunch of numbers or some LEDs. The idea is that the system sends messages to the display thread and the thread updates different parts of the window, like displaying a number or changing the color of a field. More importantly, the user interacts with the system via the command line of the Python interpreter, e.g. executing commands or updating variables.
One may simply suggest that I use one of the GUI packages, like PyQt or wxPython. But these modules are designed to build GUIs, which means they have plenty of machinery to handle events, mouse clicks and so on, which I don't need. Also, these modules run an event loop, which is a waste of resources, and in many cases they block the Python shell.
I tried to use PyQt without running the main loop. But when I do this, Windows thinks my application is not responding and I get a bunch of problems. For example, the close button on the window does not work, and any attempt to close it crashes my Python session.
Any ideas on how I can implement my application?
Maybe you should consider using Apache's Superset dashboard.
Check this out:
https://superset.incubator.apache.org/installation.html
It makes building dashboards incredibly easy and useful.
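For what it's worth, the "not responding" behaviour with PyQt typically comes from never servicing Qt's event queue; a minimal sketch of pumping events on each status update without running a full exec_() loop (assuming PyQt5; update_status() is an illustrative helper):

import time
from PyQt5 import QtWidgets

app = QtWidgets.QApplication([])
label = QtWidgets.QLabel("status: starting")
label.show()

def update_status(text):
    # Update the window, then drain Qt's event queue so the window stays
    # responsive (repaints, close button) without app.exec_() running.
    label.setText(text)
    app.processEvents()

# Example driver: the real system would call update_status() from its own code.
for step in range(5):
    update_status("status: step %d" % step)
    time.sleep(0.5)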

Running 2 Python scripts without them affecting each other

I have 2 Python scripts that I'm trying to run side by side. However, each of them has to open, close, and reopen independently from the other. Also, one of the scripts runs inside a shell script.
Flaskserver.py & ./pyinit.sh
Flaskserver.py is just a Flask server that needs to be restarted every now and again to load a new page (I can't define all pages, since the HTML is interchangeable). pyinit runs as xinit ./pyinit.sh (it's selenium-webdriver Python code).
So when the Flask server changes and restarts, ./pyinit.sh needs to wait about 20 seconds and then restart as well.
Either one of these can produce errors, so I need to be able to check whether Flaskserver.py has an error before restarting ./pyinit.sh; if ./pyinit.sh errors, I need to set the Flask server to a default value and then relaunch both of them.
I know a little about subprocess, but I'm unsure how it can deal with errors and stop/start code.
Rather than using subprocess, I would recommend creating a separate thread for each of your processes using multithreading.
Multithreading will not solve the problem if global variables are colliding; running them as separate scripts might solve that, but you might still collide on something else, like a log file.
Now, if you keep both processes running from a single process that takes care of keeping them separated and assigning different global variables where necessary, you should be able to keep better control. Using things like join and Lock from the threading library will also ensure that they don't collide, and it should be easy to put one process to sleep while the other is running (as with the 20-second wait).
You can keep a thread list as a global variable, as well as your lock. I have done this successfully with CherryPy's server, for example. For any more details about multithreading, look into the question I linked above; it's very well explained.
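If you do stay with subprocess, a minimal supervisor sketch could look like the following; the command lines are taken from the question, and the error handling and the "reset Flask to a default value" step are deliberately simplified stubs:

import subprocess
import time

def start(cmd):
    return subprocess.Popen(cmd)

flask = start(["python", "Flaskserver.py"])
pyinit = start(["xinit", "./pyinit.sh"])

while True:
    time.sleep(1)

    if flask.poll() is not None:                 # Flask server exited or crashed
        flask = start(["python", "Flaskserver.py"])
        time.sleep(20)                           # let the server settle first
        pyinit.terminate()
        pyinit.wait()
        pyinit = start(["xinit", "./pyinit.sh"])

    elif pyinit.poll() is not None and pyinit.returncode != 0:
        # pyinit failed: reset the Flask side to its default here, then relaunch both
        flask.terminate()
        flask.wait()
        flask = start(["python", "Flaskserver.py"])
        time.sleep(20)
        pyinit = start(["xinit", "./pyinit.sh"])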

How to implement breakpoint functionality in an embedding of Python

I am using the Python C API to embed Python in our application. Currently, when users execute their scripts, we call PyRun_SimpleString(), which runs fine.
I would like to extend this functionality to allow users to run scripts in "Debug" mode, where, like in a typical IDE, they would be allowed to set breakpoints and "watches", and generally step through their script.
I've looked at the API specs and googled for similar functionality, but did not find anything that would help much.
I did play with PyEval_SetTrace(), which returns all the information I need; however, we execute the Python code on the same thread as our main application, and I have not found a way to "pause" Python execution when the trace callback hits a line number that contains a user-set breakpoint, and to resume execution at a later point.
I also see that there are various "Frame" functions, like PyEval_EvalFrame(), but not a whole lot of places that demonstrate their proper usage. Perhaps these are the functions that I should be using?
Any help would be much appreciated!
PyEval_SetTrace() is exactly the API that you need to use. Not sure why you need some additional way to "pause" the execution; when your callback has been called, the execution is already paused and will not resume until you return from the callback.
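The same mechanism is exposed at the Python level as sys.settrace, which can be useful for prototyping the behaviour before wiring it into the C API. A minimal sketch, where the user_script function and the input()-based "pause" are purely illustrative:

import sys

def tracer(frame, event, arg):
    # Pause whenever a line of the (hypothetical) user function is about to run.
    if event == "line" and frame.f_code.co_name == "user_script":
        print("paused at line %d, locals = %r" % (frame.f_lineno, frame.f_locals))
        input("press Enter to step... ")   # execution resumes only when we return
    return tracer

def user_script():
    total = 0
    for i in range(3):
        total += i
    return total

sys.settrace(tracer)
user_script()
sys.settrace(None)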

Possible to run a delayed code execution?

Is it possible to run a small set of code automatically after a script has run?
I am asking because, for some reason, if I add this set of code into the main script, though it works, it displays a list of tab errors (the tab is already there, but it states that it cannot find it, or something of that sort).
I realized that after running my script, Maya seems to 'load' its own round of refreshing, along with some plugins made by my company. As such, if I run the small set of code after my main script execution and after the Maya/plugin 'refresher', it works with no problem. I would like to make the process as automated as possible, all within a script if that is possible...
So is it possible to do so? Some sort of delayed execution method?
FYI, the main script's execution time depends on the number of elements in the scene. The more there are, the longer it takes...
Maya has a command, maya.cmds.evalDeferred, that is meant for this purpose. It waits until no more Maya processing is pending and then evaluates the deferred code.
You can also use maya.cmds.scriptJob for the same purpose.
Note: while eval is usually considered dangerous and insecure, in a Maya context it's really normal, mainly because everything in Maya is inherently insecure: nearly all GUI items are just eval'd commands that the user may modify. So the second you let anybody use your Maya shell, your security is breached.
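A minimal sketch of both suggestions (this only runs inside a Maya session, and run_after_load is an illustrative name):

import maya.cmds as cmds

def run_after_load():
    print("runs once Maya has finished its own refresh and plugin loading")

# Queue the callback; it executes once Maya is idle, after the main script returns.
cmds.evalDeferred(run_after_load)

# Alternative: a scriptJob that fires on the next idle event and then removes itself.
cmds.scriptJob(idleEvent=run_after_load, runOnce=True)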

Python daemonize

I would like to daemonize a Python process, and want to ask whether it is good practice to have a daemon running as a parent process that calls another class which opens 10-30 threads.
I'm planning on writing a monitoring script for a group of servers and would like to check every server every 5 minutes, so that each server is checked at exactly 5-minute intervals.
I would like to have it this way (so to speak, ps auxf style output):
|monitor-daemon.py
\-check-server.py
\-check-server.py
....
Thank you!
Maybe you should use http://pypi.python.org/pypi/python-daemon
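A minimal sketch with that package (the check_servers() stub and the 5-minute interval are placeholders):

import time
import daemon

def check_servers():
    pass  # spawn or schedule the per-server checks here

def main():
    while True:
        started = time.monotonic()
        check_servers()
        # sleep for whatever remains of the 5-minute window
        time.sleep(max(0, 300 - (time.monotonic() - started)))

# DaemonContext forks into the background, detaches from the terminal,
# and closes the standard streams before main() runs.
with daemon.DaemonContext():
    main()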
You can use supervisord for this. You can configure tasks to respond to events. The events can be created manually, generated automatically by monitoring processes, or based on regular intervals.
It is fully customizable and written in Python.
Example:
[program:your_daemon_name]
command=your_daemon_process
; Add extra options here according to the manual...

[eventlistener:your_monitor_name]
command=your_monitor_process
; Triggered after a program changes from STARTING to RUNNING
events=PROCESS_STATE_RUNNING
; Add extra options here according to the manual...
Or, if you want the event listener to respond to process output, use the PROCESS_COMMUNICATION_STDOUT event, or TICK_60 for a check every minute. The logs can be redirected to files and such, so you can always view the state.
There's really not much to creating your own daemonize function: The source for Advanced Programming in the Unix Environment (2nd edition) is freely available: http://www.apuebook.com/src.tar.gz -- you're looking for the apue.2e/daemons/init.c file.
There is a small helper program that does all the work of creating a proper daemon; it can be used to wrap arbitrary programs, which might save some hassle.
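If you do roll your own, the classic double-fork recipe in that C source translates almost line for line to Python; a bare-bones sketch with error handling trimmed:

import os
import sys

def daemonize():
    if os.fork() > 0:        # first fork: the original parent exits
        sys.exit(0)
    os.setsid()              # become session leader, detach from the controlling tty
    if os.fork() > 0:        # second fork: ensure we can never reacquire a controlling tty
        sys.exit(0)
    os.chdir("/")
    os.umask(0)
    # redirect the standard streams to /dev/null
    devnull = os.open(os.devnull, os.O_RDWR)
    for fd in (0, 1, 2):
        os.dup2(devnull, fd)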
