I am in a situation where I need to tear apart a GUI application (written with wx and twisted, running on MS Windows), take out the core logic, and deploy it as a cron job on a Linux server that has no GUI environment.
I have replaced a number of wx.CallLater and wx.CallAfter calls with threading.Timer, but it does not work: the original code does not play well in a multi-threaded environment, probably because some underlying libraries are not thread safe. threading also probably does not schedule jobs in the same manner as twisted.
Here is the typical workflow of the GUI app:
1. The user toggles a button to start up a real-time data reader (written in C).
2. Once the toggle button turns green, the reader is up and running, and the user switches between different real-time data types.
3. When the new set of data becomes ready, the user starts using other functions in the app.
My questions:
How can I use twisted to recreate the above workflow? What tools in twisted allow me to wait for readiness of real-time data reader as mentioned in step 2?
Will everything just 'happen' in the main thread?
How can I use twisted to recreate the above workflow? What tools in twisted allow me to wait for readiness of real-time data reader as mentioned in step 2?
reactor.callLater - http://twistedmatrix.com/documents/current/core/howto/time.html
Will everything just 'happen' in the main thread?
Yes - http://twistedmatrix.com/documents/current/core/howto/reactor-basics.html
Related
I am making a simple game for the terminal (because I don't want to install a GUI on my Arch machine). I want to detect keypresses and change a variable accordingly so that when the main process loops again it can see the changed variable. I've searched Google for an hour trying to figure this out, and everything either stopped the program waiting for a keypress or needed an X server display (pynput). How would I detect a keypress in the background? I really don't want to install big ol' libraries like pygame for this...
What you are looking for is called an event loop. This loop runs your program continuously while allowing callbacks to direct the flow inside your program. In Python 3 there is a module in the standard library aimed specifically at this, called asyncio.
The event loop is the central execution device provided by asyncio. It provides multiple facilities, including:
Registering, executing and cancelling delayed calls (timeouts).
Creating client and server transports for various kinds of communication.
Launching subprocesses and the associated transports for communication with an external program.
Delegating costly function calls to a pool of threads.
https://docs.python.org/3/library/asyncio-eventloop.html
Writing Python programs with blocking calls and threads is possible but usually not what you need, and it adds more complexity than you should have to care about. In most cases async flow via callbacks is less complex and keeps you away from deadlocks, race conditions, and other problems connected to threading.
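As a concrete sketch under the asker's constraints (no X server, no big libraries): asyncio's `loop.add_reader` fires a callback whenever a file descriptor becomes readable, so keypresses are handled without blocking the main loop. The function and variable names here are mine, not from any library.

```python
import asyncio
import os

def watch_keys(fd):
    """Run an event loop that reacts to single keypresses arriving on fd.

    For a real terminal game, pass sys.stdin.fileno() and first put the
    tty into cbreak mode (tty.setcbreak) so reads return one key at a
    time; here fd can be any readable file descriptor."""
    state = {"last_key": None}
    loop = asyncio.new_event_loop()

    def on_readable():
        key = os.read(fd, 1).decode(errors="replace")
        state["last_key"] = key   # the rest of the game can inspect this
        if key == "q":            # hypothetical quit key stops the loop
            loop.stop()

    loop.add_reader(fd, on_readable)
    loop.run_forever()
    loop.remove_reader(fd)
    loop.close()
    return state
```

In a real game you would also schedule the game tick on the same loop with `loop.call_later`, so input handling and game updates interleave in one thread.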
I am working on a GUI-based application developed using Python and Go. I am using Python (+ Kivy) to implement the UI and Go to implement the middleware/core, on Windows.
My problem statement is:
1) I want to run the exe of the core on launching the application, and it should remain in the background until my application is closed.
2) When an event is triggered from the application, a command is sent to the core, which in turn executes the command on the remote device and returns the result of the command execution.
I want to know how I can control the lifetime of the exe, and how I can establish communication between the UI and the core.
Any ideas!!
There are many ways you can tackle this, but what I would recommend is having one of the parts (GUI/core) be the main application that does all of the initialization and starts the other part. I would recommend using the core for this.
Here's a sample architecture you can use, though the architecture you choose is highly dependent on the application and your goals.
The core runs first, performs initialization (including starting the GUI), sets up the communication with the GUI (using pipes, sockets, etc.), then waits for commands from the GUI. If the GUI signals to close, the core can perform whatever cleanup is necessary and then exit. In this scenario the lifetime of the exe is controlled by the GUI: the GUI sends a signal to the core when the user hits the exit button, to let the core know it should exit.
If the core starts the GUI, it can attach pipes to the GUI's STDIN/STDOUT, listening for commands on the GUI's STDOUT and sending results to its STDIN. You can also take the server approach: have the core listen on a socket, and have the GUI send requests to it and wait for a response. With the server approach you can have some concurrency, unlike the serial pipes, but it might be slower than the pipes (the difference might be negligible, but it's hard to say without knowing exactly what you're doing).
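A minimal sketch of the pipe approach, from the side that spawns the other half: the parent launches the child with pipes attached and exchanges newline-delimited commands. In the asker's setup the command list would name the Go core exe; the helper names here are mine.

```python
import subprocess

def start_child(cmd):
    """Spawn the other half (e.g. the core exe) with pipes attached."""
    return subprocess.Popen(
        cmd,
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        text=True,
        bufsize=1,  # line-buffered, so each command/reply is one line
    )

def send_command(proc, command):
    """Write one newline-delimited command and read one reply line."""
    proc.stdin.write(command + "\n")
    proc.stdin.flush()
    return proc.stdout.readline().rstrip("\n")
```

This serializes requests (one reply per command), which matches the pipe-based design described above; the socket approach would replace these helpers with a client connecting to a port the core listens on.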
I am creating an application in Python that uses SQLite databases and wxPython. I want to implement it using MVC in some way. I am just curious about threading. Should I be doing this in any scenario that uses a GUI? Would this kind of application require it?
One thing I learned from javascript/node.js is that there is a difference between asynchronous programming and parallel programming. In asynchronous programming you may have things running out of sequence, but any given task runs to completion before something else starts running. That way you don't have to worry about synchronizing shared resources with semaphores, locks, and the like, which would be an issue if you had multiple threads running in parallel, which either run simultaneously or might get preempted, hence the need for locks.
Most likely you are writing some sort of asynchronous code in a GUI environment, and there isn't any need for you to also write parallel multi-threaded code.
You use multithreading to perform parallel or background tasks that you don't want the main thread to wait on: tasks that would otherwise hang the GUI while they run, interfere with user interactivity, or block some other priority task.
Most applications today don't use multithreading, or use very little of it. Even when they do use multiple threads, it's usually because of libraries the programmer is using, and they aren't even aware that multithreading is happening as they develop the application.
Even major software like AutoCAD uses very little multithreading. It's not that it's poorly made; multithreading just has very specific applications. For instance, it is pointless to allow user interaction while the project the user wants to work on is still loading. Software designed to interact with a single user will hardly need it.
Where multithreading plays a really important role is in servers, where a single application handles requests from thousands of users without them interfering with each other. In that scenario the easiest way to make sure everyone is happy is to create a new thread for each request.
Actually, GUIs are typically single-threaded implementations where a single thread (called the UI thread) keeps polling for events and executing them in the order they occur.
Regarding the main question, consider this scenario: at the click of a button you want to do something time-consuming that takes, say, 5-10 seconds or more. You have two options.
Do that operation in the main UI thread itself. This will freeze the UI for that duration, and the user will not be able to interact with it.
Do that operation in a separate thread that, on completion, just notifies the main UI thread (in case the UI thread needs to make any UI updates based on the result of the operation). This option will not block the UI thread, and the user can continue to use the application.
However, there will be situations where you do not want the user to use the application while something happens. In such cases you can usually still use a separate thread but block the UI with some sort of overlay / progress indicator combination.
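The second option above can be sketched with plain threading and a queue; in wxPython the notification step would instead be a wx.CallAfter call so the update runs on the UI thread. All names here are illustrative.

```python
import queue
import threading

def long_task(n):
    # stand-in for the 5-10 second operation
    return sum(range(n))

def run_in_background(task, arg, results):
    """Run task(arg) on a worker thread, posting the result to a queue
    that the UI thread polls with a timer.  In wxPython you would call
    wx.CallAfter(update_ui, result) here instead of using a queue."""
    def worker():
        results.put(task(arg))

    t = threading.Thread(target=worker, daemon=True)
    t.start()
    return t
```

The key point is that the worker never touches UI objects directly; it only hands the result back to the UI thread.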
almost certainly you already are...
a lot of wx is already driven by an asynchronous event loop ..
that said you should use wx.lib.pubsub for communication within an MVC-style wx application, but it is unlikely that you will need to implement any kind of threading (you get it for free, practically)
a few good places to use python threading (still limited by the gil) are:
serial communication
socket servers
a few places to use multiprocessing (not limited by the gil, since each process gets its own interpreter and can run on a different core):
bitcoin miners
anything that requires massive amounts of data processing that can be parallelized
there are lots more places to use it, however most guis are already fairly asynchronously driven by events (not entirely true, but close enough), and sqlite3 queries should definitely be executed one at a time from the same thread (in fact sqlite breaks horribly if you try to write to it from two different threads)
this is likely all a gross oversimplification
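one common pattern for the sqlite point above is a single writer thread that owns the connection, with everyone else enqueueing work; this is a hedged sketch, names mine:

```python
import sqlite3

def sqlite_writer(db_path, jobs):
    """Single writer thread that owns the sqlite3 connection.

    Other threads enqueue (sql, params) tuples; enqueue None to shut
    the writer down.  Keeping all writes on one thread is the safe
    pattern for sqlite3's default threading mode."""
    conn = sqlite3.connect(db_path)
    while True:
        job = jobs.get()
        if job is None:
            break
        sql, params = job
        conn.execute(sql, params)
        conn.commit()
    conn.close()
```

callers just do `jobs.put(("INSERT INTO t VALUES (?)", (x,)))` from any thread and never touch the connection themselves.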
Before you go any further: I am currently working in a very restricted environment. Installing additional DLLs/EXEs and other admin-like activities are frustratingly difficult. I am fully aware that some of the methodology described in this post is far from best practice...
I would like to start a long-running background process that starts/stops with Apache. I have a CGI-enabled Python script that takes as input all of the parameters necessary to run a complex "job". It is not feasible to run this job in the CGI script itself, because a) CGI is already slow to begin with, and b) multiple simultaneous requests would definitely cause trouble. The CGI script will do nothing more than enter the parameters into a "jobs" database.
Normally, I would set something up like MSMQ in conjunction with a Windows Service. I would have a web service add a job to the queue, and the windows service would be polling the queue at some standard interval - processing jobs in sequence...
How could I accomplish the same in Apache? I can easily enough create a python script to serve as the background job processor. My questions are:
how do I start the process with Apache, leave it running alongside Apache, and stop it with Apache?
how can I monitor the process and make sure it stays alive while Apache is running?
Any tips or insight welcome.
Note. OS is Windows Server 2008
Here's a pretty hacky solution for anyone looking to do something similar.
Set up a Windows scheduled task that does the background processing. Set it to run once a day or at whatever interval you want (the interval is irrelevant, as you'll see in the next steps).
In the Settings tab of the scheduled task, make sure the "Allow task to be run on demand" option is checked. Also, under the "If the task is already running..." text, make sure the "Do not start a new instance" option is selected.
Then, from the CGI script, invoke the scheduled task from the command line (via the subprocess module). With the options set above, if the task is already running, any subsequent run-on-demand requests are ignored.
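That invocation can be sketched as follows. The task name "JobProcessor" is hypothetical, and this only works on Windows, where the schtasks utility exists; `/Run /TN <name>` triggers an existing task on demand.

```python
import subprocess

def schtasks_run_command(task_name):
    """Build the command line that triggers a scheduled task on demand."""
    return ["schtasks", "/Run", "/TN", task_name]

def trigger_job_processor(task_name="JobProcessor"):
    """Call from the CGI script right after inserting the job row.

    With 'Do not start a new instance' set, Windows silently ignores
    the request if the task is already running."""
    subprocess.check_call(schtasks_run_command(task_name))
```

The CGI request therefore returns quickly: it only inserts the job row and pokes the task, while the task itself drains the jobs database.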
I need to run a server-side script like Python "forever" (or for as long as possible without losing state), so it can keep sockets open and asynchronously react to events like data being received, for example if I use Twisted for socket communication.
How would I manage something like this?
Am I confused? Or are there better ways to implement asynchronous socket communication?
After starting the script once via Apache server, how do I stop it running?
If you are using twisted then it has a whole infrastructure for starting and stopping daemons.
http://twistedmatrix.com/projects/core/documentation/howto/application.html
How would I manage something like this?
Twisted works well for this, read the link above
Am I confused? Or are there better ways to implement asynchronous socket communication?
Twisted is very good at asynchronous socket communications. It is hard on the brain until you get the hang of it though!
After starting the script once via Apache server, how do I stop it running?
The twisted tools assume command line access, so you'd have to write a cgi wrapper for starting / stopping them if I understand what you want to do.
You can just write a script that sits in a while loop, waiting for connections to happen and also waiting for a signal telling it to close.
http://docs.python.org/library/signal.html
Then, to stop it, you just run another script that sends that signal to it.
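The signal side of that loop can be sketched like this; the loop body (`handle_one_connection`) is a hypothetical placeholder for your own socket-accept code.

```python
import signal

class ShutdownFlag:
    """Flips to True when SIGTERM arrives, so the main while-loop
    can finish its current iteration and exit cleanly."""

    def __init__(self):
        self.requested = False
        signal.signal(signal.SIGTERM, self._handle)

    def _handle(self, signum, frame):
        self.requested = True

# sketch of the main loop:
# stop = ShutdownFlag()
# while not stop.requested:
#     handle_one_connection(timeout=1.0)  # hypothetical accept with timeout
```

Using a short timeout on the blocking accept means the loop re-checks the flag regularly, so a `kill <pid>` from the stopper script takes effect within about a second.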
You can use a 'double fork' to run your code in a new background process unbound from the old one. See e.g. this recipe, which has more explanatory comments than you could possibly want.
I wouldn't recommend this as the primary way of running background tasks for a web site. If your Python is embedded in an Apache process, for example, you'll be forking more than you want. It is better to invoke the daemon separately (just under a similar low-privilege user).
After starting the script once via Apache server, how do I stop it running?
You have your second fork write the process ID (pid) of the daemon process to a file, then read the pid from that file and send the daemon a terminate signal (os.kill(pid, signal.SIGTERM)).
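The pidfile side of that is short; the path here is a hypothetical example, and `stop_daemon` assumes the daemon installed a SIGTERM handler for clean shutdown.

```python
import os
import signal

def write_pidfile(path):
    """The daemon calls this after the second fork, recording its pid."""
    with open(path, "w") as f:
        f.write(str(os.getpid()))

def stop_daemon(path):
    """Read the pid back and ask the daemon to terminate."""
    with open(path) as f:
        pid = int(f.read())
    os.kill(pid, signal.SIGTERM)
```

A typical layout would be `write_pidfile("/var/run/mydaemon.pid")` at daemon startup and `stop_daemon("/var/run/mydaemon.pid")` from the CGI-facing stopper script (path is illustrative).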
Am I confused?
That's the question! I'm assuming you are trying to have a background process that responds on a different port from the web interface, for some sort of unusual net service. If you're merely talking about responding to normal web requests, you shouldn't be doing this; you should rely on Apache to handle your sockets and service one request at a time.
I think Comet is what you're looking for. Make sure to take a look at Tornado too.
You may want to look at FastCGI; it sounds exactly like what you are looking for, though I'm not sure if it's under current development. It uses a CGI daemon and a special Apache module to communicate with it. Since the daemon is long-running, you don't pay the fork/exec cost, but you do take on the cost of managing your own resources (no automatic cleanup on every request).
One reason this style of FastCGI isn't used much anymore is that there are ways to embed interpreters into the Apache binary and have them run in the server. I'm not familiar with mod_python, but I know mod_perl has configuration to allow long-running processes. Be careful here, since a long-running process in the server can cause resource leaks.
A more general question is: what do you want to do? Why do you need this second process, yet somehow controlled by Apache? Why can't you just build a daemon that talks to Apache; why does it have to be controlled by Apache?