terminate script of another user - python

On a Linux box I've got a Python script that's always started by a predefined user. It may take a while to finish, so I want to allow other users to stop it from the web.
Using kill fails with Operation not permitted.
Can I somehow modify my long-running Python script so that it'll receive a signal from another user? Obviously, that other user is the one that runs the web server.
Maybe there's an entirely different way to approach this problem that I can't think of right now.

If you set up your Python script to run as a daemon (bottom of page under Unix Daemon) on your server (which sounds appropriate), and you give the Apache user permission to execute the init.d script for the service, then you can control the service with PHP code similar to this (from here - the service script name in this case is 'otto2'):
<?php
$otto = "/usr/etc/init.d/otto2 ";
if (isset($_GET["action"])) {
    // Pass the requested action (start/stop) through to the init script.
    $ret = shell_exec($otto . escapeshellarg($_GET["action"]));
    // Check your return value here.
} else {
?>
<a href="?action=start">Start</a>
<a href="?action=stop">Stop</a>
<?php
}
?>
The note on that is 'really basic untested code' :)

Off the top of my head, one solution would be threading the script and waiting for a kill signal in one form or another. Or, rather than threading, you could have a file that the script checks every N iterations of a loop - then you just write a kill signal to that file (which, of course, is writable by the web user).
I'm not terribly familiar with kill, other than killing my own scripts, so there may be a better solution.

If you do not want to execute the kill command with the correct permissions, you can signal the other script through some other channel. It is then the other script's responsibility to terminate; you cannot force it unless you have the permissions to do so.
This channel can be a network connection, a 'kill' file whose existence is checked by the other script, or anything else the script is able to listen to.
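A rough, untested sketch of the network variant (the port number is arbitrary): the long-running script spawns a listener thread, and the web server's process connects to that port to request a shutdown.

import socket
import threading
import time

stop_requested = threading.Event()

def shutdown_listener(port=5555):  # arbitrary local port
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(('127.0.0.1', port))
    srv.listen(1)
    conn, _ = srv.accept()  # any connection at all means "please stop"
    conn.close()
    srv.close()
    stop_requested.set()

threading.Thread(target=shutdown_listener, daemon=True).start()

# The main work loop checks the flag between work units:
while not stop_requested.is_set():
    time.sleep(1)  # stand-in for one unit of real work

The web user can then trigger the stop with a one-liner such as socket.create_connection(('127.0.0.1', 5555)).close().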

You could use sudo to perform the kill command as root, but that is horrible practice.
How about having the long-running script check some condition every x seconds, for example the existence of a file like /tmp/stop-xyz.txt? If that file is found, the script terminates itself immediately.
(Or any other means of inter-process communication - it doesn't matter.)
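A minimal sketch of the stop-file approach, using the /tmp/stop-xyz.txt name from above; the work function is a placeholder:

import os
import sys
import time

STOP_FILE = '/tmp/stop-xyz.txt'  # must be writable by the web user

def do_one_unit_of_work():
    time.sleep(1)  # placeholder for the script's real work

while True:
    do_one_unit_of_work()
    if os.path.exists(STOP_FILE):
        os.remove(STOP_FILE)  # clean up so the next run starts fresh
        sys.exit(0)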

Related

Multi-processing port still listen after termination - Python

I have a task where I need to run some Python file (call it app.py) that brings up a server (using Flask). This is done in the run_tests function. Then, I want to query this server for some test inputs that I have. This is done in the function get_sentences_and_test (I do not include its code here, for simplicity; it waits for the server to come up, using sleep instructions, and then queries it).
I use the Python multiprocessing package for the process, and subprocess for the call.
My program has a very simple structure like:
def run_tests():
    subprocess.call(['python3', 'app.py'])

if __name__ == '__main__':
    api_proc = Process(target=run_tests)
    api_proc.start()
    get_sentences_and_test(api_proc)
    api_proc.terminate()
My problem is that this code works OK and does what it's supposed to do. However, the port that the subprocess call in run_tests opens once the server is up and running is not released when the program is done, and I have to kill the process manually.
I want to know:
How can I kill the process that occupies this port?
What is the best practice for doing this? This should be a day-to-day problem for people working with services and multiprocessing/threading, yet I didn't find a simple solution or many sources on the issue.
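One likely cause - though this is an assumption, since no answer is recorded here - is that api_proc.terminate() kills only the wrapper Process, not the python3 child that subprocess.call spawned inside it, so the Flask server lives on and keeps the port. A rough sketch that skips the wrapper and keeps a handle to the server process directly:

import subprocess

# Start the Flask app as a child process we keep a handle to.
server = subprocess.Popen(['python3', 'app.py'])
try:
    get_sentences_and_test(server)  # the question's query-and-test function
finally:
    server.terminate()  # SIGTERM the actual server process
    server.wait()       # reap it so the port is released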

Is there a way to determine if multiple users are running a particular Python script?

I have a script which can be run by any user who is connected to a server. This script writes to a single log file, but there is no restriction on who can use it at one time. So multiple people could attempt to write to the log and data might be lost. Is there a way for one instance of the code to know if other instances of that code are running? Moreover, is it possible to gather this information dynamically? (i.e. not allow data saving for the second user until the first user has completed his/her task)
I know I could do this with a text file. I could write the user name to the file when they start, then delete it when they finish, but this could lead to errors if either step is missed, such as on an unexpected script termination. So what other reliable ways are there?
Some information on the system: Python 2.7 is installed on a Windows 7 64-bit server via Anaconda. All connected machines are also Windows 7 64-bit. Thanks in advance
Here is an implementation:
http://www.evanfosmark.com/2009/01/cross-platform-file-locking-support-in-python/
If you are using a lock, be aware that stale locks (those left behind by hung or crashed processes) can be a bitch. Have a process that periodically searches for locks that were created more than X minutes ago and frees them.
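A sketch of such a cleanup, assuming a hypothetical lock path and a ten-minute staleness threshold:

import os
import time

LOCKFILE = '/tmp/myscript.lock'  # hypothetical lock location
MAX_AGE = 10 * 60                # seconds after which a lock counts as stale

def break_stale_lock():
    try:
        if time.time() - os.path.getmtime(LOCKFILE) > MAX_AGE:
            os.remove(LOCKFILE)  # the owner is presumed hung or dead
    except OSError:
        pass  # lock vanished between the check and the removal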
It just isn't clean allowing multiple users to write to a single log and hoping things go OK.
Why don't you write a daemon that handles logs? Other processes connect to a "logging port" and, in the simplest case, they only succeed if no one else has connected.
You can just modify the echo-server example given here (keep a timeout in the server for all connections):
http://docs.python.org/release/2.5.2/lib/socket-example.html
If you want to know exactly who logged what, and make sure no one unauthorized gets in, you can use Unix sockets to restrict it to only certain uids/gids etc.
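A very rough sketch of that daemon (port and log path are arbitrary); it serves one writer at a time:

import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(('127.0.0.1', 5140))  # arbitrary "logging port"
srv.listen(1)

log = open('shared.log', 'a')  # arbitrary log path
while True:
    conn, addr = srv.accept()  # one writer at a time
    data = conn.recv(4096)
    while data:
        log.write(data.decode('utf-8', 'replace'))
        log.flush()
        data = conn.recv(4096)
    conn.close()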
here is a very good example
NTEventLogHandler is probably the easiest way for logging to a given Windows machine/server, but it might make more sense to use SyslogHandler if you have a syslog sink on a Unix server.
The catch I can think of with SyslogHandler is that you'll likely need to poke holes through the Windows firewall in order to send packets over the syslog protocol, i.e., 514/TCP ("reliable syslog") and 514/UDP (traditional or "unreliable syslog").
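If you go the SysLogHandler route, a minimal sketch (the host name is a placeholder for your syslog sink; SysLogHandler defaults to UDP transport):

import logging
from logging.handlers import SysLogHandler

logger = logging.getLogger('myscript')
logger.setLevel(logging.INFO)
# 'syslog.example.com' is a placeholder for the Unix server's address.
logger.addHandler(SysLogHandler(address=('syslog.example.com', 514)))
logger.info('hello from the Windows client')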

Printing PDFs using Python, win32api, and Acrobat Reader 9

I have reports that I am sending to a system that requires the reports to be in a readable PDF format. I tried all of the free libraries and applications, and the only one that I found to work was Adobe's Acrobat family.
I wrote a quick script in Python that uses the win32api to print a PDF to my printer with the default registered application (Acrobat Reader 9), then kills the task upon completion, since Acrobat likes to leave the window open when called from the command line.
I compiled it into an executable and pass in the values through the command line
(for example printer.exe %OUTFILE% %PRINTER%) this is then called within a batch file
import os,sys,win32api,win32print,time
# Command Line Arguments.
pdf = sys.argv[1]
tempprinter = sys.argv[2]
# Get Current Default Printer.
currentprinter = win32print.GetDefaultPrinter()
# Set Default printer to printer passed through command line.
win32print.SetDefaultPrinter(tempprinter)
# Print PDF using default application, AcroRd32.exe
win32api.ShellExecute(0, "print", pdf, None, ".", 0)
# Reset Default Printer to saved value
win32print.SetDefaultPrinter(currentprinter)
# Timer for application close
time.sleep(2)
# Kill application and exit script
os.system("taskkill /im AcroRd32.exe /f")
This seemed to work well for a large volume, ~2000 reports in a 3-4 hour period, but I have some that drop off and I'm not sure if the script is getting overwhelmed or if I should look into multithreading or something else.
The fact that it handles such a large volume with so few drop-offs leads me to believe that the issue is not with the script, but I'm not sure if it's an issue with the host system, Adobe Reader, or something else.
Any suggestions or opinions would be greatly appreciated.
Based on your feedback (win32api.ShellExecute() is probably not synchronous), your problem is the timeout: If your computer or the print queue is busy, the kill command can arrive too early.
If your script runs concurrently (i.e. you print all documents at once instead of one after the other), the kill command could even kill the wrong process (i.e. an acrobat process started by another invocation of the script).
So what you need is better synchronization. There are a couple of things you can try:
Convert this into a server script which starts Acrobat once, then sends many print commands to the same process and terminates afterwards.
Use a global lock to make sure that only a single script instance is ever running. I suggest creating a folder somewhere; creating a directory is an atomic operation on every file system. If the folder exists, the script is active somewhere.
On top of that, you need to know when the print job is finished. Use win32print.EnumJobs() for this; a sketch of both ideas follows below.
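A sketch of both suggestions (the lock path, printer name, and timeout are placeholders, and the polling loop is untested):

import os
import time
import win32print

LOCK_DIR = r'C:\temp\print_lock'  # hypothetical lock path

def acquire_lock():
    # os.mkdir is atomic: it raises if another instance holds the lock.
    while True:
        try:
            os.mkdir(LOCK_DIR)
            return
        except OSError:
            time.sleep(0.5)

def release_lock():
    os.rmdir(LOCK_DIR)

def wait_until_queue_empty(printer, timeout=120):
    # Poll the spooler until the queue drains or the timeout expires.
    handle = win32print.OpenPrinter(printer)
    try:
        deadline = time.time() + timeout
        while time.time() < deadline:
            if not win32print.EnumJobs(handle, 0, 999, 1):
                return True
            time.sleep(1)
        return False
    finally:
        win32print.ClosePrinter(handle)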
If that fails, another solution could be to install a Linux server somewhere. You can run a Python server on that box which accepts print jobs sent by a small Python script on your client machine. The server can then print the PDFs for you in the background.
This approach allows you to add any kind of monitoring you like (sending mail if something fails, or a status mail after all jobs have finished).

Sending commands from one xterm window to another with Python

So I have a Python app that starts different xterm windows, and in one window, after the operation is finished, it asks the user "Do you want to use these settings? y/n".
How can I send y to that xterm window, so that the user doesn't need to type anything?
Thanks
If you are on Linux (KDE) and you just want to control the xterms by sending commands between them, you could try using DCOP:
http://www.linuxjournal.com/content/start-and-control-konsole-dcop
http://www.riverbankcomputing.co.uk/static/Docs/PyKDE3/dcopext.html
Otherwise you would need to actually use an inter-process communication (IPC) method between the two scripts as opposed to controlling the terminals:
http://docs.python.org/library/xmlrpclib.html
http://docs.python.org/library/ipc.html
Some other IPC or RPC library
Simply listen on a basic socket and wait for ANYTHING. Then, from the other app, open a socket and write SOMETHING to signal it (see the sketch below).
Or at a very very basic level, you could have one script wait on file output from the other. So once your first xterm finishes, it could write a file that the other script sees.
These are all solutions of varying difficulty.
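A sketch of the socket variant (the port number is arbitrary): one script blocks until anything connects, and the other connects to deliver the signal.

import socket

def wait_for_signal(port=5141):  # runs in the script that waits
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(('127.0.0.1', port))
    srv.listen(1)
    conn, _ = srv.accept()  # any connection counts as the signal
    conn.close()
    srv.close()

def send_signal(port=5141):  # runs in the other script
    socket.create_connection(('127.0.0.1', port)).close()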

How can I detect what other copy of Python script is already running

I have a script. It uses GTK. And I need to know if another copy of the script starts. If one does, the window should be extended.
Please tell me how I can detect this.
You could use a D-Bus service. Your script would start a new service if none is found running in the current session, and otherwise send a D-Bus message to the running instance (a message can carry "anything", including strings, lists, and dicts).
The GTK-based library libunique (missing Python bindings?) uses this approach in its implementation of "unique" applications.
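A sketch of the name-claiming half with dbus-python (the bus name is a placeholder, and the method-export side is left out):

import dbus

BUS_NAME = 'org.example.MyGtkScript'  # placeholder well-known name

bus = dbus.SessionBus()
reply = bus.request_name(BUS_NAME, dbus.bus.NAME_FLAG_DO_NOT_QUEUE)

if reply == dbus.bus.REQUEST_NAME_REPLY_EXISTS:
    # Another copy owns the name: call a method it exports to make it
    # extend its window, then exit this copy.
    pass
else:
    # First copy: export a service object here and enter the GTK main loop.
    pass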
You can use a PID file to determine if the application is already running (just search for "python daemon" on Google to find some working implementations).
If you detected that the program is already running, you can communicate with the running instance using named pipes.
The new copy could search for running copies, fire a SIGUSR1 signal, and trigger a callback in your running process that then handles all the magic.
See the signal library for details and the list of things that can go wrong.
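A sketch of the signal half (extend_window is a stand-in for your GTK callback):

import os
import signal

def extend_window():
    print('another copy started; extend the window here')

def on_sigusr1(signum, frame):
    extend_window()

signal.signal(signal.SIGUSR1, on_sigusr1)

# The new copy, once it has found the running instance's PID:
# os.kill(running_pid, signal.SIGUSR1)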
I've done this in several ways, depending on the scenario:
In one case my script had to listen on a TCP port, so I'd just check whether the port was available; if it was, no other copy was running. This was sufficient for me, but in certain cases, if the port is already in use, it might be because some other kind of application is listening on that port. You can use OS calls to find out who is listening on the port, or try sending data and checking the response.
In another case I used a PID file. Just decide on a location and a filename, and every time your script starts, read that file to get a PID. If that PID is running, it means another copy is already there; otherwise, create the file and write your process ID to it. This is pretty simple. If you are using Django, you can simply use Django's daemonizer: "from django.utils import daemonize". Otherwise you can use this script: http://www.jejik.com/articles/2007/02/a_simple_unix_linux_daemon_in_python/
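A sketch of the PID-file check (the path is arbitrary; signal 0 only probes for existence, it delivers nothing):

import os

PIDFILE = '/tmp/myscript.pid'  # arbitrary location

def already_running():
    try:
        with open(PIDFILE) as f:
            pid = int(f.read().strip())
    except (IOError, ValueError):
        return False  # no PID file yet, or garbage inside it
    try:
        os.kill(pid, 0)  # probes the PID without sending a real signal
    except OSError:
        return False     # stale PID: that process is gone
    return True

if not already_running():
    with open(PIDFILE, 'w') as f:
        f.write(str(os.getpid()))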
