using twistd to run a twisted application but script runs twice - python

sample code here
# main.py
from twisted.application import service, internet
application = service.Application("x")
service.IProcess(application).processName = "x"
print "some log...."
If I run this main.py with:
twistd -y main.py
I get two "some log...." lines.
Is this code run twice?

The "process name" feature you're using works by re-executing the process with a new argv[0]. There is no completely reliable way to save an arbitrary object (like the Application) across this process re-execution. This means that the .py file has to be re-evaluated in the new process to recreate the Application object so twistd knows what you want it to do.

You might want to consider using setproctitle rather than twistd's built-in process title feature. (For that matter, maybe twistd should just use it if it's available...)
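A minimal sketch of that alternative, assuming the third-party setproctitle package is installed (it is not part of Twisted):

# main.py -- sketch only; requires the setproctitle package
from twisted.application import service
import setproctitle

setproctitle.setproctitle("x")   # set the title directly, no re-execution involved

application = service.Application("x")
print "some log...."

Because the title is changed in place rather than by re-executing twistd with a new argv[0], the file should only be evaluated once.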

Related

Control executed program with python

I want to execute a test run via bash and abort it if it takes too much time. So far, I found some good solutions here. But since the kill command does not work properly (even when I use it correctly, it complains that it is not used correctly), I decided to solve this problem using Python. This is the execution call I want to monitor:
EXE="C:/program.exe"
FILE="file.tpt"
HOME_DIR="C:/Home"
"$EXE" -vm-Xmx4096M --run build "$HOME_DIR/test/$FILE" "Auslieferung (ML) Execute"
(The opened *.exe starts a test run which includes some Simulink simulation runs - sometimes there are Simulink errors - in that case the tests take too long and I want to restart the entire process.)
First, I came up with the idea, calling a shell script containing these lines within a subprocess from python:
import subprocess
import time

# run the shell script, give it 10 seconds, then try to terminate it
process = subprocess.Popen('subprocess.sh', shell=True)
time.sleep(10)
process.terminate()
But when I use this, *.terminate() or *.kill() does not close the program I started with the subprocess call.
That's why I am now trying to implement the entire call in Python. This is what I have so far:
import subprocess
file = "somePath/file.tpt"
p = subprocess.Popen(["C:/program.exe", file])
Now I need to know how to pass the last argument, "Auslieferung (ML) Execute", from the bash call. This argument starts an internal test run named "Auslieferung (ML) Execute". Any ideas? Or is it better to choose one of the other ways? Or can I get the "kill" option for bash somewhere, somehow?
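Not an authoritative answer, but a sketch of one way this might look with plain subprocess, assuming the extra strings are simply ordinary arguments to program.exe and that a fixed timeout is acceptable:

# sketch only: paths and the timeout value are placeholders
import subprocess
import time

exe = "C:/program.exe"
tpt_file = "C:/Home/test/file.tpt"

# every token of the bash command becomes one list element;
# the test-run name is simply the last argument
p = subprocess.Popen([exe, "-vm-Xmx4096M", "--run", "build",
                      tpt_file, "Auslieferung (ML) Execute"])

time.sleep(10 * 60)          # allow the test run some time (value is a guess)
if p.poll() is None:         # still running after the timeout?
    p.kill()                 # kill it so the whole run can be restarted

Passing the arguments as a list avoids the shell entirely, so kill() acts on the program itself rather than on an intermediate shell process.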

How to keep python interpreter in memory across executions?

I need to repeatedly call short programs in python.
Since the programs are trivial but use several (standard) modules, and the target hardware (an embedded ARM9 running Linux) is not very powerful, the loading time of the interpreter plus libraries greatly exceeds the program's runtime.
Is there a way to keep a python interpreter in memory and "just" feed it a program to execute?
I know I can write a fairly simple C wrapper that spawns the interpreter and then feeds it my programs via PyRun_SimpleFile(), but that looks like overkill. Surely there's a simpler (and probably more "pythonic") way of achieving the same thing.
There are probably many ways of solving this problem.
A simple one would be to combine all your short programs into a simple web application, potentially one that listens on a local Unix socket rather than a network socket. E.g., using the minimal Flask application from the Flask quickstart:
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hello, World!\n'
You could expose it on a local Unix socket like this, assuming you've put the above code into a script called myapp.py:
uwsgi --http-socket /tmp/app.sock --manage-script-name --plugin python --mount /=myapp:app
And now you can access it like this (note the single / in http:/; that's because we don't need a hostname when connecting to a local socket):
$ curl --unix-socket /tmp/app.sock http:/
Hello, World!
This would let you start your Python script once and let it run persistently, thus avoiding the startup and module-loading costs for subsequent calls, while providing a way to run different functions, pass input parameters, etc.
Here's an example that takes a filename as input and performs some transformations on the file:
from flask import request

@app.route('/cmd1', methods=['POST'])
def cmd1():
    inputfile = request.form.get('inputfile')
    with open(inputfile) as fd:
        output = fd.read().replace('Hello', 'Goodbye')
    return output
Assuming that we have:
$ cat data
Hello world
We can call:
$ curl --unix-socket /tmp/app.sock http:/cmd1 -d inputfile=$PWD/data
Goodbye world

avoid process closing when using python subprocess module

I have a script in Python (I call it monitor.py) that checks whether another Python application (called test.py) is running; if it is, nothing happens; if it is not, it starts test.py.
I am using the subprocess module in monitor.py, but if I start test.py and then close monitor.py, test.py also closes; is there any way to avoid this? Is the subprocess module the correct one to use?
I have a script [...] that checks if another [...] is running
I'm not sure if it's any help in your case, but I just wanted to say that if you're working with Windows, you can write a real service in Python.
Doing that from scratch is some effort, but some good people out there provide examples that you can easily change, like this one.
(In this example, look for the line f = open('test.dat', 'w+') and write your code there)
It'll behave like any other windows service, so you can make it start when booting your PC, for example.
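For orientation, here is a rough skeleton of such a service; it assumes the pywin32 package, and names like MonitorService are made up (the linked examples are more complete):

# sketch only: requires the pywin32 package; class and service names are invented
import win32serviceutil
import win32service
import win32event

class MonitorService(win32serviceutil.ServiceFramework):
    _svc_name_ = "MonitorService"
    _svc_display_name_ = "Monitor Service"

    def __init__(self, args):
        win32serviceutil.ServiceFramework.__init__(self, args)
        self.stop_event = win32event.CreateEvent(None, 0, 0, None)

    def SvcStop(self):
        self.ReportServiceStatus(win32service.SERVICE_STOP_PENDING)
        win32event.SetEvent(self.stop_event)

    def SvcDoRun(self):
        # put the monitor.py logic (check for test.py, start it if needed) here
        win32event.WaitForSingleObject(self.stop_event, win32event.INFINITE)

if __name__ == '__main__':
    win32serviceutil.HandleCommandLine(MonitorService)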

Re-read environment of parent process in python

I've written a little Python (2.7.2+) module (called TWProcessing) that can be described as an improvised batch manager. The way it works is that I pass it a long list of commands that it will then run in parallel, limiting the total number of simultaneous processes. That way, if I have 500 commands I would like to run, it will loop through all of them, but only run X of them at a time so as not to overwhelm the machine. The value of X can easily be set when declaring an instance of this batch manager (the class is called TWBatchManager):
batch = TWProcessing.TWBatchManager(MaxJobs=X)
I then add a list of jobs to this object in a very straightforward manner:
batch.Queue.append("CMD goes here")
Here Queue is a list of commands that the batch manager will run. When the queue has been filled, I call Run(), which loops through all the commands, only running X at a time:
batch.Run()
So far, everything works fine. Now what I'd like to do is change the value of X (i.e. the maximum number of processes running at once) dynamically, i.e. while the processes are still running.

My old way of doing this was rather straightforward. I had a file called MAXJOBS that the class knew to look at and, if it existed, it would check regularly to see whether the desired value had changed. Now I'd like to try something a bit more elegant. I would like to be able to write something along the lines of export MAXJOBS=newX in the bash shell that launched the script containing the batch manager, and have the batch manager realize that this is now the value of X it should be using.

Obviously os.environ['MAXJOBS'] is not what I'm looking for, because this is a dictionary that is loaded at startup. os.getenv('MAXJOBS') doesn't cut it either, because the export will only affect child processes that the shell spawns from then on. So what I need is a way to get back to the environment of the parent process that launched my Python script. I know os.getppid() will give me the parent pid, but I have no idea how to get from there to the parent environment. I've poked around the interwebz to see if there was a way in which the parent shell could modify the child process environment, and I've found that people tend to insist I not try anything like that, lest I be prepared to do some of the ugliest things one can possibly do with a computer.
Any ideas on how to pull this off? Granted my "read from a standard text file" idea is not so ugly, but I'm new to Python and am therefore trying to challenge myself to do things in an elegant and clean manner to learn as much as I can. Thanks in advance for your help.
It looks to me like you are asking for inter-process communication between a bash script and a Python program.
I'm not completely sure about all your requirements, but it might be a candidate for a FIFO (named pipe):
1) make the fifo:
mkfifo batch_control
2) Start the Python server, which reads from the FIFO. (Note: the following is only a minimal example; you will have to adapt it.)
while True:
    # opening the FIFO blocks until a writer connects
    fd = open("batch_control", "r")
    for cmd in fd:
        print("New command [%s]" % cmd[:-1])
    fd.close()
3) From the bash script you can then 'send' things to the python server by echo-ing strings into the fifo:
$ echo "newsize 800" >batch_control
$ echo "newjob /bin/ps" >batch_control
The output of the python server is:
New command [newsize 800]
New command [newjob /bin/ps]
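If the goal is to adjust the maximum number of jobs on the fly, the read loop could translate those commands into changes on the batch manager. This is only a sketch; it assumes TWBatchManager exposes the limit as a plain attribute (here called MaxJobs), which may not match the real class, and in practice it would run inside Run()'s polling loop or in a separate thread:

# sketch only: TWProcessing is the asker's own module, and treating MaxJobs as a
# plain attribute is an assumption about its interface
import TWProcessing

batch = TWProcessing.TWBatchManager(MaxJobs=4)

while True:
    fd = open("batch_control", "r")
    for cmd in fd:
        parts = cmd.split()
        if not parts:
            continue
        if parts[0] == "newsize":
            batch.MaxJobs = int(parts[1])            # change X while jobs are running
        elif parts[0] == "newjob":
            batch.Queue.append(" ".join(parts[1:]))  # queue one more command
    fd.close()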
Hope this helps.

How can I detect whether another copy of a Python script is already running

I have a script. It uses GTK. I need to know if another copy of the script is started. If it is, the window of the running copy should extend.
Please tell me how I can detect this.
You could use a D-Bus service. Your script would start a new service if none is found running in the current session, and otherwise send a D-Bus message to the running instance (which can carry "anything", including strings, lists, and dicts).
The GTK-based library libunique (missing Python bindings?) uses this approach in its implementation of "unique" applications.
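A hedged sketch of that idea using dbus-python; the service name org.example.MyGtkApp and the ExtendWindow method are invented for illustration, not an existing API:

# sketch only: service name and ExtendWindow method are made up
import dbus
import dbus.service

SERVICE_NAME = "org.example.MyGtkApp"
bus = dbus.SessionBus()

if bus.name_has_owner(SERVICE_NAME):
    # another copy already owns the name: ask it to extend its window and exit
    proxy = bus.get_object(SERVICE_NAME, "/")
    proxy.ExtendWindow(dbus_interface=SERVICE_NAME)
else:
    # we are the first copy: claim the name, export an object that implements
    # ExtendWindow, and enter the GTK main loop
    name = dbus.service.BusName(SERVICE_NAME, bus)
    # ... dbus.service.Object / GTK setup goes here ...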
You can use a PID file to determine if the application is already running (just search for "python daemon" on Google to find some working implementations).
If you detected that the program is already running, you can communicate with the running instance using named pipes.
The new copy could search for running copies, fire a SIGUSR1 signal and trigger a callback in your running process that then handles all the magic.
See the signal library for details and the list of things that can go wrong.
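A small sketch of that approach (Unix only; how the new copy discovers the old copy's PID, e.g. via a PID file, is left out):

# sketch only: obtaining old_pid (e.g. from a PID file) is not shown
import os
import signal

def on_usr1(signum, frame):
    # runs inside the already-running instance; extend the window here
    print("another copy started, extending window")

signal.signal(signal.SIGUSR1, on_usr1)

# the newly started copy would then notify the old one and exit:
# os.kill(old_pid, signal.SIGUSR1)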
I've done that in several ways, depending upon the scenario.
In one case my script had to listen on a TCP port. So I'd just check whether the port was available; if it was already taken, that meant another copy was running. This was sufficient for me, but in certain cases the port may already be in use because some other kind of application is listening on it. You can use OS calls to find out who is listening on the port, or try sending data and checking the response.
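A minimal sketch of the port check (the port number is arbitrary):

# sketch only: 47200 is an arbitrary port chosen for illustration
import socket

def another_copy_running(port=47200):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind(("127.0.0.1", port))
    except socket.error:
        return True                   # bind failed: the other copy holds the port
    another_copy_running.guard = s    # keep the socket alive for our lifetime
    return False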
In another case I used a PID file. Just decide on a location and a filename, and every time your script starts, read that file to get a PID. If that PID is running, it means another copy is already there. Otherwise, create that file and write your process ID into it. This is pretty simple. If you are using Django then you can simply use Django's daemonizer: "from django.utils import daemonize". Otherwise you can use this script: http://www.jejik.com/articles/2007/02/a_simple_unix_linux_daemon_in_python/
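A hedged sketch of that PID-file check (the file path is just an example):

# sketch only: /tmp/myscript.pid is an example path
import os

PIDFILE = "/tmp/myscript.pid"

def already_running():
    try:
        with open(PIDFILE) as f:
            pid = int(f.read().strip())
        os.kill(pid, 0)        # signal 0 only checks that the process exists
        return True
    except (IOError, ValueError, OSError):
        return False

if not already_running():
    with open(PIDFILE, "w") as f:
        f.write(str(os.getpid()))
    # ... run the application ...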
