I have a Python script on Ubuntu that sometimes runs for more than 24 hours. I have set up cron to run this script every day, but if the previous run is still going, I would like the new instance to terminate itself. I have already found some solutions, but they seem complicated. I would like to add a few lines at the beginning of the script that check whether the script is already running: if yes, exit; otherwise continue.
I like this command:
pgrep -a python | grep 'script.py'
Is it possible to build some smart solution to this problem around it?
There is no simple way to do it. As mentioned in the comments, you can create a lock file, but I prefer using sockets. I'm not sure it works the same way on Linux, but on Windows I use this:
import socket

class AppMutex:
    """
    Serves as a single-instance mutex handler (the application can be run only once).
    It relies on the OS guarantee that a single UDP port can be bound only once at a time.
    """

    @staticmethod
    def enable():
        """
        Bind a UDP connection on the specified port.
        If binding fails, the port is already opened from somewhere else.
        """
        try:
            AppMutex.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # UDP
            AppMutex.sock.bind(("127.0.0.1", 40000))
        except OSError:
            raise Exception("Application can be run only once.")
Then a simple call at the beginning of your script:
AppMutex.enable()
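Since the question is about Ubuntu: on Linux, a lock file with fcntl.flock gives the same single-instance guarantee and is released by the kernel automatically when the process exits, even after a crash. A minimal sketch, assuming an arbitrary lock-file path:

import fcntl
import sys

# Open (or create) the lock file; the path is an arbitrary choice.
lock_handle = open('/tmp/script.lock', 'w')
try:
    # A non-blocking exclusive lock fails immediately if another
    # instance of the script already holds it.
    fcntl.flock(lock_handle, fcntl.LOCK_EX | fcntl.LOCK_NB)
except BlockingIOError:
    sys.exit('Script is already running.')
# The lock is held for the lifetime of the process and released
# automatically when it exits.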
I've been racking my brain over this, but nothing has come to mind yet.
I want my script to execute a .py file inside another, already-started process. I have a Maya process open, and from another application (modo, for example) I want to run the file hello.py (print 'hello!') inside that exact Maya instance.
I already have the PID of that Maya process, but I don't know how to actually send it a command to execute.
Is there some attribute/flag in the subprocess or signal modules I could be missing? Or is it done another way entirely?
import os

openedMaya = []
r = os.popen('tasklist /v').read().strip().split('\n')
for line in r:
    if 'maya.exe' in line and ': untitled' in line:
        openedMaya.append(line)
mayaPID = openedMaya[0].split('maya.exe')[1].split('Console')[0].strip()
I need a command that could execute hello.py in that maya process.
You could use RPyC to act as a bridge so that you can communicate from one software to another. The idea is that you use RPyC to run an idle server in Maya, where the PYTHONPATH is also pointing to your hello.py script. This server stays active in the session, but the user shouldn't notice it exists.
Then in your other software you use RPyC to broadcast a message using the same port as the server so that it triggers it in Maya. This would then run your command.
It's slightly more overhead, but I have been able to use this successfully for stand-alone tools to trigger events in Maya. As far as using subprocess, you can use it to run a command in a new Maya session, but I don't think there's a way to use it for an existing one.
Hope that nudges you in the right direction.
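For illustration, here is a rough sketch of that setup with RPyC; the service name, port number, and the hello module are my own placeholders, not something the answer above prescribes. The server side would run inside Maya's Python interpreter:

# server side -- run inside Maya's Python interpreter (e.g. at startup)
import rpyc
from rpyc.utils.server import ThreadedServer

class CommandService(rpyc.Service):
    def exposed_run_hello(self):
        # Anything importable on Maya's PYTHONPATH can be executed here;
        # 'hello' stands in for the hypothetical hello.py module.
        import hello

# In practice you would start this on a background thread so it does
# not block Maya's UI.
ThreadedServer(CommandService, port=18861).start()

And the client side, run from the other application (modo, in this case):

import rpyc

conn = rpyc.connect('localhost', 18861)
conn.root.run_hello()  # triggers the import of hello.py inside Maya
conn.close()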
Maybe an easier way would be to transfer your mesh by using an intermediate file. One process creates the file, another process (running inside the host app) reads it in.
Thanks for the advice. In the end I found a solution: open a port in Maya by running a MEL command at startup:
commandPort -n ":<some_port>";
and connect from modo to that port through a socket:
import socket

HOST = '127.0.0.1'
PORT = <some_port>
ADDR = (HOST, PORT)

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(ADDR)
client.send(<message_that_you_want_to_send>)
data = client.recv(1024)
client.close()
and I'm able to do whatever I want inside that open Maya instance, as long as I send MEL commands.
Thanks for the help though!
I am working on my Raspberry Pi, which is driving some WS2812B RGB LEDs. I can control the lights and everything with the neopixel library and Python, so that part works fine.
I want this Python script to run an infinite loop that only deals with light management: dimming LEDs, changing colors, and lots more. But I also want to be able to receive commands from other scripts. Let's say I type in a shell command that should change the color; the infinite Python script (the LED handler) should recognize this command and softly change the light mode to the desired color.
One idea is to constantly check a text file for new commands, with the shell script inserting command lines into that text file.
But can you tell me if there is a better way of doing it?
Many thanks in advance.
One method would be to expose a TCP server, then communicate with the Python process over TCP: one server script runs the LEDs, and separate command scripts send it instructions. A simple sketch follows.
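This is a minimal illustration of the idea, assuming a line-based command protocol and an arbitrary port; both are my choices, not a fixed API:

# led_server.py -- sketch of the LED process accepting commands over TCP
import socketserver

class CommandHandler(socketserver.StreamRequestHandler):
    def handle(self):
        command = self.rfile.readline().decode().strip()
        print('Received command:', command)  # e.g. change the LED color here

with socketserver.TCPServer(('127.0.0.1', 9999), CommandHandler) as server:
    server.serve_forever()

A shell script could then send a command with, for example, echo "color red" | nc 127.0.0.1 9999.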
I suggest opening a port in your Python script and having it receive commands from that port (network programming). Although this makes your project more complicated, it is a very robust implementation.
You can use ZeroMQ and host it locally. It provides bindings for Python. Here is an example script (sender and receiver):
from threading import Thread

import zmq

class Sender(Thread):
    def run(self):
        context = zmq.Context()
        socket = context.socket(zmq.PUB)
        socket.connect('tcp://127.0.0.1:8000')
        while True:
            socket.send_string(input('Enter command: '))

class Receiver(Thread):
    def run(self):
        context = zmq.Context()
        socket = context.socket(zmq.SUB)
        socket.bind('tcp://127.0.0.1:8000')
        socket.setsockopt(zmq.SUBSCRIBE, b'')
        while True:
            data = socket.recv().decode('ascii')
            print(data)  # Do stuff with data.
The receiver would be the instance that controls the lights on the RPi, and the sender is the command-line script that lets you input the various commands. An advantage is that ZeroMQ provides bindings for various programming languages, and you can also send/receive commands over a network.
Another solution is to allow commands from a network connection. The script with the "infinite loop" will read input from a socket and perform the commands.
When I open Firefox, then run the command:
firefox http://somewebsite
the url opens in a new tab of Firefox (same thing happens with Chromium as well). Is there some way to replicate this behavior in Python? For example, calling:
processStuff.py file/url
then calling:
processStuff.py anotherfile
should not start two different processes, but send a message to the currently running program. For example, you could have info in one tabbed dialog box instead of 10 single windows.
Adding bounty for anyone who can describe how Firefox/Chromium do this in a cross-platform way.
The way Firefox does it is: the first instance creates a socket file (or a named pipe on Windows). This serves both as a way for subsequent Firefox instances to detect the first instance and as a channel to forward it the URL before they exit. Since a socket file or named pipe is only accessible to processes running on the local system (as files are), no network client can reach it, and firewalls will not block it either (it's just like writing to a file).
Here is a naive implementation to illustrate my point. On first launch, the socket file lock.sock is created. Further launches of the script will detect the lock and send the URL to it:
import os
import socket

SOCKET_FILENAME = 'lock.sock'

def server():
    print("I'm the server, creating the socket")
    s = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
    s.bind(SOCKET_FILENAME)
    try:
        while True:
            print('Got a URL: %s' % s.recv(65536).decode())
    except KeyboardInterrupt:
        print('Quitting, removing the socket file')
        s.close()
        os.remove(SOCKET_FILENAME)

def client():
    print("I'm the client, opening the socket")
    s = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
    s.connect(SOCKET_FILENAME)
    s.send(b'http://stackoverflow.com')
    s.close()

def main():
    if os.path.exists(SOCKET_FILENAME):
        try:
            client()
        except socket.error:
            print('Bad socket file, program closed unexpectedly?')
            os.remove(SOCKET_FILENAME)
            server()
    else:
        server()

main()
You should implement a proper protocol (send proper datagrams instead of hardcoding the length, for instance), maybe using socketserver, but that is beyond the scope of this question. The Python Socket Programming HOWTO might also help. I have no Windows machine available, so I cannot confirm that this works on that platform.
You could create a data directory where your program creates a "lock file" once it is running, after first checking that the file does not already exist.
If it exists, you should try to communicate with the existing process, which creates a socket or a pipe (or something similar) and communicates its address or path in an appropriate way.
There are many different ways to do this, depending on the platform the program runs on; a named-pipe sketch follows.
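As a rough sketch of the named-pipe variant on Unix (the pipe path and the message are placeholders, and a real implementation would reopen the pipe after each writer and clean it up on exit):

import os

FIFO = '/tmp/myapp.fifo'  # placeholder path

if not os.path.exists(FIFO):
    # First instance: create the pipe and read URLs from it.
    os.mkfifo(FIFO)
    with open(FIFO) as pipe:  # blocks until a writer opens the pipe
        for line in pipe:
            print('Got a URL:', line.strip())
else:
    # Later instances: hand their URL to the first one and exit.
    with open(FIFO, 'w') as pipe:
        pipe.write('http://stackoverflow.com\n')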
While I doubt this is how Firefox/Chrome does it, it would be possible to achieve your goal without sockets, relying solely on the file system. I found it difficult to put into text, so see below for a rough flow chart of how it could be done. I would consider this approach similar to a cookie :). One last thought: with this approach it would also be possible to store workspaces or tabs across multiple sessions.
EDIT
Per a comment, environment variables are not shared between processes. All of my work thus far has been a single process calling multiple modules. Sorry for any confusion.
I think you could use multiprocessing connections with a subprocess to accomplish this. Your script would just have to try to connect to the "remote" connection on localhost, and if it's not available it could start it (see the sketch below).
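A sketch of that idea with multiprocessing.connection; the port and authkey are arbitrary placeholders:

from multiprocessing.connection import Client, Listener

ADDRESS = ('localhost', 6000)  # arbitrary local port
AUTHKEY = b'secret'            # arbitrary shared key

def serve():
    # First instance: receive messages from later instances.
    with Listener(ADDRESS, authkey=AUTHKEY) as listener:
        while True:
            with listener.accept() as conn:
                print('Got:', conn.recv())

def notify(message):
    # Later instances: forward the argument to the first instance.
    with Client(ADDRESS, authkey=AUTHKEY) as conn:
        conn.send(message)

try:
    notify('anotherfile')  # hand off and exit if a server is running
except ConnectionRefusedError:
    serve()                # otherwise become the server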
The most basic approach is to use sockets.
http://wiki.python.org/moin/ParallelProcessing
Use Threading, http://www.valuedlessons.com/2008/06/message-passing-conccurrency-actor.html
Example for Socket Programming: http://code.activestate.com/recipes/52218-message-passing-with-socket-datagrams/
I'm attempting to start a server app (in Erlang; it opens ports and listens for HTTP requests) via the command line using pexpect (or even directly using subprocess.Popen()).
The app starts fine and logs (via pexpect) to the screen fine; I can interact with it via the command line as well...
The issue is that the server won't listen for incoming requests. The app listens when I start it manually, by typing commands on the command line. Starting it via subprocess/pexpect somehow stops the app from listening...
When I start it manually, "netstat -tlp" shows the app as listening; when I start it via Python (subprocess/pexpect), netstat does not register the app...
I have a feeling it has something to do with the environment, the way Python forks things, etc.
Any ideas?
Thank you.
Basic example:
Note:
"-pz" just adds ./ebin to the erl VM's module search path (library search path)
"-run" runs moduleName, without any parameters
import pexpect

command_str = "erl -pz ./ebin -run moduleName"
child = pexpect.spawn(command_str)
child.interact()  # Give control of the child to the user
All of this works correctly, which is strange. I have logging inside my code, and all the log messages are output as they should be. The server wouldn't listen even when I started its process via a bash script, so I don't think the Python code is causing it (that's why I have a feeling it's something about the way the new OS process is started).
It could be due to the way command-line arguments are passed to the subprocess.
Without more specific code, I can't say for sure, but I had this problem working on sshsplit ( https://launchpad.net/sshsplit )
To pass arguments correctly (in this example "ssh -ND 3000"), you should use something like this:
import subprocess

openargs = ["ssh", "-ND", "3000"]
print("Launching %s" % " ".join(openargs))
p = subprocess.Popen(openargs, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
This will not only let you see exactly what command you are launching, but should also pass the values to the executable correctly. Although I can't say for sure without seeing some code, this seems the most likely cause of failure (could it also be that the program requires a specific working directory or configuration file?).
I have a script that uses GTK, and I need to know when another copy of the script starts: when it does, the window of the first copy should extend.
Please tell me how I can detect this.
You could use a D-Bus service. Your script would start a new service if none is found running in the current session, and otherwise send a D-Bus message to the running instance (which can carry almost anything, including strings, lists, and dicts).
The GTK-based library libunique (missing Python bindings?) uses this approach in its implementation of "unique" applications.
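A rough sketch with the dbus-python bindings; the bus name, object path, and method name are my own placeholders:

import dbus
import dbus.service
from dbus.mainloop.glib import DBusGMainLoop
from gi.repository import GLib

BUS_NAME = 'org.example.MyScript'  # placeholder bus name

class App(dbus.service.Object):
    @dbus.service.method(BUS_NAME)
    def Extend(self):
        print('Second copy detected, extending the window')

DBusGMainLoop(set_as_default=True)
bus = dbus.SessionBus()
if bus.name_has_owner(BUS_NAME):
    # A first instance already owns the name: ask it to extend, then quit.
    proxy = bus.get_object(BUS_NAME, '/org/example/MyScript')
    proxy.Extend(dbus_interface=BUS_NAME)
else:
    # We are the first instance: claim the name and serve requests.
    name = dbus.service.BusName(BUS_NAME, bus)
    App(bus, '/org/example/MyScript')
    GLib.MainLoop().run()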
You can use a PID file to determine whether the application is already running (just search for "python daemon" on Google to find some working implementations).
If you detect that the program is already running, you can communicate with the running instance using named pipes.
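A minimal sketch of the PID-file check; the file path is a placeholder:

import os
import sys

PIDFILE = '/tmp/myscript.pid'  # placeholder path

def already_running():
    try:
        with open(PIDFILE) as f:
            pid = int(f.read())
        os.kill(pid, 0)  # signal 0 only tests whether the process exists
    except (FileNotFoundError, ValueError, ProcessLookupError):
        return False     # no PID file, garbage content, or dead process
    except PermissionError:
        return True      # the PID exists but belongs to another user
    return True

if already_running():
    sys.exit('Another instance is already running.')
with open(PIDFILE, 'w') as f:
    f.write(str(os.getpid()))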
The new copy could search for running copies, fire a SIGUSR1 signal, and trigger a callback in your running process that then handles all the magic.
See the signal library for details and the list of things that can go wrong.
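A sketch of that approach; finding the first instance's PID (e.g. via a PID file, as in the answer above) is left out:

import os
import signal

# In the already-running GTK process: register a handler that
# extends the window when SIGUSR1 arrives.
def on_sigusr1(signum, frame):
    print('Another copy started, extending the window')

signal.signal(signal.SIGUSR1, on_sigusr1)

# In the new copy: signal the first instance, then exit.
# first_instance_pid would come from a PID file or a process search.
# os.kill(first_instance_pid, signal.SIGUSR1)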
I've done this in several ways, depending on the scenario.
In one case my script had to listen on a TCP port, so I'd just check whether the port was available; if it was, this was a new copy. That was sufficient for me, but in certain cases, if the port is already in use, it might be because some other kind of application is listening on it. You can use OS calls to find out who is listening on the port, or try sending data and checking the response.
In another case I used a PID file. Just decide on a location and a filename, and every time your script starts, read that file to get a PID. If that PID is running, another copy is already there; otherwise, create the file and write your process ID into it. This is pretty simple. If you are using Django, you can simply use Django's daemonizer: "from django.utils import daemonize". Otherwise you can use this script: http://www.jejik.com/articles/2007/02/a_simple_unix_linux_daemon_in_python/