Simple Python HTTPServer in the background

I have this script, and I do not know how to keep it running in the background: when I close the session, it closes too. I also tried putting it in crontab, but then it does not find index.html and shows a listing of the files in / instead.
#! /opt/python3/bin/python3
from http.server import HTTPServer, CGIHTTPRequestHandler
port = 8000
httpd = HTTPServer(('', port), CGIHTTPRequestHandler)
print("Starting simple_httpd on port: " + str(httpd.server_port))
httpd.serve_forever()
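A side note on the crontab symptom: CGIHTTPRequestHandler serves files relative to the process's current working directory, and cron does not start the script in the directory that holds index.html. A minimal fix is to change into the document root before serving (a sketch; /path/to/docroot is a placeholder):
#! /opt/python3/bin/python3
import os
from http.server import HTTPServer, CGIHTTPRequestHandler

os.chdir('/path/to/docroot')  # placeholder: the directory containing index.html

port = 8000
httpd = HTTPServer(('', port), CGIHTTPRequestHandler)
print("Starting simple_httpd on port: " + str(httpd.server_port))
httpd.serve_forever()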

Basically you are asking how to detach the program from your shell ... here are a few options:
./scriptname.py >/dev/null 2>&1 & # sends the program to the background
Use GNU screen (or similar): run your program inside screen and you can bring it back up when you log back in.
Daemonize your program properly.
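For option 1, note that a plain & can still be killed by SIGHUP when the session closes (depending on shell settings); prefixing nohup makes it survive the hangup:
nohup ./scriptname.py >/dev/null 2>&1 & # survives the terminal closing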
Update:
Recently I have not written a single daemon in Python; the days of forking twice or using a daemon library seem to be well behind us. I currently use supervisord and have heard good things about circus. These are just a few of the extra options you can use to deploy Python daemons.
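For example, a minimal supervisord program entry for the script above might look like this (the program name and paths are placeholders, not from the original setup):
[program:simple_httpd]
command=/opt/python3/bin/python3 /path/to/scriptname.py
directory=/path/to/docroot
autostart=true
autorestart=true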


How to check if Python script is already running

I have a Python script on Ubuntu which sometimes runs for more than 24 hours. I have set up cron to run this script every day. However, if the script is still running, I would like the new instance to terminate. I have already found some solutions, but they seem complicated. I would like to add a few lines at the beginning of the script that check whether it is already running: if yes, return; else, continue.
I like this command:
pgrep -a python | grep 'script.py'
Is it possible to make some smart solution for this problem?
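For reference, a rough sketch of the few lines I have in mind, built on that pgrep idea (hedged: it is racy and can false-match on similar command lines):
import os
import subprocess
import sys

try:
    out = subprocess.check_output(["pgrep", "-af", "python"]).decode()
except subprocess.CalledProcessError:
    out = ""  # pgrep exits non-zero when nothing matches

# Each output line is "PID cmdline"; skip our own process.
others = [line for line in out.splitlines()
          if "script.py" in line and line.split()[0] != str(os.getpid())]
if others:
    sys.exit("script.py is already running.")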
There is no simple way to do it. As mentioned in the comments, you can create a lock file, but I prefer using sockets. I am not sure whether it works the same on Linux, but on Windows I use this:
import socket

class AppMutex:
    """
    Serves as a single-instance mutex (the application can be run only once).
    It relies on the OS property that a given UDP port can be bound
    by only one process at a time.
    """

    @staticmethod
    def enable():
        """
        Binds a UDP socket on the specified port.
        If binding fails, the port is already held by another instance.
        """
        try:
            AppMutex.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # UDP
            AppMutex.sock.bind(("127.0.0.1", 40000))
        except OSError:
            raise Exception("Application can be run only once.")
Then simply call it at the beginning of your script:
AppMutex.enable()
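On Linux, the lock-file approach mentioned above can be just as short; a minimal sketch using fcntl (the lock path is a placeholder):
import fcntl
import sys

# Keep a module-level reference so the file (and the lock) lives as long
# as the process does; the lock is released automatically when it exits.
_lock_file = open("/tmp/script.py.lock", "w")
try:
    fcntl.flock(_lock_file, fcntl.LOCK_EX | fcntl.LOCK_NB)  # non-blocking exclusive lock
except OSError:
    sys.exit("script.py is already running.")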

Transferring a mesh from one process to another in Python

I've been cracking my head over this, but nothing has come to mind yet.
I want my script to execute a .py file inside another, already-started process. I have a Maya process open, and from inside another application (Modo, for example) I want to run the file hello.py (print 'hello!') inside that exact Maya instance.
I already have the PID of that Maya process, but I don't know how to actually send it a command to execute.
Is there some attribute/flag in the subprocess or signal modules I could be missing? Or is it done another way entirely?
import os

openedMaya = []
r = os.popen('tasklist /v').read().strip().split('\n')
for s in r:
    if 'maya.exe' in s and ': untitled' in s:
        openedMaya.append(s)
# Pull the PID column out of the first matching tasklist line.
mayaPID = openedMaya[0].split('maya.exe')[1].split('Console')[0].strip()
I need a command that could execute hello.py in that Maya process.
You could use RPyC to act as a bridge so that you can communicate between the two applications. The idea is to use RPyC to run an idle server in Maya, with the PYTHONPATH also pointing to your hello.py script. This server stays active in the session, but the user shouldn't notice it exists.
Then, in your other software, you use RPyC to send a message over the same port as the server so that it triggers the command in Maya.
It's slightly more overhead, but I have been able to use this successfully for stand-alone tools that trigger events in Maya. As for subprocess, you can use it to run a command in a new Maya session, but I don't think there's a way to use it with an existing one.
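A minimal sketch of the idea (the port number, service name, and exposed method are my illustrative choices, not a fixed API for Maya):
# Server side, run once inside Maya (e.g. from userSetup.py);
# start() blocks, so in practice it would run on a background thread.
import rpyc
from rpyc.utils.server import ThreadedServer

class MayaService(rpyc.Service):
    def exposed_run_file(self, path):
        # Execute the given .py file inside Maya's own interpreter.
        with open(path) as f:
            exec(f.read(), {"__name__": "__main__"})

ThreadedServer(MayaService, port=18861).start()

# Client side, run from the other application (Modo, a stand-alone tool, ...):
import rpyc

conn = rpyc.connect("127.0.0.1", 18861)
conn.root.run_file("hello.py")  # invokes exposed_run_file inside Maya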
Hope that nudges you in the right direction.
Maybe an easier way would be to transfer your mesh through an intermediate file: one process writes the file, and the other process (running inside the host app) reads it in.
Thanks for the advice. In the end I found a solution: open Maya's command port by running a MEL command at startup:
commandPort -n ":<some_port>";
and connect from Modo to that port through a socket:
import socket

HOST = '127.0.0.1'
PORT = <some_port>  # the port opened by commandPort in Maya
ADDR = (HOST, PORT)

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(ADDR)
client.send(<message_that_you_want_to_send>)  # a MEL command string
data = client.recv(1024)
client.close()
and I'm able to do whatever I want inside that opened Maya session, as long as I send MEL commands.
Thanks for the help though!
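As an aside, the same command port can also be opened from Python inside Maya; a hedged equivalent of the MEL line above (keeping the placeholder port):
import maya.cmds as cmds
cmds.commandPort(name=":<some_port>")  # same effect as: commandPort -n ":<some_port>";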

Python daemon exits silently in urlopen

I have a Python daemon started from an init.d script. The daemon optionally reads an array of ids from a server through a REST interface; otherwise it uses an array of pre-defined ids.
logger.info("BehovsBoBoxen control system: bbb_domoticz.py starting up")
if DOMOTICZ_IN or DOMOTICZ_OUT:
#
# build authenticate string to access Domoticz server
#
p = urllib2.HTTPPasswordMgrWithDefaultRealm()
p.add_password(None, DOMOTICZ_URL, USERNAME, PASSWORD)
handler = urllib2.HTTPBasicAuthHandler(p)
opener = urllib2.build_opener(handler)
urllib2.install_opener(opener)
if DOMOTICZ_IN:
#
# Find all temperature sensors in Domoticz and populate sensors array
#
url= "http://"+DOMOTICZ_URL+"/json.htm?type=devices&filter=temp&used=true&order=Name"
logger.debug('Reading from %s',url)
response=urllib2.urlopen(url)
data=json.loads(response.read())
logger.debug('Response is %s',json.dumps(data, indent=4, sort_keys=True))
for i in range(len(data["result"])):
a=data["result"][i]["Description"]
ini=a.find('%room')
if ini != -1:
ini=ini+6
rIndex=int(a[ini:])
logger.info('Configure room id %s with Domoticz sensor idx: %s', rIndex, data["result"][i]["idx"])
sensors[rIndex]=data["result"][i]["idx"]
The daemon is started from an init.d script at boot. Everything works perfectly if I use the option with predefined ids, i.e. when I don't use the REST interface: the daemon starts at boot, and I can stop and restart it with the command
sudo service start/stop/restart
However, if I use the other option (read the ids from the server), the daemon does not start at boot. In the log file I find one single line ("...bbb_domoticz.py starting up"). Hence, the daemon exits silently right after this, probably in one of the following urllib2 calls; the subsequent logger.debug('Reading...') never shows up in the log file.
The strange thing is that if I manually start the daemon with a copy of the init.d script in my home directory, it starts. If I run the init.d script from /etc/init.d, the daemon immediately exits, just as it does at boot. But once I have started the daemon with the script in my home directory, I can continue to start/stop/restart it with the service command.
So my takeaway is that something goes wrong in urllib2 unless I have managed to start the daemon once from my home directory. It puzzles me that I don't get a traceback or anything when the daemon exits.
Any idea how to nail down this problem?
Edit: Inspired by the answer suggesting to add logging to specific modules, I tried to add logging to urllib2. However, I can't figure out how to make that module use my logging handler. Help on this is appreciated.
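For what it's worth, urllib2 does not emit records through the logging module at all (its debug output goes through httplib's debuglevel printing to stdout, not through logging handlers), so one pragmatic way to capture the silent failure is to wrap the call and log the traceback yourself; a minimal sketch:
try:
    response = urllib2.urlopen(url)
except Exception:
    # logger.exception writes the full traceback to the log file,
    # which a daemon without a terminal would otherwise lose.
    logger.exception('urlopen failed for %s', url)
    raise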

Sending a trigger to my Python script

I have a long-running Python script that I would like to have called via a udev rule. The script should run unattended, in the background if you like. udev RUN is not suitable for long-running commands, which is getting in my way: my script gets killed after a while. So I cannot call my script directly from udev.
I tried disowning it by calling it from udev RUN via a shell script:
#!/bin/bash
/path/to/pythonscript.py & disown
This still got killed.
Now I am thinking of turning my script into a daemon, e.g. using Python's daemon module. Then "all" I need to do is put a short command in my udev RUN statement that sends some sort of trigger to my daemon, and things should be fine. What is less clear to me is the best way to implement this communication. I was thinking of adding a jsonrpc service to my daemon, listening only on the loopback device. Is there a simpler or better way to achieve what I want?
EDIT
Thanks to the comments below, I came up with the following solution. The main "daemon":
import signal, time
import auto_copy

SIGNAL_RECEIVED = False

def run_daemon():
    signal.signal(signal.SIGUSR1, signal_handler)
    while True:
        time.sleep(1)
        global SIGNAL_RECEIVED
        if SIGNAL_RECEIVED:
            auto_copy.auto_copy()
            SIGNAL_RECEIVED = False

def signal_handler(dump1, dump2):
    global SIGNAL_RECEIVED
    SIGNAL_RECEIVED = True

if __name__ == "__main__":
    auto_copy.setup_logging()
    run_daemon()
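A possible refinement (my own aside, not part of the original solution): instead of waking up every second, the loop can block in signal.pause() until a signal actually arrives:
def run_daemon():
    global SIGNAL_RECEIVED
    signal.signal(signal.SIGUSR1, signal_handler)
    while True:
        signal.pause()  # sleep until any signal is delivered
        if SIGNAL_RECEIVED:
            auto_copy.auto_copy()
            SIGNAL_RECEIVED = False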
This gets started via systemd, the unit file being:
[Unit]
Description=Start auto_copy.py script

[Service]
ExecStart=/home/isaac/bin/auto_copy_daemon.py

[Install]
WantedBy=multi-user.target
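After placing the unit file (e.g. under /etc/systemd/system/, assuming it is named auto_copy_daemon.service), it can be enabled and started with:
systemctl daemon-reload
systemctl enable --now auto_copy_daemon.service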
Then there is a udev rule:
SUBSYSTEM=="block", KERNEL=="sr0", ACTION=="change", RUN+="/home/isaac/bin/send_siguser1.sh"
and finally the script that sends the signal:
#!/bin/bash
kill -SIGUSR1 $(ps -elf | grep auto_copy_daemon.py | awk '/python/ {print $4}')
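A shorter equivalent, assuming pkill from procps is available:
#!/bin/bash
pkill -USR1 -f auto_copy_daemon.py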
If anyone is interested, the project is on github.

python, running command-line servers - they're not listening properly

I'm attempting to start a server app (in Erlang; it opens ports and listens for HTTP requests) from the command line using pexpect (or even directly using subprocess.Popen()).
The app starts fine and logs (via pexpect) to the screen fine, and I can interact with it via the command line as well...
The issue is that the server won't listen for incoming requests. The app listens when I start it up manually, by typing commands on the command line; starting it via subprocess/pexpect somehow stops the app from listening...
When I start it manually, "netstat -tlp" shows the app as listening; when I start it via Python (subprocess/pexpect), netstat does not register the app...
I have a feeling it has something to do with the environment, the way Python forks things, etc.
Any ideas?
Thank you.
A basic example:
Note:
"-pz" just adds ./ebin to the module search path for the erl VM (the library search path).
"-run" runs moduleName without any parameters.
import pexpect

command_str = "erl -pz ./ebin -run moduleName"
child = pexpect.spawn(command_str)
child.interact()  # Give control of the child to the user
All of this works correctly, which is strange. I have logging inside my code and all the log messages are output as they should. The server wouldn't listen even when I started its process via a bash script, so I don't think it's the Python code that's causing it (that's why I have a feeling it's something about the way the new OS process is started).
It could be to do with the way that command-line arguments are passed to the subprocess.
Without more specific code I can't say for sure, but I had this problem while working on sshsplit (https://launchpad.net/sshsplit).
To pass arguments correctly (in this example "ssh -ND 3000"), you should use something like this:
import subprocess

openargs = ["ssh", "-ND", "3000"]
print "Launching %s" % " ".join(openargs)
p = subprocess.Popen(openargs, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
This will not only let you see exactly what command you are launching, but should also pass the values correctly to the executable. Although I can't say for sure without seeing some code, this seems the most likely cause of failure (could it also be that the program requires a specific working directory or configuration file?).
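If the working directory is indeed the issue (the Erlang command above relies on a relative ./ebin path), subprocess lets you pin it explicitly; a sketch with an assumed app directory:
import subprocess

# cwd makes the relative -pz ./ebin path resolve regardless of where
# the Python script itself was started from.
p = subprocess.Popen(["erl", "-pz", "./ebin", "-run", "moduleName"],
                     cwd="/path/to/app",  # assumption: the app's root directory
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)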
