I need to run a Python script and be sure that it will restart after it terminates. I know there is a UNIX solution called supervisord, but unfortunately the server where my script has to run is on Windows. Do you know what tool could be useful?
Thanks
Despite the big fat disclaimer here, you can run Supervisor with Cygwin on Windows; it turns out that Cygwin goes a long way toward simulating a POSIX environment, so well, in fact, that supervisord runs unchanged. There is no need to learn a new tool, and you will even save quite a bit of work if you need to deploy a complicated project across multiple platforms.
Here's my recipe:
1. If you have not done so yet, install Cygwin. During the installation process, select Python.
2. From the Cygwin terminal, install virtualenv as usual.
3. Create a virtualenv for supervisord, and then install it as usual (note that the PyPI package is named supervisor, even though the daemon is supervisord):
pip install supervisor
4. Configure supervisord in the usual way. Keep in mind that supervisord will be running under Cygwin, so you had better use paths the Cygwin way (C:\myservers\project1 translates to /cygdrive/c/myservers/project1 in Cygwin; see the path-translation sketch after these steps).
5. Now you probably want to install supervisord as a service. Here's how I do it:
cygrunsrv --install supervisord --path /home/Administrator/supervisor/venv/bin/python --args "/home/Administrator/supervisor/venv/bin/supervisord -n -c /home/Administrator/supervisor/supervisord.conf"
6. Go to the Windows service manager and start the service supervisord that you just installed.
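For illustration, here is a tiny Python sketch of the path translation described in step 4 (to_cygwin_path is my own hypothetical helper; in practice Cygwin's cygpath tool does this for you):

# Rough sketch of the Windows -> Cygwin path translation from step 4.
def to_cygwin_path(win_path):
    drive, rest = win_path[0], win_path[2:]              # "C", "\myservers\project1"
    return "/cygdrive/%s%s" % (drive.lower(), rest.replace("\\", "/"))

print(to_cygwin_path(r"C:\myservers\project1"))          # /cygdrive/c/myservers/project1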
Point 5 installs supervisord as a Windows service, so that you can control it (start/stop/restart) from the Windows service manager. But everything you can do with supervisorctl works as usual, meaning that you can simply deploy your old configuration file.
You likely want to run your script as a Windows Service. To do so you'll need the python-win32 library. This question has a good description of how you go about doing this, as well as a bunch of links to other related resources. This question may also be of use.
A Windows Service is how you want to wrap up any script that needs to run continuously on Windows. Services can be configured to start automatically on boot and to handle failures. Nothing is going to stop anyone from killing the process itself, but to handle that potential situation, you can create a .bat file that uses the sc command to poll the service and restart it if it has stopped (a sketch of the idea follows below). Just schedule the .bat file to run every 60 seconds (or whatever downtime is reasonable for your script).
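Here is a minimal sketch of that polling idea, written in Python for clarity rather than as a .bat file (the service name MyScriptService is a placeholder; sc query and sc start are the same commands the .bat file would use):

# Poll a Windows service and restart it if it has stopped.
# "MyScriptService" is a hypothetical name; substitute your real service.
import subprocess
import time

SERVICE = "MyScriptService"

while True:
    out = subprocess.run(["sc", "query", SERVICE],
                         capture_output=True, text=True).stdout
    if "RUNNING" not in out:                 # sc reports e.g. "STATE : 4  RUNNING"
        subprocess.run(["sc", "start", SERVICE])
    time.sleep(60)                           # check once a minute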
If you want a supervisord-like process manager that runs on most POSIX OSes and is Python-based like supervisord, then you should look at honcho, which is a Python port of foreman (Ruby-based):
http://pypi.python.org/pypi/honcho/
It works great on Mac and Linux, but not (yet) on Windows. (I am editing my initial answer, where I had optimistically said it was already working on Windows based on a pull request that has since been discarded.)
There is a fork that provides Windows support here: https://github.com/redpie/honcho
and some work in progress to support Windows here: https://github.com/nickstenning/honcho/issues/28, so at least it could become a possible solution in the near future.
There is also a foreman fork that supports Windows here: https://github.com/ddollar/foreman-windows, which may work for you, though I have never tried it.
So for now, a Windows service might be your best short-term option.
Supervisor for Windows worked for us on Python 2.7 (32-bit). I had to install pypiwin32 and pywin32==223.
As this is an old question with old answers, I will update it with the latest news:
There is a supervisor-win project that claims to support supervisor on Windows.
No, supervisord is not supported under Windows.
BUT what you can do is restart it automatically from a wrapper script:
#!/usr/bin/python
from subprocess import Popen

file_path = "script_to_be_restarted.py"
args = ["--arg1=woop", "--arg2=woop"]

while True:
    print("(Re-)starting script %s %s" % (file_path, " ".join(args)))
    p = Popen(["python", file_path] + args)  # argument list avoids shell-quoting issues
    p.wait()  # block until the script terminates, then loop around and restart it
Related
I wish you all a beautiful sunny day! :D
I have a question for you guys. I have the following Python "script":
import os
os.system('ubuntu.exe')
This opens Ubuntu running in my WSL. Now, when the Ubuntu terminal appears, I would like to execute the following commands automatically from my Python script: sudo /etc/init.d/dbus start and sudo /etc/init.d/xrdp start. However, when I run either of these commands, the terminal requests my password, so the script should also be able to enter the password.
Is there any way to do it?
Kind regards,
D.
While the question/answers linked in the comments are a good read (sudoers in particular), there's a better method for WSL. Instead of using ubuntu.exe, use the newer wsl.exe replacement. The wsl command offers more control over the startup, including the ability to change the user:
import os
# Run the services as root inside WSL; no password prompt is needed (see below)
os.system('wsl ~ -u root -e sh -c "nohup service xrdp start"')
os.system('wsl -u root service dbus start')
The nohup is needed because of what seems to be a timing issue: when starting up via the wsl command, the shell (the owning process) terminates before xrdp gets a chance to fork. nohup just makes sure that the full xrdp init script gets a chance to run before that happens. This really isn't a WSL issue, per se; it can also be replicated if you do something similar with exec sh -c "sudo service xrdp start".
A couple of other notes. First, this does not require a password, since WSL doesn't have the concept of "login." The /init process (WSL's PID1 and initialization) is responsible for setting the owning user for each session. This is not considered a security risk since even the root WSL user runs with no greater than the permissions of the Windows user.
Also note that, in my experience, it's not necessary to start dbus for xrdp access, even though I've seen instructions that say it is. Ultimately it will depend on what you want to run within the xrdp session, of course.
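As a small aside, the same calls can be written with subprocess argument lists, which sidesteps shell-quoting issues (a sketch equivalent to the os.system lines above):

import subprocess

# Same two service starts as above, with explicit argument lists
subprocess.run(["wsl", "~", "-u", "root", "-e", "sh", "-c", "nohup service xrdp start"])
subprocess.run(["wsl", "-u", "root", "service", "dbus", "start"])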
I'm trying to use Sage in Anaconda 3 but it looks that the libraries are not imported.
I first created a new environment 'ipykernel_py2' and then installed Python 2 as explained here. With this I can have both Python 2 and Python 3 up and running in Anaconda 3.
Then I went to the newly created kernel folder (C:\Users\YOUR_USERNAME\AppData\Local\Continuum\anaconda3\envs\ipykernel_py2\share\jupyter\kernels) and pasted Sage's kernel there (taken from C:\Program Files\SageMath 8.2\runtime\opt\sagemath-8.2\local\share\jupyter\kernels). This allows me to create new SageMath files in Jupyter, but the kernel is dead.
To activate the kernel I used Anaconda Prompt and typed:
activate ipykernel_py2
python -m ipykernel install --user --name sagemath --display-name "SageMath 8.2"
So the kernel is now activated and I can create and run Sage files. However, the libraries are still not working; it seems that the file runs like a normal Python 2 file.
Does anyone know how to fix this? Do I need to create a separate environment?
Sage for Windows runs under a UNIX emulation environment called Cygwin. Looking at sagemath/kernel.json, it contains:
{"display_name": "SageMath 8.2", "argv": ["/opt/sagemath-8.2/local/bin/sage", "--python", "-m", "sage.repl.ipython_kernel", "-f", "{connection_file}"]}
You can see here that it has a UNIX-style path to the sage executable. This path only makes sense to other programs running under Sage's Cygwin environment and is meaningless to native Windows programs. Simply converting it to the equivalent Windows path won't work either, because bin/sage is actually a shell script. At the very least you need to provide a Windows path to the bash that comes with Cygwin and pass it the UNIX path to the sage executable (the same as the one above). Without a login shell, most of the needed environment variables won't be set either, so you probably need bash -l.
So, something like:
{"display_name": "SageMath 8.2", "argv": ["C:\\Program Files\\SageMath 8.2\\runtime\\bin\\bash.exe", "-l", "/opt/sagemath-8.2/local/bin/sage", "--python", "-m", "sage.repl.ipython_kernel", "-f", "{connection_file}"]}
might work. The one thing I'm not sure about is whether the {connection_file} argument will be handled properly; I haven't tested it.
Update: Indeed, the above partially works, but there are a few problems. The {connection_file} argument is passed as the absolute Windows path to the file. While Cygwin can normally translate transparently from Windows paths to a corresponding UNIX path, there is a known issue that Python's os.path module on Cygwin does not handle Windows-style paths well, and this leads to issues.
The other major problem I encountered was that IPKernelApp, the class that drives generic Jupyter kernels, has a thread which polls to see if the kernel's parent process (in this case the notebook server) has exited, so it can appropriately shut down if the parent shuts down. This is how kernels know to automatically shut down when you kill the notebook server.
How this is done is very different depending on the platform (Windows versus UNIX-like). Because Sage's kernel runs in Cygwin, it chooses the UNIX-like poller. However, this is wrong if the notebook server happens to be a native Windows process, as is the case when running the Sage kernel in a Windows-native Jupyter. Remarkably, the parent poller for Windows can work just as well on Cygwin, since it accesses the Windows API through ctypes. Therefore, this can be worked around by providing a wrapper to IPKernelApp that forces use of ParentPollerWindows.
A possible solution then looks something like this: From within the SageMath Shell do:
$ cd "$SAGE_LOCAL"
$ mkdir -p ./share/jupyter/kernels/sagemath
$ cd ./share/jupyter/kernels/sagemath
$ cat <<'_EOF_' > kernel-wrapper.sh
#!/bin/sh
here="$(dirname "$0")"
connection_file="$(cygpath -u -a "$1")"
exec /opt/sagemath-8.2/local/bin/sage --python "${here}/kernel-wrapper.py" -f "${connection_file}"
_EOF_
$ cat <<'_EOF_' > kernel-wrapper.py
from ipykernel.kernelapp import IPKernelApp as OrigIPKernelApp
from ipykernel.parentpoller import ParentPollerWindows
from sage.repl.ipython_kernel.kernel import SageKernel

class IPKernelApp(OrigIPKernelApp):
    """
    Although this kernel runs under Cygwin, its parent is a native Windows
    process, so we force use of the ParentPollerWindows.
    """

    def init_poller(self):
        if self.interrupt or self.parent_handle:
            self.poller = ParentPollerWindows(self.interrupt,
                                              self.parent_handle)

IPKernelApp.launch_instance(kernel_class=SageKernel)
_EOF_
(Note the quoted '_EOF_' delimiters: they keep the current shell from expanding $0, $1, and ${here} while the two files are being written.)
Now edit the kernel.json (in its existing location under share\jupyter\kernels\sagemath) to read:
{"display_name": "SageMath 8.2", "argv": ["C:\\Program Files\\SageMath 8.2\\runtime\\bin\\bash.exe", "-l", "/opt/sagemath-8.2/local/share/jupyter/kernels/sagemath/kernel-wrapper.sh", "{connection_file}"]}
This runs kernel-wrapper.sh, which in turn runs kernel-wrapper.py. (There are a few simplifications I could make to get rid of the need for kernel-wrapper.sh completely, but that would be easier in SageMath 8.3, which includes PyCygwin.)
Make sure to change every "8.2" to the appropriate "X.Y" version for your Sage installation.
Update: I made some updates thanks to feedback from a user, but I haven't tested these changes yet, so rather than blindly copying and pasting, please make sure that every file/directory path in my instructions exists and looks correct.
As you can see, this was not trivial, and it was never, by design, meant to be possible. But it can be done. Once the kernel itself is up and running, it's just a matter of talking to it over TCP/IP sockets, so there's not too much magic involved after that. I believe there are some small improvements that could be made on both the Jupyter side and the Sage side that would facilitate this sort of thing in the future...
I have quite a complicated setup with:
Luigi https://github.com/spotify/luigi
https://github.com/kennethreitz/requests-html
and https://github.com/miyakogi/pyppeteer
But long story short: everything works fine on my local Ubuntu (17.10) desktop, but when run in Docker (18.03.0-ce) via docker-compose (1.18.0) (config version 3.3; some related details here: https://github.com/miyakogi/pyppeteer/issues/14), it bloats up with zombie chrome processes spawned from python.
Any ideas why it might happen and what to do with it?
Try installing dumb-init, https://github.com/Yelp/dumb-init (available in both Alpine and Debian), inside your container and use it as the entry point (more reading here: https://blog.phusion.nl/2015/01/20/docker-and-the-pid-1-zombie-reaping-problem/).
It seems to happen because the python process is not meant to be a root-level process, the topmost one in the process tree: it simply does not reap zombie children properly. So after a few hours of struggling I ended up with the following ugly docker-compose config entry:
entrypoint: /bin/bash -c
command: "(luigid) &"
Here luigid is a python process. This makes bash the root process, which handles zombies properly.
It would be great to know a more straightforward way of doing this.
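For context, here is a minimal Python sketch of the reaping duty a PID-1 process has to perform; this is exactly what bash or dumb-init provides in the container:

import os
import signal
import time

def reap_children(signum, frame):
    # Collect every child that has exited so none of them lingers as a zombie.
    while True:
        try:
            pid, status = os.waitpid(-1, os.WNOHANG)
        except ChildProcessError:
            return          # no children at all
        if pid == 0:
            return          # remaining children are still running

signal.signal(signal.SIGCHLD, reap_children)

# Demo: fork a child that exits immediately; without the handler above it
# would remain a zombie until this (parent) process exits.
if os.fork() == 0:
    os._exit(0)             # child exits right away
time.sleep(1)               # parent stays alive; the handler reaps the child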
LSOpenURLsWithRole() failed with error -600 for the URL http://localhost:9000/.
This is the error I get when I try to launch my SimpleHTTPServer from within a tmux session. I'm a front-end web developer, and I spend most of my time working with SimpleHTTPServer rather than Apache. The issue is that it errors out at the open command: I have the habit of opening files and directories directly from the terminal (open dirname/ or open .), and when I use this in tmux it gives me the same error.
I want to mention that I'm on a MacBook Air running OS X 10.9 Mavericks.
This is the code of the function I use in my terminal to start the server:
# Start an HTTP server from a directory, optionally specifying the port
function server() {
    local port="${1:-8000}"
    open "http://localhost:${port}/"
    # Set the default Content-Type to `text/plain` instead of `application/octet-stream`
    # and serve everything as UTF-8 (although not technically correct, this doesn’t break anything for binary files)
    python -c $'import SimpleHTTPServer;\nmap = SimpleHTTPServer.SimpleHTTPRequestHandler.extensions_map;\nmap[""] = "text/plain";\nfor key, value in map.items():\n\tmap[key] = value + ";charset=UTF-8";\nSimpleHTTPServer.test();' "$port"
}
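For readability, here is the same server logic unrolled into a normal script (my transcription of the one-liner; Python 2 only, since it uses the SimpleHTTPServer module):

import SimpleHTTPServer

# Default unknown extensions to text/plain instead of application/octet-stream
ext_map = SimpleHTTPServer.SimpleHTTPRequestHandler.extensions_map
ext_map[""] = "text/plain"

# Serve everything as UTF-8
for key, value in ext_map.items():
    ext_map[key] = value + ";charset=UTF-8"

# Serve the current directory; the port is taken from sys.argv[1] if given
SimpleHTTPServer.test()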
Edit
The issue doesn't appear anymore, so I have three possible explanations for the fix:
Highly unlikely: switching from Python 3 to Python 2.7.5 (the OS X default).
Most likely: Apple released an update to Mavericks that fixes this issue, or it was fixed by installing the Command Line Tools in order to use Homebrew to build and install the latest version of Vim.
Not sure if this is the same case for you, since you mentioned you restarted a few times...
However, for me, I noticed that I had two stray tmux sessions left over after an iTerm failure, which I had forgotten about. They were running a bunch of services started by grunt, so I'm assuming that one of these services was conflicting when trying to start again. Killing them made the bug stop occurring, and I was able to run my node app in tmux.
Like @Cosmin said, check if there is another tmux running, and kill all tmux sessions.
Then open a new tmux; it will work like before.
I have the following problem. I want to use the matplotlib animation package to save an mp4 video file. The save function has a dependency for generating the mp4 file: the external ffmpeg library. So I installed ffmpeg on Mac OS X 10.8 via MacPorts, and it got installed in /opt/local/bin.
But now, running the script in Canopy, the interpreter (IPython shell) cannot see ffmpeg. I added the path to my .bash_profile, and I can run the program from my terminal, but when I type os.environ['PATH'], the actual PATH of my shell has not been added, and /opt/local/bin is not there.
If I try to run the script, I get this error:
/Users/alejandrodelacallenegro/Library/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/matplotlib/animation.py:695: UserWarning: MovieWriter ffmpeg unavailable
warnings.warn("MovieWriter %s unavailable" % writer)
Any ideas on how to fix the problem? What do I have to do to change an environment variable that Python sees at startup? Has anyone had the same problem?
Thanks.
The problem here has nothing to do with Enthought; it's that OS X doesn't run bash when you launch things from Finder, LaunchDaemons, etc., and therefore doesn't read your .bash_profile. Instead, it runs them from launchd.
If you want to add some environment variables to affect anything run by launchd for the current user, that's easy:
launchctl setenv PATH $PATH:/opt/local/bin
If you want this to happen every time you log in, create a file ~/.launchd.conf; the subcommands in that file will be run through launchctl every time launchd starts (which is the first step in logging in to a new user session).
If you want it to be system-wide, rather than just for your user, you can sudo launchctl and/or create/edit /etc/launchd.conf. However, you almost certainly don't want to change the environment used by root services, etc., unless you really know what you're doing.
If it helps: Using launchctl manually, editing ~/.launchd.conf, and editing /etc/launchd.conf are roughly equivalent to export, ~/.bash_profile, and /etc/profile (except of course that they affect launchd rather than bash/sh).
See the launchctl(1) man page for details, or just type launchctl to start an interactive session and use the built-in help. (The pages launchd(8) and launchd.conf(5) also have useful info.)
You can also use the deprecated environment.plist file to affect even things that aren't run by launchd, but… that's deprecated, and there really isn't anything for it to affect that you care about, except in (much) older versions of OS X.
People coming from other Unix systems are often caught out by this. Most file managers ask the shell to run programs for them; Finder.app (and the command-line tool open, and the AppleScript environment, and so on) asks launchd to do it. Plus, on most X11 systems, if you walk up the process tree from your file manager, it was ultimately launched by a user shell too, whereas on OS X, Finder.app was launched by launchd, which was launched by the system-wide launchd; there is no shell in sight.
This also means that other shell-specific stuff like changing resource limits or default umask won't affect programs started outside the shell on a Mac. launchctl is again the answer.
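If you just need a quick per-script workaround instead, you can also extend PATH from inside Python before matplotlib goes looking for ffmpeg (a sketch, using the MacPorts directory from the question):

import os

# Make /opt/local/bin visible to this process and its children (ffmpeg is
# spawned as a subprocess, so it inherits this environment). Do this before
# calling the animation's save() method.
os.environ["PATH"] = os.environ.get("PATH", "") + os.pathsep + "/opt/local/bin"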