I'm hoping this isn't a dupe. I've searched and searched and tried suggestions from many posts here, but so far I can't find what I'm looking for.
I have a Python 2.7 app which communicates with a second app via sockets. I would like to make it so that when the first Python app is launched, it automatically starts the second one.
The challenge is that I'd really like the second app to pop up in its own console window with its own console logging. The catch is that my app is used by people on different platforms, so I'd like something that's platform-independent, but so far everything I have found that does this relies on platform-specific mechanisms. So I wonder: is this even possible?
The closest thing I've found is to use subprocess.Popen, like this:
subprocess.Popen(['python', 'myscript.py'],
                 stdout=None,
                 stderr=None,
                 stdin=None)
(I know that I could use sys.executable instead of 'python', but in some cases the second app that needs to run won't be a Python app.)
Anyway, the above works, except that the output of the second app shows up co-mingled in the same window as the first app. I tried passing shell=True to Popen (and switched the args to a single string, as recommended in the Popen documentation), like this:
subprocess.Popen('python myscript.py',
                 stdout=None,
                 stderr=None,
                 stdin=None,
                 shell=True)
The result of that (at least on my Mac) is the same as before. The second Python app starts, but its output is co-mingled into the existing console window instead of launching a new one. (And on Windows I can't get that to work at all. I just get an error that "python myscript.py" is not a valid command. Same result if I change the string to "python.exe myscript.py")
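For reference, here is a minimal sketch of the kind of platform-specific branching I'm trying to avoid (I'm assuming subprocess.CREATE_NEW_CONSOLE on Windows, Terminal.app on macOS, and an xterm on Linux; the non-Windows branches are guesses on my part):

import subprocess
import sys

def launch_in_new_console(args):
    # Launch args in its own console window; each branch is platform-specific.
    if sys.platform == 'win32':
        # CREATE_NEW_CONSOLE gives the child its own console window.
        return subprocess.Popen(args, creationflags=subprocess.CREATE_NEW_CONSOLE)
    elif sys.platform == 'darwin':
        # 'open -a Terminal' asks Terminal.app to run the target;
        # it takes a file path, so point it at an executable script.
        return subprocess.Popen(['open', '-a', 'Terminal', args[0]])
    else:
        # Assume an xterm-like terminal emulator exists on Linux/Unix.
        return subprocess.Popen(['xterm', '-e'] + args)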
I don't need any communication between the apps since I already have everything I need via the sockets. (My app is a game controller engine that connects to an external media controller which could be Unity, Unreal, Python, etc.)
I tried various incarnations of the "old" ways, including the os.spawn* functions, os.system, and a bunch of other things, but none of them gives the behavior I'm hoping for. I also tried the subprocess convenience functions (call, check_call, and so on), but as far as I can tell those all block until the subprocess terminates.
Frankly, at this point the best I can think of is just using a batch file (though that would have to be OS-specific as far as I know).
So I wonder if anyone knows whether what I want to do is possible?
Thanks!
Brian
From the IPython Architecture Overview documentation we know that ...
The IPython engine is a Python instance that takes Python commands over a network connection.
Given that it is a Python instance, does that imply that these engines are standalone processes? I can manually load a set of engines via a command like ipcluster start -n 4. Is creating engines this way the creation of child processes of some parent process, or just a means to kick off a set of independent processes that rely on IPC to get their work done? I can also invoke an engine via the ipengine command, which is surely standalone, as it's entered directly at the OS command line with no relation to anything else.
As background I'm trying to drill into how the many IPython engines manipulated through a Client from a python script will interact with another process kicked off in that script.
Here's a simple way to find out which processes are involved: print the list of current processes before firing off the controller and engines, then print the list again afterwards. There's a wmic command to get the job done...
C:\>wmic process get description,executablepath
Interestingly enough, the controller gets 5 python processes going, and each engine creates one additional python process. So from this investigation I also learned that an engine is its own process, as is the controller...
C:\>wmic process get description,executablepath | findstr ipengine
ipengine.exe C:\Python34\Scripts\ipengine.exe
ipengine.exe C:\Python34\Scripts\ipengine.exe
C:\>wmic process get description,executablepath | findstr ipcontroller
ipcontroller.exe C:\Python34\Scripts\ipcontroller.exe
From the looks of it they all seem standalone, though this flat wmic listing doesn't show how the processes are related. The parent/child relationship is in fact tracked by the OS (each Windows process records the PID of the process that created it); it just isn't included in this particular output.
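If you want to see those relationships programmatically, here is a small sketch using the third-party psutil package (an assumption on my part: it has to be installed separately with pip) to walk the parent links the OS keeps:

import psutil

# Print every ipengine process along with the parent that spawned it.
for proc in psutil.process_iter():
    try:
        if 'ipengine' in proc.name():
            parent = proc.parent()
            print(proc.pid, proc.name(), '<- parent:', parent.name() if parent else '?')
    except psutil.NoSuchProcess:
        pass  # the process exited between listing and inspection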
Here's a definitive quote from MinRK that addresses this question directly:
"Every engine is its own isolated process...Each kernel is a separate
process and can be on any machine... It's like you started a terminal IPython session, and every engine is a separate IPython session. If you do a=5 in this one, a=10 in that one, this guy has 10 this guy has 5."
Here's further definitive validation, inspired by a great SE Hot Network Question on ServerFault that mentioned the use of Process Explorer, which actually tracks parent/child processes...
Process Explorer is a Sysinternals tool maintained by Microsoft. It can display the command line of the process in the process's properties dialog, as well as the parent that launched it, though the name of that process may no longer be available.
--Corrodias
If I fire off more engines in another command window, that section of Process Explorer just duplicates exactly as you see in the screenshot.
And just for the sake of completeness, here's what the command ipcluster start --n=5 looks like...
I have been trying to create a script which reloads a web browser called Midori if the internet flickers. But it seems to work only if I open Midori through the CLI; otherwise, the program crashes after I reload it. I have decided that the best approach is therefore to have the script open Midori through the subprocess module. So I put this near the top of my code:
import subprocess as sub
sub.call(["midori"])
The browser opens, but the rest of the program freezes until I quit Midori. I have tried to use threading, but it doesn't seem to work.
Is there any way to open an application through Python, and then let the rest of the script continue to run once said application has been opened?
From the docs:
Run the command described by args. Wait for command to complete, then return the returncode attribute.
(Emphasis added)
You can see this is the behaviour we should expect. To get around this, use subprocess.Popen instead. This will not block in the same way:
from subprocess import Popen
midori_process = Popen(["midori"])
Can my python script spawn a process that will run indefinitely?
I'm not too familiar with Python, nor with spawning daemons, so I came up with this:
si = subprocess.STARTUPINFO()
si.dwFlags = subprocess.CREATE_NEW_PROCESS_GROUP | subprocess.CREATE_NEW_CONSOLE
subprocess.Popen(executable, close_fds=True, startupinfo=si)
The process continues to run past python.exe, but is closed as soon as I close the cmd window.
Using the answer Janne Karila pointed out, this is how you can run a process that doesn't die when its parent dies; there is no need to use the win32process module.
DETACHED_PROCESS = 8  # the value of the Win32 DETACHED_PROCESS creation flag
subprocess.Popen(executable, creationflags=DETACHED_PROCESS, close_fds=True)
DETACHED_PROCESS is a Process Creation Flag that is passed to the underlying CreateProcess function.
This question was asked 3 years ago, and though the fundamental details of the answer haven't changed, given its prevalence in "Windows Python daemon" searches, I thought it might be helpful to add some discussion for the benefit of future Google arrivals.
There are really two parts to the question:
Can a Python script spawn an independent process that will run indefinitely?
Can a Python script act like a Unix daemon on a Windows system?
The answer to the first is an unambiguous yes: as already pointed out, using subprocess.Popen with the creationflags=subprocess.CREATE_NEW_PROCESS_GROUP keyword will suffice:
import subprocess

independent_process = subprocess.Popen(
    'python /path/to/file.py',
    creationflags=subprocess.CREATE_NEW_PROCESS_GROUP
)
Note that, at least in my experience, CREATE_NEW_CONSOLE is not necessary here.
That being said, the behavior of this strategy isn't quite the same as what you'd expect from a Unix daemon. What constitutes a well-behaved Unix daemon is better explained elsewhere, but to summarize (a rough sketch of these steps in Python follows the list):
Close open file descriptors (typically all of them, but some applications may need to protect some descriptors from closure)
Change the working directory for the process to a suitable location to prevent "Directory Busy" errors
Change the file access creation mask (os.umask in the Python world)
Move the application into the background and make it dissociate itself from the initiating process
Completely divorce from the terminal, including redirecting STDIN, STDOUT, and STDERR to different streams (often DEVNULL), and prevent reacquisition of a controlling terminal
Handle signals, in particular, SIGTERM.
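For concreteness, here is a minimal Unix-only sketch of those steps, using the classic double-fork; this is illustrative rather than a hardened implementation:

import os
import signal
import sys

def daemonize():
    if os.fork() > 0:
        sys.exit(0)        # first fork: the original parent returns to the shell
    os.setsid()            # become a session leader, dropping the controlling terminal
    if os.fork() > 0:
        sys.exit(0)        # second fork: the daemon can never reacquire a terminal
    os.chdir('/')          # don't hold any mounted directory open
    os.umask(0)            # reset the file-creation mask
    devnull = os.open(os.devnull, os.O_RDWR)
    for fd in (0, 1, 2):   # redirect stdin, stdout, and stderr to /dev/null
        os.dup2(devnull, fd)
    signal.signal(signal.SIGTERM, lambda signum, frame: sys.exit(0))  # handle SIGTERM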
The reality of the situation is that Windows, as an operating system, really doesn't support the notion of a daemon: applications that start from a terminal (or in any other interactive context, including launching from Explorer, etc) will continue to run with a visible window, unless the controlling application (in this example, Python) has included a windowless GUI. Furthermore, Windows signal handling is woefully inadequate, and attempts to send signals to an independent Python process (as opposed to a subprocess, which would not survive terminal closure) will almost always result in the immediate exit of that Python process without any cleanup (no finally:, no atexit, no __del__, etc).
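The one partial exception worth noting: a child created with CREATE_NEW_PROCESS_GROUP can be sent CTRL_BREAK_EVENT, which it may catch by installing a handler for signal.SIGBREAK. A Windows-only sketch (the script path is a placeholder):

import os
import signal
import subprocess

child = subprocess.Popen(
    'python myscript.py',  # placeholder command
    creationflags=subprocess.CREATE_NEW_PROCESS_GROUP
)
# CTRL_BREAK_EVENT is delivered to the child's whole process group;
# since the child leads its own group, only it (and its children) receive it.
os.kill(child.pid, signal.CTRL_BREAK_EVENT)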
Rolling your application into a Windows service, though a viable alternative in many cases, also doesn't quite fit. The same is true of using pythonw.exe (a windowless version of Python that ships with all recent Windows Python binaries). In particular, they fail to improve the situation for signal handling, and they cannot easily launch an application from a terminal and interact with it during startup (for example, to deliver dynamic startup arguments to your script, say, perhaps, a password, file path, etc), before "daemonizing". Additionally, Windows services require installation, which -- though perfectly possible to do quickly at runtime when you first call up your "daemon" -- modifies the user's system (registry, etc), which would be highly unexpected if you're coming from a Unix world.
In light of that, I would argue that launching a pythonw.exe subprocess using subprocess.CREATE_NEW_PROCESS_GROUP is probably the closest Windows equivalent for a Python process to emulate a traditional Unix daemon. However, that still leaves you with the added challenge of signal handling and startup communications (not to mention making your code platform-dependent, which is always frustrating).
That all being said, for anyone encountering this problem in the future, I've rolled a library called daemoniker that wraps both proper Unix daemonization and the above strategy. It also implements signal handling (for both Unix and Windows systems), and allows you to pass objects to the "daemon" process using pickle. Best of all, it has a cross-platform API:
from daemoniker import Daemonizer

with Daemonizer() as (is_setup, daemonizer):
    if is_setup:
        # This code is run before daemonization.
        do_things_here()

    # We need to explicitly pass resources to the daemon; other variables
    # may not be correct
    is_parent, my_arg1, my_arg2 = daemonizer(
        path_to_pid_file,
        my_arg1,
        my_arg2
    )

    if is_parent:
        # Run code in the parent after daemonization
        parent_only_code()

# We are now daemonized, and the parent just exited.
code_continues_here()
For that purpose you could daemonize your Python process, or, since you are using a Windows environment, run it as a Windows service.
I don't like posting only web links, but for more information relevant to your requirement:
A simple way to implement a Windows service (read all the comments; they will resolve any doubts).
If you really want to learn more, first read this:
what is a daemon process, or creating-a-daemon-the-python-way
Update: subprocess is not the right way to achieve this kind of thing.
I had a program that scraped certain data from certain web pages and, when the web pages changed, acted accordingly.
How would one set up the program so it continues to run in the background?
I don't need any specifics; I'm just really confused about this concept and would appreciate whatever help anybody has to offer.
start path-to-pythonw.exe your-code.py
pythonw means Python without a console.
start means start it in the background.
If your Python is installed system-wide, you can probably just do start your-code.pyw, since .pyw is associated with pythonw.exe.
Remember that you cannot use print (to stdout) in this case.
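Since there is no console under pythonw, a hedged workaround is to send output to a log file instead of printing:

import logging

# pythonw has no stdout, so write output to a file instead of using print.
logging.basicConfig(filename='your-code.log', level=logging.INFO)
logging.info('running in the background without a console')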
If you want to be able to just start your process and have it background itself and do a few more typical things that "daemon" processes do in Unix, look here: How do you create a daemon in Python?
There is no concept of "background" in Windows. But the UNIX shell concept of a background process can be reasonably emulated by running your Python script as a Windows service. There are a couple of suggestions in this question: Is it possible to run a Python script as a service in Windows? If possible, how?
For casual use, I suggest that you learn how to use srvany from the second answer.
You simply need to leave your program running! Please google "python daemon" and see how to implement a persistent background process in Python.
Now, you cannot know when a website changes unless you poll it. If the website is well designed, the page you are trying to poll will have a Last-Modified header; you can make a HEAD request every so often (be nice: don't poll like crazy) and act when the Last-Modified value is newer than the one on record. If the site is not well designed, it will not have a reliable Last-Modified or ETag header, and in that case you will have to fetch and parse the page yourself to check for changes.
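As a minimal sketch of that polling loop (assuming the third-party requests library; the URL and interval are placeholders):

import time
import requests

def watch(url, interval=300):
    last_seen = None
    while True:
        # A HEAD request fetches only the headers, which is all we need.
        modified = requests.head(url, timeout=10).headers.get('Last-Modified')
        if modified and modified != last_seen:
            if last_seen is not None:
                print('Page changed; new Last-Modified:', modified)
            last_seen = modified
        time.sleep(interval)  # be nice: don't poll like crazy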
Cheers.
Now I am working on a project where the test script has to connect to many (3-10) remote computers (over SSH) and do some work.
I started to use pexpect, and it is as simple as can be. It works fine.
I want to see the communication during the test. I know it is possible to redirect the log to the screen, but in that case the logs (from the different computers) are mixed together.
What I would like is to open a new terminal window (or console or whatever) for every new spawn object. That way I could see all the communication in separate windows. Additionally, I would like to keep the possibility of spawn.interact() in every window.
I feel that this is possible somehow, but I don't know how. I think some file descriptor (or pipe) should be passed to the new window somehow(?)
(SecureCRT does something like this: it has tabbed console windows and can access them separately, but it is a commercial product.)
Or let me make the problem more simple.
If I do this, I can open a new shell in a new window:
p=Popen(["cygstart", "bash"])
How can I read from and write to this shell from my script (the parent) so that I can see the traffic in the new window?
I would really appreciate it if one of you could point me in the right direction.
It is enough if you tell me what to read or search for (on Google), because I could not find anyone with this kind of problem.
The environment is cygwin.
Thanks in advance
br:drv
Have you tried using the logfile parameter?
import pexpect

child = pexpect.spawn('some_command')
mylog = open('/tmp/mylog', 'w')
child.logfile = mylog  # everything sent and received is copied to this file
This will automatically log all communication to the file, including commands you enter after calling spawn.interact()
More info available on the website: http://pexpect.sourceforge.net/pexpect.html
Search for 'logfile' to find the relevant documentation.
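One way to approximate the per-window view the question asks for: give each spawn its own log file, then follow each file in its own window with tail -f. A sketch for cygwin (the host names are placeholders, and the cygstart quoting may need adjustment):

import pexpect
import subprocess

hosts = ['host1', 'host2', 'host3']  # placeholder host names
children = []
for i, host in enumerate(hosts):
    # Unbuffered (Python 2) so that tail sees the output promptly.
    log = open('/tmp/session_%d.log' % i, 'w', 0)
    child = pexpect.spawn('ssh %s' % host)
    child.logfile = log  # all traffic for this host goes to its own file
    children.append(child)
    # Open a new window that follows this session's log as it grows.
    subprocess.Popen(['cygstart', 'bash', '-c',
                      'tail -f /tmp/session_%d.log' % i])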