This question already has answers here:
How to run Python's subprocess and leave it in background
(2 answers)
Closed 1 year ago.
I made a remote-control client which can receive commands from the server. The received commands are executed as normal cmd commands in a shell. But how can I execute these commands in the background so that the user won't see any input or output?
For example, when I do this, the user can see everything that's going on:
import os
os.system(command_from_server)
You can use subprocess.Popen to start a command without waiting for it to finish:
from subprocess import Popen
pid = Popen(["ls", "-l"]).pid
Popen has a lot of configuration options for handling stdout and stderr. See the official docs.
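For example, to hide all of the command's input and output, a minimal sketch (assuming Python 3.3+, where subprocess.DEVNULL is available; command_from_server stands for whatever string your client received):
from subprocess import Popen, DEVNULL
# redirect stdin, stdout and stderr to the null device so the user sees nothing
proc = Popen(command_from_server, shell=True,
             stdin=DEVNULL, stdout=DEVNULL, stderr=DEVNULL)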
To execute a command in background, you have to use the subprocess module.
For example:
import subprocess
subprocess.Popen("command", shell=True)
If you want to execute a command with more than one argument, e.g. ls -a, pass the whole command line as a string:
import subprocess
subprocess.Popen("ls -a", shell=True)
To change the directory you can use the os module:
import os
os.chdir(path)
But, as mentioned by tripleee in the comments below, you can also pass the cwd parameter to subprocess.Popen.
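For example, a minimal sketch (/tmp is just a placeholder directory):
import subprocess
# run the command with /tmp as its working directory instead of calling os.chdir first
subprocess.Popen("ls -a", shell=True, cwd="/tmp")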
Related
What I need here is to open a Python console from a Python script and then pass commands to that console.
I need to automate this process. Currently I am using os.system and subprocess to open the Python console, but after that I am totally stuck on how to pass print("Test") to it.
Here is the sample code which I am working on.
import os
os.system("python")
#os.system("print('Hello World')")
# or
import subprocess
subprocess.run("python", shell = True)
Please help me to understand how I can pass the nested commands.
Thanks
I have found a very helpful answer from another thread and applied to my use case.
from subprocess import run, PIPE
p = run(['python'], stdout=PIPE,
input='print("Test")\nprint("Test1")\nimport os\nprint(os.getcwd())', encoding='ascii')
print(p.returncode)
# -> 0
print(p.stdout)
It starts the Python shell and then executes the commands passed via input, separated by \n:
Prints Test
Prints Test1
Imports os
Prints the current working directory
Reference: How do I pass a string into subprocess.Popen (using the stdin argument)?
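The same idea also works with Popen and communicate if you prefer to build the pipe yourself; a minimal sketch (assuming Python 3.6+, where Popen accepts the encoding argument):
from subprocess import Popen, PIPE
p = Popen(['python'], stdin=PIPE, stdout=PIPE, encoding='ascii')
# write the commands to the interpreter's stdin and read everything it prints
out, _ = p.communicate('print("Test")\nimport os\nprint(os.getcwd())')
print(out)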
I am trying to run this code in the background (from command line) on Windows using python 2.7:
import httpimport
mod = httpimport.load('module name','URL')
Everything works, but the process lingers when launched and only Ctrl+C will end it. I am looking to start an independent process from this in the background.
I have read that multiprocessing can be useful here, but I would need some pointers.
Any suggestions ?
EDIT: I should add that this is a script which calls another Python script from a URL. From the answers below I gathered that I might need to change my remote script first.
If you want to run your process in the background you can use os.spawnl:
import os
import sys
# P_DETACH is Windows-only; sys.executable is the path to the current Python interpreter
os.spawnl(os.P_DETACH, sys.executable, 'python', 'code.py', 'module name', 'url')
But you need to be cautious: you can't kill the process if you don't know its pid or check where it is running via Task Manager.
Check the docs for more: https://docs.python.org/2/library/os.html#os.spawnl
For your code (for example, code.py):
import httpimport
from sys import argv
name, module_name, URL = argv  # the module name and URL come from the arguments passed above
mod = httpimport.load(module_name , URL)
This question already has answers here:
Why subprocess.Popen doesn't work when args is sequence?
(3 answers)
Closed 6 years ago.
In my terminal, if I run echo $(pwd), I get /home/abr/workspace, but when I try to run the same thing in Python like this:
>>> import subprocess
>>> cmd = ['echo', '$(pwd)']
>>> subprocess.check_output(cmd, shell=True)
I get '\n'. How can I fix this?
Use the os module:
import os
print os.environ.get('PWD', '')
From the documentation on the subprocess module:
If args is a sequence, the first item specifies the command string,
and any additional items will be treated as additional arguments to
the shell itself.
You want:
subprocess.check_output("echo $(pwd)", shell=True)
Try this:
cmd = 'echo $(pwd)'
subprocess.check_output(cmd, shell=True)
In subprocess doc it specified that cmd should be a string when shell=True.
From the documentation:
The shell argument (which defaults to False) specifies whether to use
the shell as the program to execute. If shell is True, it is
recommended to pass args as a string rather than as a sequence.
A better way to achieve this is probably to use the os module from the python standard library, like this:
import os
print os.getcwd()
>> "/home/abr/workspace"
The getcwd() function returns a string representing the current working directory.
The function subprocess.check_output returns the output of the command you are calling.
Example, from the shell:
#echo 2
2
From Python:
>>>subprocess.check_output(['echo', '2'])
>>>'2\n'
The '\n' is included because that is what the command does: it prints the output string and then moves the cursor to a new line.
Now back to your problem: assuming you want the output of pwd, first of all you have to get rid of shell=True. If you pass a sequence together with the shell argument, only the first item is run by the shell and you won't see the string you expect.
subprocess.check_output(['pwd'])
Will return the current directory + '\n'
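In Python 3, check_output returns bytes, so a small sketch to turn that into a clean string (assuming a POSIX system where pwd is available):
import subprocess
cwd = subprocess.check_output(['pwd']).decode().strip()  # decode and drop the trailing newline
print(cwd)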
On a personal note, I have a hard time understanding what you are trying to do, but I hope this helps solve it.
This question already has answers here:
How do I execute a program or call a system command?
(65 answers)
Convert POSIX->WIN path, in Cygwin Python, w/o calling cygpath
(4 answers)
Closed 7 years ago.
In Perl, if I want to execute a shell command such as foo, I'll do this:
#!/usr/bin/perl
$stdout = `foo`
In Python I found this very complex solution:
#!/usr/bin/python
import subprocess
p = subprocess.Popen('foo', shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
stdout = p.stdout.readlines()
retval = p.wait()
Is there any better solution?
Note that I don't want to use call or os.system. I would like to store stdout in a variable.
An easy way is to use the sh package.
Some examples:
import sh
print(sh.ls("/"))
# same thing as above
from sh import ls
print(ls("/"))
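Since the goal is to store stdout in a variable, note that the value returned by sh can be converted to a plain string; a minimal sketch (assuming the sh package is installed and foo is on your PATH):
import sh
stdout = str(sh.foo())  # the command's output as a string
print(stdout)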
Read more of the subprocess docs. It has a lot of simplifying helper functions:
output = subprocess.check_output('foo', shell=True, stderr=subprocess.STDOUT)
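On Python 3.7+ there is also subprocess.run with capture_output, which reads closer to the Perl one-liner; a minimal sketch:
import subprocess
result = subprocess.run('foo', shell=True, capture_output=True, text=True)
stdout = result.stdout  # the command's output as a string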
You can try this:
import os
print os.popen('ipconfig').read()
#'ipconfig' is an example of command
This question already has answers here:
Running shell command and capturing the output
(21 answers)
Closed 2 years ago.
In Python, if I am using wget to download a file via os.system("wget ..."), it shows output on the screen like:
Resolving...
Connecting to ...
HTTP request sent, awaiting response...
100%[====================================================================================================================================================================>] 19,535,176 8.10M/s in 2.3s
and so on.
What can I do to save this output to a file rather than showing it on the screen?
Currently I am running the command as follows:
theurl = "< file location >"
downloadCmd = "wget "+theurl
os.system(downloadCmd)
The os.system function runs the command via a shell, so you can put any stdio redirects there as well. You should also pass the -q (quiet) flag to wget.
cmd = "wget -q " + theurl + " >/dev/null 2>&1"
However, there are better ways of doing this in python, such as the pycurl wrapper for libcurl, or the "stock" urllib2 module.
To answer your direct question, and as others have mentioned, you should strongly consider using the subprocess module. Here's an example:
from subprocess import Popen, PIPE, STDOUT
wget = Popen(['/usr/bin/wget', theurl], stdout=PIPE, stderr=STDOUT)
stdout, nothing = wget.communicate()
with open('wget.log', 'w') as wgetlog:
wgetlog.write(stdout)
But there is no need to call out to the system to download a file; let Python do the heavy lifting for you.
Using urllib,
try:
# python 2.x
from urllib import urlretrieve
except ImportError:
# python 3.x
from urllib.request import urlretrieve
urlretrieve(theurl, local_filename)
Or urllib2,
import urllib2
response = urllib2.urlopen(theurl)
with open(local_filename, 'w') as dl:
dl.write(response.read())
local_filename is the destination path of your choosing. It is sometimes possible to determine this value automatically, but the approach depends on your circumstance.
As others have noted, you can use Python native library modules to do your I/O, or you can modify the command line to redirect the output.
But for full control over the output, the best thing is to use the Python subprocess module instead of os.system(). Using subprocess would let you capture the output and inspect it, or feed arbitrary data into standard input.
When you want a quick-and-dirty way to run something, use os.system(). When you want full control over how you run something, use subprocess.
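For example, a minimal sketch of capturing the output into a variable and inspecting it afterwards (assuming wget is on the PATH):
import subprocess
try:
    output = subprocess.check_output(['wget', theurl], stderr=subprocess.STDOUT)
except subprocess.CalledProcessError as err:
    output = err.output  # wget failed, but its output is still available here
# inspect or save `output` as needed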
The wget process is just writing to STDOUT (and perhaps STDERR if something bad happens) and these are still "wired" to the terminal.
To get it to stop doing this, redirect (or close) said filehandles. Look at the subprocess module which allows configuring said filehandles when starting a process. (os.system just leaves the STDOUT/STDERR of the spawned process alone and thus they are inherited but the subprocess module is more flexible.)
See Working with Python subprocess - Shells, Processes, Streams, Pipes, Redirects and More for lots of nice examples and explanations (it introduces the concepts of STDIN/STDOUT/STDERR and works from there).
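A minimal sketch of what configuring those filehandles looks like in practice (wget.log is just an example name):
import subprocess
with open('wget.log', 'w') as log:
    # send both STDOUT and STDERR of the child process to the log file
    subprocess.call(['wget', theurl], stdout=log, stderr=subprocess.STDOUT)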
There are likely better ways to handle this than using wget -- but I'll leave such to other answers.
Happy coding.