I am writing code in Python in which I need to get internet traffic by software name. I'm required to use the cmd command netstat -nb, a command which requires elevation. I have to keep it simple, something of one line or so, no long batch or PowerShell scripts. It's preferable if I use only the subprocess Python library.
I have two lines of code that each do half of what I need:
subprocess.check_output('powershell Start-Process netstat -ArgumentList "-nb" -Verb "runAs"', shell=True)
The problem with this one is that a new window is opened and all the data I need is lost. Maybe there's a way of not opening another window, or of saving the output from the new window?
subprocess.check_output('powershell Invoke-Command {cmd.exe -ArgumentList "/c netstat -nb"}', shell=True)
With this one I get the output in the same window, but I don't have elevation, so I don't get any results... Maybe there is a way of getting elevation without opening a new window?
Thank you for your help, hope my question was clear enough.
Create a batch file to perform the task with captured output to a temp file:
[donetstat.bat]
netstat -nb > ".\donetstat.tmp"
Then execute that in your program:
[yourprogram.py]
subprocess.check_output('powershell Start-Process cmd -ArgumentList \'/c .\\donetstat.bat\' -Verb runAs', shell=True)
It would probably be a bit more bullet-resistant to get the TEMP environment variable and use it for a fully-qualified tempfile location:
netstat -nb > "%TEMP%.\donetstat.tmp"
Then do the same in your Python script.
Once you've created the tempfile, you should be able to process it in Python.
If this needs to be durable with multiple worker processes, add some code to ensure you have a unique tempfile for each process.
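Putting the pieces together, a minimal sketch (assuming the %TEMP% variant of donetstat.bat above, sitting next to the script; -Wait is what keeps the output from being lost):

import os
import subprocess

# Assumes donetstat.bat contains:  netstat -nb > "%TEMP%\donetstat.tmp"
bat = os.path.abspath('donetstat.bat')

# -Wait blocks until the elevated cmd exits, so the temp file is complete
# before we read it. Caveat: if UAC elevates under a different account,
# that account's %TEMP% may not be the one we read below.
subprocess.check_output(
    "powershell Start-Process cmd -ArgumentList '/c \"{}\"' -Verb runAs -Wait".format(bat),
    shell=True)

with open(os.path.join(os.environ['TEMP'], 'donetstat.tmp')) as f:
    print(f.read())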
I have a script that uses a really simple file-based IPC to communicate with another program. I write a tmp file with the new content and mv it onto the IPC file to keep things atomic (the other program listens for rename events).
But now comes the catch: this works two or three times, and then the exchange gets stuck.
time.sleep(10)
# check lsof => target file not opened
subprocess.run(
    "mv /tmp/tempfile /tmp/target",
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
    universal_newlines=True,
    shell=True,
)
# check lsof => target file STILL open
time.sleep(10)
/tmp/tempfile is prepared anew before every write.
The first run results in:
$ lsof /tmp/target
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
python 1714 <user> 3u REG 0,18 302 10058 /tmp/target
which leaves it open until I terminate the main Python program. Consecutive runs change the content, the inode, and the file descriptor as expected, but the file is still open, which I would not expect from a mv.
The file finally gets closed when the Python program containing the lines above exits.
EDIT:
Found the bug: mishandling of tempfile.mkstemp(). See: https://docs.python.org/3/library/tempfile.html#tempfile.mkstemp
I created the tempfile like so:
_fd, temp_file_path = tempfile.mkstemp()
where I discarded the file descriptor _fd, which is open by default. I never closed it, so it stayed open even after the move. This left the target open, and since I was only running lsof on the target, I did not see that the tempfile was already open. This would be the corrected version:
import os
import tempfile

fd, temp_file_path = tempfile.mkstemp()
with os.fdopen(fd, 'w') as f:   # mkstemp returns a raw OS-level descriptor
    f.write(content)
# ... mv/rename via shell execution/shutil/pathlib
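With that fixed, the whole write-and-swap can stay in Python; a minimal sketch, assuming the consumer watches /tmp/target for rename events:

import os
import tempfile

def publish(content, target='/tmp/target'):
    # Create the temp file on the same filesystem as the target,
    # otherwise the rename below cannot be atomic.
    fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(target))
    with os.fdopen(fd, 'w') as f:
        f.write(content)
    os.replace(tmp_path, target)   # atomic rename(2), the pure-Python mv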
Thank you all very much for your help and your suggestions!
I wasn't able to reproduce this behavior. I created a file /tmp/tempfile and ran a Python script with the subprocess.run call you give, followed by a long sleep. /tmp/target was not in use, nor did I see any unexpected open files in lsof -p <pid>.
(edit) I'm not surprised at this, because there's no way that your subprocess command is opening the file: mv does not open its arguments (you can check this with ltrace), and subprocess.run does not parse its argument or do anything with it besides pass it along to be exec'ed.
However, when I added some lines to open a file and write to it and then move that file, I see the same behavior you describe. This is the code:
import subprocess
import time

out = open('/tmp/tempfile', 'w')   # deliberately never closed
out.write('hello')
subprocess.run(
    "mv /tmp/tempfile /tmp/target",
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
    universal_newlines=True,
    shell=True,
)
time.sleep(5000)
In this case, the file is still open because it was never closed, and even though it's been renamed the original file handle still exists. My bet would be that you have something similar in your code that's creating this file and leaving open a handle to it.
Is there any reason why you don't use shutil.move? Otherwise it may be necessary to wait for the mv command to finish moving, read its output, and then kill it, running something like
p = subprocess.Popen(...)   # run() would block until mv exits anyway
# wait for the move to finish / read from p.stdout
p.terminate()
Of course terminate would be a bit harsh.
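For reference, the shutil route is a one-liner; note that shutil.move falls back to copy-and-delete when source and target are on different filesystems, in which case it is not atomic:

import shutil

# Same effect as the shell mv while staying inside the Python process
shutil.move('/tmp/tempfile', '/tmp/target')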
Edit: depending on your use case, rsync (which is not part of Python) may be an elegant solution to keep your data synced over the network without writing a single line of code.
You say it is still open by mv, but your lsof result shows it open by python. As it is a subprocess, check whether that pid is the same as the main Python process's; maybe it is another Python process.
Normally you can automate answers to an interactive prompt by piping stdin:
import subprocess as sp

cmd = 'rpmbuild --sign --buildroot {}/BUILDROOT -bb {}'.format(TMPDIR, specfile)
p = sp.Popen(cmd, stdout=sp.PIPE, stderr=sp.PIPE, stdin=sp.PIPE,
             universal_newlines=True, shell=True)
for out in p.communicate(input='my gpg passphrase\n'):
    print(out)
For whatever reason, this is not working for me. I've tried writing to p.stdin before calling p.communicate(), I've tried flushing the buffer, I've tried using bytes without universal_newlines=True, I've hard-coded things, etc. In all scenarios, the command executes and hangs at:
Enter pass phrase:
My first hunch was that stdin was not the correct file descriptor and that rpmbuild was internally calling a gpg command, so maybe my input isn't getting piped through. But when I do p.stdin.close() I get an OSError about subprocess trying to write to the closed descriptor.
What is the rpmbuild command doing to stdin that prevents me from writing to it?
Is there a hack I can do? I tried echo "my passphrase" | rpmbuild .... as the command but that doesn't work.
I know I can do something with gpg and sign packages without a passphrase, but I kind of want to avoid that.
EDIT:
After some more reading, I realize this issue is common to commands that require password input, typically via some form of getpass.
I see a solution would be to use a library like pexpect, but I want something from the standard library. I am going to keep looking, but I think maybe I can try writing to something like /dev/tty.
rpm uses getpass(3) which reopens /dev/tty.
There are two approaches to automating this:
1) create a pseudotty
2) (linux) find the reopened file descriptor in /proc
If scripting, expect(1) has (or had) a short example with pseudottys that can be used.
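In Python, the standard-library pty module covers the pseudotty approach; a rough sketch (the prompt string is an assumption, and argv would be your rpmbuild command):

import os
import pty

def run_with_passphrase(argv, passphrase):
    # Fork with a fresh pseudo-tty as the controlling terminal, so the
    # /dev/tty that getpass(3) reopens is our pty slave.
    pid, master_fd = pty.fork()
    if pid == 0:
        os.execvp(argv[0], argv)   # child: run the command on the pty
    buf = b''
    while True:
        try:
            chunk = os.read(master_fd, 1024)
        except OSError:            # EIO once the child closes the tty
            break
        if not chunk:
            break
        buf += chunk
        if b'Enter pass phrase:' in buf:
            os.write(master_fd, passphrase.encode() + b'\n')
            buf = b''              # don't match the same prompt twice
    os.waitpid(pid, 0)
    return buf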
In Python, I use subprocess.Popen() to launch several processes. I want to debug those processes, but their windows disappear quickly and I get no chance to see the error messages. I would like to know whether there is any way to stop the windows from disappearing, or to write the contents of the windows to a file so that I can read the error messages later.
Thanks in advance!
You can use the stdout and stderr arguments to write the output to a file.
Example:
with open("log.txt", 'a') as log:
proc = subprocess.Popen(['cmd', 'args'], stdout=log, stderr=log)
In Windows, the common way of keeping a cmd window open after a console process ends is to use cmd /k.
Example: in a cmd window, typing start cmd /k echo foo
- opens a new window (per start)
- displays the output foo
- leaves the command window open
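The same trick works from Python (a sketch; 'mycmd args' stands in for whatever you are debugging):

import subprocess

# 'start' opens a new console and '/k' keeps it open after mycmd exits,
# so any error message stays visible. Both are cmd built-ins, hence
# shell=True.
subprocess.Popen('start cmd /k mycmd args', shell=True)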
I am creating a movie controller (Pause/Stop...) using Python, where I ssh into a remote computer and issue commands into a named pipe, like so:
echo -n q > ~/pipes/pipename
I know this works if I ssh via the terminal and do it myself, so there is no problem with the setup of the named-pipe redirection. My problem is that setting up an ssh session takes time (1-3 seconds), whereas I want the pause command to be instantaneous. Therefore, I thought of setting up a persistent pipe like so:
controller = subprocess.Popen(
    "ssh -T -x <hostname>", shell=True, close_fds=True,
    stdin=subprocess.PIPE, stderr=subprocess.PIPE, stdout=subprocess.PIPE)
Then issue commands to it like so
controller.stdin.write('echo -n q > ~/pipes/pipename')
I think the problem is that ssh is interactive, so it expects a carriage return. This is where my problems begin, as nearly everyone who has asked this question has been told to use an existing module:
Vivek's answer
Chakib's Answer
shx2's Answer
Crafty Thumber's Answer
Artyom's Answer
Jon W's Answer
Which is fine, but I am so close. I just need to know how to include the carriage return; otherwise I have to go learn all these other modules, which, mind you, is not trivial (for example, right now I can't figure out how pexpect uses either my /etc/hosts file or my ssh keyless authentication).
To add a newline to the command, you will need to add a newline to the string:
controller.stdin.write('\n')
You may also need to flush the pipe:
controller.stdin.flush()
And of course the controller has to be ready to receive new data, or you could block forever trying to send it data. (And if the reason it's not ready is that it's blocking forever waiting for you to read from its stdout, which is possible on some platforms, you're deadlocked unrecoverably.)
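Put together, it looks like this (a sketch reusing the controller from the question; note that on Python 3 you would also need universal_newlines=True in the Popen call, or else write bytes):

controller.stdin.write('echo -n q > ~/pipes/pipename\n')  # newline ends the remote command
controller.stdin.flush()                                  # push it through the pipe now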
I'm not sure why it's not working the way you have it set up, but I'll take a stab at this. I think what I would do is change the Popen call to:
controller = subprocess.Popen("ssh -T -x <hostname> \"sh -c 'cat > ~/pipes/pipename'\"", ...
And then simply controller.stdin.write('q').
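A minimal sketch of that variant (<hostname> is a placeholder, as in the question; stdin is left in bytes mode, so we write bytes):

import subprocess

# One persistent ssh session whose stdin is forwarded straight into
# the remote named pipe by cat.
controller = subprocess.Popen(
    ['ssh', '-T', '-x', '<hostname>', "sh -c 'cat > ~/pipes/pipename'"],
    stdin=subprocess.PIPE)

controller.stdin.write(b'q')   # no newline needed; cat forwards raw bytes
controller.stdin.flush()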
I am trying to use the subprocess module in Python to fetch the process id of Firefox:
cmd = "firefox &"
fire = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, preexec_fn=os.setsid)
fire_task_procs = find_task(fire.pid)
print "fire_task_procs",fire_task_procs
I think I am getting the pid of the command-line argument that I am executing. Am I doing something wrong?
I confirmed that it is not the same using ps aux | grep firefox.
If you use shell=True, the pid you get is that of the started shell, not that of the process you want, especially as you use & to send the process into the background.
You should use the long (list) form of supplying the parameters, without the &, which makes little sense anyway if you combine it with output redirection.
Don't use the shell; instead, just use
subprocess.Popen(['firefox'], stdout=subprocess.PIPE, preexec_fn=os.setsid)
However, if Firefox is already running, this will not work either, since in that case firefox uses some IPC to tell the existing process to open a new window and then terminates.
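As a complete snippet, the fix from above (with the already-running caveat still applying):

import os
import subprocess

fire = subprocess.Popen(['firefox'], stdout=subprocess.PIPE,
                        preexec_fn=os.setsid)
print(fire.pid)   # the pid of firefox itself, not of an intermediate shell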