How to set a proxy value and run a Linux command in Python

In Linux, to set a proxy value you do the following:
proxy=http://$user:$password@proxy.server.com:${NUM}
http_proxy="$proxy" https_proxy="$proxy" ${COMMAND}
For security reasons, if you run this in a subshell, you are not explicitly leaving your password out in the open or in the logs. The problem with this approach is that I have to set the username and password every time I want to run a command.
So I decided to write a Python script for it. I have a working version in C; I just wanted to learn more Python. I have found nice ways to encode and decode my password, and after most of the hoopla, I pass it to this function to test the proxy connection:
def test_connection(creds, proxy_url):
    import pycurl
    import cStringIO

    buf = cStringIO.StringIO()
    test_url = "http://www.google.com"
    c = pycurl.Curl()
    c.setopt(c.URL, test_url)
    c.setopt(c.WRITEFUNCTION, buf.write)
    c.setopt(c.PROXY, proxy_url)
    c.setopt(c.PROXYPORT, 8080)
    c.setopt(c.PROXYTYPE, c.PROXYTYPE_HTTP)
    c.setopt(c.PROXYAUTH, c.HTTPAUTH_NTLM)
    c.setopt(c.PROXYUSERPWD, creds)
    c.perform()
    buf.close()
    return c.getinfo(c.RESPONSE_CODE)
Where I'm having problems is using subprocess. I do understand that subprocess does not allow you to use export, since it is not really a command: Subprocess module errors with 'export' in python on linux?
This is my implementation:
finalCommand = ["/bin/sh", "-c"]
finalCommand.append(http_proxy)
finalCommand.append(https_proxy)
for x in bashCommand:
    finalCommand.append(x)

print subprocess.call(finalCommand)

process = subprocess.Popen(finalCommand, stdout=subprocess.PIPE)
out, err = process.communicate()
print "Output ... \n %s" % (out)
if err == None:
    print "No errors"
else:
    print "Errors ... \n %s" % (err)
Unfortunately, after several tests, my program always returns no output and no errors.
I have printed the output of the curl call, so I know the encoding, decoding, and proxy aren't the issue. Any suggestions?
POST-ANSWER EDIT:
Interaction between Python script and linux shell
env did solve my problem, but I also had to refer to the thread above. Some of the commands I ran were interactive ones, and as that thread explains, PIPE doesn't work properly with interactive programs.

It's hard to be sure without knowing exactly what commands you're trying to run, but I'm pretty sure what you want to do here is just set up the environment for your subprocess, using the env argument to Popen:
env = dict(os.environ)
env['http_proxy'] = proxy
env['https_proxy'] = proxy
for command in commands:
    out = subprocess.check_output(command, env=env)
If you want to modify your own environment, rather than just the subprocesses' environments, just modify os.environ in place. (See the documentation for platform-specific issues, and how to deal with them.)
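For example, a minimal sketch of that in-place variant (assuming proxy is already built as in the question, and using curl only as a stand-in command):
import os
import subprocess

os.environ['http_proxy'] = proxy
os.environ['https_proxy'] = proxy
# any child process started after this point inherits the proxy settings
subprocess.check_call(['curl', '-sI', 'http://www.google.com'])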
Meanwhile, the reason you're getting no errors is simple:
process = subprocess.Popen(finalCommand,stdout=subprocess.PIPE)
out, err = process.communicate()
If you don't pass stderr=subprocess.PIPE to the Popen constructor, it doesn't capture stderr, so err ends up as None.
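For instance, a small variation on the snippet above that captures both streams (names taken from the question):
process = subprocess.Popen(finalCommand,
                           stdout=subprocess.PIPE,
                           stderr=subprocess.PIPE)
out, err = process.communicate()
# err is now a (possibly empty) string rather than None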
As a side note, you almost never want to check == None. Often, just if not err: is sufficient. When it's not, if err is not None: is almost always what you want. The set of cases where == None is actually required is vanishingly small. See the Programming Recommendations in PEP 8 for (slightly) more details.
And one more side note: you can just write finalCommand.extend(bashCommand). The list.extend method does the same thing as looping over an iterable and appending each element one by one, except that it's more readable, harder to get wrong, more concise, and faster.
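A quick sketch of the equivalent list-building, using the names from the question:
finalCommand = ["/bin/sh", "-c", http_proxy, https_proxy]
finalCommand.extend(bashCommand)  # same result as the append loop above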

Related

Cannot read program output from pipe, but it does move to file when redirected, called by subprocess (Python)

I am trying to use subprocess to call swipl.exe (the SWI-Prolog interpreter) and simulate terminal interaction via pipes.
USE_UNIVERSAL_ENDLINES = True
proc = subprocess.Popen([SWI_PROLOG_PATH, PROLOG_FILE_PATH], bufsize=0,
                        stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                        stderr=subprocess.PIPE,
                        universal_newlines=USE_UNIVERSAL_ENDLINES)

def output_reader(proc, sms):
    while True:
        c = proc.stdout.read(1)
        print('got data in pipe')
        sms.add_data(c)

def error_reader(proc, sms):
    while True:
        c = proc.stderr.read(1)
        # sms.add_data(c)

def input_sender(proc):
    while True:
        input_data = input()
        proc.stdin.write((input_data + '\n') if USE_UNIVERSAL_ENDLINES
                         else (input_data + '\n').encode(sys.stdout.encoding))
        proc.stdin.flush()
        print('send: ' + input_data)

to = threading.Thread(target=output_reader, args=(proc, global_sms))
to.start()
te = threading.Thread(target=error_reader, args=(proc, global_sms))
te.start()
ti = threading.Thread(target=input_sender, args=(proc,))
ti.start()
This setup works with a simple echo program that reads input and prints it to output.
Notice also that I set bufsize to 0, which should mean no buffering.
As a response I get this message via the piped stderr:
recieved : Welcome to SWI-Prolog (threaded, 64 bits, version 8.2.1)
SWI-Prolog comes with ABSOLUTELY NO WARRANTY. This is free software.
Please run ?- license. for legal details.
For online help and background, visit https://www.swi-prolog.org
For built-in help, use ?- help(Topic). or ?- apropos(Word).
But not a single character comes through the piped stdout, and I should see at least '1 ?-'.
When I use the terminal to redirect output:
c:\Program Files\swipl\bin>swipl c:/agh/s9/SE/project/pp.pl > c:/agh/s9/SE/project/prolog_out.txt
and in the file I get:
1 ?-
Meaning: a) SWI-Prolog does send something down its stdout, and b) for reasons unknown to me, I do not get anything in the pipe, even though I try to read a single character.
Question:
How do I read what SWI-Prolog writes to stdout? I am aware of the existence of pyswip. I do not wish to use it, because redoing the Prolog file would probably take more time than fixing this.
So, if anyone stumbles on a similar problem and is not willing to use any other API: SWI-Prolog seems to go into output-buffering mode if both stdin and stdout are redirected (only when both, which was quite hard to find). Adding the following to a procedure (or adding it as the first goal in the procedure you want to call):
set_stream(current_output, tty(true))
will force stdout to work in terminal mode and not buffer, allowing message communication via pipes.

Ensuring order of commands in Python

I have a .jar file that I'm running with arguments via Popen. This server takes about 4 seconds to start up, then dumps out "Server Started" on the terminal, and then runs until the user quits the terminal. However, the print and webbrowser.open calls execute immediately because of Popen, and if I use call, they never run at all. Is there a way to ensure that the print and webbrowser.open don't run until after the server has started, other than using wait? Maybe grep for "Server Started"?
from subprocess import Popen
import glob
import sys
import webbrowser
reasoner = glob.glob("reasoner*.jar")
reasoner = reasoner.pop()
port = str(input("Enter connection port: "))
space = ""
portArg = ("-p", port)
portArg = space.join(portArg)
print "Navigate to the Reasoner at http://locahost:" + port
reasoner_process = Popen(["java", "-jar", reasoner, "-i", "0.0.0.0", portArg, "--dbconnect", "jdbc:h2:tcp://localhost//tmp/UXDemo;user=sa;password=admin"])
# I want the following to execute after the .jar process above
print "Opening http://locahost:" + port + "..."
webbrowser.open("http://locahost:" + port)
What you're looking to do is a very simple, special version of interacting with a CLI app. So, you have two options.
First, you can use a library like pexpect that's designed to handle driving almost any CLI application. It may be overkill, and there is a bit of a learning curve, but once you get the basics down this will make your problem trivial: you launch the JAR, block expecting "Server Started", then close.
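A rough sketch of that pexpect approach (the jar name, port, and exact startup line are assumptions based on the question):
import webbrowser
import pexpect

child = pexpect.spawn("java", ["-jar", "reasoner.jar"], timeout=30)
child.expect("Server Started")  # blocks until the server announces itself
webbrowser.open("http://localhost:8080")
# ... interact with the running server here ...
child.close(force=True)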
Alternatively, you can do this manually with the Popen pipes. In general this has a lot of problems, but when you know there's going to be exactly one output that fits easily into 128 bytes, and you don't want to do anything but block on that output and then close the pipe, none of those problems comes up. So:
reasoner_process = Popen(args, stdout=PIPE)
line = reasoner_process.stdout.readline()
if line.strip() != 'Server Started':
    pass  # error handling goes here
# Any code that you want to run while the server is running goes here
reasoner_process.stdout.close()
reasoner_process.kill()
reasoner_process.wait()
But first make sure you actually have to kill it; often closing the pipe is sufficient, in which case you can and should leave out the kill(). Then you can also check the exit code and raise if it's not 0.
Also, you probably want a with contextlib.closing(…) or whatever's appropriate, or just a try/finally to make sure you can raise an exception for error handling and not leak the child. (Python 3.2+ makes this a lot simpler, because it guarantees that both the pipes and the Popen itself are usable as context managers.)
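A sketch of the try/finally shape (still assuming the single "Server Started" line; adapt the error handling to taste):
reasoner_process = Popen(args, stdout=PIPE)
try:
    line = reasoner_process.stdout.readline()
    if line.strip() != 'Server Started':
        raise RuntimeError("unexpected startup output: %r" % line)
    # code that runs while the server is up goes here
finally:
    reasoner_process.stdout.close()
    reasoner_process.kill()
    reasoner_process.wait()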
Finally, I was assuming that "runs until the user quits the terminal" means you want to wait for it to start, then leave it running while you do other stuff, then kill it. If your workflow is different, you obviously need to change the order in which you do things.

Passing output from subprocess.Popen to a if elif statement

I'm creating a Python script to check for suPHP. I'm trying to create an if/else statement to determine whether suPHP is on the server, using output from subprocess.Popen.
I tested the output of the variable with print before I created this post, and it passes the correct output to the variable suphp. This is what I have so far:
# check for suPHP
suphp = subprocess.Popen("/usr/local/cpanel/bin/rebuild_phpconf --current", shell=True, stdout=subprocess.PIPE,).communicate()[0]
if suphp = "/bin/sh: /usr/local/cpanel/bin/rebuild_phpconf: No such file or directory"
    print "suPHP is not installed on the server"
elif
    print suphp
Please note that I am new to coding and Python, and decided to try to use Python to admin some servers.
You don't appear to be doing anything useful with the shell=True, so you can probably safely skip it altogether:
try:
    suphp = subprocess.Popen(["/usr/local/cpanel/bin/rebuild_phpconf", "--current"],
                             stdout=subprocess.PIPE,).communicate()[0]
except OSError:
    print "Couldn't start subprocess, suPHP is not installed on the server"
Note that you'll have to split the command into each of its separate arguments, since you won't have a shell to do it for you. You should always avoid using the shell for subprocesses unless you absolutely require it (say, because you have to set your environment by sourcing a script).
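If you start from a single command string, shlex.split does the argument splitting a shell would otherwise do for you (a small sketch using the path from the question):
import shlex

args = shlex.split("/usr/local/cpanel/bin/rebuild_phpconf --current")
# args == ['/usr/local/cpanel/bin/rebuild_phpconf', '--current']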
Off the top of my head: the comparison operator is ==, not =, and the output is almost always followed by a newline character.
So try something like this:
if "No such file or directory" in suphp:
...
In Unix, you sometimes need to consider that subprocesses can output text to two different output streams. When there are no problems, like with echo hello, the text gets sent to the "standard output" stream.
On the other hand, it's considered good manners for a process to send all of its error messages to the "standard error" stream; for example stat /this-file-does-not-exist. You can verify this by sending all standard output to /dev/null.
When you run this command, you'll get no output on your console:
stat . > /dev/null
When you run this, an error message will appear on your console (because the text is from the standard error stream):
sh /this-program-does-not-exist > /dev/null
Getting back to your question: the "standard error" stream is sometimes called "stderr". The text from this stream can be captured with Python's subprocess library via the Popen.stderr property.
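A minimal sketch of capturing both streams separately with subprocess (the stat command is just the example from above):
import subprocess

proc = subprocess.Popen(["stat", "/this-file-does-not-exist"],
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = proc.communicate()
print "stdout: %r" % out
print "stderr: %r" % err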

Best way to pipe the output of a local() to the stdin of a remote run() command in Fabric?

Is there a simple way to pipe output from local commands to remote ones (and vice versa)?
I've always just piped to a file, moved the file over, and then read it...but it seems like there could be an easier way.
For simpler situations, just capturing the output and using string interpolation works:
ip = local('hostname -i')
run('Script was run from ip: %s' % ip)
But when the output either needs escaping to be safe on the command line and/or needs to come from stdin it is a bit trickier.
If the output is bash-safe, then something like run('echo "%s" | mycmd' % ip) would do what I'm looking for (which I guess implies that an equivalent question would be "is there a simple way to bash-escape strings?"), but it seems like there should be a "right way" to provide a remote stdin.
Edit:
To clarify: with long-ish inputs there are a number of potential problems with simple string interpolation: classic shell problems (e.g. the output could contain "; rm -rf /), but also (and more realistically, in my case) the output can contain quotes (both single and double).
I think just doing run("echo '%s' | cmd" % output.replace("'", "'\\''")) should work, but there may be edge cases that misses.
As I mentioned above, this seems like the type of thing that fabric could handle more elegantly for me by directly sending a string to the run()'s stdin (though perhaps I've just been spoiled by it handling everything else so elegantly :)
You could send the remote stdin with fexpect, my Fabric extension. This also sends a file, but hides it behind an API. You would still have to do the escaping, though.
I've done this once in order to send a (binary) stream to a remote server.
It's a bit hackish, as it digs deep into Fabric's and paramiko's channels, and there may be untested edge cases, but it mostly seems to do the job.
def remote_pipe(local_command, remote_command, buf_size=1024*1024):
    '''executes a local command and a remote command (with fabric), and
    sends the local's stdout to the remote's stdin'''
    local_p = subprocess.Popen(local_command, shell=True, stdout=subprocess.PIPE)
    channel = default_channel()  # fabric function
    channel.set_combine_stderr(True)
    channel.settimeout(2)
    channel.exec_command(remote_command)
    try:
        read_bytes = local_p.stdout.read(buf_size)
        while read_bytes:
            channel.sendall(read_bytes)
            read_bytes = local_p.stdout.read(buf_size)
    except socket.error:
        # failed to send data, let's see the return codes and received data...
        local_p.kill()
    local_ret = local_p.wait()
    received = channel.recv(buf_size)
    channel.shutdown_write()
    channel.shutdown_read()
    remote_ret = channel.recv_exit_status()
    if local_ret != 0 or remote_ret != 0:
        raise Exception("remote_pipe failed. Local retcode: {0} Remote retcode: {1} output: {2}".format(local_ret, remote_ret, received))
In case anyone feels like contributing modifications, this is part of btrfs-send-snapshot.
This is a slightly improved version of @goncalopp's answer:
def remote_pipe(local_command, remote_command, buffer_size=1024*1024, channel_timeout=60):
    '''executes a local command and a remote command (with fabric), and
    sends the local's stdout to the remote's stdin'''
    local_process = Popen(local_command, shell=True, stdout=PIPE)
    channel = default_channel()  # Fabric function
    channel.set_combine_stderr(True)
    channel.settimeout(channel_timeout)
    channel.exec_command(remote_command)
    try:
        bytes_to_send = local_process.stdout.read(buffer_size)
        while bytes_to_send:
            channel.sendall(bytes_to_send)
            bytes_to_send = local_process.stdout.read(buffer_size)
    except socket.error:
        # Failed to send data, let's see the return codes and received data...
        local_process.kill()
    local_returncode = local_process.wait()
    channel.shutdown_write()
    remote_output = ""
    try:
        bytes_received = channel.recv(buffer_size)
        while bytes_received:
            remote_output += bytes_received
            bytes_received = channel.recv(buffer_size)
    except socket.error:
        pass
    channel.shutdown_read()
    remote_returncode = channel.recv_exit_status()
    print(remote_output)
    if local_returncode != 0 or remote_returncode != 0:
        raise Exception("remote_pipe() failed, local return code: {0}, remote return code: {1}".format(local_returncode, remote_returncode, remote_output))
Apart from readability, the improvement is that it does not abort with a socket timeout in case the remote command outputs less than buffer_size bytes, and that it prints the complete output of the remote command.
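As a usage illustration, here is a hypothetical Fabric 1.x task built on remote_pipe (the paths and task name are made up for the example):
from fabric.api import task

@task
def push_backup():
    # stream a local tarball straight into tar on the remote host
    remote_pipe("tar czf - /var/www", "tar xzf - -C /srv/restore")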

How do I close a Python 2.5.2 Popen subprocess once I have the data I need?

I am running the following version of Python:
$ /usr/bin/env python --version
Python 2.5.2
I am running the following Python code to write data from a child subprocess to standard output, and read that into a Python variable called metadata:
# Extract metadata (snippet from extractMetadata.py)
inFileAsGzip = "%s.gz" % inFile
if os.path.exists(inFileAsGzip):
    os.remove(inFileAsGzip)
os.symlink(inFile, inFileAsGzip)
extractMetadataCommand = "bgzip -c -d -b 0 -s %s %s" % (metadataRequiredFileSize, inFileAsGzip)
metadataPipes = subprocess.Popen(extractMetadataCommand, stdin=None, stdout=subprocess.PIPE, shell=True, close_fds=True)
metadata = metadataPipes.communicate()[0]
metadataPipes.stdout.close()
os.remove(inFileAsGzip)
print metadata
The use case is as follows, to pull the first ten lines of standard output from the aforementioned code snippet:
$ extractMetadata.py | head
The error will appear if I pipe into head, awk, grep, etc.
The script ends with the following error:
close failed: [Errno 32] Broken pipe
I would have thought closing the pipes would be sufficient, but obviously that's not the case.
Hmmm. I've seen some "Broken pipe" strangeness with subprocess + gzip before. I never did figure out exactly why it was happening but by changing my implementation approach, I was able to avoid the problem. It looks like you're just trying to use a backend gzip process to decompress a file (probably because Python's builtin module is horrendously slow... no idea why but it definitely is).
Rather than using communicate(), you can instead treat the process as a fully asynchronous backend and just read its output as it arrives. When the process dies, the subprocess module will take care of cleaning things up for you. The following snippet should provide the same basic functionality without any broken-pipe issues.
import subprocess

gz_proc = subprocess.Popen(['gzip', '-c', '-d', 'test.gz'], stdout=subprocess.PIPE)
l = list()
while True:
    data = gz_proc.stdout.read(4096)
    if not data:
        break
    l.append(data)
file_data = ''.join(l)
I think this exception has nothing to do with the subprocess call nor its file descriptors (after calling communicate the popen object is closed). This seems to be the classic problem of closing sys.stdout in a pipe:
http://bugs.python.org/issue1596
Despite being a 3-year-old bug, it has not been solved. Since sys.stdout.write(...) does not seem to help either, you may resort to a lower-level call; try this out:
os.write(sys.stdout.fileno(), metadata)
There's not enough information to answer this conclusively, but I can make some educated guesses.
First, os.remove should definitely not be failing with EPIPE. It doesn't look like it is, either; the error is close failed: [Errno 32] Broken pipe, not remove failed. It looks like close is failing, not remove.
It's possible for closing a pipe's stdout to give this error. If data is buffered, Python will flush the data before closing the file. If the underlying process is gone, doing this will raise IOError/EPIPE. However, note that this isn't a fatal error: even when this happens, the file is still closed. The following code reproduces this about 50% of the time, and demonstrates that the file is closed after the exception. (Watch out; I think the behavior of bufsize has changed across versions.)
import os, subprocess

metadataPipes = subprocess.Popen("echo test", stdin=subprocess.PIPE,
                                 stdout=subprocess.PIPE, shell=True, close_fds=True, bufsize=4096)
metadataPipes.stdin.write("blah"*1000)
print metadataPipes.stdin
try:
    metadataPipes.stdin.close()
except IOError, e:
    print "stdin after failure: %s" % metadataPipes.stdin
This is racy; it only happens part of the time. That may explain why it looked like removing or adding the os.remove call affects the error.
That said, I can't see how this would happen with the code you've provided, since you don't write to stdin. It's the closest I can get without a usable repro, though, and maybe it'll point you in the right direction.
As a side note, you shouldn't check os.path.exists before deleting a file that may not exist; it'll cause race conditions if another process deletes the file at the same time. Instead, do this:
try:
    os.remove(inFileAsGzip)
except OSError, e:
    if e.errno != errno.ENOENT:
        raise
... which I usually wrap in a function like rm_f.
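Something along these lines (rm_f is the name mentioned above; the body is just a sketch of that idea):
import errno, os

def rm_f(path):
    try:
        os.remove(path)
    except OSError, e:
        if e.errno != errno.ENOENT:
            raise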
Finally, if you explicitly want to kill a subprocess, there's metadataPipes.kill--just closing its pipes won't do that--but that doesn't help explain the error. Also, again, if you're just reading gzip files you're much better off with the gzip module than a subprocess. http://docs.python.org/library/gzip.html
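For completeness, a minimal sketch of reading the file with the stdlib gzip module instead of a subprocess (the file name is just an example):
import gzip

f = gzip.open('test.gz', 'rb')
try:
    file_data = f.read()
finally:
    f.close()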
Getting the first 10 lines from a process output might work better this way:
ph = os.popen(cmdline, 'r')
lines = []
for s in ph:
    lines.append(s.rstrip())
    if len(lines) == 10:
        break
print '\n'.join(lines)
ph.close()
