Erlang: port to Python instance not responding - python

I am trying to communicate to an external python process through an Erlang port. First, a port is opened, then a message is sent to the external process via stdin. I am expecting a corresponding reply on the process's stdout.
My attempt looks like this:
% open a port
Port = open_port( {spawn, "python -u -"},
                  [exit_status, stderr_to_stdout, {line, 1000000}] ).
% send a command to the port
true = port_command( Port, "print( \"Hello world.\" )\n" ).
% gather response
% PROBLEM: no matter how long I wait flushing will return nothing
flush().
% close port
true = port_close( Port ).
% still nothing
flush().
I realize that someone else on Stack Overflow has tried to do something similar, but the proposed solution apparently doesn't work for me.
Also, a related post on Erlang Central starts a Python script through an Erlang port, but there it is not the Python shell itself that is invoked.
I have taken notice of ErlPort, but I have a whole script to be executed in Python. If possible, I wouldn't want to break the script up into single Python calls.
Funnily enough, doing the same with bash is no problem:
Port = open_port( {spawn, "bash"},
                  [exit_status, stderr_to_stdout, {line, 1000000}] ).
true = port_command( Port, "echo \"Hello world.\"\n" ).
So the above example gives me a "Hello world." on flushing:
3> flush().
Shell got {#Port<0.544>,{data,{eol,"Hello world."}}}
ok
Just what I wanted to see.
Ubuntu 15.04 64 bit
Erlang 18.1
Python 2.7.9
Edit:
I have finally decided to write the script (with a shebang) to a file on disk and execute that file, instead of piping the script into the language interpreter, for some languages (like Python).
I suspect the problem has to do with the way some interpreters buffer IO, which I can't work around from the Erlang side, making this extra round trip to disk necessary.

As you've discovered, ports don't do what you'd like for this problem, which is why alternatives like ErlPort exist. An old workaround for this problem is to use netcat to pipe commands into python so that a proper EOF occurs. Here's an example session:
1> PortOpts = [exit_status, stderr_to_stdout, {line,1000000}].
[exit_status,stderr_to_stdout,{line,1000000},use_stdio]
2> Port = open_port({spawn, "nc -l 51234 | python"}, PortOpts).
#Port<0.564>
3> {ok, S} = gen_tcp:connect("localhost", 51234, []).
{ok,#Port<0.565>}
4> gen_tcp:send(S, "print 'hello'\nprint 'hello again'\n").
ok
5> gen_tcp:send(S, "print 'hello, one more time'\n").
ok
6> gen_tcp:close(S).
ok
7> flush().
Shell got {#Port<0.564>,{data,{eol,"hello"}}}
Shell got {#Port<0.564>,{data,{eol,"hello again"}}}
Shell got {#Port<0.564>,{data,{eol,"hello, one more time"}}}
Shell got {#Port<0.564>,{exit_status,0}}
ok
This approach opens a port running netcat as a listener on port 51234 — you can choose whatever port you wish, of course, as long as it's not already in use — with its output piped into python. We then connect to netcat over the local TCP loopback and send python command strings into it, which it then forwards through its pipe to python. Closing the socket causes netcat to exit, which results in an EOF on python's stdin, which in turn causes it to execute the commands we sent it. Flushing the Erlang shell message queue shows we got the results we expected from python via the Erlang port.
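The EOF-triggers-execution behavior is easy to reproduce without Erlang or netcat: pipe a script into `python -` from Python itself and close stdin. Nothing comes back until the interpreter sees EOF, at which point it runs the whole script. A minimal sketch:

```python
import subprocess
import sys

# Pipe a script into "python -" (read the program from stdin), mirroring
# what the Erlang port tries to do. run() closes stdin after writing the
# input, delivering the EOF that the interpreter waits for before executing.
script = 'print("Hello world.")\nprint("Hello again.")\n'
result = subprocess.run(
    [sys.executable, "-"],
    input=script,
    capture_output=True,
    text=True,
)
print(result.stdout)
```

This is exactly why the Erlang port stays silent: `port_command/2` writes the line but never closes the pipe, so `python -` keeps waiting for more script.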

Related

Why are these python print lines indented? Context: port forwarding with ssh using python Popen

I have this piece of code that is supposed to use subprocess.Popen to forward some ports in the background. It gives unexpectedly indented print statements -- any idea what went wrong here?
Also if you have a better way of port forwarding (in the background) with python, I'm all ears! I've just been opening another terminal and running the SSH command, but surely there's a way to do that programmatically, right?
The code:
import subprocess

print("connecting")
proc = subprocess.Popen(
    ["ssh", f"-L10005:[IP]:10121",
     f"[USERNAME]@[ANOTHER IP]"],
    stdout=subprocess.PIPE
)
for _ in range(100):
    realtime_output = str(proc.stdout.readline(), "utf-8")
    if "[IP]" in realtime_output:
        print("connected")
        break

# ... other code that uses the forwarded ports

print("terminating")
proc.terminate()
Expected behavior (normal print lines):
$ python test.py
connecting
connected
terminating
Actual behavior (wacky print lines):
$ python test.py
connecting
          connected
                   terminating
                              $ [next prompt is here for some reason?]
This is likely because ssh is opening up a full shell on the remote machine (if you type in some commands, they'll probably be run remotely!). You should disable this by passing -N so it doesn't run anything. If you don't ever need to type anything into ssh (i.e. entering passwords or confirming host keys), you can also pass -n so it doesn't read from stdin at all. With that said, it looks like you also can do this entirely within Python with the Fabric library, specifically Connection.forward_local().
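A minimal sketch of the suggested fix. The helper function is hypothetical, and the host and ports are placeholders standing in for the redacted values in the question:

```python
import subprocess

def build_forward_cmd(local_port, remote_host, remote_port, ssh_target):
    """Build an ssh argv that only forwards ports: -N runs no remote
    command (so no remote shell is opened), and -n detaches stdin so
    nothing typed locally is sent to the remote side."""
    return [
        "ssh", "-N", "-n",
        f"-L{local_port}:{remote_host}:{remote_port}",
        ssh_target,
    ]

cmd = build_forward_cmd(10005, "192.0.2.10", 10121, "user@gateway.example")
# proc = subprocess.Popen(cmd)   # uncomment with a real, reachable host
print(cmd)
```

Note that with `-N` there is no remote output to watch for, so the `readline()` loop that waits for a banner line would need a different readiness check (for example, trying to connect to the forwarded local port).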
The indented line weirdness is due to either ssh or the remote shell changing some terminal settings, one of which adds carriage returns before newlines that get sent to the terminal. When this is disabled, each line will start at the horizontal position of the end of the previous line:
$ stty -onlcr; printf 'foo\nbar\nbaz\n'; stty onlcr
foo
   bar
      baz
         $

Python Paramiko hangs at .recv(1024)

I'm having issues receiving the output of a particular command over an invoke_shell session. The purpose of the script is to log in and check the status of a particular application on Linux/Unix servers. The problem is that the list of servers is huge and not all of them have the application. The script works on the servers that do have the application, and even pulls the data and prints it to the screen; however, when it encounters "-bash: CMD: command not found" it hangs and doesn't iterate through the rest of the list.
I also confirmed with Wireshark, filtering on the server's address (ip.addr == x.x.x.x), that the TCP connection is established, ruling out any ACL, firewall, or iptables rules along the path. I can't elaborate further on the packets in Wireshark since they're encrypted.
I can see the server communicate with my desktop (client) and send multiple encrypted packets of various lengths. The script seems to hang right before the print statement ("stuck").
Now granted, the ideal approach would be to use exec_command, but I want to design this script to be scalable in the future.
I've used this script before on network appliances and even on servers. It works when the expected response is uniform (show route, uname -a, and so on), but I think something is different about the bash "command not found" error that is breaking the receive.
# ***** Imports (not shown in the original; paramiko and getpass are needed)
import paramiko
import getpass

# ***** Open plain text file
f = open("nfsus.txt")
# ***** Read & store into variable
hn = f.read().splitlines()
f.close()

# ***** Credentials (these must be defined for client.connect() below)
#username = raw_input("Please Enter Username: ")
#password = getpass.getpass("Please Enter Password: ")

# ***** SSH
client = paramiko.SSHClient()

def connect_ssh(hn):
    try:
        client.load_system_host_keys()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        client.connect(hn, 22, username, password, look_for_keys=False, allow_agent=False)
        print 'Connection Attempting to: ' + hn
        channel = client.get_transport().open_session()
        channel.invoke_shell()
        channel.sendall("CMD \n")
        cmd = channel.recv(1024)
        # SCRIPT HANGS HERE! ^
        print ('stuck')
        #print (cmd)
    except Exception, e:
        print '*** Caught exception: %s: %s' % (e.__class__, e)
        try:
            channel.close()
        except:
            pass

# ***** Loop through input.txt (contains a list of IP addresses)
for x in hn:
    connect_ssh(x)
I'm new to Python and admit I'm not very good yet! However, I'm really trying my hardest to understand every detail of scripting. I never wrote anything before, in Python or any other language. I'm excited to learn, but please have patience with me! I went through the Paramiko documentation, but there are some things I haven't fully understood yet, and I'm hoping someone here will be cool enough to show this noob the error of my ways.
Thank you,
I haven't looked at Paramiko in a dog's age, but in this case bash (the shell) is sending the error message on STDERR, not STDOUT. It may be that Paramiko isn't listening to STDERR, so it's not getting the error messages, or it's trapping them some other way.
ortep@Motte ~
$ foobar
-bash: foobar: command not found
ortep@Motte ~
$ foobar 2>/dev/null
ortep@Motte ~
$
In the second calling of "foobar" I redirected (>) STDERR (2) to the /dev/null device.
You can also redirect it to "STDOUT" thusly:
ortep@Motte ~
$ foobar 2>&1
-bash: foobar: command not found
This looks the same because on the ssh console STDOUT and STDERR are (kinda) the same thing.
So that gives you two options that might help. One is to do:
channel.sendall("CMD 2>&1 \n")
The other is to check for the existence of the command:
channel.sendall('if [[ -x /path/to/CMD ]]; then /path/to/CMD; else echo "CMD not found"; fi\n')
The first one is easier to try; with the second you can expand the "else" branch to give you more information (for example `; else echo "CMD not on $(hostname)"`, which may or may not be useful).
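The stderr point is easy to verify locally with subprocess, no SSH needed. bash writes "command not found" to stderr, and `2>&1` moves it onto stdout. A minimal sketch, using a deliberately bogus command name:

```python
import subprocess

# "command not found" goes to stderr, so a reader watching only stdout
# sees nothing at all.
plain = subprocess.run(
    ["bash", "-c", "definitely_not_a_real_cmd"],
    capture_output=True, text=True,
)

# With 2>&1 the same message is redirected onto stdout, where the
# reader will actually see it.
merged = subprocess.run(
    ["bash", "-c", "definitely_not_a_real_cmd 2>&1"],
    capture_output=True, text=True,
)

print(repr(plain.stdout), repr(plain.stderr))
print(repr(merged.stdout), repr(merged.stderr))
```

The same separation is what `channel.recv` versus `channel.recv_stderr` sees on a Paramiko channel, which is why sending `CMD 2>&1` makes the error readable.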

Pseudo terminal will not be allocated error - ssh - sudo - websocket - subprocess

I basically want to create a web page through which a unix terminal at the server side can be reached and commands can be sent to and their results can be received from the terminal.
For this, I have a WSGIServer. When a connection is opened, I execute the following:
def opened(self):
    self.p = Popen(["bash", "-i"], bufsize=1, stdin=PIPE, stdout=PIPE, stderr=STDOUT)
    self.p.stdout = Unbuffered(self.p.stdout)
    self.t = Thread(target=self.listen_stdout)
    self.t.daemon = True
    self.t.start()
When a message comes to the server from the client, it is handled in the following function, which just forwards the incoming message to the stdin of subprocess p, an interactive bash:
def received_message(self, message):
    print(message.data, file=self.p.stdin)
The output of bash is then read in the following function, running in a separate thread t, which just sends the output to the client:
def listen_stdout(self):
    while True:
        c = self.p.stdout.read(1)
        self.send(c)
In such a system, I am able to send any command (ls, cd, mkdir etc.) to the bash working on the server side and receive its output. However, when I try to run ssh xxx@xxx, the error "pseudo-terminal will not be allocated because stdin is not a terminal" is shown.
Also, in a similar way, when I run sudo ..., the prompt for password is not sent to the client somehow, but it appears on the terminal of the server script, instead.
I am aware of expect; however, just for such sudo and ssh usage, I do not want to clutter my code with expect. Instead, I am looking for a general solution that can fake sudo and ssh and redirect their prompts to the client.
Is there any way to solve this? Ideas are appreciated, thanks.
I found the solution. What I needed was to create a pseudo-terminal and, on the slave side of the tty, call setsid() to make the process a new session leader, then run commands in it.
Details are here:
http://www.rkoucha.fr/tech_corner/pty_pdip.html
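A minimal Python sketch of that idea, assuming Linux. `start_new_session=True` performs the `setsid()` call the answer describes, and the slave end of the pty becomes the child's stdin/stdout, so programs see a real terminal:

```python
import os
import pty
import subprocess

# Create a pseudo-terminal pair; the child gets the slave end as its
# stdin/stdout/stderr, so isatty() checks and terminal-only prompts work.
master, slave = pty.openpty()
proc = subprocess.Popen(
    ["bash", "-c", "tty; echo done"],
    stdin=slave, stdout=slave, stderr=slave,
    start_new_session=True,   # setsid(): new session, as in the answer
    close_fds=True,
)
os.close(slave)  # parent keeps only the master end

output = b""
while True:
    try:
        chunk = os.read(master, 1024)
    except OSError:          # raised on Linux once the slave side closes
        break
    if not chunk:
        break
    output += chunk
proc.wait()
print(output.decode())
```

For the websocket use case, a reader thread on the master fd replaces the listen_stdout loop above; ssh and sudo then see a terminal and prompt normally, and those prompts arrive on the master end to be forwarded to the client.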

Persistent ssh session in Python using Popen

I am creating a movie controller (Pause/Stop...) using python where I ssh into a remote computer, and issue commands into a named pipe like so
echo -n q > ~/pipes/pipename
I know this works if I ssh via the terminal and do it myself, so there is no problem with the setup of the named pipe redirection. My problem is that setting up an ssh session takes time (1-3 seconds), whereas I want the pause command to be instantaneous. Therefore, I thought of setting up a persistent pipe like so:
controller = subprocess.Popen ( "ssh -T -x <hostname>", shell = True, close_fds = True, stdin=subprocess.PIPE, stderr=subprocess.PIPE, stdout=subprocess.PIPE )
Then issue commands to it like so
controller.stdin.write ( 'echo -n q > ~/pipes/pipename' )
I think the problem is that ssh is interactive so it expects a carriage return. This is where my problems begin, as nearly everyone who has asked this question has been told to use an existing module:
Vivek's answer
Chakib's Answer
shx2's Answer
Crafty Thumber's Answer
Artyom's Answer
Jon W's Answer
Which is fine, but I am so close. I just need to know how to include the carriage return, otherwise, I have to go learn all these other modules, which mind you is not trivial (for example, right now I can't figure out how pexpect uses either my /etc/hosts file or my ssh keyless authentications).
To add a newline to the command, you will need to add a newline to the string:
controller.stdin.write('\n')
You may also need to flush the pipe:
controller.stdin.flush()
And of course the controller has to be ready to receive new data, or you could block forever trying to send it data. (And if the reason it's not ready is that it's blocking forever waiting for you to read from its stdout, which is possible on some platforms, you're deadlocked unrecoverably.)
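Both points (the trailing newline plus an explicit flush) can be sketched locally, with a plain bash child standing in for the ssh session and a throwaway file under /tmp standing in for the question's named pipe (both are stand-ins, not the asker's actual setup):

```python
import subprocess

# bash here stands in for "ssh -T -x <hostname>": both read commands
# line by line from stdin.
controller = subprocess.Popen(
    ["bash"],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE,
    text=True,
)

# The newline ends the command line; without it bash keeps waiting.
controller.stdin.write("echo -n q > /tmp/pipename_demo\n")
controller.stdin.flush()   # push it through the pipe immediately

# Read the value back (and clean up) to prove the command ran.
controller.stdin.write("cat /tmp/pipename_demo; rm /tmp/pipename_demo\n")
controller.stdin.flush()

out, _ = controller.communicate()  # closes stdin; bash exits at EOF
print(repr(out))
```

Dropping either the `\n` or the `flush()` reproduces the asker's symptom: the command sits in a buffer and nothing happens at the far end.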
I'm not sure why it's not working the way you have it set up, but I'll take a stab at this. I think what I would do is change the Popen call to:
controller = subprocess.Popen("ssh -T -x <hostname> \"sh -c 'cat > ~/pipes/pipename'\"", ...
And then simply controller.stdin.write('q').

Python subprocesses don't output properly?

I don't think I'm understanding python subprocess properly at all but here's a simple example to illustrate a point I'm confused about:
#!/usr/bin/env python
import subprocess
lookup_server = subprocess.Popen("nc -l 5050", shell=True)
lookup_client = subprocess.Popen("nc localhost 5050", shell=True, stdin=subprocess.PIPE)
print lookup_client.poll()
lookup_client.stdin.write("magic\n")
print lookup_client.poll()
lookup_client.send_signal(subprocess.signal.SIGINT)
print lookup_client.poll()
lookup_server.wait()
print "Lookup server terminated properly"
The output comes back as
None
None
None
and never completes. Why is this? Also, if I change the first argument of Popen to an array of all of those arguments, none of the nc calls execute properly and the script runs through without ever waiting. Why does that happen?
Ultimately, I'm running into a problem in a much larger program that does something similar using netcat and another program running locally instead of two versions of nc. Either way, I haven't been able to write to or read from them properly. However, when I run them in the python console everything runs as I would expect. All this has me very frustrated. Let me know if you have any insights!
EDIT: I'm running this on Ubuntu Linux 12.04, when I man nc, I get the BSD General Commands manual so I'm assuming this is BSD netcat.
The problem here is that you're sending SIGINT to the process. If you just close the stdin, nc will close its socket and quit, which is what you want.
It sounds like you're actually using nc for the client (although not the server) in your real program, which means you have two easy fixes:
Instead of lookup_client.send_signal(subprocess.signal.SIGINT), just do lookup_client.stdin.close(). nc will see this as an EOF on its input, and exit normally, at which point your server will also exit.
#!/usr/bin/env python
import subprocess
lookup_server = subprocess.Popen("nc -l 5050", shell=True)
lookup_client = subprocess.Popen("nc localhost 5050", shell=True, stdin=subprocess.PIPE)
print lookup_client.poll()
lookup_client.stdin.write("magic\n")
lookup_client.stdin.close()
print lookup_client.poll()
lookup_server.wait()
print "Lookup server terminated properly"
When I run this, the most common output is:
None
None
magic
Lookup server terminated properly
Occasionally the second None is a 0 instead, and/or it comes after magic instead of before, but otherwise, it's always all four lines. (I'm running on OS X.)
For this simple case (although maybe not your real case), just use communicate instead of trying to do it manually.
#!/usr/bin/env python
import subprocess
lookup_server = subprocess.Popen("nc -l 5050", shell=True)
lookup_client = subprocess.Popen("nc localhost 5050", shell=True, stdin=subprocess.PIPE)
print lookup_client.communicate("magic\n")
lookup_server.wait()
print "Lookup server terminated properly"
Meanwhile:
Also, if I change the first argument of Popen to an array of all of those arguments, none of the nc calls execute properly and the script runs through without ever waiting. Why does that happen?
As the docs say:
On Unix with shell=True… If args is a sequence, the first item specifies the command string, and any additional items will be treated as additional arguments to the shell itself.
So, subprocess.Popen(["nc", "-l", "5050"], shell=True) does /bin/sh -c 'nc' -l 5050, and sh doesn't know what to do with those arguments.
You probably do want to use an array of args, but then you have to get rid of shell=True—which is a good idea anyway, because the shell isn't helping you here.
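The argv-goes-to-the-shell behavior is easy to demonstrate with a harmless command in place of nc; extra list items vanish into the shell's positional parameters instead of reaching the program:

```python
import subprocess

# With shell=True, only the FIRST list item is the command string;
# "ignored" becomes $0 of the shell, not an argument to echo.
result = subprocess.run(
    ["echo hello", "ignored"],
    shell=True, capture_output=True, text=True,
)
print(repr(result.stdout))

# Without shell=True, the list is the real argv and both words appear.
result2 = subprocess.run(
    ["echo", "hello", "ignored"],
    capture_output=True, text=True,
)
print(repr(result2.stdout))
```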
One more thing:
lookup_client.send_signal(subprocess.signal.SIGINT)
print lookup_client.poll()
This may print either -2 or None, depending on whether the client has finished responding to the SIGINT and been killed before you poll it. If you want to actually get that -2, you have to call wait rather than poll (or do something else, like loop until poll returns non-None).
Finally, why didn't your original code work? Well, sending SIGINT is asynchronous; there's no guarantee as to when it might take effect. For one example of what could go wrong, it could take effect before the client even opens the socket, in which case the server is still sitting around waiting for a client that never shows up.
You can throw in a time.sleep(5) before the signal call to test this—but obviously that's not a real fix, or even an acceptable hack; it's only useful for testing the problem. What you need to do is not kill the client until it's done everything you want it to do. For complex cases, you'll need to build some mechanism to do that (e.g., reading its stdout), while for simple cases, communicate is already everything you need (and there's no reason to kill the child in the first place).
Your invocation of nc is wrong. Here is what happens if I run the same thing from the command line:
# Server window:
[vyktor@grepfruit ~]$ nc -l 5050
# Client window
[vyktor@grepfruit ~]$ nc localhost 5050
[vyktor@grepfruit ~]$ echo $?
1
Which means failure (the 1 in $?).
Once you use -p:
-p, --local-port=NUM local port number
NC starts listening, so:
# Server window
[vyktor@grepfruit ~]$ nc -l -p 5050
# Keeps hanging
# Client window
[vyktor@grepfruit ~]$ echo Hi | nc localhost 5050
# Keeps hanging
Once you add -c to client invocation:
-c, --close close connection on EOF from stdin
You'll end up with this:
# Client window
[vyktor@grepfruit ~]$ echo Hi | nc localhost 5050 -c
[vyktor@grepfruit ~]$
# Server window
[vyktor@grepfruit ~]$ nc -l -p 5050
Hi
[vyktor@grepfruit ~]$
So you need a piece of Python code like this:
#!/usr/bin/env python
import subprocess

lookup_server = subprocess.Popen("nc -l -p 5050", shell=True)
lookup_client = subprocess.Popen("nc -c localhost 5050", shell=True,
                                 stdin=subprocess.PIPE)
lookup_client.stdin.write("magic\n")
lookup_client.stdin.close()                          # this
lookup_client.send_signal(subprocess.signal.SIGINT)  # or this kills the client
lookup_server.wait()
print "Lookup server terminated properly"
