I am trying to communicate between a Ruby process and a Python process, and I want to use a UNIX socket.
Objective:
The Ruby process forks and execs the Python process. In the Ruby process, create a UNIX socket pair and pass one end to Python.
Ruby code (p.rb):
require 'socket'
r_socket, p_socket = Socket.pair(:UNIX, :DGRAM, 0)
# I was hoping this file descriptor would be available in the child process
pid = Process.spawn('python', 'p.py', p_socket.fileno.to_s)
Process.waitpid(pid)
Python code (p.py):
import sys
import os
import socket
# get the file descriptor from command line
p_fd = int(sys.argv[1])
socket.fromfd(p_fd, socket.AF_UNIX, socket.SOCK_DGRAM)
# f_socket = os.fdopen(p_fd)
# os.write(p_fd, 'h')
command line:
ruby p.rb
Result:
OSError: [Errno 9] Bad file descriptor
I was hoping that the Ruby process would pass the file descriptor to the Python process, so that the two could exchange data over these sockets.
So, my questions:
1) Is it possible to pass an open file descriptor between a Ruby and a Python process as above?
2) If file descriptors can be passed between two processes, then what's wrong with my code?
You were close, but Ruby's spawn closes any file descriptor greater than 2 by default, unless you pass :close_others => false as an option. See the documentation:
http://apidock.com/ruby/Kernel/spawn
Working example:
require 'socket'
r_socket, p_socket = Socket.pair(:UNIX, :DGRAM, 0)
pid = Process.spawn('python', 'p.py', p_socket.fileno.to_s,
                    { :close_others => false })
# Close the python end (we're not using it on the Ruby side)
p_socket.close
# Wait for some data
puts r_socket.gets
# Wait for finish
Process.waitpid(pid)
Python:
import sys
import socket
p_fd = int(sys.argv[1])
p_socket = socket.fromfd(p_fd, socket.AF_UNIX, socket.SOCK_DGRAM)
p_socket.send("Hello world\n")
Test:
> ruby p.rb
Hello world
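A side note in case the Python end is run under Python 3: socket.send() takes bytes there, not str. A minimal Python 3 variant of p.py, assuming the same fd-passing setup as above:

import sys
import socket

p_fd = int(sys.argv[1])
# fromfd() duplicates the descriptor; the original fd remains open as well
p_socket = socket.fromfd(p_fd, socket.AF_UNIX, socket.SOCK_DGRAM)
p_socket.send(b"Hello world\n")  # bytes literal required on Python 3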
I'm trying to use the spur library to launch a long-running command via ssh, then read and process its output one line at a time. The documentation says you can pass a file object using stdout=f, and run/spawn will call stdout.write for anything the subprocess writes to its stdout stream. I hit on the idea of creating an os.pipe() to make this work, but it doesn't. Can someone please suggest a fix?
NOTE: I've already got this working with paramiko.SSHClient.exec_command but the interface is a bit low-level for my needs, so I want to learn how to do it with spur. Thanks!
import spur
import os
HOST = "rocky.lan"
USER = "rocky"
CMD = "while sleep 1; do date; done"
r, w = os.pipe()
r = os.fdopen(r, 'rb')
w = os.fdopen(w, 'wb')
ssh = spur.SshShell(hostname=HOST, username=USER)
child = ssh.spawn(CMD, stdout=w)
for line in iter(r.readline, ""):
    print(line, end="")
Since someone is bound to ask, the paramiko code looks like this:
from paramiko import SSHClient
HOST = "rocky.lan"
USER = "rocky"
CMD = "while sleep 1; do date; done"
ssh = SSHClient()
ssh.load_system_host_keys()
ssh.connect(HOST, username=USER)
stdin, stdout, stderr = ssh.exec_command(CMD)
for line in iter(stdout.readline, ""):
    print(line, end="")
I've discovered parallel-ssh, which seems to have parted company with paramiko and gone for python-ssh/python-ssh2 instead. A 5-minute test suggests that it combines paramiko's power with spur's simplicity, but sadly it still doesn't support ~/.ssh/config, so Perl's Net::OpenSSH remains my favourite :-) Here's the code I got working with pssh:
from pssh.clients import SSHClient
HOST = "rocky.lan"
USER = "rocky"
CMD = "while sleep 1; do date; done"
ssh = SSHClient(host=HOST, user=USER)
cmd = ssh.run_command(CMD)
for line in cmd.stdout:
    print(line)
So this is an alternative, but really I still need to know how to read the subprocess's stdout using spur.
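For completeness, here is the direction I would explore with spur itself: since spawn calls stdout.write for each chunk of output, you can hand it any object with a write method and do the line splitting yourself, instead of an os.pipe. A minimal sketch under that assumption (spur may deliver bytes rather than str, so the handler copes with both):

import spur

class LineBuffer(object):
    # Minimal file-like object: spur calls write() with each output chunk.
    def __init__(self):
        self._buf = ""
    def write(self, chunk):
        if isinstance(chunk, bytes):  # assumption: spur may pass bytes
            chunk = chunk.decode("utf-8")
        self._buf += chunk
        while "\n" in self._buf:
            line, self._buf = self._buf.split("\n", 1)
            print(line)

shell = spur.SshShell(hostname="rocky.lan", username="rocky")
process = shell.spawn(["sh", "-c", "while sleep 1; do date; done"],
                      stdout=LineBuffer())
process.wait_for_result()  # blocks here; lines print as they arrive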
I am writing a script that produces two different kinds of output, say Op1 and Op2. I want Op1 to go to the terminal the Python process was called from, while Op2 should be dumped to a different terminal instance. Can I do that?
Even if the answer is Linux-specific, that's okay; I need a temporary solution.
You can make the Python script write to a file, or pipe its output to one (python script.py >> output.log), and then tail the file with -f, which continuously updates the view in your console.
Example snippet
# logmaker.py
import time
import datetime
buffer_size = 1  # line-buffered, so each line appears promptly (0 is invalid for text mode in Python 3)

with open('output.log', 'a', buffer_size) as f:
    while True:
        f.write('{}\n'.format(datetime.datetime.now()))
        time.sleep(1)
Run that file
python logmaker.py
Then in one or more consoles do
tail -f output.log
or less if you prefer
less +F output.log
You should get a continuous update like this
2016-07-06 10:52:44.997416
2016-07-06 10:52:45.998544
2016-07-06 10:52:46.999697
Here are some common solutions on Linux.
To achieve this, you usually need two programs.
File I/O + loop:
main program + file writer (print Op1 and write Op2 into file A)
file reader (keep polling file A for changes and print its contents)
Socket (pipe):
main program + sender (print Op1 and send Op2 to a specific socket)
receiver (listen on a specific socket and print Op2 as it is received)
File I/O + signal:
main program + file writer + signal sender (print Op1, write Op2 into file A, and signal the receiver daemon)
signal receiver (wait for the signal, then print the contents of file A)
By the way, I suppose your requirement does not actually need a daemon program, because you certainly have two consoles.
Additionally, I am pretty sure that printing to a specific console is achievable, as sketched below.
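For instance, every Linux terminal is backed by a device file, so the main program can write Op2 straight to the second terminal's device. A minimal sketch, assuming the second terminal reports /dev/pts/3 when you run tty in it:

# run `tty` in the target terminal first to learn its device path
with open('/dev/pts/3', 'w') as term2:  # assumption: second terminal is /dev/pts/3
    term2.write('Op2\n')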
Example of second solution [Socket]
# print1.py (your main program)
import socket
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect(('localhost', 8001))
Op1 = 'Op1'
Op2 = 'Op2'
print Op1
sock.send(Op2)
sock.close()
Steps
// a. console 2: listen 8001 port
// Luckily, nc(netcat) is enough to finish this without writing any code.
$ nc -l 8001
// b. console 1: run your main program
$ python print1.py
Op1
// c. console 2
Op2
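If you would rather not rely on nc, the second console can run a small Python listener instead. A minimal sketch of the receiver side, using the same port as above:

# receiver.py - run in console 2 instead of `nc -l 8001`
import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(('localhost', 8001))
srv.listen(1)
conn, addr = srv.accept()
while True:
    data = conn.recv(1024)
    if not data:  # sender closed the connection
        break
    print(data.decode())
conn.close()
srv.close()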
Following up on Kir's response above, as I am working on something similar: I further modified the script using threading, so that the listening console is launched directly from the script rather than by hand. Hope it helps.
import subprocess
import threading
import socket
import time
def listenproc():
    monitorshell = subprocess.Popen("mate-terminal --command=\"nc -l 8001\"", shell=True)

def printproc():
    print("Local message")
    time.sleep(5)  # delay sending the message to make sure the port is listening
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.connect(('localhost', 8001))
    sock.send("Sent message")
    time.sleep(5)
    sock.close()
listenthread = threading.Thread(name="Listen", target=listenproc, args=())
printthread = threading.Thread(name="Print", target=printproc, args=())
listenthread.start()
printthread.start()
listenthread.join()
printthread.join()
This is my first post in StackOverflow, so I hope to do it the right way! :)
I have this task for my new job that requires connecting to several servers and executing a Python script on all of them. I'm not very familiar with servers (and just started using paramiko), so I apologize for any big mistakes!
The script I want to run on them modifies the authorized_keys file, but to start I'm trying it with only one server and without the aforementioned script (I don't want to make a mistake and lock myself out of the server on my first task!).
I'm just trying to list the directory on the remote machine with a very simple function called getDir(). So far, I've been able to connect to the server with paramiko using the basics (I'm using pdb to debug the script, by the way):
try_paramiko.py
#!/usr/bin/python
import paramiko
from getDir import get_dir
import pdb
def try_this(server):
    pdb.set_trace()
    ssh = paramiko.SSHClient()
    ssh.load_host_keys("pth/to/known_hosts")
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    my_key = paramiko.RSAKey.from_private_key_file("pth/to/id_rsa")
    ssh.connect(server, username="root", pkey=my_key)
    i, o, e = ssh.exec_command(getDir())
This is the function to get the directory list:
getDir.py
#!/usr/bin/python
import os
import pdb
def get_dir():
    pdb.set_trace()
    print "Current dir list is:"
    for item in os.listdir(os.getcwd()):
        print item
While debugging I got the directory list of my local machine instead of the one from the remote machine... is there a way to pass a Python function as a parameter through paramiko? I would like to keep the script local and run it remotely, the way you can with a bash file over ssh:
ssh -i pth/to/key username@domain.com 'bash -s' < script.sh
so as to avoid copying the Python script to every machine and then running it from there (I guess with the above command the script would also be copied to the remote machine and then deleted, right?). Is there a way to do that with paramiko.SSHClient()?
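(For reference, the direct Python counterpart of that bash one-liner is to feed the script into a remote interpreter over plain ssh, assuming python is on the remote PATH:
ssh -i pth/to/key username@domain.com 'python -' < getDir.py )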
I have also tried to modify the code to use the standard output of the channel that exec_command creates to list the directory, leaving the scripts like this:
try_paramiko.py
#!/usr/bin/python
import paramiko
from getDir import get_dir
import pdb
def try_this(server):
    pdb.set_trace()
    ssh = paramiko.SSHClient()
    ssh.load_host_keys("pth/to/known_hosts")
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    my_key = paramiko.RSAKey.from_private_key_file("pth/to/id_rsa")
    ssh.connect(server, username="root", pkey=my_key)
    i, o, e = ssh.exec_command(getDir())
    for line in o.readlines():
        print line
    for line in e.readlines():
        print line
getDir.py
import os

def get_dir():
    return ', '.join(os.listdir(os.getcwd()))
But with this, it actually tries to run the local directory listing as commands (which actually makes sense, the way I have it). I had to convert the list to a string because I was getting a TypeError saying it expects a string or a read-only character buffer, not a list... I know this was a desperate attempt to pass the function... Does anyone know how I could do such a thing (pass a local function through paramiko to execute it on a remote machine)?
If you have any corrections or tips on the code, they are very much welcome (actually, any kind of help would be very much appreciated!).
Thanks a lot in advance! :)
You cannot just execute a Python function through ssh. ssh is simply a tunnel with your code on one side (the client) and a shell on the other (the server); you have to execute shell commands on the remote side.
If using raw ssh code is not critical, I suggest Fabric as a library for writing administration tools. It contains tools for easy ssh handling, file transfer, sudo, parallel execution, and more. See the sketch below.
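A minimal sketch of that route, using the Fabric 1.x API (assumption: fabric is installed; the hosts and key path are placeholders):

# fabfile.py - run with: fab get_dir
from fabric.api import run, env

env.hosts = ['root@server1', 'root@server2']  # placeholder hosts
env.key_filename = 'pth/to/id_rsa'

def get_dir():
    run('ls')  # executed on each host in env.hosts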
I think you might want to change the parameters you're passing into ssh.exec_command. Here's an idea:
Instead of doing:
def get_dir():
    return ', '.join(os.listdir(os.getcwd()))
i, o, e = ssh.exec_command(getDir())
You might want to try:
i, o, e = ssh.exec_command('pwd')
print(o.readlines())
And other things to explore:
Writing a bash or Python script that lives on your servers. You can use paramiko to log onto the server and execute the script with ssh.exec_command("./some_script.sh") or ssh.exec_command("python some_script.py").
Paramiko has some FTP/SFTP utilities, so you can actually use it to put the script on the server and then execute it, as sketched below.
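A minimal sketch of that approach, reusing the connected ssh client from the code above (the remote path is a placeholder):

# copy the script over SFTP, then execute it remotely
sftp = ssh.open_sftp()
sftp.put("getDir.py", "/tmp/getDir.py")
sftp.close()
stdin, stdout, stderr = ssh.exec_command("python /tmp/getDir.py")
print(stdout.read())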
It is possible to do this by using a here document to feed the module into the remote server's Python interpreter.
remotepypath = "/usr/bin/"

# open the module as a text file
with open("getDir.py", "r") as f:
    mymodule = f.read()

# setup from OP code
ssh = paramiko.SSHClient()
ssh.load_host_keys("pth/to/known_hosts")
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
my_key = paramiko.RSAKey.from_private_key_file("pth/to/id_rsa")
ssh.connect(server, username="root", pkey=my_key)

# use a here document to feed the module into the python interpreter
stdin, stdout, stderr = ssh.exec_command("{p}python - <<EOF\n{s}\nEOF".format(p=remotepypath, s=mymodule))
print("stderr: ", stderr.readlines())
print("stdout: ", stdout.readlines())
I'm using Windows Vista and Python 2.7.2, but answers needn't be in Python.
So I can start and interact with a subprocess's stdin/stdout normally (using Python) for command-line programs such as 'dir'.
- however -
the program I now want to call likes to make a new console window for itself on Windows (not curses), with new handles, even when run from a pre-existing cmd.exe window. (Odd, as it's the "remote control" interface of VLC.) Is there any way of either:
getting the handles for the stdin/stdout of the console the process creates; or
getting the new shell to run within the old one (like invoking bash from within bash)?
Failing that, so that I can hack the subprocess's code, how would a new console be set up on Windows and input/output transferred?
Edit:
I.e.
>>> p = Popen(args=['vlc','-I','rc'],stdin=PIPE,stdout=PIPE)
# [New console appears with text, asking for commands]
>>> p.stdin.write("quit\r\n")
Traceback:
File "<stdin>", line 1, in <module>
IOError: [Errno 22] Invalid argument
>>> p.stdout.readline()
''
>>> p.stdout.readline()
''
# [...]
But the new console window that comes up doesn't accept keyboard input either.
Whereas normally:
>>> p = Popen(args=['cmd'],stdin=PIPE,stdout=PIPE)
>>> p.stdin.write("dir\r\n")
>>> p.stdin.flush()
>>> p.stdout.readline() #Don't just do this IRL, may block.
'Microsoft Windows [Version...
I haven't gotten the rc interface to work with a piped stdin/stdout on Windows; I get an IOError on every attempt to communicate or write directly to stdin. There's an option, --rc-fake-tty, that lets the rc interface be scripted on Linux, but it's not available on Windows -- at least not in my somewhat dated version of VLC (1.1.4). Using the socket interface, on the other hand, seems to work fine.
The structure assigned to the startupinfo option -- and used by the Win32 CreateProcess function -- can be configured to hide a process window. However, for the VLC rc console, I think it's simpler to use the existing --rc-quiet option. In general, here's how to configure startupinfo to hide a process window:
import subprocess

startupinfo = subprocess.STARTUPINFO()
startupinfo.dwFlags |= subprocess.STARTF_USESHOWWINDOW
# wShowWindow defaults to 0 (SW_HIDE), so the new window is hidden
subprocess.Popen(cmd, startupinfo=startupinfo)
Just to be complete -- in case using pipes is failing on your system too -- here's a little demo I cooked up using the --rc-host option to communicate using a socket. It also uses --rc-quiet to hide the console. This just prints the help and quits. I haven't tested anything else. I checked that it works in Python versions 2.7.2 and 3.2.2. (I know you didn't ask for this, but maybe it will be useful to you nonetheless.)
import socket
import subprocess
from select import select
try:
    import winreg
except ImportError:
    import _winreg as winreg

def _get_vlc_path():
    views = [(winreg.HKEY_CURRENT_USER, 0),
             (winreg.HKEY_LOCAL_MACHINE, winreg.KEY_WOW64_64KEY),
             (winreg.HKEY_LOCAL_MACHINE, winreg.KEY_WOW64_32KEY)]
    subkey = r'Software\VideoLAN\VLC'
    access = winreg.KEY_QUERY_VALUE
    for hroot, flag in views:
        try:
            with winreg.OpenKey(hroot, subkey, 0, access | flag) as hkey:
                value, type_id = winreg.QueryValueEx(hkey, None)
                if type_id == winreg.REG_SZ:
                    return value
        except WindowsError:
            pass
    raise SystemExit("Error: VLC not found.")

g_vlc_path = _get_vlc_path()

def send_command(sock, cmd, get_result=False):
    try:
        cmd = (cmd + '\n').encode('ascii')
    except AttributeError:
        cmd += b'\n'
    sent = total = sock.send(cmd)
    while total < len(cmd):
        sent = sock.send(cmd[total:])
        if sent == 0:
            raise socket.error('Socket connection broken.')
        total += sent
    if get_result:
        return receive_result(sock)

def receive_result(sock):
    data = bytearray()
    sock.setblocking(0)
    while select([sock], [], [], 1.0)[0]:
        chunk = sock.recv(1024)
        if chunk == b'':
            raise socket.error('Socket connection broken.')
        data.extend(chunk)
    sock.setblocking(1)
    return data.decode('utf-8')

def main(address, port):
    import time
    rc_host = '{0}:{1}'.format(address, port)
    vlc = subprocess.Popen([g_vlc_path, '-I', 'rc', '--rc-host', rc_host,
                            '--rc-quiet'])
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        sock.connect((address, port))
        help_msg = send_command(sock, 'help', True)
        print(help_msg)
        send_command(sock, 'quit')
    except socket.error as e:
        exit("Error: " + e.args[0])
    finally:
        sock.close()
        time.sleep(0.5)
        if vlc.poll() is None:
            vlc.terminate()

if __name__ == '__main__':
    main('localhost', 12345)
With reference to monitoring the stdout which appears in the newly spawned console window:
Here's another question/answer that solves the problem.
In summary (as answered by Adam M-W):
Suppress the spawned console by launching vlc in quiet mode (--intf=dummy --dummy-quiet or --intf=rc --rc-quiet).
Monitor the stderr of the launched process.
Note: as for stdin commands to the rc interface, the --rc-host solution is described in eryksun's answer.
Good day, Stackoverflow!
I have a little (big) problem porting one of my Python scripts from Linux to Windows. The hairy part is that I have to start a process and redirect all of its streams into pipes, which I then read from and write to in my script.
On Linux this is a piece of cake:
server_startcmd = [
    "java",
    "-Xmx%s" % self.java_heapmax,
    "-Xms%s" % self.java_heapmin,
    "-jar",
    server_jar,
    "nogui"
]
server = Popen(server_startcmd, stdout=PIPE,
                                stderr=PIPE,
                                stdin=PIPE)
outputs = [
    server_socket,  # A listener socket that has been set up before
    server.stderr,
    server.stdout,
    sys.stdin       # Because I also have to read and process this.
]
clients = []

while True:
    read_ready, write_ready, except_ready = select.select(outputs, [], [], 1.0)
    if read_ready == []:
        perform_idle_command()  # important step
    else:
        for s in read_ready:
            if s == sys.stdin:
                pass  # Do stdin stuff
            elif s == server_socket:
                pass  # Accept client and add it to 'clients'
            elif s in clients:
                pass  # Got data from one of the clients
The whole three-way alternation between a server socket, the script's stdin, and the output channels of the child process (as well as its input channel, which my script writes to, although that one is not in the select() list) is the most important part of the script.
I know that for Windows there is win32pipe in the pywin32 package. The problem is that finding resources on this API is pretty hard, and what I found was not really helpful.
How do I use this win32pipe module to do what I want? I have some sources where it's used in a different but similar situation, but they confused me quite a bit:
if os.name == 'nt':
    import win32pipe
    (stdin, stdout) = win32pipe.popen4(" ".join(server_args))
else:
    server = Popen(server_args, stdout=PIPE,
                                stdin=PIPE,
                                stderr=PIPE)
    outputs = [server.stderr, server.stdout, sys.stdin]
    stdin = server.stdin

[...]

while True:
    try:
        if os.name == 'nt':
            outready = [stdout]
        else:
            outready, inready, exceptready = select.select(outputs, [], [], 1.0)
    except:
        break
Here stdout is the combined stdout and stderr of the child process started with win32pipe.popen4(...).
The questions arising are:
Why no select() in the Windows version? Does that not work?
If select() isn't used there, how can I implement the necessary timeout that select() provides (which obviously won't work like this here)?
Please, help me out!
I think you cannot use select() on pipes on Windows; Python's select() there only works on sockets. In one of my projects, where I was porting a Linux app to Windows, I too had missed this point and had to rewrite the whole logic. A thread-based replacement is sketched below.
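A common workaround, since Windows select() only accepts sockets, is to drain each pipe in a background thread and multiplex through a queue, whose get(timeout=...) stands in for select()'s timeout. A minimal Python 3 sketch, reusing server and perform_idle_command() from the question:

import threading
import queue

def pump(pipe, q, tag):
    # forward each line of the pipe to the queue until EOF
    for line in iter(pipe.readline, b''):
        q.put((tag, line))
    q.put((tag, None))  # EOF marker

q = queue.Queue()
threading.Thread(target=pump, args=(server.stdout, q, 'out'), daemon=True).start()
threading.Thread(target=pump, args=(server.stderr, q, 'err'), daemon=True).start()

while True:
    try:
        tag, line = q.get(timeout=1.0)  # replaces select()'s 1.0 s timeout
    except queue.Empty:
        perform_idle_command()
        continue
    if line is None:
        break  # that stream hit EOF
    # handle (tag, line) from stdout or stderr here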