I am writing a script and I have two different kinds of output, say Op1 and Op2. I want to output Op1 to the terminal where the python process was called from while Op2 should be dumped to a different terminal instance. Can I do that?
Even if the answer is Linux-specific it's okay, I need a temporary solution.
You can make the Python script write to a file, or redirect its output to one (python script.py >> output.log), and then tail the file with -f, which continuously updates the view in your console.
Example snippet
# logmaker.py
import datetime
import time

# Line buffering (buffering=1) flushes after every newline, so new
# lines appear in the file right away.
with open('output.log', 'a', buffering=1) as f:
    while True:
        f.write('{}\n'.format(datetime.datetime.now()))
        time.sleep(1)
Run that file
python logmaker.py
Then in one or more consoles do
tail -f output.log
or less if you prefer
less +F output.log
You should get a continuous update like this
2016-07-06 10:52:44.997416
2016-07-06 10:52:45.998544
2016-07-06 10:52:46.999697
Here are some common solutions in Linux.
To achieve this, you usually need two programs.
File i/o + Loop:
main program + file writer (print Op1 and write Op2 into file A)
file reader (keep polling file A for changes and print its contents)
Socket (pipe):
main program + sender (print Op1 and send Op2 to a specific socket)
receiver (listen on a specific socket and print Op2 as it arrives)
File i/o + Signal:
main program + file writer + signal sender (print Op1, write Op2 into file A, and signal the receiver)
signal receiver (block until a signal arrives, then print the contents of file A)
By the way, I assume you don't need to write a daemon program, since you already have two consoles open.
Additionally, printing to a specific console is achievable, as the sketch below shows.
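On Linux every terminal is exposed as a device file, so you can write to another terminal directly. A minimal sketch (the path /dev/pts/3 is hypothetical; run tty in the target terminal to find yours):
# other_console.py -- write Op2 straight to another terminal's device.
# Find the target path by running `tty` in that terminal first.
with open('/dev/pts/3', 'w') as other:
    other.write('Op2\n')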
Example of second solution [Socket]
# print1.py (your main program)
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect(('localhost', 8001))

Op1 = 'Op1'
Op2 = 'Op2'

print(Op1)                  # Op1 goes to the local console
sock.sendall(Op2.encode())  # Op2 goes to whoever listens on port 8001
sock.close()
Steps
// a. console 2: listen on port 8001
// Luckily, nc (netcat) is enough for this without writing any code.
$ nc -l 8001
// b. console 1: run your main program
$ python print1.py
Op1
// c. console 2
Op2
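If you'd rather not depend on nc, here is a minimal Python receiver sketch that does the same job (listening on the same port 8001):
# receiver.py -- a stand-in for `nc -l 8001`
import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(('localhost', 8001))
srv.listen(1)
conn, addr = srv.accept()
while True:
    data = conn.recv(1024)
    if not data:        # sender closed the connection
        break
    print(data.decode())
conn.close()
srv.close()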
Following up on Kir's answer above (I am working on something similar), I modified the script to use threading, so that the listening console is launched directly from the script rather than by hand. Hope it helps.
import subprocess
import threading
import socket
import time

def listenproc():
    # Open a new terminal window that runs the netcat listener.
    subprocess.Popen('mate-terminal --command="nc -l 8001"', shell=True)

def printproc():
    print("Local message")
    time.sleep(5)  # delay sending the message, making sure the port is listening
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.connect(('localhost', 8001))
    sock.send(b"Sent message")
    time.sleep(5)
    sock.close()

listenthread = threading.Thread(name="Listen", target=listenproc)
printthread = threading.Thread(name="Print", target=printproc)
listenthread.start()
printthread.start()
listenthread.join()
printthread.join()
Related
I'm working on a script to send a list of commands to a device and return the output.
When the device first boots up, it has a few prompts. I am able to get through the prompts.
However, after completing the prompts, when I try to send a command the command isn't sent.
Commands
The commands.txt is set up like this:
200,
2,no
2,
The first line (200) is to let the device boot up.
The 2nd and 3rd lines answer 2 different prompts.
Issues
The issues come after these 3 inputs. The code runs and completes, and Python prints out each of the commands, so the list is processed by Python. However, I don't think the device is receiving the commands.
In the log, the \n and no are written out, but none of the commands after them are. The commands do show up when I use ser.inWaiting().
When I access the device through putty and run the commands through the console, everything works as expected.
Why aren't the commands going through?
Small update:
I read somewhere that Python may be sending the commands too quickly, so I tried sending them one character at a time with a short delay.
It still didn't work:
for i in lines[1]:
    cmd = i
    encoded_cmd = cmd.encode("utf-8")
    ser.write(encoded_cmd)
    sleep(0.1)
    print(cmd)
Code
import serial
import time
from time import sleep
from datetime import datetime

# create list of commands
with open('commands.txt') as commands:
    list_of_commands = [tuple(map(str, i.split(','))) for i in commands]

# open and name log file
date = datetime.now().strftime("%Y-%m-%d")
log = open(f'{date}.txt', 'w+')

# serial configuration
info = open('info.txt', 'r')
lines = info.readlines()
port = lines[0].strip('\n')
baud = int(lines[1].strip('\n'))

try:
    # open port
    ser = serial.Serial(port=port, baudrate=baud, timeout=5,
                        parity=serial.PARITY_NONE,
                        stopbits=serial.STOPBITS_ONE, write_timeout=0)
except ConnectionError:
    log.write(''.join('There was a connection error'))
else:
    # run commands
    x = 0
    for lines in list_of_commands:
        ser.close()
        ser.open()
        sleep(2)
        cmd = lines[1]
        encoded_cmd = cmd.encode("utf-8")
        sleep_time = int(lines[0])
        ser.write(encoded_cmd)
        time.perf_counter()
        # log output
        while 1:
            test = ser.readline()
            text = test.decode('utf-8')
            print(text)
            log.write(''.join(text))
            print(time.perf_counter())
            print(time.perf_counter() - x)
            if time.perf_counter() - x > sleep_time:
                x = time.perf_counter()
                ser.flushInput()
                ser.flushOutput()
                break
        print(cmd)
    # close port
    ser.close()

# close files
log.close()
From the question it's obvious that multiple issues are intermingled; reading the code gives the same impression. Here are some of the points I struggled with.
Issues
Try-except-else
What is the intention behind try .. except .. else?
I'm not sure it is used correctly on purpose here. See try-except-else explained:
The else clause is executed if and only if no exception is raised. This is different from the finally clause, which always executes.
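A tiny illustration of the difference:
try:
    value = int('42')
except ValueError:
    print('not a number')  # runs only if int() raised ValueError
else:
    print(value)           # runs only if no exception was raised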
The serial connection
Why open and close the port inside the loop?
ser.close()
ser.open()
Why the misleading comment:
# close server
ser.close()
Usage of sleep_time
What is the purpose of using the first column of your CSV commands.txt, sleep_time, inside a conditional break inside your read-loop?
sleep_time = int(lines[0])
Instead, the sleep before sending each command is fixed at 2 seconds:
sleep(2)
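If the intent was a per-command wait, here is a minimal sketch of that reading (this is an assumption about the intent, reusing the question's variable names):
# Assumption: sleep_time is how long to wait after sending a command
# before collecting whatever output has accumulated.
for lines in list_of_commands:
    sleep_time = int(lines[0])
    cmd = lines[1]
    ser.write(cmd.encode('utf-8'))
    sleep(sleep_time)                        # per-command wait from the CSV
    response = ser.read(ser.in_waiting or 1)
    log.write(response.decode('utf-8'))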
How to debug
I would recommend adding some print (or log) statements to
verify that list_of_commands has been read correctly
verify which commands (cmd or even encoded_cmd) have been sent to the serial output
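For example, a minimal sketch of such debug prints (using the names from the question's code):
# Verify the CSV parsed as expected.
print('list_of_commands:', list_of_commands)

# Inside the send loop: verify exactly what goes out on the wire.
print('sending:', repr(encoded_cmd))
ser.write(encoded_cmd)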
I have developed the following Python script to help me upload NX-OS images to Cisco Nexus switches.
The script runs just fine with small files; I tried files under 100M and it works. However, I also have NX-OS images of about 600M. While the script is running and the TFTP upload is in progress, the upload stops once the file on the Cisco flash disk reaches 205987840 bytes. The program freezes, and when I type show users in the Cisco console I can see that the user used for the upload has already been disconnected.
I am thinking that maybe it's related to the SSH session timing out? Or maybe something is wrong in my script? I am new to Python.
I am posting only relevant parts of the script:
def ssh_connect_no_shell(command):
    global output
    ssh_no_shell = paramiko.SSHClient()
    ssh_no_shell.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh_no_shell.connect(device, port=22, username=myuser, password=mypass)
    ssh_no_shell.exec_command('terminal length 0\n')
    stdin, stdout, stderr = ssh_no_shell.exec_command(command)
    output = stdout.readlines()
    ssh_no_shell.close()

def upload_file():
    cmd_1 = "copy tftp:" + "//" + tftp_server + "/" + image + " " + "bootflash:" + " vrf " + my_vrf
    ssh_connect_no_shell(cmd_1)
    print '\n##### Device Output Start #####'
    print '\n'.join(output)
    print '\n##### Device Output End #####'

def main():
    print 'Program starting...\n'
    time.sleep(1)
    variables1()
    check_if_file_present()
    check_if_enough_space()
    upload_file()
    check_file_md5sum()
    are_you_sure(perform_upgrade)
    perform_upgrade_and_reboot()

if __name__ == '__main__':
    clear_screen()
    main()
My experience is:
don't use TFTP
...it's incredibly slow for large files
...it doesn't work well with some firewalls
...it depends on the server implementation to handle large files
=> I'd guess your script would run just fine using different TFTP server software...
Rather than troubleshooting TFTP, I'd suggest you
go with SCP
...it requires an open SSH port on your Nexus device
...if SSH is possible through your firewall, SCP is too - no extra rule required
+++ you can "push" the images from your laptop to your device without having to log in to the device
for example - use "putty scp" => pscp.exe
// pscp  (Windows client)
cd d:\DOWNLOADS
start pscp n7000-s1-kickstart.6.2.12.bin admin@10.10.10.11:bootflash:
start pscp n7000-s1-dk9.6.2.12.bin admin@10.10.10.11:bootflash:
This copies the kickstart and NX-OS images to a device in parallel.
...it's easy to loop over several devices to add more parallel transfers (see the sketch below)
By the way, some "IOS"-based devices require additional flags:
pscp -2 -scp ...
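A minimal sketch of that loop from Python, in case a batch file is inconvenient (the device list and image filename are hypothetical):
# Launch one pscp transfer per device, all in parallel, then wait.
import subprocess

devices = ['10.10.10.11', '10.10.10.12', '10.10.10.13']
procs = [subprocess.Popen(['pscp', 'n7000-s1-dk9.6.2.12.bin',
                           'admin@%s:bootflash:' % dev])
         for dev in devices]
for p in procs:
    p.wait()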
I am trying to do some communication between my Ruby process and a Python process, and I want to use a UNIX socket.
Objective:
The Ruby process forks and execs the Python process. In the Ruby process, create a UNIX socket pair and pass one end to Python.
Ruby code (p.rb):
require 'socket'
r_socket, p_socket = Socket.pair(:UNIX, :DGRAM, 0)
# I was hoping this file descriptor would be available in the child process
pid = Process.spawn('python', 'p.py', p_socket.fileno.to_s)
Process.waitpid(pid)
Python code (p.py):
import sys
import os
import socket
# get the file descriptor from command line
p_fd = int(sys.argv[1])
socket.fromfd(p_fd, socket.AF_UNIX, socket.SOCK_DGRAM)
# f_socket = os.fdopen(p_fd)
# os.write(p_fd, 'h')
command line:
ruby p.rb
Result:
OSError: [Errno 9] Bad file descriptor
I was hoping that the Ruby process would pass the file descriptor to the Python process, so that the two could exchange data over the socket pair.
So, my questions:
1) Is it possible to pass an open file descriptor between a Ruby and a Python process as above?
2) If file descriptors can be passed between two processes, then what's wrong with my code?
You were close, but Ruby's spawn closes any file descriptors > 2 by default, unless you pass :close_others => false as an argument. See the documentation:
http://apidock.com/ruby/Kernel/spawn
Working example:
require 'socket'
r_socket, p_socket = Socket.pair(:UNIX, :DGRAM, 0)
pid = Process.spawn('python', 'p.py', p_socket.fileno.to_s,
{ :close_others => false })
# Close the python end (we're not using it on the Ruby side)
p_socket.close
# Wait for some data
puts r_socket.gets
# Wait for finish
Process.waitpid(pid)
Python:
import sys
import socket
p_fd = int(sys.argv[1])
p_socket = socket.fromfd(p_fd, socket.AF_UNIX, socket.SOCK_DGRAM)
p_socket.send("Hello world\n")
Test:
> ruby p.rb
Hello world
I'm using Windows Vista and Python 2.7.2, but answers needn't be in Python.
So I can start and interact with a subprocess's stdin/stdout normally (using Python), for command-line programs such as 'dir'.
- however -
the program I now want to call likes to make a new console window for itself on Windows (not curses), with new handles, even when run from a pre-existing cmd.exe window. (Odd, as it's the "remote control" interface of VLC.) Is there any way of either:
getting the handles for the process-made console's stdin/out; or
getting the new shell to run within the old (like invoking bash from within bash)?
Failing that (so that I can hack the subprocess's code), how would a new console be set up on Windows, and its input/output transferred?
Edit:
I.e.
>>> p = Popen(args=['vlc','-I','rc'],stdin=PIPE,stdout=PIPE)
# [New console appears with text, asking for commands]
>>> p.stdin.write("quit\r\n")
Traceback:
File "<stdin>", line 1, in <module>
IOError: [Errno 22] Invalid argument
>>> p.stdout.readline()
''
>>> p.stdout.readline()
''
# [...]
But the new console window that comes up doesn't accept keyboard input either.
Whereas normally:
>>> p = Popen(args=['cmd'],stdin=PIPE,stdout=PIPE)
>>> p.stdin.write("dir\r\n")
>>> p.stdin.flush()
>>> p.stdout.readline() #Don't just do this IRL, may block.
'Microsoft Windows [Version...
I haven't gotten the rc interface to work with a piped stdin/stdout on Windows; I get IOError at all attempts to communicate or write directly to stdin. There's an option --rc-fake-tty that lets the rc interface be scripted on Linux, but it's not available in Windows -- at least not in my somewhat dated version of VLC (1.1.4). Using the socket interface, on the other hand, seems to work fine.
The structure assigned to the startupinfo option -- and used by the Win32 CreateProcess function -- can be configured to hide a process window. However, for the VLC rc console, I think it's simpler to use the existing --rc-quiet option. In general, here's how to configure startupinfo to hide a process window:
startupinfo = subprocess.STARTUPINFO()
startupinfo.dwFlags |= subprocess.STARTF_USESHOWWINDOW
# wShowWindow defaults to SW_HIDE (0), but being explicit doesn't hurt:
startupinfo.wShowWindow = subprocess.SW_HIDE
subprocess.Popen(cmd, startupinfo=startupinfo)
Just to be complete -- in case using pipes is failing on your system too -- here's a little demo I cooked up using the --rc-host option to communicate using a socket. It also uses --rc-quiet to hide the console. This just prints the help and quits. I haven't tested anything else. I checked that it works in Python versions 2.7.2 and 3.2.2. (I know you didn't ask for this, but maybe it will be useful to you nonetheless.)
import socket
import subprocess
from select import select

try:
    import winreg
except ImportError:
    import _winreg as winreg

def _get_vlc_path():
    views = [(winreg.HKEY_CURRENT_USER, 0),
             (winreg.HKEY_LOCAL_MACHINE, winreg.KEY_WOW64_64KEY),
             (winreg.HKEY_LOCAL_MACHINE, winreg.KEY_WOW64_32KEY)]
    subkey = r'Software\VideoLAN\VLC'
    access = winreg.KEY_QUERY_VALUE
    for hroot, flag in views:
        try:
            with winreg.OpenKey(hroot, subkey, 0, access | flag) as hkey:
                value, type_id = winreg.QueryValueEx(hkey, None)
                if type_id == winreg.REG_SZ:
                    return value
        except WindowsError:
            pass
    raise SystemExit("Error: VLC not found.")

g_vlc_path = _get_vlc_path()

def send_command(sock, cmd, get_result=False):
    try:
        cmd = (cmd + '\n').encode('ascii')
    except AttributeError:
        cmd += b'\n'
    sent = total = sock.send(cmd)
    while total < len(cmd):
        sent = sock.send(cmd[total:])
        if sent == 0:
            raise socket.error('Socket connection broken.')
        total += sent
    if get_result:
        return receive_result(sock)

def receive_result(sock):
    data = bytearray()
    sock.setblocking(0)
    while select([sock], [], [], 1.0)[0]:
        chunk = sock.recv(1024)
        if chunk == b'':
            raise socket.error('Socket connection broken.')
        data.extend(chunk)
    sock.setblocking(1)
    return data.decode('utf-8')

def main(address, port):
    import time
    rc_host = '{0}:{1}'.format(address, port)
    vlc = subprocess.Popen([g_vlc_path, '-I', 'rc', '--rc-host', rc_host,
                            '--rc-quiet'])
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        sock.connect((address, port))
        help_msg = send_command(sock, 'help', True)
        print(help_msg)
        send_command(sock, 'quit')
    except socket.error as e:
        exit("Error: " + e.args[0])
    finally:
        sock.close()
        time.sleep(0.5)
        if vlc.poll() is None:
            vlc.terminate()

if __name__ == '__main__':
    main('localhost', 12345)
With reference to monitoring the stdout which appears in the newly spawned console window, here's another question/answer that solves the problem.
In summary (as answered by Adam M-W):
Suppress the newly spawned console by launching vlc in quiet mode: --intf=dummy --dummy-quiet or --intf=rc --rc-quiet.
Monitor the stderr of the launched process.
Note: as for stdin commands to the rc interface, the --rc-host solution is described in eryksun's answer.
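A minimal sketch of that stderr-monitoring approach (the vlc command name and verbosity flag are assumptions; adjust for your install):
# Launch VLC with the console suppressed and read its stderr line by line.
import subprocess

p = subprocess.Popen(['vlc', '--intf', 'rc', '--rc-quiet', '--verbose=2'],
                     stderr=subprocess.PIPE)
for line in iter(p.stderr.readline, b''):
    print(line.decode('utf-8', errors='replace').rstrip())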
I'm trying to write a Python script that can ssh into a remote server and execute simple commands like ls and cd from the Python client. However, I'm not able to read the output from the pseudo-terminal after successfully ssh'ing into the server. Could anyone please help me here, so that I can execute some commands on the server?
Here is the sample code:
#!/usr/bin/python2.6
import os, sys, time, thread

pid, fd = os.forkpty()
if pid == 0:
    os.execv('/usr/bin/ssh', ['/usr/bin/ssh', 'user@host'])
    sys.exit(0)
else:
    output = os.read(fd, 1024)
    print output
    data = output
    os.write(fd, 'password\n')
    time.sleep(1)
    output = os.read(fd, 1024)
    print output
    os.write(fd, 'ls\n')
    output = os.read(fd, 1024)
    print output
Sample output:
user#host's password:
Last login: Wed Aug 24 03:16:57 2011 from 1x.x.x.xxxx
-bash: ulimit: open files: cannot modify limit: Operation not permitted
host: /home/user>ls
I'd suggest trying the module pexpect, which is built exactly for this sort of thing (interfacing with other applications via pseudo-TTYs), or Fabric, which is built for this sort of thing more abstractly (automating system administration tasks on remote servers using SSH).
pexpect: http://pypi.python.org/pypi/pexpect/
Fabric: http://docs.fabfile.org/en/1.11/
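For instance, a minimal pexpect sketch of this kind of session (the host, password, and prompt pattern are placeholders to adapt):
# Spawn ssh in a pseudo-TTY, answer the password prompt, then run ls.
import pexpect

child = pexpect.spawn('ssh user@host')
child.expect('password:')
child.sendline('password')
child.expect(r'\$ ')         # assumes a shell prompt ending in "$ "
child.sendline('ls')
child.expect(r'\$ ')
print(child.before)          # everything printed before the next prompt
child.sendline('exit')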
As already stated, it's better to use public keys. As I use them normally, I have changed your program so that it works here.
#!/usr/bin/python2.6
import os, sys, time, thread

pid, fd = os.forkpty()
if pid == 0:
    os.execv('/usr/bin/ssh', ['/usr/bin/ssh', 'localhost'])
    sys.exit(0)
else:
    output = os.read(fd, 1024)
    print output
    os.write(fd, 'ls\n')
    time.sleep(1)  # this is new!
    output = os.read(fd, 1024)
    print output
With the added sleep(1), I give the remote host (or, in my case, not-so-remote host) time to process the ls command and produce its output.
If you send ls and read immediately, you only read what is currently available. You should probably read in a loop instead; a sketch follows.
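A minimal sketch of such a loop, using select to keep reading until the output goes quiet (treating a one-second lull as "done" is an assumption):
import os
import select

def read_all(fd, timeout=1.0):
    # Keep reading while new data arrives within `timeout` seconds.
    chunks = []
    while select.select([fd], [], [], timeout)[0]:
        chunks.append(os.read(fd, 1024))
    return ''.join(chunks)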
Or you could just do it this way:
import subprocess
sp = subprocess.Popen(("ssh", "localhost", "ls"), stdout=subprocess.PIPE)
print sp.stdout.read()