I've tried to create a little app that plays a sound when you lose connectivity for an extended period and plays another when the connection is re-established. Useful for wireless connections.
I'm still new to Python :) and trying little projects to improve my knowledge. If you do answer, I'd be very grateful if you could include any information about how to use subprocess.
I've defined the subprocess call but I'm not sure how to word my if statement so it loops from one function to the other, i.e. function 1: if ping loss > 15 pings, play a sound and move on to function 2; function 2: if ping success > 15 pings, play a sound and move back to function 1; and so on.
I've yet to wrap the program in a loop; at this point I'm just trying to get the ping to work with the if statement.
So right now the application just continuously loops pings.
import os
import subprocess
import winsound
import time
def NetFail():
    winsound.Beep(2000, 180), winsound.Beep(1400, 180)

def NetSucc():
    winsound.Beep(1400, 250), winsound.Beep(2000, 250),

ips = []
n = 1
NetSuccess = 10
NetFailure = 10
PinSuc = 0
PinFail = 0

x = '8.8.8.8'
ips.append(x)

for ping in range(0, n):
    ipd = ips[ping]

def PingFailure():
    while PinFail < NetSuccess:
        res = subprocess.call(['ping', '-n', '10', ipd])
        if ipd in str(res):
            PingSuccess()
        else:
            print ("ping to", ipd, "failed!"), NetFail()

def PingSuccess():
    while PinFail < NetFailure:  # This needs to be cleaned up so it doesn't interfere with the other function
        res = subprocess.call(['ping', '-n', '10', ipd])
        if ipd in str(res):
            PingFail()
        else:
            print ("ping to", ipd, "successful!"), NetSucc()
As you use the command ping -n 10 ip, I assume that you are using a Windows system; on Linux (or other Unix-like systems) it would be ping -c 10 ip.
Unfortunately, on Windows ping always returns 0, so you cannot use the return value to know whether the peer was reached. And even the output is not very clear...
So you should:
run in a cmd console the command ping -n 1 ip with an accessible and an inaccessible ip, note the output and identify the differences. On my (French) system, it writes Impossible; I suppose that you should get Unable or the equivalent in your locale
start the ping from Python with subprocess.Popen redirecting the output to a pipe
get the output (and error output) from the command with communicate
search for the word Unable in the output.
Code could be like:
errWord = 'Unable'   # replace with what your locale defines...
p = subprocess.Popen(['ping', '-n', '1', ipd],
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = p.communicate()
if errWord in out:
    pass   # process network disconnected
else:
    pass   # process network connected
Alternatively, you could search PyPI for a pure Python implementation of ping, such as py-ping...
Anyway, I would not use two functions in a flip-flop, because it will be harder if you later want to test connectivity to multiple IPs. I would rather use a class:
class IP(object):
    UNABLE = "Unable"   # word indicating unreachable host
    MAX = 15            # number of successes/failures to record new state

    def __init__(self, ip, failfunc, succfunc, initial=True):
        self.ip = ip
        self.failfunc = failfunc   # to warn of a disconnection
        self.succfunc = succfunc   # to warn of a connection
        self.connected = initial   # start by default in connected state
        self.curr = 0              # number of successive alternate states

    def test(self):
        p = subprocess.Popen(['ping', '-n', '1', self.ip],
                             stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        out, err = p.communicate()
        if self.UNABLE in out:
            if self.connected:
                self.curr += 1
            else:
                self.curr = 0      # reset count
        else:
            if not self.connected:
                self.curr += 1
            else:
                self.curr = 0      # reset count
        if self.curr >= self.MAX:  # state has changed
            self.connected = not self.connected
            self.curr = 0
            if self.connected:     # warn for new state
                self.succfunc(self)
            else:
                self.failfunc(self)
Then you can iterate over a list of IP objects, repeatedly calling ip.test(), and you will be warned of state changes.
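For example, a minimal driver loop over such objects might look like the sketch below (my own illustration, not part of the answer itself). Note that the class passes the IP instance to the callbacks, so the question's no-argument NetFail/NetSucc beep functions are wrapped in lambdas, and the one-second pause between rounds is an arbitrary choice:
import time

ips = [IP('8.8.8.8', lambda ip: NetFail(), lambda ip: NetSucc()),
       IP('192.168.1.1', lambda ip: NetFail(), lambda ip: NetSucc())]

while True:
    for ip in ips:
        ip.test()      # warns via the callbacks once a state change is confirmed
    time.sleep(1)      # arbitrary pause between rounds of pings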
Not quite sure what you want to achieve, but your if statement has to be part of the while loop if you want it to be executed each time ping is called via subprocess.
Also:
Here is the documentation for subprocess: https://docs.python.org/3/library/subprocess.html
For viewing the output of a process you have to call it via subprocess.check_output:
ls_output = subprocess.check_output(['ls'])
For further information have a look at this: http://sharats.me/the-ever-useful-and-neat-subprocess-module.html#a-simple-usage
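Putting both points together, a rough sketch (only an illustration, not the asker's finished program) of counting consecutive failures inside the while loop could look like this; ipd and NetFail come from the question, the 'Unable' marker is locale-dependent as the other answer explains, and the threshold of 15 is taken from the question:
import subprocess
import time

failures = 0
while failures < 15:                       # 15 consecutive failures, per the question
    try:
        out = subprocess.check_output(['ping', '-n', '1', ipd])
    except subprocess.CalledProcessError as exc:
        out = exc.output                   # a non-zero exit code still carries the output
    if 'Unable' in str(out):               # locale-dependent marker, see above
        failures += 1
        print("ping to", ipd, "failed!")
    else:
        failures = 0                       # any success resets the count
    time.sleep(1)                          # arbitrary pause between pings
NetFail()                                  # 15 failures in a row: play the "connection lost" beeps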
I am working on my own project, in which these steps have to be performed:
Connect to a remote server.
Get the pid, process name, cpu usage and swap memory usage of each running process on the remote server, daily at some specific time (say at 4 o'clock).
Compare each day's result with the previous day's result (e.g. day 1 pid with day 2 pid, day 1 process name with day 2 process name, etc.).
So far I have done up to step 2. Now I want to know how to extract the pid, process name, cpu usage and swap memory usage from the remote server and store them in some iterable variable, so that I can compare them to check for memory spikes.
Any other way apart from my idea will be appreciated.
My code sample is like this:
import paramiko
import re
import psutil


class ShellHandler:

    def __init__(self, host, user, psw):
        self.ssh = paramiko.SSHClient()
        self.ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        self.ssh.connect(host, username=user, password=psw, port=22)
        channel = self.ssh.invoke_shell()
        self.stdin = channel.makefile('wb')
        self.stdout = channel.makefile('r')

    def __del__(self):
        self.ssh.close()

    @staticmethod
    def _print_exec_out(cmd, out_buf, err_buf, exit_status):
        print('command executed: {}'.format(cmd))
        print('STDOUT:')
        for line in out_buf:
            print(line, end="")
        print('end of STDOUT')
        print('STDERR:')
        for line in err_buf:
            print(line, end="")
        print('end of STDERR')
        print('finished with exit status: {}'.format(exit_status))
        print('------------------------------------')
        # print(psutil.pids())
        pass

    def execute(self, cmd):
        """
        :param cmd: the command to be executed on the remote computer
        :examples: execute('ls')
                   execute('finger')
                   execute('cd folder_name')
        """
        cmd = cmd.strip('\n')
        self.stdin.write(cmd + '\n')
        finish = 'end of stdOUT buffer. finished with exit status'
        echo_cmd = 'echo {} $?'.format(finish)
        self.stdin.write(echo_cmd + '\n')
        shin = self.stdin
        self.stdin.flush()

        shout = []
        sherr = []
        exit_status = 0
        for line in self.stdout:
            if str(line).startswith(cmd) or str(line).startswith(echo_cmd):
                # up for now filled with shell junk from stdin
                shout = []
            elif str(line).startswith(finish):
                # our finish command ends with the exit status
                exit_status = int(str(line).rsplit(maxsplit=1)[1])
                if exit_status:
                    # stderr is combined with stdout.
                    # thus, swap sherr with shout in a case of failure.
                    sherr = shout
                    shout = []
                break
            else:
                # get rid of 'coloring and formatting' special characters
                shout.append(re.compile(r'(\x9B|\x1B\[)[0-?]*[ -/]*[#-~]').sub('', line).replace('\b', '').replace('\r', ''))

        # first and last lines of shout/sherr contain a prompt
        if shout and echo_cmd in shout[-1]:
            shout.pop()
        if shout and cmd in shout[0]:
            shout.pop(0)
        if sherr and echo_cmd in sherr[-1]:
            sherr.pop()
        if sherr and cmd in sherr[0]:
            sherr.pop(0)

        self._print_exec_out(cmd=cmd, out_buf=shout, err_buf=sherr, exit_status=exit_status)
        return shin, shout, sherr


obj = ShellHandler('Servername', 'username', 'password')
pID = []
# I want this (pid, cmd, swap memory) stored in a variable that is iterable.
pID = ShellHandler.execute(obj, "ps -eo pid,cmd,lstart,%mem,%cpu|awk '{print $1}'")
print(pID[0])  # --------------------------------- Problem: not giving any output.
Your ShellHandler's execute method returns three items, the first of which is the input you sent to it.
You should probably call it directly like this, anyway:
obj = ShellHandler('Servername', 'username', 'password')
stdin, out, err = obj.execute("ps -eo pid,lstart,%mem,%cpu,cmd")   # "in" is a reserved word, so name it stdin
for line in out:    # execute() returns the output as a list of lines
    pid, lstartwd, lstartmo, lstartdd, lstartm, lstartyy, mem, cpu, cmd = line.split(None, 8)
I moved cmd last because it might contain spaces. The lstart value also contains multiple space-separated fields. Here's what the output looks like in Debian:
19626 Tue Jan 15 15:03:57 2019 0.0 0.0 less filename
There are many questions about how to parse ps output in more detail; I'll refer you to them for figuring out how to handle the results from split exactly.
Splitting out the output of ps using Python
Is there any way to get ps output programmatically?
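As a rough illustration of the day-to-day comparison asked about in the question, the parsed lines could be collected into a dict keyed by pid and diffed against the previous day's snapshot. This is only a sketch under my own assumptions: parse_ps, compare_days and the 5-point threshold are made-up names, not part of the answer above.
def parse_ps(out):
    """Turn the ps output lines into {pid: (cmd, %mem, %cpu)}."""
    snapshot = {}
    for line in out[1:]:   # skip the ps header row
        pid, _wd, _mo, _dd, _tm, _yy, mem, cpu, cmd = line.split(None, 8)
        snapshot[pid] = (cmd.strip(), float(mem), float(cpu))
    return snapshot

def compare_days(previous, current, mem_threshold=5.0):
    """Report processes whose %mem grew by more than mem_threshold points."""
    for pid, (cmd, mem, cpu) in current.items():
        if pid in previous and mem - previous[pid][1] > mem_threshold:
            print('memory spike in pid {} ({}): {} -> {}'.format(
                pid, cmd, previous[pid][1], mem))
Each day's snapshot can be persisted (e.g. as JSON) at the scheduled time and compared with the stored one from the previous day.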
ps aux command should have all the info you need (pid, process name, cpu, memory)
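For instance, a sketch using the ShellHandler from the question to run ps aux and pull those columns out (the column positions assume the standard ps aux header USER PID %CPU %MEM ... COMMAND):
stdin, out, err = obj.execute("ps aux")
for line in out[1:]:                  # skip the header row
    fields = line.split(None, 10)     # COMMAND may contain spaces, so split into at most 11 fields
    pid, cpu, mem, cmd = fields[1], fields[2], fields[3], fields[10]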
I am using Tornado Server 4.4.2 with PyPy 5.9.0 (Python 2.7.13),
hosted on Ubuntu 16.04.3 LTS.
When a new client logs in, a new class instance is created and passed the socket, so a dialog can be maintained. I am using a global clients[] list to contain these instances. The initial dialog looks like:
clients = []

class RegisterWebSocket(SockJSConnection):
    # initialize the class and handle on-open (some things left out)

    def on_open(self, info):
        self.ipaddress = info.headers['X-Real-Ip']

    def on_message(self, data):
        coinlist = []
        msg = json.loads(data)
        if 'coinlist' in msg:
            coinlist = msg['coinlist']
        if 'currency' in msg:
            currency = msg['currency']
        tz = pendulum.timezone('America/New_York')
        started = pendulum.now(tz).to_day_datetime_string()
        ws = WebClientUpdater(self, self.clientid, coinlist, currency,
                              started, self.ipaddress)
        clients.append(ws)
The ws class is shown below. I use a Tornado PeriodicCallback to update the clients with their specific info every 20 seconds:
class WebClientUpdater(SockJSConnection):
    def __init__(self, ws, id, clist, currency, started, ipaddress):
        super(WebClientUpdater, self).__init__(ws.session)
        self.ws = ws
        self.id = id
        self.coinlist = clist
        self.currency = currency
        self.started = started
        self.ipaddress = ipaddress
        self.location = loc
        self.loop = tornado.ioloop.PeriodicCallback(self.updateCoinList,
                        20000, io_loop=tornado.ioloop.IOLoop.instance())
        self.loop.start()
        self.send_msg('welcome ' + id)

    def updateCoinList(self):
        pdata = db.getPricesOfCoinsInCurrency(self.coinlist, self.currency)
        self.send(dict(priceforcoins=pdata))

    def send_msg(self, msg):
        self.send(msg)
I also start a 60-second PeriodicCallback at startup to monitor the clients for closed connections and remove them from the clients[] list, which I call from the startup code like this:
if __name__ == "__main__":
    app = make_app()
    app.listen(options.port)
    ScheduleSocketCleaning()
and
def ScheduleSocketCleaning():
    def cleanSocketHouse():
        print "checking sockets"
        for x in clients:
            if x.is_closed:
                x = None
        clients[:] = [y for y in clients if not y.is_closed]

    loop = tornado.ioloop.PeriodicCallback(cleanSocketHouse, 60000,
                                           io_loop=tornado.ioloop.IOLoop.instance())
    loop.start()
If I monitor the server using top, I see that it typically uses 4% CPU with immediate bursts to 60+, but later, say after a few hours, it sits in the 90% range and stays there.
I have used strace and I see an enormous number of stat calls on the same files, with errors shown in the strace -c view, but I cannot find any errors in a text file using -o trace.log. How can I find those errors?
But I also notice that most of the time is consumed in epoll_wait.
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ------------
 41.61    0.068097           7      9484           epoll_wait
 26.65    0.043617           0    906154      2410 stat
 15.77    0.025811           0    524072           read
 10.90    0.017840         129       138           brk
  2.41    0.003937           9       417           madvise
  2.04    0.003340           0    524072           lseek
  0.56    0.000923           3       298           sendto
  0.06    0.000098           0     23779           gettimeofday
------ ----------- ----------- --------- --------- ------------
100.00    0.163663               1989527      2410 total
Notice 2410 errors above.
When I view the strace output stream attached to the pid, I just see endless stat calls on the same files.
Can someone advise me on how to better debug this situation? With only two clients and 20 seconds between client updates (there are no other users of the site during this prototype stage), I would expect the CPU usage to be less than 1% or thereabouts.
You need to stop PeriodicCallbacks, otherwise it's a memory leak. You do that by simply calling .stop() on the PeriodicCallback object. One way to deal with that is in your periodic cleaning task:
def cleanSocketHouse():
    global clients
    new_clients = []
    for client in clients:
        if client.is_closed:
            # I don't know why you call it loop,
            # .timer would be more appropriate
            client.loop.stop()
        else:
            new_clients.append(client)
    clients = new_clients
I'm not sure how accurate .is_closed is (some testing is required). The other way is to alter updateCoinList. The .send() method should fail when the client is no longer connected, right? Therefore try: except: should do the trick:
def updateCoinList(self):
    global clients
    pdata = db.getPricesOfCoinsInCurrency(self.coinlist, self.currency)
    try:
        self.send(dict(priceforcoins=pdata))
    except Exception:
        # log exception?
        self.loop.stop()
        clients.remove(self)  # you should probably use a set instead of a list
If .send() actually doesn't fail (for whatever reason, I'm not that familiar with Tornado), then stick to the first solution.
I need to communicate with an embedded system over RS232. For this I want to profile the time it takes to send a response to each command.
I've tested this code using two methods: datetime.now() and timeit()
Method #1
def resp_time(n, msg):
    """Given number of tries - n and bytearray list"""
    msg = bytearray(msg)
    cnt = 0
    timer = 0
    while cnt < n:
        time.sleep(INTERVAL)
        a = datetime.datetime.now()
        ser.flush()
        ser.write(msg)
        line = []
        for count in ser.read():
            line.append(count)
            if count == '\xFF':
                # print line
                break
        b = datetime.datetime.now()
        c = b - a
        # print c.total_seconds()*1000
        timer = timer + c.total_seconds() * 1000
        cnt = cnt + 1
    return timer / n

ser = serial.Serial(COMPORT, BAUDRATE, serial.EIGHTBITS, serial.PARITY_NONE,
                    serial.STOPBITS_ONE, timeout=16)
if ser.isOpen():
    print "Serial port opened at: Baud:", COMPORT, BAUDRATE
    cmd = read_file()
    # returns a list of commands [msg1,msg2....]
    n = 100
    for index in cmd:
        timer = resp_time(n, index)
        print "Time in msecs over %d runs: %f " % (n, timer)
Method #2
def com_loop(msg):
    msg = bytearray(msg)
    time.sleep(INTERVAL)
    ser.flush()
    ser.write(msg)
    line = []
    for count in ser.read():
        line.append(count)
        if count == '\xFF':
            break

if __name__ == '__main__':
    import timeit
    ser = serial.Serial(COMPORT, BAUDRATE, serial.EIGHTBITS, serial.PARITY_NONE,
                        serial.STOPBITS_ONE, timeout=16)
    if ser.isOpen():
        print "Serial port opened at: Baud:", COMPORT, BAUDRATE
        cmd = read_file()
        # returns a list of commands [msg1,msg2....]
        n = 100
        for index in cmd:
            t = timeit.timeit("com_loop(index)",
                              "from __main__ import com_loop; index=%s;" % index,
                              number=n)
            print t / 100
With datetime I get about 2 ms to execute a command, and with timeit I get about 200 ms for the same command.
I suspect I'm not calling timeit() properly, can someone point me in the right direction?
I'd assume 200 µs is closer to the truth, considering your COM port will have something like 115200 baud; assuming messages are 8 bytes long, transmitting one message would take about 9/115200 s ~= 10/100000 = 1/10,000 = 100 µs on the serial line alone. Being faster than that will be pretty much impossible.
Python is definitely not the language of choice to do timing characterization at these scales. You will need to get a logic analyzer, or work very close to the serial controller (which I hope is directly attached to your PC's I/O controller and not some USB device, because that would introduce latencies of the same order of magnitude, at least). If you're talking about microseconds, the limiting factor in measurement is usually the random time it takes for your PC to react to an interrupt, the OS to run the interrupt service routine, the scheduler to continue your userland process, and then Python with its levels and levels of indirection. You're basically measuring the size of single grains of sand by holding a banana next to them.
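As a side note on calling timeit: instead of building the setup string with %s, timeit.timeit also accepts a callable, which avoids re-creating index from its repr. A minimal sketch, assuming the com_loop and cmd list from the question:
import timeit

n = 100
for index in cmd:
    # bind the current message in a lambda and time the call itself; no setup string needed
    t = timeit.timeit(lambda: com_loop(index), number=n)
    print "average per call: %f s" % (t / n)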
I'm writing a program that verifies a list of emails for syntax and MX records. As blocking code is time consuming, I want to do this asynchronously or with threads. This is my code:
with open(file_path) as f:
    # check the status of the file; if "away" then the file pointer will be at the last index
    if importState.status == ImportStateFile.STATUS_AWAY:
        f.seek(importState.fileIndex, 0)
    while True:
        # the number of emails to process is configurable, 10 or 20
        emails = list(islice(f, app.config['NUMBER_EMAILS_TO_PROCESS']))
        if len(emails) == 0:
            break
        importState.fileIndex = importState.fileIndex + len(''.join(emails))
        for email in emails:
            email = email.strip('''<>;,'\r\n ''').lower()
            d = threads.deferToThread(check_email, email)
            d.addCallback(save_email_status, email, importState)
        # set the number of emails processed
        yield set_nbrs_emails_process(importState)

# do an insert of all emails
yield reactor.callFromThread(db.session.commit)
# set file status as success
yield finalize_import_file_state(importState)
reactor.callFromThread(reactor.stop)
Check email function:
def check_email(email):
    pipe = subprocess.Popen(["./check_email", '--email=%s' % email], stdout=subprocess.PIPE)
    status = pipe.stdout.read()
    try:
        status = int(status)
    except ValueError:
        status = -1
    return status
What I need is to process 10 emails at the same time and wait for the results.
I'm not sure why there are threads involved in your example code. You don't need threads to interact with email with Twisted, nor to do so concurrently.
If you have an asynchronous function that returns a Deferred, you can just call it ten times and the ten different streams of work will proceed in parallel:
for i in range(10):
    async_check_email_returning_deferred()
If you want to know when all ten results are available, you can use gatherResults:
from twisted.internet.defer import gatherResults

...

email_results = []
for i in range(10):
    email_results.append(async_check_mail_returning_deferred())

all_results = gatherResults(email_results)
all_results is a Deferred that will fire when all of the Deferreds in email_results have fired (or when the first of them fires with a Failure).
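Applied to the question's code, which already builds Deferreds with deferToThread, a batch of ten could be gathered like this (a sketch only; process_batch is an illustrative name, and check_email/save_email_status are the question's own functions):
from twisted.internet import threads
from twisted.internet.defer import gatherResults, inlineCallbacks

@inlineCallbacks
def process_batch(emails, importState):
    # start all checks at once; each Deferred fires with check_email's status
    deferreds = [threads.deferToThread(check_email, email) for email in emails]
    statuses = yield gatherResults(deferreds)      # wait until every check has finished
    for email, status in zip(emails, statuses):
        save_email_status(status, email, importState)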
I'm trying to check for errors in a log file of a running embedded system.
I have already implemented paramiko in my scripts, as I've been told this is the best way to use SSH in Python.
Now when I tail the log file I see that a big delay builds up, increasing by about 30 seconds per minute.
I already use grep to decrease the number of lines printed, as I thought I was receiving too much input, but that isn't the case.
How can I decrease this delay, or stop it from increasing at runtime? I want to tail for hours...
def mkssh_conn(addr):
    """returns an sshconnection"""
    paramiko.util.logging.getLogger('paramiko').setLevel(logging.WARN)
    sshcon = paramiko.SSHClient()
    sshcon.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    sshcon.connect(addr, username, password)
    return sshcon
while True:
    BUF_SIZE = 1024
    client = mkssh_conn()  # returns a paramiko.SSHClient()
    transport = client.get_transport()
    transport.set_keepalive(1)
    channel = transport.open_session()
    channel.settimeout(delta)
    channel.exec_command('killall tail')
    channel = transport.open_session()
    channel.settimeout(delta)
    cmd = "tail -f /log/log.log | grep -E 'error|statistics'"
    channel.exec_command(cmd)
    while transport.is_active():
        print "transport is active"
        rl, wl, xl = select.select([channel], [], [], 0.0)
        if len(rl) > 0:
            buf = channel.recv(BUF_SIZE)
            if len(buf) > 0:
                lines_to_process = LeftOver + buf
                EOL = lines_to_process.rfind("\n")
                if EOL != len(lines_to_process) - 1:
                    LeftOver = lines_to_process[EOL + 1:]
                    lines_to_process = lines_to_process[:EOL]
                else:
                    LeftOver = ""
                for line in lines_to_process.splitlines():
                    if "error" in line:
                        report_error(line)
                    print line
    client.close()
I've found a solution:
It seems that if I lower BUF_SIZE to 256 the delay decreases. Obviously.
I need to recheck if the delay still increases during runtime or not.
BUF_SIZE should be on the higher end to reduce CPU cycles (and in turn the overall delay due to network latency) in case you are working with a high-throughput tailed pipe.
Further, making BUF_SIZE a higher number should not reduce performance (provided paramiko doesn't wait until the buffer fills up on a low-throughput pipe).
The contradiction between @studioj's answer and this one is probably due to an upgrade of paramiko (fixed now).
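As a rough sketch of the larger-buffer advice (untested against the asker's setup, and reusing channel, transport, LeftOver and report_error from the question), the inner loop could drain everything currently available before processing, using paramiko's channel.recv_ready():
BUF_SIZE = 16384                                       # assumption: much larger than the original 1024

while transport.is_active():
    rl, _, _ = select.select([channel], [], [], 1.0)   # block briefly instead of spinning
    if rl:
        chunks = []
        while channel.recv_ready():                    # read all data currently buffered
            chunks.append(channel.recv(BUF_SIZE))
        buf = ''.join(chunks)
        if buf:
            # keep only complete lines, carrying the partial tail over to the next read
            lines_to_process, _, LeftOver = (LeftOver + buf).rpartition("\n")
            for line in lines_to_process.splitlines():
                if "error" in line:
                    report_error(line)
                print line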