So I have a script that isn't quite working yet, but I am hoping to get it to the point where it keeps trying to connect to a server until it finally succeeds (using the paramiko library). In simplified terms, this is what my code looks like:
canConnect = False
while canConnect == False:
    try:
        stdin, stdout, stderr = ssh.exec_command('ps')
        if stdout.read():
            canConnect = True
        else:
            # cannot connect
            time.sleep(20)
    except:
        # cannot connect
        time.sleep(20)
Now... this would be quite basic with a simple if statement, but it gets more complicated because I need to use "try" and "except". If the code can connect successfully (using "ps" as an arbitrary command that returns content and proves the server is reachable), I assume it passes into the if condition, which then sets canConnect to True and stops the loop. If it cannot connect, I think Paramiko will throw an exception (I put the "else" branch there just in case), but once it hits the "except", it should wait for 20 seconds, and then I assume the while statement takes the code back to the beginning and starts again? What I have witnessed so far is that some kind of loop is happening, but it doesn't actually appear to be attempting to connect to the server.
Also, an unrelated question: documentation is scarce, but I assume Paramiko has to be given three variables like that to perform an exec_command (regardless of the variable names assigned, they will receive standard input, output, and error, in that order?)? I also assume it is uncommon to assign multiple comma-delimited variables to something like that, besides lists or method calls?
I think your use of a bare except: may be masking the real problem, as it catches all exceptions and disregards them. That would explain the "some kind of loop is happening, but it doesn't actually appear to be attempting to connect to the server" behavior. Consider changing that to something like:
except (paramiko.SSHException, socket.error):
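Putting that into your loop, the whole thing might look like this (a sketch, assuming ssh is an already-configured paramiko.SSHClient and that a failed attempt raises one of these two exceptions):

import socket
import time
import paramiko

# assumes `ssh` is an already-configured paramiko.SSHClient
canConnect = False
while not canConnect:
    try:
        stdin, stdout, stderr = ssh.exec_command('ps')
        if stdout.read():
            canConnect = True
        else:
            time.sleep(20)  # command produced no output; wait and retry
    except (paramiko.SSHException, socket.error) as e:
        print("Could not connect: %s; retrying in 20 seconds" % e)
        time.sleep(20)

This way any other exception (a typo, a missing import, and so on) will surface instead of being silently swallowed.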
I wonder if the Python code below (specifically the HTTP server) can ever crash. Assuming there is no syntax error in any of the library code (which is already compiled), I think handling the exceptions in a while loop should be sufficient for this code never to crash. I ran the code below for a while and it never crashed, but I wonder if it is theoretically or practically possible for this program to crash?
while True:
    try:
        server = HTTPServer(('', PORT_NUMBER), myHandler)
        server.serve_forever()
    except:
        try:
            server.socket.close()
        except:
            pass
The actual reason I am asking this question is that I don't want to deal with UNIX stuff to watch the process and restart it if it crashes. Is the above solution sufficient?
Thanks.
If "except" block has worng code, it can crash cause of it. I mean, something like that:
# path/to/py3
FOO = [1, 2, 3]
try:
    # index out of bounds: the try block raises, so execution moves to the except block
    print(FOO[4])
except:
    # if the except block itself has an error, the program can crash;
    # each of the lines below is intentionally broken
    print "Index out of bounds!"  # Python 2 print statement: a SyntaxError in Python 3
    print("2" + 2)                # TypeError: cannot concatenate str and int
    print(FOO["BAR"])             # TypeError: list indices must be integers
But if the except block has correct logic too, then the program should work without crashing.
Like Klaus D. already mentioned in his comment, there can be cases where the socket-close code in your except block crashes. You could optionally wrap that in a try/except as well...
Another option is to use something like this (no UNIX involved):
http://supervisord.org/
It's easy to run and will automatically restart your program if it crashes.
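A minimal config entry could look something like this (the program name and script path here are placeholders, not from your setup):

[program:myhttpserver]
command=python /path/to/server.py
autostart=true
autorestart=true

With autorestart=true, supervisord respawns the process whenever it exits unexpectedly, so you don't need your own watchdog.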
I'm trying to receive some data from a thread, but every time it goes through the exception; it never gets through the try block, and I don't know what is wrong. I've done this once before, and I've searched everywhere. If someone could please help:
def receving(name, sock):
    while run:
        try:
            tLock.acquire()
            data = sock.recvfrom(1024)
        except:
            pass
        finally:
            tLock.release()
        return data
host = socket.gethostbyname(socket.gethostname())
server = (host, 5000)
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.bind((host, port))
s.setblocking(0)

pool = ThreadPool(processes=1)
async_result = pool.apply_async(receving, ('arg qualquer', s))
return_val = async_result.get()
print(return_val)

run = True
while run:
    return_val = async_result.get()
    print(return_val)
The error message is this:
return data
UnboundLocalError: local variable 'data' referenced before assignment
I've already tried initializing data before the try:, but the output is the same; it skips the try: block the same way. I also tried making it global, but with no success.
The exception you describe is very straightforward. It's all in the function at the top of your code:
def receving(name, sock):
    while run:
        try:
            tLock.acquire()
            data = sock.recvfrom(1024)
        except:
            pass
        finally:
            tLock.release()
        return data
If the code in the try block causes an exception, the assignment to data won't have run. So when you try to return data later on, the local variable has no value and so it doesn't work.
It's not hard to fix that specific issue. Try putting data = None or something similar in the except clause, instead of just pass. That way, data will be defined (albeit perhaps with a value that's not very useful) regardless of whether there was an exception or not.
You should however consider tightening up the except clause so that you're not ignoring all exceptions. That's often a bad idea, since it can cause the program to run even with really broken code in it. For instance, you've never defined tLock in the code you've shown, and the try would catch the NameError caused by trying to acquire it (you'd still get an exception though when the finally clause tries to release it, so I'm guessing this isn't a real issue in your code). Normally you should specify the exception types you want to catch. I'm not exactly sure which ones would be normal for your current code, so I'll leave picking them to you.
You might also consider not having an except clause at all, if there's no reasonable result to return. That way, the exception would "bubble out" of your function and it would be the caller's responsibility to deal with it. For some kinds of exceptions (e.g. ones caused by programming bugs, not expected situations), this is usually the best way to go.
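Putting the first two suggestions together, a minimal sketch of the function might look like this (assuming socket.error is the failure you actually expect from recvfrom on a non-blocking socket; run and tLock come from your surrounding code):

import socket

def receving(name, sock):
    while run:
        try:
            tLock.acquire()
            data = sock.recvfrom(1024)
        except socket.error:
            data = None  # define data even when recvfrom fails
        finally:
            tLock.release()
        return data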
There's a lot of other weird stuff in your code though, so I'd expect you'll run into other issues after fixing the first one. For instance, you always return from the first iteration of your while loop (assuming I fixed your messed up indentation correctly), so there's not really much point in having it at all. If the return data line is actually indented less (i.e. at the same level as while run), then the loop will make the code inside run more than once, but it will never stop running, since nothing inside it ever changes the value of the global run variable.
There may be other issues too, but it's not entirely obvious to me what you're trying to do, so I can't help with those. Multi-threaded code and network programming can be very tough to get right even for experienced programmers, so if you're new it might be a better idea to start with something simpler first.
I have a process that runs data acquisition using PySerial. It's working fine now, but there's a weird thing I had to do to make it work continuously, and I'm not sure this is normal, so I'm asking this question.
What happens: It looks like the connection drops now and then! Around once every 30-60 minutes, with big error bars (it could go for hours and be OK, but sometimes it happens often).
My question: Is this standard?
My temporary solution: I wrote a simple "reopen" function that looks like this:
def ReopenDevice(devObject):
    try:
        devObject.close()
        devObject.open()
    except Exception as e:
        print("Error while trying to connect to device " + devObject.port + ". The error says: " + str(e))
        time.sleep(2)
What I do is: if data pulling fails for 2 minutes, I reopen the device with this function, and it continues working well with no problems.
My program model: It's a GUI program, where the user clicks something like "Start", and that button does some preparations and runs a function through multiprocessing.Process() that starts with:
devObj = serial.Serial()
#... other params
devObj.open()
and that function then runs a while loop that keeps polling data with something like:
bytesToRead = devObj.inWaiting()
if bytesToRead != 0:
    buffer = decodeString(devObj.read(bytesToRead))
    # process buffer and push it to a list...
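Tying the two snippets together, the loop with the reopen fallback looks roughly like this (simplified; the lastData variable is my own illustration of the 2-minute rule):

lastData = time.time()
while True:
    bytesToRead = devObj.inWaiting()
    if bytesToRead != 0:
        buffer = decodeString(devObj.read(bytesToRead))
        # process buffer and push it to a list...
        lastData = time.time()
    elif time.time() - lastData > 120:
        # no data for 2 minutes; assume the connection dropped
        ReopenDevice(devObj)
        lastData = time.time()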
The way I know that the problem has happened is that devObj.inWaiting() keeps returning zero, no matter how much data there is on the device!
Is this behavior expected? Should I always guard against it, whether or not it happens?
The problem was reduced a lot by not calling inWaiting() very frequently. Anyway, I kept the reconnect part to ensure that my program never fails. Thanks to "Kobi K" for suggesting the possible cause of the problem.
I am going to write an SSH communicator class in Python. I have a Telnet communicator class, and I need functions like the ones Telnet offers. The Telnet communicator has read_until and read_very_eager functions.
read_until : Read until a given string is encountered or until timeout.
read_very_eager : Read everything that's possible without blocking in I/O (eager).
I couldn't find these functions for an SSH communicator. Any ideas?
You didn't state it in the question, but I am assuming you are using Paramiko as per the tag.
read_until: Read until a given string is encountered or until timeout.
This seems like a very specialized function for a particular high level task. I think you will need to implement this one. You can set a timeout using paramiko.Channel.settimeout and then read in a loop until you get either the string you want or a timeout exception.
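A sketch of what that could look like (the function name and chunk size are my own; channel is assumed to be an open paramiko.Channel):

import socket

def read_until(channel, expected, timeout=10.0):
    channel.settimeout(timeout)
    buf = ''
    while expected not in buf:
        try:
            chunk = channel.recv(1024)
        except socket.timeout:
            break  # timed out before seeing the expected string
        if not chunk:
            break  # channel was closed
        buf += chunk
    return buf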
read_very_eager: Read everything that's possible without blocking in I/O (eager).
Paramiko doesn't directly provide this, but it does provide primitives for non-blocking I/O and you can easily put this in a loop to slurp in everything that's available on the channel. Have you tried something like this?
channel.setblocking(0)  # non-blocking: recv raises socket.timeout when no data is ready
resultlist = []
while True:
    try:
        chunk = channel.recv(1024)
    except socket.timeout:
        break
    resultlist.append(chunk)
return ''.join(resultlist)
Hi there, I was searching for a solution to the same problem, and I think this might help you.
One observation; tell me if you find a better solution: I don't get any output if I remove the sixth line (session.recv_exit_status()). I was originally printing that line's result to check the status; later I found that recv_exit_status() has to be called for this code to work.
import paramiko, sys
trans = paramiko.Transport((host, 22))
trans.connect(username=user, password=passwd)
session = trans.open_channel("session")
session.exec_command('grep -rE print .')
session.recv_exit_status()
while session.recv_ready():
    temp = session.recv(1024)
    print temp
1. read_until: search the received data for the string you are looking for and break out of the loop when you find it.
2. read_very_eager: use the code above.
I'm writing a program that adds normal UNIX accounts (i.e. modifying /etc/passwd, /etc/group, and /etc/shadow) according to our corp's policy. It also does some slightly fancy stuff like sending an email to the user.
I've got all the code working, but there are three pieces of code that are very critical, which update the three files above. The code is already fairly robust because it locks those files (e.g. /etc/passwd.lock), writes to a temporary file (e.g. /etc/passwd.tmp), and then overwrites the original file with the temporary one. I'm fairly pleased that it won't interfere with other running instances of my program or the system useradd, usermod, passwd, etc. programs.
The thing that I'm most worried about is a stray ctrl+c, ctrl+d, or kill command in the middle of these sections. This has led me to the signal module, which seems to do precisely what I want: ignore certain signals during the "critical" region.
I'm using an older version of Python, which doesn't have signal.SIG_IGN, so I have an awesome "pass" function:
def passer(*a):
    pass
The problem that I'm seeing is that signal handlers don't work the way that I expect.
Given the following test code:
import signal
import sys
import time

def passer(a=None, b=None):
    pass

def signalhander(enable):
    # note: SIGKILL can never be caught or ignored; signal.signal() will raise an error for it
    signallist = (signal.SIGINT, signal.SIGQUIT, signal.SIGABRT, signal.SIGPIPE, signal.SIGALRM, signal.SIGTERM, signal.SIGKILL)
    if enable:
        for i in signallist:
            signal.signal(i, passer)
    else:
        for i in signallist:
            signal.signal(i, abort)
    return

def abort(a=None, b=None):
    sys.exit('\nAccount was not created.\n')
    return

signalhander(True)
print('Enabled')
time.sleep(10)  # ^C during this sleep
The problem with this code is that a ^C (SIGINT) during the time.sleep(10) call causes that function to stop, and then my signal handler takes over as desired. However, that doesn't solve my "critical" region problem above, because I can't tolerate having whatever statement encounters the signal fail.
I need some sort of signal handler that will just completely ignore SIGINT and SIGQUIT.
The Fedora/RH command "yum" is written in Python and does basically exactly what I want. If you do a ^C while it's installing anything, it will print a message like "Press ^C within two seconds to force kill." Otherwise, the ^C is ignored. I don't really care about the two-second warning since my program completes in a fraction of a second.
Could someone help me implement a signal handler for CPython 2.3 that doesn't cause the current statement/function to cancel before the signal is ignored?
As always, thanks in advance.
Edit: After S.Lott's answer, I've decided to abandon the signal module.
I'm just going to go back to try:/except: blocks. Looking at my code, there are two things that happen in each critical region that cannot be aborted: overwriting the file with file.tmp, and removing the lock once finished (otherwise other tools will be unable to modify the file until the lock is manually removed). I've put each of those in its own function inside a try: block, and the except: simply calls the function again. That way the function will just re-call itself in the event of KeyboardInterrupt or EOFError, until the critical code is completed.
I don't think I can get into too much trouble, since I'm only catching user-provided exit commands, and even then only for two to three lines of code. Theoretically, if those exceptions could be raised fast enough, I suppose I could get a "maximum recursion depth exceeded" error, but that seems far-fetched.
Any other concerns?
Pseudo-code:
import os
import shutil

def criticalRemoveLock(file):
    try:
        if os.path.isfile(file):
            os.remove(file)
        else:
            return True
    except (KeyboardInterrupt, EOFError):
        return criticalRemoveLock(file)

def criticalOverwrite(tmp, file):
    try:
        if os.path.isfile(tmp):
            shutil.copy2(tmp, file)
            os.remove(tmp)
        else:
            return True
    except (KeyboardInterrupt, EOFError):
        return criticalOverwrite(tmp, file)
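If the recursion-depth worry ever becomes real, the same idea works as a loop instead (a sketch of the first function):

def criticalRemoveLock(file):
    while True:
        try:
            if os.path.isfile(file):
                os.remove(file)
            return True
        except (KeyboardInterrupt, EOFError):
            pass  # interrupted mid-operation; loop and try again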
There is no real way to make your script truly safe. Of course you can ignore signals and catch a KeyboardInterrupt using try:/except:, but it is up to your application to be idempotent against such interrupts, and it must be able to resume operations after dealing with an interrupt at some kind of savepoint.
The only thing that you can really do is work on temporary files (not the original files) and move them into their final destination after doing the work. I think such file operations are supposed to be "atomic" from the filesystem perspective. Otherwise, in case of an interrupt, restart your processing from the start with clean data.
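For the move itself, os.rename() is the usual tool; on POSIX it replaces the destination atomically when source and destination are on the same filesystem (a minimal sketch, with a hypothetical helper name):

import os

def atomicOverwrite(tmp, final):
    # the file at `final` is swapped in a single step;
    # readers see either the old or the new contents, never a partial file
    os.rename(tmp, final)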