Some of you might remember a question very similar to this, as I sought your help writing the original util in C (using libssh2 and openssl).
I'm now trying to port it to Python and got stuck at an unexpected place. I ported about 80% of the core functionality in 30 minutes, then spent 10+ hours on that ONE function and still haven't finished it, so I'm here again to ask for your help one more time :)
The whole source (~130 lines, should be easily readable, not complex) is available here: http://pastebin.com/Udm6Ehu3
The connecting, switching on SSL, handshaking, authentication and even sending (encrypted) commands works fine (I can see from my routers log that I log in with proper user and password).
The problem is with ftp_read in the tunnel scenario (the else branch of the self.proxy is None check). One attempt was this:
def ftp_read(self, trim=False):
    if self.proxy is None:
        temp = self.s.read(READBUFF)
    else:
        while True:
            try:
                temp = self.sock.bio_read(READBUFF)
            except Exception, e:
                print type(e)
                if type(e) == SSL.WantReadError:
                    try:
                        self.chan.send(self.sock.bio_read(10240))
                    except Exception, e:
                        print type(e)
                        self.chan.send(self.sock.bio_read(10240))
                elif type(e) == SSL.WantWriteError:
                    self.chan.send(self.sock.bio_read(10240))
But I end up stuck either blocking on a bio read (or a channel read in the ftp_write function), or with the exception OpenSSL.SSL.WantReadError, which, ironically, is exactly what I'm trying to handle.
If I comment out the ftp_read calls, the proxy scenario works fine (logging in and sending commands is no problem), as mentioned. So of read/write unencrypted and read/write encrypted, I'm only missing the encrypted tunnel read.
I've spent 12+ hours now and feel like I'm getting nowhere, so any thoughts are highly appreciated.
EDIT: I'm not asking someone to write the function for me, so if you know a thing or two about SSL (especially BIOs) and you can see an obvious flaw in my interaction between tunnel and BIO, that'll suffice as an answer :) For example: maybe ftp_write returns more data than the 10240 bytes requested (or just sends two messages ("blabla\n", "command done.\n")), so it isn't properly flushed. Which might be true, but apparently I can't rely on .want_write()/.want_read() from pyOpenSSL to report anything but 0 bytes available.
Okay, so I think I managed to sort it out.
sarnold, you'll like this updated version:
def ftp_read(self, trim=False):
    if self.proxy is None:
        temp = self.s.read(READBUFF)
    else:
        temp = ""
        while True:
            try:
                temp += self.sock.recv(READBUFF)
                break
            except Exception, e:
                if type(e) == SSL.WantReadError:
                    self.ssl_wants_read()
                elif type(e) == SSL.WantWriteError:
                    self.ssl_wants_write()
where ssl_wants_* is:
def ssl_wants_read(self):
    try:
        self.chan.send(self.sock.bio_read(10240))
    except Exception, e:
        chan_output = None
        chan_output = self.chan.recv(10240)
        self.sock.bio_write(chan_output)

def ssl_wants_write(self):
    self.chan.send(self.sock.bio_read(10240))
Thanks for the input, sarnold. It made things a bit clearer and easier to work with. However, my issue turned out to be one missed piece of error handling: I broke out of the SSL.WantReadError exception too soon.
Related
I'm trying to receive some data from a thread, but every time an exception is raised, the code never makes it back into the try block, and I don't know what is wrong. I got it working once, and I've searched everywhere. If someone could please help.
def receving(name, sock):
    while run:
        try:
            tLock.acquire()
            data = sock.recvfrom(1024)
        except:
            pass
        finally:
            tLock.release()
        return data
host = socket.gethostbyname(socket.gethostname())
server = (host, 5000)
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.bind((host, port))
s.setblocking(0)

pool = ThreadPool(processes=1)
async_result = pool.apply_async(receving, ('arg qualquer', s))
return_val = async_result.get()
print(return_val)

run = True
while run:
    return_val = async_result.get()
    print(return_val)
The error message is this:
return data
UnboundLocalError: local variable 'data' referenced before assignment
I've already tried initializing it before the try:, but the output is the same as before; it still skips the try: the same way.
I also tried making it global, but with no success.
The exception you describe is very straightforward. It's all in the function at the top of your code:
def receving(name, sock):
    while run:
        try:
            tLock.acquire()
            data = sock.recvfrom(1024)
        except:
            pass
        finally:
            tLock.release()
        return data
If the code in the try block causes an exception, the assignment to data won't have run. So when you try to return data later on, the local variable has no value and so it doesn't work.
It's not hard to fix that specific issue. Try putting data = None or something similar in the except clause, instead of just pass. That way, data will be defined (albeit perhaps with a value that's not very useful) regardless of whether there was an exception or not.
You should however consider tightening up the except clause so that you're not ignoring all exceptions. That's often a bad idea, since it can cause the program to run even with really broken code in it. For instance, you've never defined tLock in the code you've shown, and the try would catch the NameError caused by trying to acquire it (you'd still get an exception though when the finally clause tries to release it, so I'm guessing this isn't a real issue in your code). Normally you should specify the exception types you want to catch. I'm not exactly sure which ones would be normal for your current code, so I'll leave picking them to you.
You might also consider not having an except clause at all, if there's no reasonable result to return. That way, the exception would "bubble out" of your function and it would be the caller's responsibility to deal with it. For some kinds of exceptions (e.g. ones caused by programming bugs, not expected situations), this is usually the best way to go.
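Putting those suggestions together, a minimal sketch of a fixed version might look like this (receving, tLock and run are the poster's names; the default value for data and the narrowed except socket.error are the changes, and using the lock as a context manager replaces the try/finally pair):

```python
import socket
import threading

tLock = threading.Lock()
run = True

def receving(name, sock):
    data = None  # defined even if recvfrom() never succeeds
    while run:
        with tLock:  # lock is always released, even on error
            try:
                data = sock.recvfrom(1024)
            except socket.error:
                continue  # nothing to read yet on a non-blocking socket
        return data  # first successful read ends the function
    return data
```

With a non-blocking socket this still spins while no data is available; a select.select() call or a blocking socket with a timeout would be gentler on the CPU.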
There's a lot of other weird stuff in your code though, so I'd expect you'll run into other issues after fixing the first one. For instance, you always return from the first iteration of your while loop (assuming I fixed your messed-up indentation correctly), so there's not really much point in having it at all. If the return data line is actually indented less (at the same level as while run), then the loop will run the code inside more than once, but it will never stop, since nothing inside it ever changes the value of the global run variable.
There may be other issues too, but it's not entirely obvious to me what you're trying to do, so I can't help with those. Multi-threaded code and network programming can be very tough to get right even for experienced programmers, so if you're new it might be a better idea to start with something simpler first.
So, I've always had this problem, and no matter what, can't seem to solve it.
When using multithreading, I simply cannot get the classes to talk to each other.
Yes, I have some pretty ugly code to back my claims.
In this case, I will be using code from a small proxy as an example.
This class handles data from the server to the client that is intercepted;
class HandleInbound(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)

    def run(self):
        global ftwn
        self.dsi.connect(('208.95.184.130', 7255))
        self.dsi.setblocking(0)
        while True:
            try:
                f = open('odat.tmp', 'r+')
                a = f.read()
                if len(a) != 0:
                    self.dsi.send(open('odat.tmp').read())
                    open('odat.tmp', 'w').write("")
                dat = self.dsi.recv(65356)
                open('idat.tmp', 'w').write(dat)
                udat = dat.encode("hex")
                Log(udat, 0, ftwn)
            except socket.timeout:
                print("DSI Timedout")
            except socket.error as e:
                Log(e, 2, ftwn)
            except Exception as e:
                raise
                print("In;", e)
        self.dsi.close()
        sys.exit()
This is my problematic class. Keep that in mind.
This one works as intended, handling traffic from the client to the server;
class HandleOutbound(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)

    def run(self):
        global conn, addr, ftwn
        print("Client connected from {}.".format(addr))
        conn.setblocking(0)
        while True:
            try:
                f = open('idat.tmp', 'r+')
                a = f.read()
                if len(a) != 0:
                    conn.send(open('idat.tmp').read())
                    open('idat.tmp', 'w').write("")
                dat = conn.recv(65356)
                open('odat.tmp', 'w').write(dat)
                udat = dat.encode("hex")
                Log(udat, 0, ftwn)
            except socket.timeout:
                print("Conn Timedout")
            except socket.error as e:
                Log(e, 2, ftwn)
            except Exception as e:
                print("Out;", e)
        conn.close()
        sys.exit()
As you can see, the current form of communications is through temporary files.
HandleOutbound (Class 2, for lack of time and space) reads a file; if there is no data in the file, it attempts to get some from the network and puts it there for HandleInbound (Class 1) to read.
Class 2 does its job, but Class 1 gets lazy about the sending-the-data part of the bargain, and later fails to even notice the file and data are there. It does, however, have time to write 2 GB of "non-blocking error" logs to my computer in the 5 minutes it takes me to figure out what happened and why.
I have previously tried global variables, and I always try them first, with varying levels of success. Today it was having none of that.
I also attempted to communicate straight from Class 1 to Class 2, or even using booleans, but my luck/skill ran out.
I have rewritten Class 1 multiple times over. First to see if something was hidden. No change. Then to match Class 2 identically, or as closely as possible. I eventually also C/P'd the thing and renamed the variables.
At least I am having consistent results - No changes!
There also seem to be few or no beginner or intermediate guides or documents that corner and tackle this issue. I have been looking for a while (as in years), keeping in mind I can't "just Google it".
The logs and console offer no help beyond "Non-blocking socket operation was unable to complete instantly".
I know the code is bad, the style sloppy and fragmented, but in theory it should work regardless, or else constantly throw fits of varying severity, correct?
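For what it's worth, the temp-file relay between the two threads is the usual source of this kind of stalemate; the standard library's Queue class gives threads a shared, locked channel instead, and .get() blocks until data actually arrives, so neither thread spins on an empty "file". A minimal sketch (toy stand-ins for the two classes, not the proxy code itself):

```python
import threading

try:
    import queue            # Python 3
except ImportError:
    import Queue as queue   # Python 2

to_server = queue.Queue()   # client -> server direction
to_client = queue.Queue()   # server -> client direction

def inbound():
    # toy stand-in for HandleInbound: wait for data, hand back a "response"
    data = to_server.get()          # blocks until the other thread puts data
    to_client.put(data.upper())     # pretend this came from the remote server

t = threading.Thread(target=inbound)
t.start()
to_server.put("client request")
reply = to_client.get()             # blocks until inbound() answers
t.join()
```

Queue handles all the locking internally, so there is no global flag or file polling to get wrong.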
Probably a simple question, as I'm fairly new to Python and programming in general, but I'm currently working on improving a program of mine and can't figure out how to keep the program going if an exception is caught. Maybe I'm looking at it the wrong way, but for example I have something along these lines:
self.thread = threading.Thread(target=self.run)
self.thread.setDaemon(True)
self.thread.start()

def run(self):
    logging.info("Starting Awesome Program")
    try:
        while 1:
            awesome_program(self)
    except:
        logging.exception('Got exception on main handler')
        OnError(self)

def OnError(self):
    self.Destroy()
Obviously I am currently just killing the program when an error is reached. awesome_program basically uses pyodbc to connect to and run queries on a remote database. The problem arises when the connection is lost. If I don't catch the exceptions, the program just freezes, so I set it up as above, which kills the program; but that is not ideal if no one is around to manually restart it. Is there an easy way to either keep the program running or restart it? Feel free to berate me for incorrect syntax or poor programming skills. I am trying to teach myself and am still very much a novice, and there is plenty I don't understand or am probably not doing correctly. I can post more of the code if needed; I wasn't sure how much to post without being overwhelming.
Catch the exception within the loop, and continue, even if an exception is caught.
def run(self):
    logging.info("Starting Awesome Program")
    while 1:
        try:
            awesome_program(self)
        except:
            logging.exception('Got exception on main handler')
            OnError(self)
BTW:
Your indentation seems messed up.
I'd prefer while True. Python has a bool type, unlike C, so when a bool is expected, give while a bool.
You're looking for this:
def run(self):
    while True:
        try:
            do_things()
        except Exception as ex:
            logging.info("Caught exception {}".format(ex))
Take a look at Python Exception Handling, and in particular Try...Except. It will allow you to catch particular errors and handle them however you see fit, even ignoring them completely, if appropriate. For example:
try:
    while something == True:
        do_stuff()
except ExceptionType:
    print "Something bad happened!"  # An error occurred, but the script continues
except:
    print "Something worse happened!"
    raise  # a worse error occurred, now we kill it
do_more_stuff()
I've got a large bulk-downloading application written in Python/Mechanize, aiming to download something like 20,000 files. Clearly, any downloader that big is occasionally going to run into some ECONNRESET errors. Now, I know how to handle each of these individually, but there are two problems with that:
I'd really rather not wrap every single outbound web call in a try/except block.
Even if I were to do so, there's the trouble of knowing how to handle the errors once the exception has been thrown. If the code is just
data = browser.response().read()
then I know precisely how to deal with it, namely:
data = None
while data is None:
    try:
        data = browser.response().read()
    except IOError as e:
        if e.args[1].args[0].errno != errno.ECONNRESET:
            raise
        data = None
but if it's just a random instance of
browser.follow_link(link)
then how do I know what Mechanize's internal state looks like if an ECONNRESET is thrown somewhere in here? For example, do I need to call browser.back() before I try the code again? What's the proper way to recover from that kind of error?
EDIT: The solution in the accepted answer certainly works, and in my case it turned out to be not so hard to implement. I'm still academically interested, however, in whether there's an error handling mechanism that could result in quicker error catching.
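One way to avoid hand-wrapping every call is a small retry decorator. This is a hypothetical helper, not part of Mechanize, and it checks a plain e.errno; the real Mechanize exception nests the errno deeper, as in the question's e.args[1].args[0].errno, so the extraction would need adjusting:

```python
import errno
import functools

def retry_on_errno(target_errno, max_tries=5):
    """Retry the wrapped call while it fails with the given errno."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for _ in range(max_tries):
                try:
                    return func(*args, **kwargs)
                except IOError as e:
                    if getattr(e, 'errno', None) != target_errno:
                        raise  # a different error: propagate it
            raise IOError("gave up after %d tries" % max_tries)
        return wrapper
    return decorator

calls = {'n': 0}

@retry_on_errno(errno.ECONNRESET)
def fetch():
    # toy stand-in for a browser call: resets once, then succeeds
    calls['n'] += 1
    if calls['n'] < 2:
        err = IOError("connection reset")
        err.errno = errno.ECONNRESET
        raise err
    return "data"
```

Decorating a download-one-file function this way gives the same retry behaviour as a manual loop, without a try/except at each call site.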
Perhaps place the try..except block higher up in the chain of command:
import collections

def download_file(url):
    # Bundle together the bunch of browser calls necessary to download one file.
    browser.follow_link(...)
    ...
    response = browser.response()
    data = response.read()

urls = collections.deque(urls)
while urls:
    url = urls.popleft()
    try:
        download_file(url)
    except IOError as err:
        if err.args[1].args[0].errno != errno.ECONNRESET:
            raise
        else:
            # if ECONNRESET error, add the url back to urls to try again later
            urls.append(url)
I've written a simple multi-threaded game server in python that creates a new thread for each client connection. I'm finding that every now and then, the server will crash because of a broken-pipe/SIGPIPE error. I'm pretty sure it is happening when the program tries to send a response back to a client that is no longer present.
What is a good way to deal with this? My preferred resolution would simply close the server-side connection to the client and move on, rather than exit the entire program.
PS: This question/answer deals with the problem in a generic way; how specifically should I solve it?
Assuming that you are using the standard socket module, you should be catching the socket.error: (32, 'Broken pipe') exception (not IOError as others have suggested). This will be raised in the case that you've described, i.e. sending/writing to a socket for which the remote side has disconnected.
import socket, errno, time

# setup socket to listen for incoming connections
s = socket.socket()
s.bind(('localhost', 1234))
s.listen(1)

remote, address = s.accept()
print "Got connection from: ", address

while 1:
    try:
        remote.send("message to peer\n")
        time.sleep(1)
    except socket.error, e:
        if isinstance(e.args, tuple):
            print "errno is %d" % e[0]
            if e[0] == errno.EPIPE:
                # remote peer disconnected
                print "Detected remote disconnect"
            else:
                # determine and handle different error
                pass
        else:
            print "socket error ", e
        remote.close()
        break
    except IOError, e:
        # Hmmm, Can IOError actually be raised by the socket module?
        print "Got IOError: ", e
        break
Note that this exception will not always be raised on the first write to a closed socket - more usually the second write (unless the number of bytes written in the first write is larger than the socket's buffer size). You need to keep this in mind in case your application thinks that the remote end received the data from the first write when it may have already disconnected.
You can reduce the incidence (but not entirely eliminate) of this by using select.select() (or poll). Check for data ready to read from the peer before attempting a write. If select reports that there is data available to read from the peer socket, read it using socket.recv(). If this returns an empty string, the remote peer has closed the connection. Because there is still a race condition here, you'll still need to catch and handle the exception.
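That select-before-write check can be sketched as a small helper (peer_closed is a name I've made up for illustration, not from the answer above):

```python
import select
import socket

def peer_closed(sock):
    """Return True if the remote side appears to have closed the connection."""
    readable, _, _ = select.select([sock], [], [], 0)  # non-blocking poll
    if sock in readable:
        try:
            # MSG_PEEK looks at pending data without consuming it;
            # an empty read on a readable socket means EOF.
            if sock.recv(1, socket.MSG_PEEK) == b'':
                return True
        except socket.error:
            return True  # e.g. ECONNRESET also counts as gone
    return False
```

Because of the race condition mentioned above, a send() made right after peer_closed() returns False can still raise EPIPE, so the exception handler has to stay.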
Twisted is great for this sort of thing, however, it sounds like you've already written a fair bit of code.
Read up on the try: statement.
try:
    pass  # do something
except socket.error, e:
    pass  # A socket error
except IOError, e:
    if e.errno == errno.EPIPE:
        pass  # EPIPE error
    else:
        pass  # Other error
SIGPIPE (although I think maybe you mean EPIPE?) occurs on sockets when you shut down a socket and then send data to it. The simple solution is not to shut the socket down before trying to send it data. This can also happen on pipes, but it doesn't sound like that's what you're experiencing, since it's a network server.
You can also just apply the band-aid of catching the exception in some top-level handler in each thread.
Of course, if you used Twisted rather than spawning a new thread for each client connection, you probably wouldn't have this problem. It's really hard (maybe impossible, depending on your application) to get the ordering of close and write operations correct if multiple threads are dealing with the same I/O channel.
I faced the same question. But when I submitted the same code the next time, it just worked.
The first time it broke:
$ packet_write_wait: Connection to 10.. port 22: Broken pipe
The second time it works:
[1] Done nohup python -u add_asc_dec.py > add2.log 2>&1
I guess the reason may have to do with the server environment at the time.
My answer is very close to S.Lott's, except I'd be even more particular:
try:
    pass  # do something
except IOError, e:
    # ooops, check the attributes of e to see precisely what happened.
    if e.errno != errno.EPIPE:
        # I don't know how to handle this
        raise
Comparing against errno.EPIPE (rather than a hard-coded error number) keeps the check portable. This way you won't attempt to handle a permissions error or anything else you're not equipped for.