So, I've always had this problem, and no matter what, I can't seem to solve it.
When using multithreading, I simply cannot get the classes to talk to each other.
Yes, I have some pretty ugly code to back my claims.
In this case, I will be using code from a small proxy as an example.
This class handles the intercepted data travelling from the server to the client:
    class HandleInbound(threading.Thread):
        def __init__(self):
            threading.Thread.__init__(self)

        def run(self):
            global ftwn
            self.dsi.connect(('208.95.184.130', 7255))
            self.dsi.setblocking(0)
            while True:
                try:
                    f = open('odat.tmp', 'r+')
                    a = f.read()
                    if len(a) != 0:
                        self.dsi.send(open('odat.tmp').read())
                        open('odat.tmp', 'w').write("")
                    dat = self.dsi.recv(65356)
                    open('idat.tmp', 'w').write(dat)
                    udat = dat.encode("hex")
                    Log(udat, 0, ftwn)
                except socket.timeout:
                    print("DSI Timedout")
                except socket.error as e:
                    Log(e, 2, ftwn)
                except Exception as e:
                    raise
                    print("In;", e)
                    self.dsi.close()
                    sys.exit()
This is my problematic class. Keep that in mind.
This one works as intended, handling traffic from the client to the server:
    class HandleOutbound(threading.Thread):
        def __init__(self):
            threading.Thread.__init__(self)

        def run(self):
            global conn, addr, ftwn
            print("Client connected from {}.".format(addr))
            conn.setblocking(0)
            while True:
                try:
                    f = open('idat.tmp', 'r+')
                    a = f.read()
                    if len(a) != 0:
                        conn.send(open('idat.tmp').read())
                        open('idat.tmp', 'w').write("")
                    dat = conn.recv(65356)
                    open('odat.tmp', 'w').write(dat)
                    udat = dat.encode("hex")
                    Log(udat, 0, ftwn)
                except socket.timeout:
                    print("Conn Timedout")
                except socket.error as e:
                    Log(e, 2, ftwn)
                except Exception as e:
                    print("Out;", e)
                    conn.close()
                    sys.exit()
As you can see, the current form of communication is through temporary files.
HandleOutbound (Class 2, for lack of time and space) reads a file. If there is no data in the file, it attempts to get some and put it there for HandleInbound (Class 1) to read.
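In outline, the hand-off I want between the two classes is this (sketched with a thread-safe queue in place of the temp files; from what I've read that's the usual tool, though this is not my actual code):

    import Queue  # the module is named "queue" in Python 3

    to_server = Queue.Queue()  # carries intercepted client data to HandleInbound

    # HandleOutbound would do, instead of writing odat.tmp:
    #     to_server.put(dat)
    # HandleInbound would do, instead of polling odat.tmp:
    #     self.dsi.send(to_server.get())  # .get() blocks until data arrives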
Class 2 does its job, but Class 1 gets lazy at the sending-the-data part of the bargain, and later fails to even notice the file and the code are there. It does, however, have time to write 2 GB of "non-blocking error" log files to my computer in the 5 minutes it takes me to figure out what happened and why.
I have previously tried, and always try first, with varying levels of success, global variables. Today it was having none of that.
I also attempted to communicate straight from Class 1 to Class 2, and even tried using Booleans, but my luck ran out.
I have rewritten Class 1 multiple times over: first to see if something was hidden (no change), then to match Class 2 identically, or as close as possible. Eventually I even copy-pasted Class 2 and renamed the variables.
At least I am having consistent results: no changes!
There also seem to be few or no beginner or intermediate guides or documents that corner and tackle this issue. I have been looking for a while, as in years, keeping in mind I can't "just Google it".
The logs and console offer no help beyond "Non-blocking socket operation was unable to complete instantly".
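(As far as I can tell, once setblocking(0) is set, every recv call with nothing waiting raises that error immediately, which would explain the log volume. A timeout-based sketch of the inbound read, as an alternative:)

    self.dsi.settimeout(1.0)  # block for up to 1 s instead of failing instantly
    try:
        dat = self.dsi.recv(65356)
    except socket.timeout:
        dat = None  # nothing arrived during this interval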
I know the code is bad and the style sloppy and fragmented, but it should, in theory, either work regardless or constantly throw fits of varying severity, correct?
Related
I have a program with some low-level hardware components, which may fail (not initialized, timeout, comm issues, invalid commands, etc.). They live in a server, which receives requests from a web client.
So my idea is to have custom exceptions that capture what may fail in which drive, so that I can in some cases take remediation action (e.g. try to reset the adapter if it's a comm problem), or bubble the errors up in the cases where I can't do anything low-level, perhaps so that the server can return a generic error message to the web client.
For instance:
    class DriveException(Exception):
        """ Raised when we have a drive-specific problem """
        def __init__(self, message, drive=None, *args):
            self.message = message
            self.drive = drive
            super().__init__(message, drive, *args)
But then that drive may have had a problem because, say, the ethernet connection didn't respond:
    class EthernetCommException(Exception):
        """ Raised when ethernet calls failed """
In the code, I can ensure my exceptions bubble up this way:
    # ... some code ....
    try:
        self.init_controllers()  # ethernet cx failed, or key error etc.
    except Exception as ex:
        raise DriveException(ex) from ex
    # .... more code....
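With the two exception types in place, the remediation idea from above looks roughly like this (a sketch; the drive and adapter objects are hypothetical):

    try:
        drive.move_to(target)    # hypothetical drive call
    except EthernetCommException:
        adapter.reset()          # the low-level remediation we can do here
        drive.move_to(target)    # retry once after the reset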
I have a high-level try/except in the server to ensure it keeps responding to requests and doesn't crash when a low-level component stops responding. That mechanic works fine.
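That handler is roughly this shape (simplified, with hypothetical names):

    while True:
        request = server.get_request()  # hypothetical request loop
        try:
            handle(request)
        except DriveException as ex:
            reply(request, "drive error: {}".format(ex.drive))
        except Exception:
            reply(request, "internal error")  # keep serving regardless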
However, I have many different drives, and I'd rather avoid putting lots of try/except blocks everywhere in my code. My current idea is to do something like:
    def koll_exception(func):
        """ Raises a drive exception if needed """
        @functools.wraps(func)
        def wrapper_exception(*args, **kwargs):
            try:
                value = func(*args, **kwargs)
                return value
            except Exception as ex:
                raise DriveException(ex, drive=DriveEnum.KOLLMORGAN) from ex
        return wrapper_exception
So that I can just do:
    @koll_exception
    def risky_call_to_kolldrive():
        ...  # doing stuff & raising a drive exception if anything goes wrong

    # then anywhere in the code
    foo = risky_call_to_kolldrive()
My prototype seems to work fine with the decorator. However, I've searched a bit about using this approach to try/except and was somewhat surprised not to find much about it. Is there a good reason people don't do this that I'm not seeing? Other than that they usually just wrap everything in a high-level try/except and don't bother much more with it?
I'm trying to receive some data from a thread, but every time, it goes through the exception handler instead of through the try block, and I don't know what is wrong. I did it once before, and I've searched everywhere. If someone could please help.
    def receving(name, sock):
        while run:
            try:
                tLock.acquire()
                data = sock.recvfrom(1024)
            except:
                pass
            finally:
                tLock.release()
            return data
    host = socket.gethostbyname(socket.gethostname())
    server = (host, 5000)

    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind((host, port))
    s.setblocking(0)

    pool = ThreadPool(processes=1)
    async_result = pool.apply_async(receving, ('arg qualquer', s))
    return_val = async_result.get()
    print(return_val)

    run = True
    while run:
        return_val = async_result.get()
        print(return_val)
The error message is this:
        return data
    UnboundLocalError: local variable 'data' referenced before assignment
I've already tried initializing data before the try:, but the output is the same as the default value; it skips the try: block the same way.
I also tried making it global, but with no success.
The exception you describe is very straightforward. It's all in the function at the top of your code:
    def receving(name, sock):
        while run:
            try:
                tLock.acquire()
                data = sock.recvfrom(1024)
            except:
                pass
            finally:
                tLock.release()
            return data
If the code in the try block causes an exception, the assignment to data won't have run. So when you try to return data later on, the local variable has no value and so it doesn't work.
It's not hard to fix that specific issue. Try putting data = None or something similar in the except clause, instead of just pass. That way, data will be defined (albeit perhaps with a value that's not very useful) regardless of whether there was an exception or not.
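For example, a minimal version of that fix:

    try:
        tLock.acquire()
        data = sock.recvfrom(1024)
    except:          # consider narrowing this; see below
        data = None  # data is now defined even when recvfrom fails
    finally:
        tLock.release()
    return data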
You should however consider tightening up the except clause so that you're not ignoring all exceptions. That's often a bad idea, since it can cause the program to run even with really broken code in it. For instance, you've never defined tLock in the code you've shown, and the try would catch the NameError caused by trying to acquire it (you'd still get an exception though when the finally clause tries to release it, so I'm guessing this isn't a real issue in your code). Normally you should specify the exception types you want to catch. I'm not exactly sure which ones would be normal for your current code, so I'll leave picking them to you.
You might also consider not having an except clause at all, if there's no reasonable result to return. That way, the exception would "bubble out" of your function and it would be the caller's responsibility to deal with it. For some kinds of exceptions (e.g. ones caused by programming bugs rather than expected situations), this is usually the best way to go.
There's a lot of other weird stuff in your code though, so I'd expect you'll run into other issues after fixing the first one. For instance, you always return from the first iteration of your while loop (assuming I fixed your messed-up indentation correctly), so there's not really much point in having it at all. If the return data line is actually indented less (i.e. at the same level as while run), then the loop will make the code inside run more than once, but it will never stop running, since nothing inside it ever changes the value of the global run variable.
There may be other issues too, but it's not entirely obvious to me what you're trying to do, so I can't help with those. Multi-threaded code and network programming can be very tough to get right even for experienced programmers, so if you're new it might be a better idea to start with something simpler first.
Clients send in data through a socket. The data is parsed up, and placed into lists.
This works fine.
Sometimes it contains duplicate data, and I want to replace the old data with the new, so I .pop() the old entry out and append the new data.
This works fine... for a while.
There's a slowdown somewhere. I'm judging the speed on a pretty consistent amount of data. It speeds through it all to start with, and that lasts about 10 minutes or so; in that time it has had to constantly clear old matches, and the list size has stayed around the same.
But then the console becomes a fairly slow wall of "removing dupe:" where it was flying through the same amount before.
And because that takes time, more gets added to the queue, and it becomes a never-ending cycle it can't catch up on.
Snippet of current code:
    def QDUmp():  # Runs as a thread
        while 1:
            while not q.empty():
                print q.get()
                XMLdataparse = []
                del XMLdataparse[:]
                XMLdataparse[:] = []
                XMLdataparse = q.get().split('--ListBreaker--')
                if len(XMLdataparse) == 20:
                    if "EventText" in XMLdataparse[0]:
                        TheCounter = len(EventTags) - 1
                        for Events in reversed(EventTags):
                            try:
                                EventN = EventNames[TheCounter]
                                PlaceN = PlaceNames[TheCounter]
                                TypeN = BetHorsessToMake[TheCounter]
                                OldTag = EventTags[TheCounter]
                                if EventN == str(XMLdataparse[2]) and PlaceN == str(XMLdataparse[3]) and TypeN == str(XMLdataparse[4]):
                                    print "removing dupe: ", TypeN
                                    EventTags.pop(TheCounter)
                                    EventTimes.pop(TheCounter)
                                    EventNames.pop(TheCounter)
                                    PlaceNames.pop(TheCounter)
                                TheCounter = TheCounter - 1
                            except:
                                print "problem removing a duplicate result"
                        if float(XMLdataparse[6]) > float(XMLdataparse[18]):
                            EventTags.append(XMLdataparse[0])
                            EventTimes.append(XMLdataparse[1])
                            EventNames.append(XMLdataparse[2])
                            PlaceNames.append(XMLdataparse[3])
    class ThreadedServer(object):
        def __init__(self, host, port):
            self.host = host
            self.port = port
            self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            self.sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            self.sock.bind((self.host, self.port))

        def listen(self):
            self.sock.listen(5)
            while True:
                client, address = self.sock.accept()
                client.settimeout(60)
                threading.Thread(target=self.listenToClient, args=(client, address)).start()

        def listenToClient(self, client, address):
            size = 1024
            while True:
                try:
                    data = client.recv(size)
                    if data:
                        try:
                            BigSocketParse = []
                            del BigSocketParse[:]
                            BigSocketParse[:] = []
                            BigSocketParse = data.split('--MarkNew--')
                            print "Putting data in queue"
                            for eachmatch in BigSocketParse:
                                q.put(str(eachmatch))
                        except:
                            print "Unable to parse socket text."
                        #q.put(data)
                        #QCheck.start()
                    else:
                        raise error('Client disconnected')
                except:
                    client.close()

    CheckQ = Thread(target=QDUmp)
    CheckQ.start()

    ThreadedServer('', 1234).listen()
The data is sent in as one larger socket message using --MarkNew-- as a delimiter, and I break it up into the list parts with --ListBreaker-- after that. Maybe not the most efficient way of doing things, but the socket messages are largely the same size as well, so the slowdown has to be in the way I'm dealing with the list.
Off the top of my head it's not efficient in the first place, because it has to go through the whole list. But I don't know another way to get rid of the duplicates.
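One idea: keep a dict keyed on the fields being compared, so replacing a duplicate is a single assignment instead of a scan of the whole list (a sketch with made-up names, untested):

    # events maps (name, place, type) -> (tag, time)
    events = {}

    def add_or_replace(tag, etime, name, place, btype):
        key = (name, place, btype)
        if key in events:
            print "removing dupe: ", btype
        events[key] = (tag, etime)  # new data silently replaces the old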
Any pointers on this would be really appreciated.
Updates:
I'd found a way to have it deal with maybe a dozen or two entries rather than a few hundred, comparing the entire socket data coming in rather than each individual part, but that won't do the task, unfortunately. I have to be able to keep the individual parts that are new and remove the duplicates, and I can't figure out a way to avoid doing this.
I was considering multithreading it. That might hog resources, but at least it wouldn't grind everything to a halt; then again, the whole reason I started using a queue was to avoid having multiple threads reading and writing to these lists at the same time.
Update #2:
Hang on...
The exception handler doesn't move TheCounter on, which means the loop wouldn't advance properly; and if that block is throwing exceptions, it'll throw the whole thing out of whack anyway. That might explain one or two other bugs that cropped up when it started to slow down.
Going to rework things a bit using just the one list.
I could change it to be one list instead of several, or compare only one entry instead of three, but I find it hard to believe that could be the cause of the slowdown.
Update #3:
Reduced it to popping from just one list. The function writing the data to the XML now copies the list first, so it's working from a different list than the one the queue is writing to.
This has improved things but it still moves slower than I'd expect.
I've reduced it down to dealing with one list and that seems to have sped it up and fixed a few other problems at the same time.
I know it looks weird but I think this is about as efficient as I'm going to be able to get it.
Probably a simple question, as I'm fairly new to Python and programming in general, but I am currently working on improving a program of mine and can't figure out how to keep the program going if an exception is caught. Maybe I am looking at it the wrong way, but for example I have something along these lines:
    self.thread = threading.Thread(target=self.run)
    self.thread.setDaemon(True)
    self.thread.start()

    def run(self):
        logging.info("Starting Awesome Program")
        try:
            while 1:
                awesome_program(self)
        except:
            logging.exception('Got exception on main handler')
            OnError(self)

    def OnError(self):
        self.Destroy()
Obviously I am currently just killing the program when an error is reached. awesome_program basically uses pyodbc to connect to and run queries on a remote database. The problem arises when the connection is lost: if I don't catch the exceptions, the program just freezes, so I set it up as above, which kills the program. But that is not always ideal if no one is around to manually restart it. Is there an easy way to either keep the program running or restart it? Feel free to berate me for incorrect syntax or poor programming skills; I am trying to teach myself and am still very much a novice, and there is plenty I don't understand or am probably not doing correctly. I can post more of the code if needed; I wasn't sure how much to post without it being overwhelming.
Catch the exception within the loop, and continue, even if an exception is caught.
    def run(self):
        logging.info("Starting Awesome Program")
        while 1:
            try:
                awesome_program(self)
            except:
                logging.exception('Got exception on main handler')
                OnError(self)
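If the underlying failure is a lost database connection, it may also be worth pausing before the next attempt so the loop doesn't spin (a sketch; it assumes awesome_program reconnects on each call):

    import time

    def run(self):
        logging.info("Starting Awesome Program")
        while True:
            try:
                awesome_program(self)
            except Exception:
                logging.exception('Got exception on main handler')
                time.sleep(5)  # back off, then retry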
BTW:
Your indentation seems messed up.
I'd prefer while True. Python has a bool type, unlike C, so when a bool is expected, give while a bool.
You're looking for this:
    def run(self):
        while True:
            try:
                do_things()
            except Exception as ex:
                logging.info("Caught exception {}".format(ex))
Take a look at Python exception handling, and in particular try...except. It will allow you to catch particular errors and handle them however you see fit, even ignoring them completely, if appropriate. For example:
    try:
        while something == True:
            do_stuff()
    except ExceptionType:
        print "Something bad happened!"  # An error occurred, but the script continues
    except:
        print "Something worse happened!"
        raise  # a worse error occurred, now we kill it
    do_more_stuff()
Some of you might remember a question very similar to this, as I sought your help writing the original util in C (using libssh2 and openssl).
I'm now trying to port it to Python and got stuck at an unexpected place. I ported about 80% of the core functionality in 30 minutes, and then spent 10+ hours on, and still haven't finished, that ONE function, so I'm here again to ask for your help one more time :)
The whole source (~130 lines, should be easily readable, not complex) is available here: http://pastebin.com/Udm6Ehu3
The connecting, switching on SSL, handshaking, authentication and even sending (encrypted) commands all work fine (I can see from my router's log that I log in with the proper user and password).
The problem is with ftp_read in the tunnel scenario (the else branch of self.proxy is None). One attempt was this:
    def ftp_read(self, trim=False):
        if self.proxy is None:
            temp = self.s.read(READBUFF)
        else:
            while True:
                try:
                    temp = self.sock.bio_read(READBUFF)
                except Exception, e:
                    print type(e)
                    if type(e) == SSL.WantReadError:
                        try:
                            self.chan.send(self.sock.bio_read(10240))
                        except Exception, e:
                            print type(e)
                            self.chan.send(self.sock.bio_read(10240))
                    elif type(e) == SSL.WantWriteError:
                        self.chan.send(self.sock.bio_read(10240))
But I end up stuck either in a blocked bio_read (or a channel read in the ftp_write function), or with the exception OpenSSL.SSL.WantReadError, which, ironically, is what I'm trying to handle.
If I comment out the ftp_read calls, the proxy scenario works fine (logging in, sending commands, no problem), as mentioned. So of read/write unencrypted and read/write encrypted, I'm just missing the encrypted tunnel read.
I've spent 12+ hours on this now and feel like I'm getting nowhere, so any thoughts are highly appreciated.
EDIT: I'm not asking someone to write the function for me, so if you know a thing or two about SSL (especially BIOs) and can see an obvious flaw in my interaction between the tunnel and the BIO, that'll suffice as an answer :) Like: maybe ftp_write returns more data than the 10240 bytes requested (or just sends two texts ("blabla\n", "command done.\n")), so it isn't properly flushed. That might be true, but apparently I can't rely on .want_write()/.want_read() from pyOpenSSL to report anything but 0 bytes available.
Okay, so I think I managed to sort it out.
sarnold, you'll like this updated version:
    def ftp_read(self, trim=False):
        if self.proxy is None:
            temp = self.s.read(READBUFF)
        else:
            temp = ""
            while True:
                try:
                    temp += self.sock.recv(READBUFF)
                    break
                except Exception, e:
                    if type(e) == SSL.WantReadError:
                        self.ssl_wants_read()
                    elif type(e) == SSL.WantWriteError:
                        self.ssl_wants_write()
where ssl_wants_* is:
    def ssl_wants_read(self):
        try:
            self.chan.send(self.sock.bio_read(10240))
        except Exception, e:
            chan_output = None
            chan_output = self.chan.recv(10240)
            self.sock.bio_write(chan_output)

    def ssl_wants_write(self):
        self.chan.send(self.sock.bio_read(10240))
Thanks for the input, sarnold. It made things a bit clearer and easier to work with. However, my issue turned out to be one missed bit of error handling (I broke out of the SSL.WantReadError handler too soon).