I'm using Python 3.4 on a Raspberry Pi to read and upload data to Weather Underground. It works great most of the time, but occasionally either my internet connection is poor or the Weather Underground servers are slow. The other day I got this error:
socket.timeout: _ssl.c:584: The handshake operation timed out
I have try/except code, but it didn't match any of the exceptions. I assumed the last bare "except:" would have caught the error, but apparently not. Should I just add "except socket.timeout:"?
try:
    r = requests.get(full_URL, timeout=5)  # send data to WU
    # If uploaded successfully, website will reply with 200
    if r.status_code == 200:
        return(True)
    else:
        print('Upload Error: {} {}'.format(r.status_code, r.text))
        return(False)
except requests.exceptions.ConnectionError:
    print("Upload Error in upload2WU() - ConnectionError")
    return(False)
except requests.exceptions.NewConnectionError:
    print("Upload Error in upload2WU() - NewConnectionError")
    return(False)
except requests.exceptions.MaxRetryError:
    print("Upload Error in upload2WU() - MaxRetryError")
    return(False)
except socket.gaierror:
    print("Upload Error in upload2WU() - socket.gaierror")
    return(False)
except:
    print("Upload Error in upload2WU() - other")
    return(False)
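For reference, one way to consolidate these handlers is to lean on requests' own exception hierarchy: requests.exceptions.Timeout is the parent of ConnectTimeout and ReadTimeout, and RequestException is the base class for ConnectionError and most other request failures, while a separate socket.timeout handler catches the handshake case above when it escapes requests. Note that NewConnectionError and MaxRetryError are urllib3 exceptions, not attributes of requests.exceptions. A minimal sketch, assuming the rest of upload2WU() is unchanged:

import socket
import requests

def upload2WU(full_URL):
    try:
        r = requests.get(full_URL, timeout=5)  # send data to WU
        if r.status_code == 200:               # WU replies 200 on success
            return True
        print('Upload Error: {} {}'.format(r.status_code, r.text))
        return False
    except requests.exceptions.Timeout:
        # raised for connect and read timeouts signalled by requests itself
        print("Upload Error in upload2WU() - Timeout")
        return False
    except requests.exceptions.RequestException as err:
        # base class: ConnectionError and most other requests failures land here
        print("Upload Error in upload2WU() - {}".format(err))
        return False
    except socket.timeout:
        # the handshake timeout in the traceback above can surface as a plain socket.timeout
        print("Upload Error in upload2WU() - socket.timeout")
        return False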
I do have two other places where I'm using requests.get(), but they both have try:/except: around them:
try:
    response = requests.get(getUrl, timeout=5).json()
    if len(response) > 1:
        if isNumber(response['current_observation']['precip_today_in']):
            daily_rain = float(response['current_observation']['precip_today_in'])
            print('Suntec station daily rain={}'.format(daily_rain))
            return(daily_rain)
    return(ERR_INVALID_DATA)
except:
    print("Error in WU_download.py getDailyRain() - failed get() request")
    return(ERR_FAILED_GET)
Here's the other one:
try:
    response = requests.get(getUrl, timeout=5).json()
    if len(response) > 1:  # valid response returns 3; if there's an error, the len() is 1
        if isNumber(response['current_observation']['pressure_in']):
            nearby_pressure = float(response['current_observation']['pressure_in'])
            nearby_last_update_time = int(response['current_observation']['observation_epoch'])
            if nearby_pressure > 25:  # a pressure less than 25 inHg isn't gonna be valid
                return(nearby_pressure)
    # Didn't get a valid pressure. Try the next station in WU_STATIONS tuple
    print("Couldn't get pressure data from {}".format(WU_STATIONS[i]))
    nearby_pressure = ERR_INVALID_DATA
    nearby_last_update_time = 0
    i = i + 1
    time.sleep(10)
except:
    print("Error in WU_download.py getPressure(), failed get request for station {}".format(WU_STATIONS[i]))
    i = i + 1
    if (i >= len(WU_STATIONS)):
        return(ERR_FAILED_GET)
I'm getting some occasional unpickling errors, but more often than not it works fine. Essentially I'm generating images on the server side and using pickle to transmit them to the client side.
Essentially, I am using my send() function to let the client know how many bytes the pickled data is, so it can call rscSock.recv() with that number of bytes + 1 when I use conn.send(graphs), to prevent exactly this problem. And it works most of the time. Occasionally I get "pickle data was truncated" and I can't figure out why. I tried using a while loop to receive it in blocks of 4096, from code I found on here (python 3.6 socket pickle data was truncated), but it hangs on the recv. Not sure what to do.
Server Code:
elif cmd['cmd'] == 'RSC_VIEW_GRAPHS':
    graphs = pickle.dumps(genGraphs(userSession['uid'], cmd['arg0'], cmd['arg1']))
    send(conn, 'RSC_IMG_DATA', len(graphs))
    conn.send(graphs)
    del graphs
Client Code:
send(rscSock, 'RSC_VIEW_GRAPHS', radioVar.get(), str(dateObj.date()))
resp = receive(rscSock)
if resp['resp'] == 'RSC_IMG_DATA':
    graphs = pickle.loads(rscSock.recv(int(resp['arg0'])+1))
The graphs variable is populated by genGraphs(), which returns a list of BytesIO objects, as shown at the end of genGraphs():
imgs = []
for x in statDict:
    # Filler Code removed, irrelevant to post
    imgs.append(io.BytesIO())
    plt.savefig(imgs[-1], format='png')
    plt.close()
return imgs
And lastly, here are the send() and receive() functions for both the client and server:
Client
def send(conn, cmd, *argv):
    try:
        cmdObj = {'cmd': cmd}
        y = 0
        for x in argv:
            cmdObj['arg'+str(y)] = x
            y += 1
        cmdObj['key'] = sessionKey
        obj = str.encode(json.dumps(cmdObj))
        objLen = str(len(obj)).encode()
        if conn.send(objLen):
            if conn.recv(12).decode() == "RSC_LEN_OK":
                if conn.send(obj):
                    if debug == 1: print("Sending '", obj, "' with length '", objLen, "'")
                    return True
        return False
    except (ConnectionResetError, ConnectionAbortedError):
        if cmdObj['arg1'] == 0:
            return True
        else:
            messagebox.showerror("Real Estate Stat Counter", "Lost server connection. Please log back in.")
            return False
def receive(conn):
    try:
        dataSize = int(conn.recv(8))
        if dataSize < 16384:
            conn.send(str.encode("RSC_LEN_OK"))
            data = json.loads(conn.recv(dataSize).decode())
            if debug == 1: print("Received '", data, "' with length '", dataSize, "'")
            return data
        else:
            conn.send(str.encode("RSC_LEN_NO"))
            return False
    except (OSError, UnicodeDecodeError, json.decoder.JSONDecodeError) as e:
        return False
Server
def send(conn, resp, *argv):
    try:
        respObj = {'resp': resp}
        y = 0
        for x in argv:
            respObj["arg"+str(y)] = x
            y += 1
        obj = str.encode(json.dumps(respObj))
        objLen = str(len(obj)).encode()
        if conn.send(objLen):
            if conn.recv(12).decode() == "RSC_LEN_OK":
                if conn.send(obj):
                    if debug == 1: print("Sending '", obj, "' with length '", objLen, "'")
                    return True
        logging.log("WARN: send() failure")
        return False
    except ConnectionResetError:
        logging.log("INFO: Client connection lost, terming socket")
        conn.close()
        return False

def receive(conn):
    try:
        dataSize = int(conn.recv(8))
        if dataSize < 16384:
            conn.send(str.encode("RSC_LEN_OK"))
            data = json.loads(conn.recv(dataSize).decode())
            if debug == 1: print("Received '", data, "' with length '", dataSize, "'")
            return data
        else:
            conn.send(str.encode("RSC_LEN_NO"))
            return False
    except (OSError, UnicodeDecodeError, json.decoder.JSONDecodeError) as e:
        logging.log("WARN: receive() received raw data:", conn.recv(16384).decode())
        logging.log("WARN: receive() exception:", e)
        return False
    except ValueError:
        logging.log("WARN: receive() did not get a valid byte length first")
        return False
So based on jasonharper's comment, I ended up revising only the client code:
data = []
while len(b"".join(data)) < int(resp['arg0']):
    data.append(rscSock.recv(4096))
graphs = pickle.loads(b"".join(data))
Now it checks in a loop whether it has received the number of bytes it was originally told to expect before the BytesIO object array was sent.
Working well so far!
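For what it's worth, the same idea can be wrapped in a small helper so any fixed-length message can be read safely; this is just a sketch, and the recvall() name is mine, not from the original code:

def recvall(sock, length):
    # Keep calling recv() until exactly `length` bytes have arrived,
    # since a single recv() may return less than was asked for.
    chunks = []
    received = 0
    while received < length:
        chunk = sock.recv(min(4096, length - received))
        if not chunk:
            raise ConnectionError("socket closed before the full message arrived")
        chunks.append(chunk)
        received += len(chunk)
    return b"".join(chunks)

On the client this would be used as graphs = pickle.loads(recvall(rscSock, int(resp['arg0']))).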
I've recently been trying to create a torrent client in Python, and I've just gotten the UDP announce protocol to work.
The tracker accepts my connect request just fine, but when I announce to it, it only returns my own IP and port as the peer list...
I've looked at the same torrents in other torrent clients and they have multiple working peers, while my request only shows my own computer (I've tried this on many torrents; all of them return just my IP and port).
Here's the code for the sending function itself:
async def announce_udp(self, try_num=1):
    self.sock.settimeout(15)
    answer = {}
    inner_while = False
    while try_num < 4:
        while try_num < 4:
            try:
                print("trying to send")
                sended = self.send(1, self.announce_payload())
                print("sending the following packet: {0}".format(sended))
                print(self.url)
                inner_while = True
                break
            except Exception:
                print("problem in sending")
                try_num += 1
        if not inner_while:
            break
        try:
            answer = self.interpret(15)
            break
        except Exception:
            print("problem in receiving")
            try_num += 1
    print("announce answer is: {0}".format(answer))
    return answer
Here's the code for the payload-building function:
def announce_payload(self, downloaded=0, left=0, uploaded=0, event=0, key=get_transaction_id()):
    payload = [self.torrent.get_torrent_info_hash_decoded(), get_peer_id().encode(), downloaded,
               self.torrent.get_torrent_size(), uploaded, event, 0, key, -1, 6988]
    p_tosend = None
    try:
        p_tosend = struct.pack('!20s20sqqqiIIiH', *payload)
    except Exception as e:
        print("there was an error: {0}".format(e))
    return p_tosend
Here's the code for the interpret and process functions:
def interpret(self, timeout=10):
    self.sock.settimeout(timeout)
    print("got to interpret")
    try:
        response = self.sock.recv(10240)
        print("answer received")
    except socket.timeout:
        print("no answer, try again")
        raise TrackerResponseException("no answer", 0)
    headers = response[:8]
    payload = response[8:]
    action, trans_id = struct.unpack('!ll', headers)
    try:
        trans = self.transactions[trans_id]
    except KeyError:
        raise TrackerResponseException("InvalidTransaction: id not found", trans_id)
    try:
        trans['response'] = self.process(action, payload, trans)
    except Exception as e:
        trans['response'] = None
        print("error occurred: {0}".format(e))
    trans['completed'] = True
    del self.transactions[trans_id]
    #print(trans)
    return trans
def process_announce(self, payload, trans):
    response = {}
    info = payload[:struct.calcsize("!lll")]
    interval, leechers, seeders = struct.unpack("!lll", info)
    print(interval, leechers, seeders, "noamsssssss")
    peer_data = payload[struct.calcsize("!lll"):]
    peer_size = struct.calcsize("!lH")
    num_of_peers = int(len(peer_data) / peer_size)
    print("the number of peers is: {0} and the peer data is: {1}".format(num_of_peers, peer_data))
    print()
    peers = []
    for peer_offset in range(num_of_peers):
        off = peer_size * peer_offset
        peer = peer_data[off:off + peer_size]
        addr, port = struct.unpack("!lH", peer)
        peers.append({
            'addr': socket.inet_ntoa(struct.pack('!L', addr)),
            'port': port,
        })
    print(payload)
    return dict(interval=interval, leechers=leechers, seeders=seeders, peers=peers)
I'm sorry if any of this is irrelevant, but I want to give you all of the code in case it tells you something.
(get_peer_id() returns a random peer id per the tracker protocol specification, and the get_transaction_id() returns random.randint(0, 1 << 32 - 1))
EDIT:
Alright, I've found the problem, and now I'm feeling pretty dumb...
It turns out that even with the UDP tracker, the info hash you send has to be SHA1 encoded.
Hopefully this can help someone who is stuck on the same problem :)
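For anyone hitting the same wall, here is a minimal sketch of what that means in practice: the info hash sent to the tracker is the raw 20-byte SHA1 digest of the bencoded info dictionary. The bencode() helper and the metainfo variable below are assumptions for illustration, not code from the question:

import hashlib

def compute_info_hash(metainfo):
    # `metainfo` is assumed to be the decoded .torrent dictionary and
    # `bencode()` an assumed helper that re-encodes a dict to bencoded bytes.
    info_bencoded = bencode(metainfo[b'info'])
    # The tracker expects the raw 20-byte SHA1 digest, which is what the
    # '!20s' field in the announce struct.pack format above carries.
    return hashlib.sha1(info_bencoded).digest()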
I have a try/except block which handles API requests to some client.
while attempts < 10:
    try:
        r = requests.post(server, data=contents,
                          auth=HTTPBasicAuth(service_userid, service_pswd))
        r.raise_for_status()
    except requests.exceptions.HTTPError as errh:
        print('Http Error:', errh)
        attempts += 1
        if attempts == 10:
            body = 'Http Error: ' + str(errh)
            subject = 'Failure'
            sendEmailMessage(SMPTHOST, fromEmailAddr, toEmailAddr, subject, body)
    except requests.exceptions.ConnectionError as errc:
        print('Error Connecting:', errc)
        attempts += 1
        if attempts == 10:
            body = 'Error Connecting: ' + str(errc)
            subject = 'Failure'
            sendEmailMessage(SMPTHOST, fromEmailAddr, toEmailAddr, subject, body)
    except requests.exceptions.Timeout as errt:
        print('Timeout Error:', errt)
        attempts += 1
        if attempts == 10:
            body = 'Timeout Error: ' + str(errt)
            subject = 'Failure'
            sendEmailMessage(SMPTHOST, fromEmailAddr, toEmailAddr, subject, body)
    except requests.exceptions.RequestException as err:
        print('Unidentified error: ', err)
        attempts += 1
        if attempts == 10:
            body = 'Unidentified error: ' + str(err)
            subject = 'Failure'
            sendEmailMessage(SMPTHOST, fromEmailAddr, toEmailAddr, subject, body)
How can I simplify the above code?
In general, I would like to handle HTTP response error codes. I want to send an e-mail with the specific error information only if I get at least 10 errors for the same call.
Since the action to perform is the same in each case, just group the exceptions into a single handler, then customize the message according to the error class name:
except (requests.exceptions.HTTPError,
        requests.exceptions.ConnectionError,
        requests.exceptions.RequestException,
        requests.exceptions.Timeout) as err:
    error_message = "{}: {}".format(err.__class__.__name__, err)
    print(error_message)
    attempts += 1
    if attempts == 10:
        body = error_message
        subject = 'Failure'
        sendEmailMessage(SMPTHOST, fromEmailAddr, toEmailAddr, subject, body)
If you need an extra level of indirection, just create a dictionary mapping the class name to the message string or action to perform.
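Put together, and reusing the names from the question, the retry loop could look roughly like this. It's a sketch that assumes the loop should stop as soon as one request succeeds, and it relies on HTTPError, ConnectionError and Timeout all being subclasses of RequestException, so catching the base class alone is enough:

attempts = 0
while attempts < 10:
    try:
        r = requests.post(server, data=contents,
                          auth=HTTPBasicAuth(service_userid, service_pswd))
        r.raise_for_status()
        break  # success, stop retrying
    except requests.exceptions.RequestException as err:
        error_message = "{}: {}".format(err.__class__.__name__, err)
        print(error_message)
        attempts += 1
        if attempts == 10:
            sendEmailMessage(SMPTHOST, fromEmailAddr, toEmailAddr, 'Failure', error_message)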
while var == 1:
    test_url = 'https://testurl.com'
    get_response = requests.get(url=test_url)
    parsed_json = json.loads(get_response.text)
    test = requests.get('https://api.telegram.org/botid/' + 'sendMessage', params=dict(chat_id=str(0815), text="test"))
    ausgabe = json.loads(test.text)
    print(ausgabe['result']['text'])
    time.sleep(3)
How do I add a try/except routine to this code? About once every two days I get an error on the fourth line, at json.loads(), and I can't reproduce it. What I'm trying to do is put the while loop in a "try:" block with an except block that only triggers when an error occurs inside the while loop. Additionally, it would be great if the while loop didn't stop on an error. How could I do this? Thank you very much for your help. (I started programming Python just a week ago.)
If you just want to catch the error on the fourth line, wrapping that line in a try/except will show you what error happened.
while var == 1:
    test_url = 'https://testurl.com'
    get_response = requests.get(url=test_url)
    try:
        parsed_json = json.loads(get_response.text)
    except Exception as e:
        print(str(e))
        print('error data is {}'.format(get_response.text))
    test = requests.get('https://api.telegram.org/botid/' + 'sendMessage', params=dict(chat_id=str(0815), text="test"))
    ausgabe = json.loads(test.text)
    print(ausgabe['result']['text'])
    time.sleep(3)
You can simply wrap the whole loop body:
while var == 1:
    try:
        test_url = 'https://testurl.com'
        get_response = requests.get(url=test_url)
        parsed_json = json.loads(get_response.text)
        test = requests.get('https://api.telegram.org/botid/' + 'sendMessage', params=dict(chat_id=str(0815), text="test"))
        ausgabe = json.loads(test.text)
        print(ausgabe['result']['text'])
        time.sleep(3)
    except Exception as e:
        print("an exception {} of type {} occurred".format(e, type(e).__name__))
I wrote a hiscore checker for a game that I play. Basically you enter a list of usernames into a .txt file and it outputs the results in found.txt.
However, if the page responds with a 404, it throws an error instead of returning "0" and continuing with the list.
Example of the script:
#!/usr/bin/python
import urllib2

def get_total(username):
    try:
        req = urllib2.Request('http://services.runescape.com/m=hiscore/index_lite.ws?player=' + username)
        res = urllib2.urlopen(req).read()
        parts = res.split(',')
        return parts[1]
    except urllib2.HTTPError, e:
        if e.code == 404:
            return "0"
    except:
        return "err"

filename = "check.txt"
accs = []
handler = open(filename)
for entry in handler.read().split('\n'):
    if "No Displayname" not in entry:
        accs.append(entry)
handler.close()

for account in accs:
    display_name = account.split(':')[len(account.split(':')) - 1]
    total = get_total(display_name)
    if "err" not in total:
        rStr = account + ' - ' + total
        handler = open('tried.txt', 'a')
        handler.write(rStr + '\n')
        handler.close()
        if total != "0" and total != "49":
            handler = open('found.txt', 'a')
            handler.write(rStr + '\n')
            handler.close()
        print rStr
    else:
        print "Error searching"
        accs.append(account)
print "Done"
The HTTPError exception handler that doesn't seem to be working:
except urllib2.HTTPError, e:
    if e.code == 404:
        return "0"
except:
    return "err"
Now, I understand the error I get doesn't seem to be related to a 404 response; however, it only occurs with users that return a 404 response from the request, while any other request works fine. So I can assume the issue is within the 404 response exception.
I believe the issue may lie in the fact that the 404 is a custom page which you get redirected to?
So the original page is "example.com/index.php" but the 404 page is "example.com/error.php"?
Not sure how to fix this.
For testing purposes, the format to use is:
ID:USER:DISPLAY
which is placed into check.txt
It seems that total can end up being None. In that case you can't check that it has 'err' in it. To fix the crash, try changing that line to:
if total is not None and "err" not in total:
To be more specific, get_total is returning None, which means that either
parts[1] is None, or
the except urllib2.HTTPError, e: branch is executed but e.code is not 404.
In the latter case, None is returned because the exception is caught, but you're only dealing with the very specific 404 case and ignoring other cases.
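One way to address that, sketched here while keeping the question's Python 2 / urllib2 style, is to make sure every path in get_total() returns a string so the "err" check always works:

import urllib2

def get_total(username):
    try:
        req = urllib2.Request('http://services.runescape.com/m=hiscore/index_lite.ws?player=' + username)
        res = urllib2.urlopen(req).read()
        parts = res.split(',')
        return parts[1]
    except urllib2.HTTPError, e:
        if e.code == 404:
            return "0"
        return "err"  # any non-404 HTTP error now also reports an error string
    except:
        return "err"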