How to handle errors using Nominatim? - python

I am calling the Nominatim web service through geopy, but it fails very often, due to the usage policy or perhaps my Internet connection. How can I handle a failing connection by pausing and then rerunning the code after several seconds or minutes? The error messages are:
GeocoderServiceError: <urlopen error [Errno 10060] A connection attempt failed because the
connected party did not properly respond after a period of time, or established connection
failed because connected host has failed to respond>
and
GeocoderServiceError: HTTP Error 420: unused
The pseudocode would be something like:
try:
    run web service
except:
    wait several seconds or minutes, then rerun the web service at the same line
    if it fails again:
        wait 30 minutes and rerun the web service
Any hints or suggestions would be most welcome.
Thanks!

Thanks for the comments above. Revising the try/except block is the solution.
Based on the geopy documentation (http://geopy.readthedocs.org/en/latest/#exceptions), the most common exception raised by geopy is GeocoderServiceError. Here is the revised code to handle the errors.
import time
import geopy.exc

try:
    run_web_service()   # placeholder for the actual geocoding call
except geopy.exc.GeocoderServiceError as e:
    if e.message == 'HTTP Error 420: unused':
        # rate-limited by the usage policy: back off for 30 minutes, then retry
        time.sleep(1800)
        run_web_service()
    elif e.message == '<urlopen error [Errno 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond>':
        # transient connection problem: wait a few seconds, then retry
        time.sleep(5)
        run_web_service()
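A more general pattern is to wrap the geocoding call in a small retry loop with an increasing delay instead of matching exact error strings. The sketch below is a minimal example, assuming a recent geopy where Nominatim accepts a user_agent argument; the user agent string, the sample address, and the retry/delay values are arbitrary placeholders, so adjust them to respect the Nominatim usage policy.
import time
from geopy.geocoders import Nominatim
from geopy.exc import GeocoderServiceError

geolocator = Nominatim(user_agent="my-geocoding-script")  # placeholder user agent

def geocode_with_retry(address, retries=3, delay=5):
    """Try to geocode an address, sleeping between attempts on service errors."""
    for attempt in range(retries):
        try:
            return geolocator.geocode(address)
        except GeocoderServiceError as e:
            print("Attempt %d failed: %s" % (attempt + 1, e))
            time.sleep(delay)
            delay *= 2  # back off a little longer after each failure
    return None  # give up after 'retries' attempts

location = geocode_with_retry("175 5th Avenue NYC")
print(location)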

Related

pip._vendor strange behaviour but requests works

I get an error when running my code; see below. The interesting thing is that when I take this code to another environment such as VS Code, it works and I get a response... I think something is wrong with from pip._vendor import requests, which PyCharm adds automatically. VS Code, for instance, adds import requests and it works. What should I do to make this code run correctly in PyCharm?
TimeoutError: [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond
During handling of the above exception, another exception occurred:
    self, "Failed to establish a new connection: %s" % e
pip._vendor.urllib3.exceptions.NewConnectionError: <pip._vendor.urllib3.connection.HTTPSConnection object at 0x00000217D0A2B9C8>: Failed to establish a new connection: [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond
    raise MaxRetryError(_pool, url, error or ResponseError(cause))
pip._vendor.urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='xxx.xxx', port=443): Max retries exceeded with url: /api/ser (Caused by NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0x00000217D0A2B9C8>: Failed to establish a new connection: [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond'))
This is my code:
from pip._vendor import requests

if __name__ == '__main__':
    print(msg)
    data = {
        "Ex": 22
    }
    headers = {
        "Authorization": "Bearer xxxx"
    }
    response = requests.post("https://xxx.xxx/api/ser", headers=headers, json=data)
    print(response.json())
You are importing requests from pip's vendored copy (pip._vendor). It may happen to work, but you should just do import requests instead (provided the requests package is already installed in your interpreter's environment).
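As a minimal sketch of the fix, assuming requests has been installed into the PyCharm interpreter (for example via pip install requests), the script would import the standalone package directly; the URL and token below are the placeholders from the question, and the 30-second timeout is an added assumption to avoid hanging on connection problems.
import requests  # the standalone package, not pip's vendored copy

if __name__ == '__main__':
    data = {"Ex": 22}
    headers = {"Authorization": "Bearer xxxx"}  # placeholder token from the question
    response = requests.post("https://xxx.xxx/api/ser",
                             headers=headers, json=data, timeout=30)
    print(response.json())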

Python/odoo - Magento Call/Request

Let's consider 'mywebsite' as my website, 'uname' as the username, and 'pwd' as the password.
The scenario is this: I have a system that was previously working, but now, when I try to connect to my Magento store from Odoo, it returns this error:
<ProtocolError for mywebsite/index.php/api/xmlrpc/: 301 Moved Permanently>
However, the URL https://mywebsite.com is accessible if you hit it in a browser, and it also returns a successful result when hit with Postman.
I tried to hit the same URL using a Python script:
import xmlrpclib
server = xmlrpclib.ServerProxy('https://mywebsite.com')
session = server.login('uname','pwd')
I ran it multiple times over multiple environments.
When I execute this script from the same environment my server is hosted on, I get the same error:
Error 301 Moved Permanently
When I run the same script from my local environment, I get:
SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure
which I assumed arises from using https, so when I change the URL to http I get the same error back:
xmlrpclib.ProtocolError: <ProtocolError for mywebsite.com/: 301 Moved Permanently>
Running the above script from a staging environment gives the same result as my local environment.
Also, when I change the script to use the IP of the website along with the port, I get:
socket.error: [Errno 110] Connection timed out
Then I tried changing the script and running it with this code:
import urllib
print urllib.urlopen("http://mywebsite.com/").getcode()
When I run this code from my local machine I get:
Error 403 Forbidden Request
Hitting this new code with the IP of the website and the port gets me:
IOError: [Errno socket error] [Errno 110] Connection timed out
When I run this code without specifying the port I get:
SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure
Running this code from the live environment using mywebsite.com gets me:
Error 403
Using the IP without the port:
[SSL: SSLV3_ALERT_HANDSHAKE_FAILURE] sslv3 alert handshake failure (_ssl.c:590)
With the IP and the port:
IOError: [Errno socket error] [Errno 110] Connection timed out
Any pointers or suggestions would be appreciated.
If there are any silly mistakes, please excuse them, as I am an amateur in Odoo/Python.
Also, if you have any other way to check whether a URL is reachable, please let me know.
Well, it seems that when I entered the mywebsite URL in Odoo, it was being appended in the backend with /index.php/api/xmlrpc, which worked fine for some time.
But now, due to some changes, it no longer accepts index.php, as that path is apparently routed automatically.
Anyway, I solved the error by changing the appended string to /api/xmlrpc.
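For reference, a minimal sketch of pointing the XML-RPC client straight at the working endpoint, using the same placeholder site and credentials as above (xmlrpclib is the Python 2 module name; on Python 3 it is xmlrpc.client):
import xmlrpclib  # Python 3: import xmlrpc.client as xmlrpclib

# connect to the API endpoint directly, without the index.php prefix
server = xmlrpclib.ServerProxy('https://mywebsite.com/api/xmlrpc')
session = server.login('uname', 'pwd')
print(session)  # a session token here means the endpoint is reachable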

Python: downloading a large number of files - timeouts and HTTP Error 502: Bad Gateway

I want to use Python 3 to download a lot of images.
I was using urllib.request.urlretrieve to download them and set the timeout with socket.setdefaulttimeout.
I thought that would prevent timeout errors, but...
Here is my code:
import threading
import socket
import urllib.request

socket.setdefaulttimeout(60.0)

def multi_thread():
    # excerpt from a class: self.url_list holds the files to download
    threads_task = []
    for file in self.url_list:
        t = threading.Thread(target=fun, args=[file])
        threads_task.append(t)
    for task in threads_task:
        task.start()
    for task in threads_task:
        task.join()   # wait for all downloads to finish

def fun(file):
    url, file_name = file.url, file.filename
    try:
        urllib.request.urlretrieve(url, file_name)
    except Exception as e:
        print(e)
Error:
HTTP Error 502: Bad Gateway
HTTP Error 502: Bad Gateway
HTTP Error 502: Bad Gateway
HTTP Error 502: Bad Gateway
HTTP Error 502: Bad Gateway
<urlopen error [WinError 10060]
<urlopen error [WinError 10060]
<urlopen error [WinError 10060]
The problem is that when I used this code to download 10 images, it worked well.
But with 1000+ images there are too many timeouts, maybe hundreds of them.
Then I changed my approach, dropped multithreading and downloaded the images one by one, and it worked pretty well, with only two timeout errors.
So how can I deal with this?
I really need your help. Thanks.
The issue is the number of concurrent threads: for 1000+ files you are starting 1000+ threads at once, which overloads the target server. You need to limit the number of concurrent workers, and you can do this with multiprocessing.Pool.
from multiprocessing import Pool
import socket
import urllib.request

socket.setdefaulttimeout(60.0)

def multi_thread():
    pool = Pool(processes=4)       # at most 4 concurrent worker processes
    pool.map(fun, self.url_list)   # apply 'fun' to every item in 'self.url_list'

def fun(file):
    url, file_name = file.url, file.filename
    try:
        urllib.request.urlretrieve(url, file_name)
    except Exception as e:
        print(e)
This way, all 1000+ files will still be downloaded by multiple workers, but only 4 of them run at the same time.
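As an alternative sketch (not part of the original answer), the standard library's concurrent.futures.ThreadPoolExecutor limits concurrency the same way while staying with threads; the url_list of (url, filename) pairs below is a hypothetical stand-in for the asker's data.
import socket
import urllib.request
from concurrent.futures import ThreadPoolExecutor

socket.setdefaulttimeout(60.0)

def download(item):
    url, file_name = item   # hypothetical (url, filename) pair
    try:
        urllib.request.urlretrieve(url, file_name)
    except Exception as e:
        print("failed:", url, e)

url_list = [("http://example.com/img1.jpg", "img1.jpg")]  # placeholder data

# at most 4 downloads run at once, so the server is not flooded
with ThreadPoolExecutor(max_workers=4) as executor:
    executor.map(download, url_list)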

FTP Connection/Instantiation Hangs Application

I am attempting to open an FTP connection using the simple code below, but the code just hangs at the first line. It does not advance and it does not throw any exceptions or errors. This code is 6 months old and I have been able to use it to connect to my website and download files all this time. Today it just started to hang when I open the FTP connection.
Do you know what could be going wrong?
import ftplib

ftp = ftplib.FTP("www.mySite.com")  # hangs on this line
print("I'm alive")                  # never gets printed
ftp.login(username, password)
I administer the website with a couple of other people but we haven't changed anything.
Edit: I just tried to FTP in using FileZilla with the same username and password and it failed. The output was:
Status: Resolving address of www.mySite.com
Status: Connecting to IPADDRESS...
Status: Connection established, waiting for welcome message...
Error: Connection timed out
Error: Could not connect to server
Status: Waiting to retry...
Status: Resolving address of www.mySite.com
Status: Connecting to IPADDRESS...
Status: Connection established, waiting for welcome message...
Error: Connection timed out
Error: Could not connect to server
It looks like you have server issues, but if you'd like the Python program to error out instead of waiting forever for the server, you can pass a timeout keyword argument to ftplib.FTP. From the docs (https://docs.python.org/2/library/ftplib.html#ftplib.FTP):
class ftplib.FTP([host[, user[, passwd[, acct[, timeout]]]]])
Return a new instance of the FTP class. When host is given, the method call connect(host) is made. When user is given, additionally the method call login(user, passwd, acct) is made (where passwd and acct default to the empty string when not given). The optional timeout parameter specifies a timeout in seconds for blocking operations like the connection attempt (if is not specified, the global default timeout setting will be used).
Changed in version 2.6: timeout was added.
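A minimal sketch of using that parameter with the hostname from the question; the 30-second value and the placeholder credentials are arbitrary choices:
import ftplib

username = "user"      # placeholder credentials
password = "secret"

try:
    # give up after 30 seconds instead of hanging forever
    ftp = ftplib.FTP("www.mySite.com", timeout=30)
    print("Connected")
    ftp.login(username, password)
except ftplib.all_errors as e:
    print("FTP connection failed:", e)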

How can I get Pika to retry connecting to RabbitMQ if it fails the first time?

I'm trying to get my program, which uses Pika, to continually retry connecting to RabbitMQ on failure. From what I've seen of the Pika docs, there's a SimpleReconnectionStrategy class that can be used to accomplish this, but it doesn't seem to be working very well.
strategy = pika.SimpleReconnectionStrategy()
parameters = pika.ConnectionParameters(server)
self.connection = pika.AsyncoreConnection(parameters, True, strategy)
self.channel = self.connection.channel()
The connection should wait_for_open and set up the reconnection strategy.
However, when I run this, I get the following errors thrown:
error: uncaptured python exception, closing channel <pika.asyncore_adapter.RabbitDispatcher at 0xb6ba040c> (<class 'socket.error'>:[Errno 111] Connection refused [/usr/lib/python2.7/asyncore.py|read|79] [/usr/lib/python2.7/asyncore.py|handle_read_event|435] [/usr/lib/python2.7/asyncore.py|handle_connect_event|443])
error: uncaptured python exception, closing channel <pika.asyncore_adapter.RabbitDispatcher at 0xb6ba060c> (<class 'socket.error'>:[Errno 111] Connection refused [/usr/lib/python2.7/asyncore.py|read|79] [/usr/lib/python2.7/asyncore.py|handle_read_event|435] [/usr/lib/python2.7/asyncore.py|handle_connect_event|443])
These errors are continually thrown whilst Pika tries to connect. If I start the RabbitMQ server while my client is running, it will connect. I just don't like the sight of these errors... Are they normal? Am I doing this wrong?
import socket
...
while True:
    connectSucceeded = False
    try:
        self.channel = self.connection.channel()
        connectSucceeded = True   # the channel opened, so the connection is up
    except socket.error:
        pass                      # broker not reachable yet; loop and try again
    if connectSucceeded:
        break
Something like the above is usually used. You could also add time.sleep() every time through the loop to try less frequently because sometimes servers do go down. In real production code I would also count the number of retries (or track the amount of time spent retrying) and give up after some interval. Sometimes it is better to log an error and crash.
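A sketch of that bounded-retry idea, with the sleep and retry counter the answer mentions. It is written against pika's newer BlockingConnection API rather than the AsyncoreConnection used in the question, so treat the exact calls as an assumption about the pika version:
import time
import pika

def connect_with_retry(host, max_retries=10, delay=5):
    """Keep trying to connect to RabbitMQ, giving up after max_retries attempts."""
    for attempt in range(1, max_retries + 1):
        try:
            connection = pika.BlockingConnection(pika.ConnectionParameters(host))
            return connection.channel()
        except pika.exceptions.AMQPConnectionError as e:
            print("Attempt %d/%d failed: %s" % (attempt, max_retries, e))
            time.sleep(delay)   # wait before trying again
    raise RuntimeError("Could not connect to RabbitMQ after %d attempts" % max_retries)

channel = connect_with_retry("localhost")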
