<urlopen error [Errno -3] Temporary failure in name resolution> - python

Whenever I try to make a urlopen request with urllib, this error pops up. I have no idea what could be causing it. Could an outside source have IP-banned me from sending urllib requests? Possibly, since the same line of code runs perfectly on my laptop.
Keep in mind that my laptop is not on the same network as the server where the code is failing.
I have tried this on all of my other servers, and urllib works perfectly there. The odd thing is that the error just appeared one day when I ran the code on this server; no code was changed, it simply started failing.
import urllib.request
urllib.request.urlopen("http://example.com/").read()
I expect the urlopen request to return readable HTML data, but instead it returns this error:
urllib.error.URLError: <urlopen error [Errno -3] Temporary failure in name resolution>

Problem solved. The problem was that the /etc/resolv.conf file did not exist; the solution was to create it and add "nameserver 8.8.8.8" to it. After that and a reboot of the server, everything was back to normal.
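For reference, the file only needs that single line; 8.8.8.8 is Google's public DNS, and any reachable resolver would work just as well:
# /etc/resolv.conf
nameserver 8.8.8.8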

Related

<urlopen error [Errno 11001]: getaddrinfo failed>

Could not connect to IP Power. Perhaps the proxy server settings in Internet Explorer are incorrect. Sending request to IP Power: http<...>
I need help with this issue.
I think you have provided an incorrect address to urlopen; otherwise, there is a problem with your connection. Check both, and if you can't resolve the issue, please update your question with your code.

Python WebCrawling urllib.error.URLError: <urlopen error Temporary failure in name resolution>

I'm crawling some data from the web, and since the amount of data I need is huge, I end up with more than 500 simultaneous requests (made through urllib.request.urlopen(url) and pooled via multiprocessing).
The problem here is that the following error is thrown:
urllib.error.URLError: urlopen error Temporary failure in name
resolution
After some research, I found that this problem is caused by connections not being closed when there are too many simultaneous requests, but I haven't yet found a way to solve it.
Should I limit the number of simultaneous connections to some safe range, or change the urllib request configuration?
Development environment:
Ubuntu 16.04
Python 3.6
Try using Session objects from the requests library. As noted in the documentation:
The Session object allows you to persist certain parameters across requests. It also persists cookies across all requests made from the Session instance, and will use urllib3's connection pooling. So if you're making several requests to the same host, the underlying TCP connection will be reused, which can result in a significant performance increase (see HTTP persistent connection).
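A minimal sketch of that approach, capping concurrency with a thread pool and sharing one Session (the pool size, timeout, and URL list below are placeholders, not values from the question):
import requests
from multiprocessing.pool import ThreadPool

session = requests.Session()  # reuses TCP connections via urllib3's pooling

def fetch(url):
    # a shared Session keeps connections alive instead of opening a new one per request
    return session.get(url, timeout=10).text

urls = ["http://example.com/page%d" % i for i in range(100)]  # placeholder URLs
with ThreadPool(20) as pool:  # cap simultaneous requests at a safe level
    pages = pool.map(fetch, urls)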
Maybe this other thread about efficient web scraping can help you out.

Python requests / urllib2 socket.error: [Errno 104] Connection reset by peer

I have the following code:
import requests
requests.get('URL WITH PARAMS HERE', auth=('MY USERNAME', 'MY PASSWORD'))
It is used to hit an API, but it returns the following error:
"socket.error: [Errno 104] Connection reset by peer"
I can retrieve results using my browser, and I can also cURL it and get results. The same problem happens when using urllib2, but for some reason pycurl does manage to retrieve results.
Is there any solution to make it work, or any idea what the problem is?
Your code is correct. The error probably means that the server on the other end is unhappy with what you're sending. Make sure you send it an appropriate request. To do that, you can:
Read the documentation about the host
Contact its owner
Check what your browser is sending when you successfully access your data
For the third option, use the developer tools built into Firefox, Chrome, Safari, or your favorite browser (for Firefox, read this).
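As an example of that third option, here is a sketch of replaying browser-like headers with requests (the header values are illustrative; copy the real ones from your browser's developer tools):
import requests

# illustrative header values; use what your browser actually sends
headers = {
    "User-Agent": "Mozilla/5.0",
    "Accept": "application/json",
}
response = requests.get('URL WITH PARAMS HERE',
                        auth=('MY USERNAME', 'MY PASSWORD'),
                        headers=headers)
print(response.status_code)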

sudden 'socket.gaierror': [Errno -2] Name or service not known

I am getting the following error (the URL and API are made up in this example):
ConnectionError(MaxRetryError("HTTPConnectionPool(host='urlICantDisplay.com', port=80): Max retries exceeded with url: /some_api/user_id/action_name (Caused by <class 'socket.gaierror'>: [Errno -2] Name or service not known)"))
I use the same API for many users, but suddenly I started getting this error, and from then on I kept getting it until I restarted the process.
I've read that this might be a congestion problem:
Random "[Errno -2] Name or service not known" errors
and that pausing between calls might help, but this is a real-time application that should not pause.
I also would have presumed that the API would start working again after a while, yet when I used it again in the same process after 7 hours I still got the error.
I've also read that this is a DNS error, but as I said, DNS works and then suddenly stops working altogether.
Only restarting the process solved it.
I thought about caching the resolved IP so I could stop asking the DNS server for it, but I'm not sure whether that would work, or whether it's even related.
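For what it's worth, a rough sketch of that resolve-once idea for plain HTTP (the hostname and path are the made-up ones from the question; this bypasses DNS on later calls but needs a Host header, and it won't work as-is for HTTPS because certificate verification checks the hostname):
import socket
import requests

host = 'urlICantDisplay.com'       # made-up hostname from the question
ip = socket.gethostbyname(host)    # resolve once, while DNS still works
# later requests go straight to the cached IP, passing the real name via Host
response = requests.get('http://%s/some_api/user_id/action_name' % ip,
                        headers={'Host': host})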

Python ftplib connection error (gaierror)

I am trying to make a very basic FTP client in Python, and within the first few lines of code I have already run into a problem.
My Code:
from ftplib import FTP
ftp = FTP('ftp.mysite.com')
With this code, and with countless different URLs, I always get the same error:
gaierror: [Errno 11004] getaddrinfo failed
I found myself here with this error after trying to connect using the full path rather than just the hostname. Make sure you split those apart and use cwd(path) after login().
For example:
ftp = FTP('ftp.ncdc.noaa.gov')
ftp.login()
ftp.cwd('pub/data/noaa/2013')
instead of:
# Doesn't work!!
ftp = FTP('ftp.ncdc.noaa.gov/pub/data/noaa')
ftp.login()
ftp.cwd('2013')
Kind of obvious in hindsight, but hopefully this helps you notice your own simple mistake!
Actually, this error means that your computer can't resolve the domain name you gave it. A detailed error description is available here. To test, try a well-known working FTP server (e.g. ftp.microsoft.com), then try opening the FTP site you're trying to access with a regular FTP client.
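A quick way to confirm whether it's really name resolution that fails, before blaming ftplib (the hostname is the placeholder from the question):
import socket

try:
    print(socket.gethostbyname('ftp.mysite.com'))  # placeholder host from the question
except socket.gaierror as e:
    print('DNS lookup failed:', e)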
