I get a "Connection Error" error while capturing data. It works fine for a while then gives error, how can I overcome this error.
import requests
from bs4 import BeautifulSoup

url = "https://www.example.com"
for page in range(0, 951, 50):
    new_url = url + str(page) + "&pagingSize=50"
    r = requests.get(new_url)
    source = BeautifulSoup(r.content, "html.parser")
    content = source.select('tr.searchResultsItem:not(.nativeAd, .classicNativeAd)')
    print(content)
When this error occurs, I want the script to wait for a while and then continue where it left off.
Error:
ConnectionError: ('Connection aborted.', OSError("(10054, 'WSAECONNRESET')"))
You can work around connection resets (and other networking problems) by implementing retries: you can tell requests to retry automatically when a problem occurs.
Here's how you can do it:
import requests
from requests.adapters import HTTPAdapter
from requests.packages.urllib3.util.retry import Retry
session = requests.Session()
# in case of an error, retry at most 3 times, with an
# exponential backoff (starting around half a second) between retries
retry = Retry(total=3, backoff_factor=0.5)
adapter = HTTPAdapter(max_retries=retry)
session.mount('http://', adapter)
session.mount('https://', adapter)
Then, instead of:
r = requests.get(new_url)
you can use:
r = session.get(new_url)
See also the documentation for Retry for a full overview of the scenarios it supports.
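Putting it together with your loop (a sketch; the URL and query-string pattern are copied from your question and may need adjusting for the real site):
import requests
from bs4 import BeautifulSoup
from requests.adapters import HTTPAdapter
from requests.packages.urllib3.util.retry import Retry

url = "https://www.example.com"

# one session with retry/backoff, reused for the whole crawl
session = requests.Session()
retry = Retry(total=3, backoff_factor=0.5)
adapter = HTTPAdapter(max_retries=retry)
session.mount('http://', adapter)
session.mount('https://', adapter)

for page in range(0, 951, 50):
    new_url = url + str(page) + "&pagingSize=50"
    r = session.get(new_url)  # retried automatically on transient connection errors
    source = BeautifulSoup(r.content, "html.parser")
    content = source.select('tr.searchResultsItem:not(.nativeAd, .classicNativeAd)')
    print(content)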
Thanks for reading. For a small research project, I'm trying to gather some data from KBB (www.kbb.com). However, I always get a "urllib.error.HTTPError: HTTP Error 400: Bad Request" error. I can access other websites with this simple piece of code, so I'm not sure whether this is an issue with the code or with this specific website.
Maybe someone can point me in the right direction.
from urllib import request as urlrequest
proxy_host = '23.107.176.36:32180'
url = "https://www.kbb.com/gmc/canyon-extended-cab/2018/"
req = urlrequest.Request(url)
req.set_proxy(proxy_host, 'https')
page = urlrequest.urlopen(req)
print(page)
There are two issues but one solution, as I found below:
1. The proxy server refuses the connection.
2. Without the proxy you need authentication for the server; in every case it responds with 403 Forbidden.
Using urllib
from urllib import request as urlrequest
proxy_host = '23.107.176.36:32180'
url = "https://www.kbb.com/gmc/canyon-extended-cab/2018/"
req = urlrequest.Request(url)
# req.set_proxy(proxy_host, 'https')
page = urlrequest.urlopen(req)
print(page)
> urllib.error.HTTPError: HTTP Error 403: Forbidden
Using Requests
import requests
url = "https://www.kbb.com/gmc/canyon-extended-cab/2018/"
res = requests.get(url)
print(res)
# >>> <Response [403]>
Using Postman
Edit: Solution
Setting the timeout a little longer makes it work. However, I had to retry several times, because the proxy sometimes just doesn't respond.
import urllib.request
proxy_host = '23.107.176.36:32180'
url = "https://www.kbb.com/gmc/canyon-extended-cab/2018/"
proxy_support = urllib.request.ProxyHandler({'https' : proxy_host})
opener = urllib.request.build_opener(proxy_support)
urllib.request.install_opener(opener)
res = urllib.request.urlopen(url, timeout=1000)  # set a long timeout (in seconds); the proxy is slow to respond
print(res.read())
Result
b'<!doctype html><html lang="en"><head><meta http-equiv="X-UA-Compatible" content="IE=edge"><meta charset="utf-8"><meta name="viewport" content="width=device-width,initial-scale=1,maximum-scale=5,minimum-scale=1"><meta http-equiv="x-dns-prefetch-control" content="on"><link rel="dns-prefetch preconnect" href="//securepubads.g.doubleclick.net" crossorigin><link rel="dns-prefetch preconnect" href="//c.amazon-adsystem.com" crossorigin><link .........
Using Requests
import requests
proxy_host = '23.107.176.36:32180'
url = "https://www.kbb.com/gmc/canyon-extended-cab/2018/"
# NOTE: we need a longer timeout for the proxy to respond, and verify=False to skip an SSL error
r = requests.get(url, proxies={"https": proxy_host}, timeout=90, verify=False)  # timeout is in seconds
print(r.text)
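Since the proxy sometimes doesn't respond at all, I also wrapped the request in a simple retry loop (a rough sketch; the number of attempts and the delay are arbitrary):
import time
import requests

proxy_host = '23.107.176.36:32180'
url = "https://www.kbb.com/gmc/canyon-extended-cab/2018/"

for attempt in range(5):  # arbitrary number of attempts
    try:
        r = requests.get(url, proxies={"https": proxy_host}, timeout=90, verify=False)
        break  # got a response, stop retrying
    except requests.exceptions.RequestException as exc:
        print("attempt", attempt + 1, "failed:", exc)
        time.sleep(5)  # give the proxy a moment before retrying
else:
    raise RuntimeError("the proxy never responded")

print(r.text)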
Your code appears to work fine without the set_proxy statement; I think it is most likely that your proxy server is rejecting the request rather than KBB.
I'm using a proxy service to cycle requests with different proxy ips for web scraping. Do I need to build in functionality to end requests so as to not overload the web server I'm scraping?
import requests
from bs4 import BeautifulSoup
from urllib.parse import urlencode
import concurrent.futures
list_of_urls = ['https://www.example']
NUM_RETRIES = 3
NUM_THREADS = 5
def scrape_url(url):
    params = {'api_key': 'API_KEY', 'url': url}
    # send request to scraperapi, and automatically retry failed requests
    for _ in range(NUM_RETRIES):
        try:
            response = requests.get('http://api.scraperapi.com/', params=urlencode(params))
            if response.status_code in [200, 404]:
                ## escape for loop if the API returns a successful response
                break
        except requests.exceptions.ConnectionError:
            response = ''
    ## parse data if 200 status code (successful response)
    if response.status_code == 200:
        ## do stuff
        pass

with concurrent.futures.ThreadPoolExecutor(max_workers=NUM_THREADS) as executor:
    executor.map(scrape_url, list_of_urls)
Hi, if you are using a recent version of requests, then most probably it is keeping the TCP connection alive. What you can do is create a Session, configure it not to keep connections alive (by sending a Connection: close header), and then proceed normally with your code:
s = requests.Session()
s.headers['Connection'] = 'close'  # ask the server to close the TCP connection after each response
As discussed here, there really isn't such a thing as an "HTTP connection": what httplib refers to as HTTPConnection is really the underlying TCP connection, which doesn't know much about your requests at all. Requests abstracts that away and you won't ever see it.
The newest versions of Requests do in fact keep the TCP connection alive after your request. If you do want your TCP connections to close, you can configure the session not to use keep-alive, as above.
(For reference, in very old versions of requests the equivalent was s = requests.session(config={'keep_alive': False}), but the config option was removed in requests 1.0, so it no longer works.)
Updated version of your code
import requests
from bs4 import BeautifulSoup
from urllib.parse import urlencode
import concurrent.futures
list_of_urls = ['https://www.example']
NUM_RETRIES = 3
NUM_THREADS = 5
def scrape_url(url):
    params = {'api_key': 'API_KEY', 'url': url}
    # a session that does not keep the TCP connection alive
    s = requests.Session()
    s.headers['Connection'] = 'close'
    # send request to scraperapi, and automatically retry failed requests
    for _ in range(NUM_RETRIES):
        try:
            response = s.get('http://api.scraperapi.com/', params=urlencode(params))
            if response.status_code in [200, 404]:
                ## escape for loop if the API returns a successful response
                break
        except requests.exceptions.ConnectionError:
            response = ''
    ## parse data if 200 status code (successful response)
    if response.status_code == 200:
        ## do stuff
        pass

with concurrent.futures.ThreadPoolExecutor(max_workers=NUM_THREADS) as executor:
    executor.map(scrape_url, list_of_urls)
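Note that disabling keep-alive means every request pays for a new TCP (and TLS) handshake, so it is a bit slower, but it avoids errors caused by the server dropping idle pooled connections.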
I am new at this, but I am trying to scrape data from a website that requires a login. I get an error when trying to open it. It appears that the problem is with the cookies: they are not being stored properly?
import requests
from bs4 import BeautifulSoup
from urllib.request import urlopen
from http.cookiejar import CookieJar
import urllib
username = 'xxx'
password = 'xxx'
values = {'email': username, 'password': password}
session = requests.session()
login_url = 'https://login.aripaev.ee/Account/Login?ReturnUrl=%2fOAuth%2fAuthorize%3fclient_id%3dinfopank%26redirect_uri%3dhttps%253A%252F%252Finfopank.ee%252FAccount%252FLogin%253FreturnUrl%253D%25252F%2526returnAsRedirect%253DFalse%26state%3dLjNuwARtELJnVPcF8ka2Jg%26scope%3d%252FUserDataService%252Fjson%252FProfile%2520%252FUserDataService%252Fjson%252FPermissions%2520%252FUserDataService%252Fjson%252FOrders%2520%252FUserDataService%252Fv2%252Fjson%252FProfile%2520%252FUserDataService%252Fv2%252Fjson%252FPermissions%2520%252FUserDataService%252Fv2%252Fjson%252FOrders%26response_type%3dcode&client_id=infopank&redirect_uri=https%3A%2F%2Finfopank.ee%2FAccount%2FLogin%3FreturnUrl%3D%252F%26returnAsRedirect%3DFalse&state=LjNuwARtELJnVPcF8ka2Jg&scope=%2FUserDataService%2Fjson%2FProfile%20%2FUserDataService%2Fjson%2FPermissions%20%2FUserDataService%2Fjson%2FOrders%20%2FUserDataService%2Fv2%2Fjson%2FProfile%20%2FUserDataService%2Fv2%2Fjson%2FPermissions%20%2FUserDataService%2Fv2%2Fjson%2FOrders&response_type=code'
url = 'https://infopank.ee/ettevote/1/'
result = session.get(login_url)
result = session.post(login_url, data = values, headers = dict(referer=login_url))
cookieProcessor = urllib.request.HTTPCookieProcessor()
opener = urllib.request.build_opener(cookieProcessor)
page = urlopen(url)
Error message:
HTTPError: HTTP Error 302: The HTTP server returned a redirect error that would lead to an infinite loop.
The last 30x error message was:
Found
Any suggestions are welcome - thanks!
Don't mix urllib.request with requests. If you stick to requests throughout, it will just work fine.
Remove these lines from your program:
from urllib.request import urlopen
from http.cookiejar import CookieJar
import urllib
cookieProcessor = urllib.request.HTTPCookieProcessor()
opener = urllib.request.build_opener(cookieProcessor)
page = urlopen(url)
This code has two issues: it doesn't have the cookies that were stored in the requests session, and the call to urlopen uses the default opener, which has no cookie support at all (opener.open should have been used instead).
Replace this with:
page = session.get(url)
Then the requests.session keeps track of the cookies for you.
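Putting it together, here is a minimal sketch of the requests-only flow (it assumes login_url, url, and the form field names are exactly as in your code):
import requests

username = 'xxx'
password = 'xxx'
values = {'email': username, 'password': password}

# login_url and url are the same strings as in your code
session = requests.session()
session.get(login_url)  # initial request so the session picks up the site's cookies
session.post(login_url, data=values, headers=dict(referer=login_url))
page = session.get(url)  # the session sends the login cookies automatically
print(page.status_code)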
I am trying to open a page/link and catch the content in it.
It gives me the required content sometimes and throws an error at other times. I see that if I refresh the page a few times, I get the content.
So, I want to reload the page and catch it.
Here's my pseudo code:
attempts = 0
while attempts:
    try:
        open_page = urllib2.Request('www.xyz.com')
        # Or I think we can also do urllib2.urlopen('www.xyz.com')
        break
    except:
        # here I want to refresh/reload the page
        attempts += 1
My questions are:
1. How can I reload the page using urllib or urllib2 or requests or mechanize?
2. Can we loop over try/except like that?
Thank you!
import requests
from requests.adapters import HTTPAdapter
from requests.packages.urllib3.util.retry import Retry
attempts = 10
retries = Retry(total=attempts,
backoff_factor=0.1,
status_forcelist=[ 500, 502, 503, 504 ])
sess = requests.Session()
sess.mount('http://', HTTPAdapter(max_retries=retries ))
sess.mount('https://', HTTPAdapter(max_retries=retries))
sess.get('http://www.google.co.nz/')
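With the retries configured on the session, sess.get does the waiting and re-requesting for you, so you don't need your own while loop around the call.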
If you do while attempts when attempts is equal to 0, you will never enter the loop. I'd do it backwards: initialize attempts to your desired number of reloads:
attempts = 10
while attempts:
    try:
        open_page = urllib2.Request('www.xyz.com')
    except:
        attempts -= 1
    else:
        attempts = False
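An equivalent pattern, counting up instead of down, is a for loop with an else clause (a sketch using urllib2.urlopen, which actually performs the request):
for attempt in range(10):
    try:
        open_page = urllib2.urlopen('http://www.xyz.com')
        break  # success, stop retrying
    except Exception:
        continue  # failed, try again
else:
    print('all attempts failed')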
The following function retries when an exception is raised or when the HTTP response status code is not 200.
import time
import traceback
import requests

def retrieve(url):
    while True:
        try:
            response = requests.get(url)
            if response.ok:
                return response
            else:
                print(response.status_code)
                time.sleep(3)
                continue
        except Exception:
            print(traceback.format_exc())
            time.sleep(3)
            continue
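For example, it can be used in place of a plain requests.get call:
response = retrieve('https://www.example.com')
print(response.status_code)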
I was writing a Python script to grab the lyrics of a song from azlyrics using the requests module. This is the script I wrote:
import requests, re
from bs4 import BeautifulSoup as bs
url = "http://search.azlyrics.com/search.php"
payload = {'q' : 'shape of you'}
r = requests.get(url, params = payload)
soup = bs(r.text,"html.parser")
try:
    link = soup.find('a', {'href':re.compile('http://www.azlyrics.com/lyrics/edsheeran/shapeofyou.html')})['href']
    link = link.replace('http', 'https')
    print(link)
    raw_data = requests.get(link)
except Exception as e:
    print(e)
but I got an exception stating:
Max retries exceeded with url: /lyrics/edsheeran/shapeofyou.html (Caused by NewConnectionError('<requests.packages.urllib3.connection.VerifiedHTTPSConnection object at 0x7fbda00b37f0>: Failed to establish a new connection: [Errno 111] Connection refused',))
I read on the internet that I am probably trying to send too many requests, so I made the script sleep for some time:
import requests, re
from bs4 import BeautifulSoup as bs
from time import sleep
url = "http://search.azlyrics.com/search.php"
payload = {'q' : 'shape of you'}
r = requests.get(url, params = payload)
soup = bs(r.text,"html.parser")
try:
    link = soup.find('a', {'href':re.compile('http://www.azlyrics.com/lyrics/edsheeran/shapeofyou.html')})['href']
    link = link.replace('http', 'https')
    sleep(60)
    print(link)
    raw_data = requests.get(link)
except Exception as e:
    print(e)
but no luck!
So I tried the same with urllib.request
import requests, re
from bs4 import BeautifulSoup as bs
from time import sleep
from urllib.request import urlopen
url = "http://search.azlyrics.com/search.php"
payload = {'q' : 'shape of you'}
r = requests.get(url, params = payload)
soup = bs(r.text,"html.parser")
try:
    link = soup.find('a', {'href':re.compile('http://www.azlyrics.com/lyrics/edsheeran/shapeofyou.html')})['href']
    link = link.replace('http', 'https')
    sleep(60)
    print(link)
    raw_data = urlopen(link).read()
except Exception as e:
    print(e)
but then I got a different exception stating:
<urlopen error [Errno 111] Connection refused>
Can anyone tell me what's wrong with it and how I can fix it?
Try it in your web browser; when you try to visit http://www.azlyrics.com/lyrics/edsheeran/shapeofyou.html it'll work fine, but when you try to visit https://www.azlyrics.com/lyrics/edsheeran/shapeofyou.html it won't work.
So remove your link = link.replace('http', 'https') line and try again.
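For reference, here is your first script with just that line removed (everything else unchanged):
import requests, re
from bs4 import BeautifulSoup as bs

url = "http://search.azlyrics.com/search.php"
payload = {'q': 'shape of you'}
r = requests.get(url, params=payload)
soup = bs(r.text, "html.parser")

try:
    # keep the link exactly as found in the search results (http, not https)
    link = soup.find('a', {'href': re.compile('http://www.azlyrics.com/lyrics/edsheeran/shapeofyou.html')})['href']
    print(link)
    raw_data = requests.get(link)
except Exception as e:
    print(e)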