I am not able to fetch a URL from biblegateway.com; it fails with this error:
urllib2.URLError: <urlopen error [Errno 1] _ssl.c:510: error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure>
Please don't mark this as a duplicate; I went through the sites linked in the suggested duplicates and still didn't understand the problem.
Here is my code:
import urllib2
url = 'https://www.biblegateway.com/passage/?search=Deuteronomy+1&version=NIV'
response = urllib2.urlopen(url)
html = response.read()
print html
Here is a good reference for fetching a URL.
In Python 3 you can do:
from urllib.request import urlopen
URL = 'https://www.biblegateway.com/passage/?search=Deuteronomy+1&version=NIV'
f = urlopen(URL)
myfile = f.read()
print(myfile)
Not sure it clears an SSL problem though. Maybe some clues here.
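If the failure really is in the TLS handshake, one thing to try is passing an explicit SSL context to urlopen. This is only a sketch, assuming a Python version where urlopen accepts a context argument (3.x or 2.7.9+):
import ssl
from urllib.request import urlopen

URL = 'https://www.biblegateway.com/passage/?search=Deuteronomy+1&version=NIV'

# Build a default client-side SSL context; servers that only offer very old
# protocols may still refuse the handshake.
context = ssl.create_default_context()
f = urlopen(URL, context=context)
print(f.read())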
# requirements
import pandas as pd
from urllib.request import Request, urlopen
from fake_useragent import UserAgent
from bs4 import BeautifulSoup

ua = UserAgent()
ua.ie  # a random Internet Explorer user-agent string

req = Request(df["URL"][0], headers={"User-Agent": ua.ie})
html = urlopen(req).read()
soup_tmp = BeautifulSoup(html, "html.parser")
soup_tmp.find("p", "addy")  # soup_tmp.select_one(".addy")
URLError: <urlopen error [Errno 8] nodename nor servname provided, or not known>
I'm a student studying Python in VS Code.
I don't know what I'm missing. df["URL"][0] on its own works. Can anybody help me?
Update: I solved it!
import requests

req = requests.get(df["URL"][49], headers={'user-agent': ua.ie})
soup_tmp = BeautifulSoup(req.content, 'html.parser')
soup_tmp.select_one('.addy')
It works!
The problem is clearly df["URL"][0] in the line:
req = Request(df["URL"][0], headers={"User-Agent" : ua.ie})
You didn't provide the URL you actually used, so I tested with Google and it worked fine:
url='https://www.google.com'
req = Request(url, headers={"User-Agent" : ua.ie})
You need to check whether the URL you are using is correct; the problem is not in the code itself.
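As a quick check, you can parse the value before passing it to Request. A minimal sketch, assuming df is the asker's DataFrame of URLs:
from urllib.parse import urlparse

candidate = df["URL"][0]
parsed = urlparse(candidate)
# "nodename nor servname provided" usually means the host part is empty or
# malformed, so both scheme and netloc should be non-empty here.
print(candidate, parsed.scheme, parsed.netloc)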
I am new to web scraping. When I go to "https://pancakeswap.finance/prediction?token=BNB", right-click on the page, and inspect it, I see a complex HTML page.
But when I try to get the same HTML page through Python with:
from urllib.request import urlopen
import ssl

try:
    _create_unverified_https_context = ssl._create_unverified_context
except AttributeError:
    # Legacy Python that doesn't verify HTTPS certificates by default
    pass
else:
    # Handle target environment that doesn't support HTTPS verification
    ssl._create_default_https_context = _create_unverified_https_context

url = 'https://pancakeswap.finance/prediction?token=BNB'
page = urlopen(url)
html_bytes = page.read()
html = html_bytes.decode('utf-8')
print(html)
I get a different HTML page. My goal is to scrape a specific value from the page, but I cannot find that value in the HTML I get through Python.
Thanks for your help!
I'm using Python 3.7.3 and the requests_pkcs12 library to scrape a website where I must pass a certificate and password, then download and extract zip files from links on the page. I've got the first part working fine. But when I try to read the files using urllib, I get an error.
import urllib.request
from bs4 import BeautifulSoup
import requests
from requests_pkcs12 import get

# get page and set up BeautifulSoup
# r = requests.get(url)  # old non-cert method
r = get(url, pkcs12_filename=certpath, pkcs12_password=certpwd)

# find zip files to download
soup = BeautifulSoup(r.content, "html.parser")

# Read files
i = 1
for td in soup.find_all(lambda tag: tag.name == 'td' and tag.text.strip().endswith('DAILY.zip')):
    link = td.find_next('a')
    print(td.get_text(strip=True), link['href'] if link else '')  # good
    zipurl = 'https://my.downloadsite.com' + link['href'] if link else ''
    print(zipurl)  # good
    # Read zip file from URL
    url = urllib.request.urlopen(zipurl)  # ERROR on this line: SSLv3 alert handshake failure
    zippedData = url.read()
I've seen various older posts covering this for Python 2.x, but I'm wondering what the best way to handle it is now, with newer libraries in Python 3.7.x.
Below is the stack trace of the error.
The answer was to not use urllib and instead use the same requests_pkcs12 get call that accepts a PFX certificate and password.
Last 2 lines:
url = urllib.request.urlopen(zipurl) # ERROR on this line SSLv3 alert handshake failure
zippedData = url.read()
should be replaced with:
url = get(zipurl, pkcs12_filename=certpath, pkcs12_password=certpwd)
zippedData = url.content
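If you also need the extraction step the question mentions, one hedged continuation is to open the downloaded bytes in memory with zipfile; the output folder name here is just an assumption:
import io
import zipfile

# zippedData holds the raw bytes of the zip archive returned by requests_pkcs12.get,
# so it can be opened in memory without writing a temporary file first.
with zipfile.ZipFile(io.BytesIO(zippedData)) as zf:
    zf.extractall("downloads")  # hypothetical output folder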
I am trying to access an HTTPS webpage that has a login. I can't access it no matter what I do. Here is the code I tried:
import urllib.request
from bs4 import BeautifulSoup
proxy = urllib.request.ProxyHandler({'http':'http://proxyName:proxyNumber'})
opener = urllib.request.build_opener(proxy)
urllib.request.install_opener(opener)
response = urllib.request.urlopen('https://salesforce.com')
datum = response.read()
#.decode("UTF-8")
#response.close()
print(datum)
Here is the error:
File "S:\...py", line 8, in <module>
    response = urllib.request.urlopen("https://salesforce.com")
urllib.error.URLError: <urlopen error [WinError 10061] No connection could be made because the target machine actively refused it>
Please help.
Here is another try with a new error. I feel like I'm getting close!
import urllib.request
proxies = {'https': 'http://proxyName:ProxyNumber'}
opener = urllib.request.build_opener(proxies)
#urllib.request.get("https://login.salesforce.com/", proxies=proxies)
urllib.request.install_opener(opener)
response = urllib.request.urlopen("https://login.salesforce.com/", proxies=proxies)
Here is the Error message:
File "S:/...py", line 6, in <module>
    urllib.request.build_opener(proxies)
TypeError: expected BaseHandler instance, got <class 'dict'>
If you can use third party modules, here is an easy solution with the requests module.
import requests

proxies = {
    "http": "http://proxyName:proxyNumber",
    "https": "http://proxyName:proxyNumber",  # https URLs need their own proxy entry
}

requests.get("https://salesforce.com", proxies=proxies, auth=('user', 'pass'))
adapted from http://docs.python-requests.org/en/latest/user/advanced/#proxies
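If you prefer to stay with urllib.request, the TypeError above comes from passing a plain dict to build_opener, which expects handler objects. A minimal sketch with the same placeholder proxy address:
import urllib.request

# ProxyHandler wraps the mapping in a handler object that build_opener accepts.
proxy = urllib.request.ProxyHandler({
    "http": "http://proxyName:proxyNumber",
    "https": "http://proxyName:proxyNumber",
})
opener = urllib.request.build_opener(proxy)
urllib.request.install_opener(opener)
response = urllib.request.urlopen("https://salesforce.com")
print(response.read()[:200])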
I have a Perl program that retrieves data from my university library's database, and it works well. Now I want to rewrite it in Python, but I run into this problem:
<urlopen error [Errno 104] Connection reset by peer>
The Perl code is:
my $ua = LWP::UserAgent->new;
$ua->cookie_jar( HTTP::Cookies->new() );
$ua->timeout(30);
$ua->env_proxy;
my $response = $ua->get($url);
The Python code I wrote is:
import urllib2
from cookielib import CookieJar

cj = CookieJar()
request = urllib2.Request(url)  # url: the target web page
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
urllib2.install_opener(opener)
data = urllib2.urlopen(request)
I use a VPN (virtual private network) to log in to my university library from home, and I tried both the Perl code and the Python code. The Perl code works as I expected, but the Python code always hits the "urlopen error".
I googled the problem and it seems that urllib2 fails to pick up the proxy from the environment. But according to the urllib2 documentation, the urlopen() function works transparently with proxies that do not require authentication. I'm quite confused now. Can anybody help me with this problem?
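One quick way to check that reasoning is to print the proxy settings the standard library actually detects in the environment (roughly what Perl's env_proxy relies on); a small sketch for Python 2:
import urllib

# Shows the proxy mapping picked up from environment variables such as
# http_proxy / https_proxy; an empty dict means no environment proxy is seen.
print(urllib.getproxies())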
I tried faking the User-Agent header as Uku Loskit and Mikko Ohtamaa suggested, and that solved my problem. The code is as follows:
proxy = "YOUR_PROXY_GOES_HERE"
proxies = {"http":"http://%s" % proxy}
headers={'User-agent' : 'Mozilla/5.0'}
proxy_support = urllib2.ProxyHandler(proxies)
opener = urllib2.build_opener(proxy_support, urllib2.HTTPHandler(debuglevel=1))
urllib2.install_opener(opener)
req = urllib2.Request(url, None, headers)
html = urllib2.urlopen(req).read()
print html
Hope it is useful for someone else!
Firstly, as Steve said, you need response.read(), but that's not your problem:
import urllib2
response = urllib2.urlopen('http://python.org/')
html = response.read()
Can you give details of the error? You can get it like this:
from urllib2 import HTTPError, URLError

try:
    urllib2.urlopen(req)
except HTTPError, e:
    print e.code
    print e.read()
except URLError, e:
    print e.reason
Source: http://www.voidspace.org.uk/python/articles/urllib2.shtml
(I put this in a comment but it ate my newlines)
You might find that the requests module is a much easier-to-use replacement for urllib2.
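For comparison, a minimal sketch of the same fetch done with requests (not part of the original answer, just an illustration):
import requests

response = requests.get('http://python.org/')
print(response.status_code)   # HTTP status instead of an exception for 4xx/5xx
print(response.text[:200])    # first part of the page body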
Did you try specifying the proxy manually?
proxy = urllib2.ProxyHandler({'http': 'your_proxy_ip'})
opener = urllib2.build_opener(proxy)
urllib2.install_opener(opener)
urllib2.urlopen('http://www.uni-database.com')
If it still fails, try faking your User-Agent header so that the request looks like it is coming from a real browser.
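A minimal sketch of that combination, with the proxy address and URL still placeholders:
import urllib2

proxy = urllib2.ProxyHandler({'http': 'your_proxy_ip'})
opener = urllib2.build_opener(proxy)
urllib2.install_opener(opener)

# Send a browser-like User-Agent so the server does not reject the request
# as coming from a script.
req = urllib2.Request('http://www.uni-database.com',
                      headers={'User-Agent': 'Mozilla/5.0'})
print(urllib2.urlopen(req).read()[:200])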