Python BeautifulSoup get request with double quotes in URL

I'm trying to get BeautifulSoup to read this page, but the URL is not passed correctly into the get() command.
The URL is https://www.econjobrumors.com/topic/supreme-court-to-%e2%80%9cconsider%e2%80%9d-taking-up-harvard-affirmative-action-case-on-june-10. But when I try to fetch the data from the URL, it always gives an error saying that the URL is incorrect:
import requests

response = requests.get(
    url="https://www.econjobrumors.com/topic/supreme-court-to-%e2%80%9cconsider%e2%80%9d-taking-up-harvard-affirmative-action-case-on-june-10",
    verify=False,
)
print(response.request.url)
It was the curly double quotes, “ (U+201C) and ” (U+201D), percent-encoded in the URL as %e2%80%9c and %e2%80%9d, that caused the error. I've been trying for hours but still can't figure out a way to pass the URL correctly.

I changed the double quotes around the URL to single quotes:
from bs4 import BeautifulSoup
import requests
url = 'https://www.econjobrumors.com/topic/supreme-court-to-%e2%80%9cconsider%e2%80%9d-taking-up-harvard-affirmative-action-case-on-june-10'
r = requests.get(url, allow_redirects=False)
soup = BeautifulSoup(r.content, 'lxml')
print(soup)
This prints out the HTML as expected (I edited it down to fit in this answer):
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
<html xml:lang="en-US" xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta content="IE=8" http-equiv="X-UA-Compatible"/>
<ALL THE CONTENT>Too much to paste in the answer</ALL THE CONTENT>
</html>
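As an aside, if your starting point is the raw URL containing the literal curly quotes rather than the already percent-encoded form, you can encode it yourself with the standard library before passing it to requests. A minimal sketch (the safe=":/" choice, which leaves the scheme and path separators alone, is my assumption):
from urllib.parse import quote

# Raw URL containing the literal curly quotes U+201C and U+201D
raw = "https://www.econjobrumors.com/topic/supreme-court-to-\u201cconsider\u201d-taking-up-harvard-affirmative-action-case-on-june-10"
# Percent-encode everything except the characters that structure the URL
encoded = quote(raw, safe=":/")
print(encoded)  # ...supreme-court-to-%E2%80%9Cconsider%E2%80%9D-taking-up-...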

Why is the python requests module not pulling the whole html?

The link: https://www.hyatt.com/explore-hotels/service/hotels
code:
import requests
from bs4 import BeautifulSoup

r = requests.get('https://www.hyatt.com/explore-hotels/service/hotels')
soup = BeautifulSoup(r.text, 'lxml')
print(soup.prettify())
I also tried this:
import json

r = requests.get('https://www.hyatt.com/explore-hotels/service/hotels')
data = json.dumps(r.text)
print(data)
output:
<!DOCTYPE html>
<head>
</head>
<body>
<script src="SOME_value">
</script>
</body>
</html>
It's printing the HTML without the tag the data is in, only showing a single script tag.
How do I access the data (shown in the browser view, it looks like JSON)?
I don't believe this can be done... that data simply isn't in r.text.
If you do this:
import requests
from bs4 import BeautifulSoup
r = requests.get("https://www.hyatt.com/explore-hotels/service/hotels")
soup = BeautifulSoup(r.text, "html.parser")
print(soup.prettify())
You get this:
<!DOCTYPE html>
<html>
<head>
</head>
<body>
<script src="/149e9513-01fa-4fb0-aad4-566afd725d1b/2d206a39-8ed7-437e-a3be-862e0f06eea3/ips.js?tkrm_alpekz_s1.3=0EOFte3LjRKv3iJhEEV2hrnisE5M3Lwy3ac3UPZ19zdiB49A6ZtBjtiwBqgKQN3q2MEQ3NbFjTWfmP9GqArOIAML6zTvSb4lRHD7FsmJFVWNkSwuTNWUNuJWv6hEXBG37DhBtTXFEO50999RihfPbTjsB">
</script>
</body>
</html>
As you can see, there is no <pre> tag, for whatever reason, so you're unable to access that data.
I also get a 429 error (Too Many Requests) when accessing the URL:
GET https://www.hyatt.com/explore-hotels/service/hotels 429
What is the end goal here? This site doesn't seem willing to serve anything to a script. Some sites simply can't be parsed, for various reasons. If you want to play with JSON data, I would look into using an API instead.
If you google https://www.hyatt.com and manually navigate to the URL you mentioned, you get a 404 error.
I would say Hyatt doesn't want you parsing their site. So don't!
The response is JSON, not HTML. You can verify this by opening the Network tab in your browser's dev tools. There you will see that the content-type header is application/json; charset=utf-8.
You can parse this into a usable form with the standard json package:
import json
import requests

r = requests.get('https://www.hyatt.com/explore-hotels/service/hotels')
data = json.loads(r.text)
print(data)
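Note that requests can also decode a JSON body for you, so the json import isn't strictly needed. A small variant of the same request (assuming the endpoint really does answer with application/json):
import requests

r = requests.get('https://www.hyatt.com/explore-hotels/service/hotels')
data = r.json()  # shorthand for json.loads(r.text)
print(data)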

Parsing MS-specific HTML tags in BeautifulSoup

When trying to parse an email sent using MS Outlook, I want to be able to strip the annoying Microsoft XML tags that it has added. One such example is the o:p tag. When trying to use Python's BeautifulSoup to parse an email as HTML, it can't seem to find these specialty tags.
For example:
from bs4 import BeautifulSoup
textToParse = """
<html>
<head>
<title>Something to parse</title>
</head>
<body>
<p><o:p>This should go</o:p>Paragraph</p>
</body>
</html>
"""
soup = BeautifulSoup(textToParse, "html5lib")
body = soup.find('body')
for otag in body.find_all('o'):
    print(otag)
for otag in body.find_all('o:p'):
    print(otag)
This will output no text to the console, but if I switch the find_all call to search for p, it outputs the p node as expected.
How come these custom tags do not seem to work?
It's a namespace issue. Apparently, BeautifulSoup does not consider custom namespaces valid when parsing with "html5lib".
You can work around this with a regular expression, which – strangely – does work correctly:
import re

print(soup.find_all(re.compile('o:p')))
>>> [<o:p>This should go</o:p>]
but the "proper" solution is to change the parser to "lxml-xml" and introduce o: as a valid namespace:
from bs4 import BeautifulSoup
textToParse = """
<html xmlns:o='dummy_url'>
<head>
<title>Something to parse</title>
</head>
<body>
<p><o:p>This should go</o:p>Paragraph</p>
</body>
</html>
"""
soup = BeautifulSoup(textToParse, "lxml-xml")
body = soup.find('body')
print('this should find nothing')
for otag in body.find_all('o'):
    print(otag)
print('this should find o:p')
for otag in body.find_all('o:p'):
    print(otag)
>>>
this should find nothing
this should find o:p
<o:p>This should go</o:p>
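Since the original goal was to strip these tags rather than find them, it may help that once a tag is locatable you can remove it while keeping its text with unwrap(). A minimal sketch building on the regex trick above (the ^o: pattern, which matches any tag in the o: namespace, is my generalization):
import re
from bs4 import BeautifulSoup

textToParse = "<p><o:p>This should go</o:p>Paragraph</p>"
soup = BeautifulSoup(textToParse, "html5lib")
for otag in soup.find_all(re.compile('^o:')):
    otag.unwrap()  # drop the tag itself but keep its contents
print(soup.p)  # <p>This should goParagraph</p>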

Beautiful Soup doesn't give data for a site

I am trying to scrape this site for information:
https://farm.ewg.org/addrsearch.php?stab2=NY&fullname=B&b=1&page=0
I tried writing code that has worked for other sites, but it just leaves me with an empty text file instead of filling it up with data like it has elsewhere. Here is my code:
import urllib
from urllib.request import urlopen
from bs4 import BeautifulSoup
import re
import json
import time
outfile = open('/Users/Luca/Desktop/test/farm_data.text','w')
my_list = list()
site = "https://farm.ewg.org/addrsearch.php?stab2=NY&fullname=A&b=1&page=0"
my_list.append(site)
site = "https://farm.ewg.org/addrsearch.php?stab2=NY&fullname=B&b=1&page=0"
my_list.append(site)
site = "https://farm.ewg.org/addrsearch.php?stab2=NY&fullname=C&b=1&page=0"
my_list.append(site)
for item in my_list:
    time.sleep(5)
    html = urlopen(item)
    bsObj = BeautifulSoup(html.read(), "html.parser")
    nameList = bsObj.prettify().split('.')
    count = 0
    for name in nameList:
        print(name[2:])
        outfile.write(name[2:] + ',' + item + '\n')
I am trying to split the page into smaller parts and go from there. I have used this code on sites like https://www.mtggoldfish.com/price/Aether+Revolt/Heart+of+Kiran#online, for example, and it worked.
Any ideas why it works for some sites and not others? Thanks so much.
The website in question probably disallows web scraping, which is why you get:
HTTPError: HTTP Error 403: Forbidden
You can spoof your user agent by pretending to be a browser. Here's an example of how to do it using the fantastic requests module: pass a User-Agent header when making the request.
import requests
from bs4 import BeautifulSoup

url = "https://farm.ewg.org/addrsearch.php?stab2=NY&fullname=A&b=1&page=0"
html = requests.get(url, headers={'User-Agent' : 'Mozilla/5.0'}).text
bsObj = BeautifulSoup(html, "html.parser")
print(bsObj)
Output:
<!DOCTYPE doctype html>
<html class="no-js" lang="en" prefix="og: http://ogp.me/ns#" xmlns="http://www.w3.org/1999/xhtml" xmlns:fb="http://ogp.me/ns/fb#">
<head>
<meta charset="utf-8"/>
.
.
.
You can massage this code into your loop now.
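A hedged sketch of what the full loop might look like with the spoofed User-Agent folded in (the output path is simplified here, and the prettify-split logic is kept verbatim from the question):
import time
import requests
from bs4 import BeautifulSoup

urls = [
    "https://farm.ewg.org/addrsearch.php?stab2=NY&fullname=A&b=1&page=0",
    "https://farm.ewg.org/addrsearch.php?stab2=NY&fullname=B&b=1&page=0",
    "https://farm.ewg.org/addrsearch.php?stab2=NY&fullname=C&b=1&page=0",
]
headers = {'User-Agent': 'Mozilla/5.0'}

with open('farm_data.text', 'w') as outfile:
    for url in urls:
        time.sleep(5)  # be polite between requests
        html = requests.get(url, headers=headers).text
        bsObj = BeautifulSoup(html, "html.parser")
        for name in bsObj.prettify().split('.'):
            outfile.write(name[2:] + ',' + url + '\n')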

Python: BeautifulSoup returning garbage

I am building a basic data crawler in Python using BeautifulSoup, for Batoto, the manga host. For some reason, the URL works sometimes and other times it doesn't. For example:
from bs4 import BeautifulSoup
from urllib2 import urlopen
x= urlopen(*manga url here*)
y = BeautifulSoup(x)
print y
The result should be a tag soup of the page, but instead I get a big wall of this
´ºŸ{›æP™oRhtüs2å÷%ëmßñ6Y›þ�GDŸ0Ë­͇켮Yé)–ÀØÅð&ô]½f³ÓÞ€Þþ)ú$÷á�üv…úzW¿¾úà†lªÀí¥ï«·_ OTL_ˆêsÁÿƒÁÖ<Ø?°Þ›Â+WLç¥àEh>rýÜ>x ˆ‡eÇžù»èå»–Ùý e:›§`L_.‹¦úoÓ‘®e=‰ìÓ4Wëo’]~Ãõ¬À8>x:²âœ2¸ Á|&0ÍVpMLÎñ»v¥Ín÷-ÅÉ–T§`Ì.SÔsóë„œ¡×[˜·P6»�ùè�>Ô¾È]Œ—·ú£âÊgí%ضkwýÃ=Üϸ2cïÑfÙ_�×]Õê“ž?„UÖ* m³/­`ñ§ÿL0³dµ·jªÅ}õ/õOXß×;«]®’ϯw‹·þ¡ÿ|Gýª`I{µœ}œí�ë–¼yÖÇ'�Wç�ëµÅþþ*ýœd{ÿDv:РíHzqÿÆ­÷æélG-èÈâpÇßQé´^ÐO´®Xÿ�ýö(‹šëñþ"4!SÃõ2{òÿÜ´»ûE</kî?x´&ý˜`Ù)uÂï¹ã[ÏŠ²y°kÆpù}¢></uŒ¸kpž¼cì∬ƒcubÆ¡¢=en2‚påÓb9®`áï|z…p"i6pvif¨þõ“⟒></t`$ò-e></cé”r)$�ˆ)ìªÜrd&mÉÊ*ßdÒuÄ.Æ-hx#9[s=m�Ýfd2o1ˆ]‡[Ôádœtë¤qâxæ°‹qËÁ×,½ŠmʇꇢùÅýl></sí°çù¡h?‡ÌÜœbá‰æÆý¡sd~¬></zz¡ózwÎ[à!n‰Àš5¤…¸‘ݹŽ></sÃ:›3Ìæ></lÑggu�».Б#4õë\ÃñÆ:¸5ÔwÛ·…)~ÛacÑ,d­³båÖ6></tg9y+wΉí%r8ƒ·}n`¼ÁÆ8˜”é²êÞ½°¶Ï></sÖ-di¨a±j9³4></ss„*w(ßibðïj*¶„)pâýÌ”a§%va{‰ò¦m mi></o³o˜Ÿ?¿Ñu-}{cÜ›a~:k²Ì></r+=ÅÌk˜c></wÓ¹âߊž‡ëf7vÑ�akÆ4ƒ‚></szŽµiÞêzâšÒ¬ú¢“âÀ#�-></qebndΑg*cxgsÆ€Ùüe¡³-ŠngÁ:�3ænæ5ï0`coäÏÖ9œ1Ða¯,æ—ªìàãÉÂð></j›h¶`à;)òiÖ š+></o”64ˆÎº9°��u—Úd¿ý¥pÎÖ‰0¢s:c�yƧ³t=ÕŸ“Ý‹41%}*,e³Ô¥ó></hiræe—';></v�fÞ«Ë¥n§Ð·¡kaììë\�`ùsõ©¸pv¦‘></bñ¼ut«w)Ø'¹ú#{)n0¡Žan¶Ë5èsª�–u–></y_x.mÅd:g}ëÕðhçð«õõ8ŠcËÕÌvž­v™-šêÙ`b¹˜ùÃΓçˤÔÙtx¹�ßïǶÎgþ°r‹$ò†aÆ–š?ì<y«Ëñõo{%ׇo{ú¥Á»æ]‡></u´¬Ø¸eÖïÝtßÚ'è3®nh±ûk4È#l«s]–Åec¹ÑtmÓl|ë£Þ¼~zôéõûwêÓÑñÉÆw\soøÊiyjvØÖ$¯ÈoºÙoyã]æ5]-t^[“¡aÑ{²Å¸6¦ðtŒçm¼ÂÎz´></wà™´»äõ#©õ></mÏu:=¼þ·'�qwúËö«m„l^ˆær¥30q±ÒšŸëù></l(„7¼=xi’?¤;ö$ØË4ßoóiòyoµxÉøþ¨—«g³Ãíß{|></body></html>
wrapped in html and body tags.
Sometimes if I keep trying it works, but it is so inconsistent that I can't figure out the reason for it.
Any help would be appreciated.
It seems to be urlopen having issues with the encoding; requests works fine:
import requests
from bs4 import BeautifulSoup

x = requests.get("http://bato.to/comic/_/comics/rakudai-kishi-no-eiyuutan-r11615")
y = BeautifulSoup(x.content)
print y
<!DOCTYPE html>
<html lang="en" xmlns:fb="http://www.facebook.com/2008/fbml">
<head>
<meta charset="utf-8"/>
<title>Rakudai Kishi no Eiyuutan - Scanlations - Comic - Comic Directory - Batoto - Batoto</title>
.................
Using urlopen instead, we get the following:
x = urlopen("http://bato.to/comic/_/comics/rakudai-kishi-no-eiyuutan-r11615")
print x.read()
���������s+I���2���l��9C<�� ^�����쾯�dw�xzNT%��,T��A^�ݫ���9��a��E�C���W!�����ڡϳ��f7���s2�Px$���}I�*�'��;'3O>���'g?�u®{����e.�ڇ�e{�u���jf:aث
�����DS��%��X�Zͮ���������9�:�Dx�����\-�
�*tBW������t�I���GQ�=�c��\:����u���S�V(�><y�C��ã�*:�ۜ?D��a�g�o�sPD�m�"�,�Ɲ<;v[��s���=��V2�fX��ì�Cj̇�В~�
-~����+;V���m�|kv���:V!�hP��D�K�/`oԣ|�k�5���B�{�0�wa�-���iS
�>�œ��gǿ�o�OE3jçCV<`���Q!��5�B��N��Ynd����?~��q���� _G����;T�S'�#΀��t��Ha�.;J�61'`Й�#���>>`��Z�ˠ�x�#� J*u��'���-����]p�9{>����������#�<-~�K"[AQh0HjP
0^��R�]�{N#��
...................
So, as you can see, it is a problem with urlopen, not BeautifulSoup.
The server is returning gzipped bytes. So to download the content using urllib2:
import urllib2
import gzip
import io
url = "http://bato.to/comic/_/comics/rakudai-kishi-no-eiyuutan-r11615"
response = urllib2.urlopen(url)
# print(response.headers)
content = response.read()
if response.headers['Content-Encoding'] == 'gzip':
    g = gzip.GzipFile(fileobj=io.BytesIO(content))
    content = g.read()
encoding = response.info().getparam('charset')
content = content.decode(encoding)
This checks that the content is the same as the page.text returned by requests:
import requests
page = requests.get(url)
# print(page.headers)
assert content == page.text
Since requests handles the gunzipping and decoding for you – and more robustly too – using requests is highly recommended.
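If you do have to stay with urllib2, another option is to ask the server not to compress in the first place. A sketch (whether the server honors Accept-Encoding: identity is up to the server, so treat this as an assumption):
import urllib2

url = "http://bato.to/comic/_/comics/rakudai-kishi-no-eiyuutan-r11615"
# Ask for an uncompressed response; the server may still choose otherwise
req = urllib2.Request(url, headers={'Accept-Encoding': 'identity'})
content = urllib2.urlopen(req).read()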

Issues with BeautifulSoup parsing

I am trying to parse an HTML page with BeautifulSoup, but it appears that BeautifulSoup doesn't like the HTML of that page at all. When I run the code below, the method prettify() returns only the script block of the page (see below). Does anybody have an idea why this happens?
import urllib2
from BeautifulSoup import BeautifulSoup
url = "http://www.futureshop.ca/catalog/subclass.asp?catid=10607&mfr=&logon=&langid=FR&sort=0&page=1"
html = "".join(urllib2.urlopen(url).readlines())
print "-- HTML ------------------------------------------"
print html
print "-- BeautifulSoup ---------------------------------"
print BeautifulSoup(html).prettify()
This is the output produced by BeautifulSoup.
-- BeautifulSoup ---------------------------------
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<script language="JavaScript">
<!--
function highlight(img) {
document[img].src = "/marketing/sony/images/en/" + img + "_on.gif";
}
function unhighlight(img) {
document[img].src = "/marketing/sony/images/en/" + img + "_off.gif";
}
//-->
</script>
Thanks!
UPDATE: I am using the following version, which appears to be the latest.
__author__ = "Leonard Richardson (leonardr#segfault.org)"
__version__ = "3.1.0.1"
__copyright__ = "Copyright (c) 2004-2009 Leonard Richardson"
__license__ = "New-style BSD"
Try version 3.0.7a, as Łukasz suggested. BeautifulSoup 3.1 was designed to be compatible with Python 3.0, so they had to change the parser from SGMLParser to HTMLParser, which seems more vulnerable to bad HTML.
From the changelog for BeautifulSoup 3.1:
"Beautiful Soup is now based on HTMLParser rather than SGMLParser, which is gone in Python 3. There's some bad HTML that SGMLParser handled but HTMLParser doesn't"
Try lxml. Despite its name, it is also for parsing and scraping HTML. It's much, much faster than BeautifulSoup, and it even handles "broken" HTML better than BeautifulSoup, so it might work better for you. It has a compatibility API for BeautifulSoup too if you don't want to learn the lxml API.
Ian Bicking agrees.
There's no reason to use BeautifulSoup anymore, unless you're on Google App Engine or something where anything not purely Python isn't allowed.
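For reference, a minimal lxml sketch against the same page (the XPath here is illustrative only; adjust it to the elements you actually need):
import urllib2
from lxml import html

url = "http://www.futureshop.ca/catalog/subclass.asp?catid=10607&mfr=&logon=&langid=FR&sort=0&page=1"
# lxml's HTML parser is tolerant of badly broken markup
tree = html.fromstring(urllib2.urlopen(url).read())
for link in tree.xpath('//a/@href'):
    print link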
BeautifulSoup isn't magic: if the incoming HTML is too horrible then it isn't going to work.
In this case, the incoming HTML is exactly that: too broken for BeautifulSoup to figure out what to do. For instance it contains markup like:
SCRIPT type=""javascript""
(Notice the double quoting.)
The BeautifulSoup docs contain a section on what you can do if BeautifulSoup can't parse your markup. You'll need to investigate those alternatives.
Samj: If I get things like
HTMLParser.HTMLParseError: bad end tag: u"</scr' + 'ipt>"
I just remove the culprit from markup before I serve it to BeautifulSoup and all is dandy:
html = urllib2.urlopen(url).read()
html = html.replace("</scr' + 'ipt>","")
soup = BeautifulSoup(html)
I had problems parsing the following code too:
<script>
function show_ads() {
document.write("<div><sc"+"ript type='text/javascript'src='http://pagead2.googlesyndication.com/pagead/show_ads.js'></scr"+"ipt></div>");
}
</script>
HTMLParseError: bad end tag: u'', at line 26, column 127
Sam
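A more general form of the same workaround is to cut out every script block before handing the markup to BeautifulSoup, since scripts are where these fake end tags live. A hedged sketch (the URL is a placeholder, and dropping the scripts is usually acceptable for scraping):
import re
import urllib2
from BeautifulSoup import BeautifulSoup

url = "http://example.com/page-with-bad-scripts"  # hypothetical URL
html = urllib2.urlopen(url).read()
# Remove <script>...</script> blocks so fake end tags like
# "</scr' + 'ipt>" never reach the parser
html = re.sub(r"(?is)<script.*?</script>", "", html)
soup = BeautifulSoup(html)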
I tested this script with BeautifulSoup version '3.0.7a' and it returns what appears to be correct output. I don't know what changed between '3.0.7a' and '3.1.0.1', but give it a try:
>>> import urllib
>>> from BeautifulSoup import BeautifulSoup
>>> page = urllib.urlopen('http://www.futureshop.ca/catalog/subclass.asp?catid=10607&mfr=&logon=&langid=FR&sort=0&page=1')
>>> soup = BeautifulSoup(page)
>>> soup.prettify()
In my case, executing the above statements returns the entire HTML page.
