I am building a basic data crawler in Python using BeautifulSoup, for Batoto, the manga host. For some reason, the URL works sometimes and other times it doesn't. For example:
from bs4 import BeautifulSoup
from urllib2 import urlopen
x = urlopen(*manga url here*)
y = BeautifulSoup(x)
print y
The result should be a tag soup of the page, but instead I get a big wall of this:
´ºŸ{›æP™oRhtüs2å÷%ëmßñ6Y›þ�GDŸ0ËÂ͇켮Yé)–ÀØÅð&ô]½f³ÓÞ€Þþ)ú$÷á�üv…úzW¿¾úà†lªÀí¥ï«·_ OTL_ˆêsÁÿƒÁÖ<Ø?°Þ›Â+WLç¥àEh>rýÜ>x ˆ‡eÇžù»èå»–Ùý e:›§`L_.‹¦úoÓ‘®e=‰ìÓ4Wëo’]~Ãõ¬À8>x:²âœ2¸ ... </body></html>
wrapped in html and body tags.
Sometimes if I keep trying it works, but it is so inconsistent that I can't figure out the reason for it.
Any help would be appreciated.
It seems to be urlopen having issues with the encoding; requests works fine:
import requests
from bs4 import BeautifulSoup

x = requests.get("http://bato.to/comic/_/comics/rakudai-kishi-no-eiyuutan-r11615")
y = BeautifulSoup(x.content)
print y
<!DOCTYPE html>
<html lang="en" xmlns:fb="http://www.facebook.com/2008/fbml">
<head>
<meta charset="utf-8"/>
<title>Rakudai Kishi no Eiyuutan - Scanlations - Comic - Comic Directory - Batoto - Batoto</title>
.................
Using urlopen we get the following:
x = urlopen("http://bato.to/comic/_/comics/rakudai-kishi-no-eiyuutan-r11615")
print x.read()
���������s+I���2���l��9C<�� ^�����쾯�dw�xzNT%��,T��A^�ݫ���9��a��E�C���W!�����ڡϳ��f7���s2�Px$���}I�*�'��;'3O>���'g?�u®{����e.�ڇ�e{�u���jf:aث
�����DS��%��X�Zͮ���������9�:�Dx�����\-�
�*tBW������t�I���GQ�=�c��\:����u���S�V(�><y�C��ã�*:�ۜ?D��a�g�o�sPD�m�"�,�Ɲ<;v[��s���=��V2�fX��ì�Cj̇�В~�
-~����+;V���m�|kv���:V!�hP��D�K�/`oԣ|�k�5���B�{�0�wa�-���iS
�>�œ��gǿ�o�OE3jçCV<`���Q!��5�B��N��Ynd����?~��q���� _G����;T�S'�#��t��Ha�.;J�61'`Й�#���>>`��Z�ˠ�x�#� J*u��'���-����]p�9{>����������#�<-~�K"[AQh0HjP
0^��R�]�{N#��
...................
So, as you can see, it is a problem with urlopen, not BeautifulSoup.
The server is returning gzipped bytes. So, to download the content using urllib2, you have to gunzip and decode it yourself:
import urllib2
import gzip
import io

url = "http://bato.to/comic/_/comics/rakudai-kishi-no-eiyuutan-r11615"
response = urllib2.urlopen(url)
# print(response.headers)
content = response.read()

# The body is gzip-compressed, so gunzip it first
if response.headers['Content-Encoding'] == 'gzip':
    g = gzip.GzipFile(fileobj=io.BytesIO(content))
    content = g.read()

# Then decode the bytes using the charset from the Content-Type header
encoding = response.info().getparam('charset')
content = content.decode(encoding)
This checks that the content is the same as the page.text returned by requests:
import requests
page = requests.get(url)
# print(page.headers)
assert content == page.text
Since requests handles the gunzipping and decoding for you, and does it more robustly, using it is highly recommended.
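For comparison, here is a minimal sketch of the same download done with requests (using the URL from the question); it transparently gunzips the body and decodes it using the charset from the response headers:
import requests

url = "http://bato.to/comic/_/comics/rakudai-kishi-no-eiyuutan-r11615"
page = requests.get(url)

# requests has already gunzipped the body and decoded it to text
print(page.headers.get('Content-Encoding'))  # e.g. 'gzip' if the server compressed the response
print(page.encoding)                         # charset taken from the Content-Type header
print(page.text[:200])                       # decoded text, ready to hand to BeautifulSoup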
Related
I'm trying to get BeautifulSoup to read this page, but the URL is not passed correctly into the get() command.
The URL is https://www.econjobrumors.com/topic/supreme-court-to-%e2%80%9cconsider%e2%80%9d-taking-up-harvard-affirmative-action-case-on-june-10, but when I try to use BeautifulSoup to get the data from the URL it always gives an error saying that the URL is incorrect:
import requests

response = requests.get(url="https://www.econjobrumors.com/topic/supreme-court-to-%e2%80%9cconsider%e2%80%9d-taking-up-harvard-affirmative-action-case-on-june-10",
                        verify=False)
print(response.request.url, end="\r")
It was the double quotes, “ (U+201C) and ” (U+201D), that caused the error. I've been trying for hours but still haven't figured out a way to pass the URL correctly.
I changed the double quotes to single quotes around the URL:
from bs4 import BeautifulSoup
import requests
url = 'https://www.econjobrumors.com/topic/supreme-court-to-%e2%80%9cconsider%e2%80%9d-taking-up-harvard-affirmative-action-case-on-june-10'
r = requests.get(url, allow_redirects=False)
soup = BeautifulSoup(r.content, 'lxml')
print(soup)
This prints out the HTML as expected (I edited it down to fit in this answer):
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
<html xml:lang="en-US" xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta content="IE=8" http-equiv="X-UA-Compatible"/>
<ALL THE CONTENT>Too much to paste in the answer</ALL THE CONTENT>
</html>
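The root cause was curly quotes pasted into the source as the string delimiters. If smart quotes instead end up inside the URL string itself (for example, when it is read from a file), a small normalisation helper can scrub them before the request. This is only a hypothetical sketch, not part of the answer above:
import requests

def normalise_pasted_url(url):
    # U+201C/U+201D and U+2018/U+2019 are the usual copy-paste culprits (hypothetical helper)
    for smart, plain in {"\u201c": '"', "\u201d": '"', "\u2018": "'", "\u2019": "'"}.items():
        url = url.replace(smart, plain)
    return url.strip('"\'')  # drop stray quote characters wrapping the URL

url = normalise_pasted_url("\u201chttps://www.econjobrumors.com/topic/supreme-court-to-%e2%80%9cconsider%e2%80%9d-taking-up-harvard-affirmative-action-case-on-june-10\u201d")
response = requests.get(url, allow_redirects=False)
print(response.status_code)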
I am trying to scrape this site for information:
https://farm.ewg.org/addrsearch.php?stab2=NY&fullname=B&b=1&page=0
I tried writing code that has worked for other sites, but it just leaves me with an empty text file instead of one filling up with data like it has for other sites. Here is my code:
import urllib
from urllib.request import urlopen
from bs4 import BeautifulSoup
import re
import json
import time
outfile = open('/Users/Luca/Desktop/test/farm_data.text','w')
my_list = list()
site = "https://farm.ewg.org/addrsearch.php?stab2=NY&fullname=A&b=1&page=0"
my_list.append(site)
site = "https://farm.ewg.org/addrsearch.php?stab2=NY&fullname=B&b=1&page=0"
my_list.append(site)
site = "https://farm.ewg.org/addrsearch.php?stab2=NY&fullname=C&b=1&page=0"
my_list.append(site)
for item in my_list:
    time.sleep(5)
    html = urlopen(item)
    bsObj = BeautifulSoup(html.read(), "html.parser")

    nameList = bsObj.prettify().split('.')
    count = 0
    for name in nameList:
        print(name[2:])
        outfile.write(name[2:] + ',' + item + '\n')
I am trying to split it into smaller parts and go from there. I have used this code on sites like https://www.mtggoldfish.com/price/Aether+Revolt/Heart+of+Kiran#online, for example, and it worked.
Any ideas why it works for some sites and not others? Thanks so much.
The website in question probably disallows web scraping, which is why you get:
HTTPError: HTTP Error 403: Forbidden
You can spoof your user agent by pretending to be a browser. Here's an example of how to do it using the fantastic requests module: you pass a User-Agent header when making the request.
import requests
from bs4 import BeautifulSoup

url = "https://farm.ewg.org/addrsearch.php?stab2=NY&fullname=A&b=1&page=0"
html = requests.get(url, headers={'User-Agent': 'Mozilla/5.0'}).text
bsObj = BeautifulSoup(html, "html.parser")
print(bsObj)
Output:
<!DOCTYPE doctype html>
<html class="no-js" lang="en" prefix="og: http://ogp.me/ns#" xmlns="http://www.w3.org/1999/xhtml" xmlns:fb="http://ogp.me/ns/fb#">
<head>
<meta charset="utf-8"/>
.
.
.
You can massage this code into your loop now.
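For example, a sketch of the loop from the question reworked to use requests with the spoofed User-Agent (the file path and the 5-second delay are kept from the original code):
import time
import requests
from bs4 import BeautifulSoup

my_list = [
    "https://farm.ewg.org/addrsearch.php?stab2=NY&fullname=A&b=1&page=0",
    "https://farm.ewg.org/addrsearch.php?stab2=NY&fullname=B&b=1&page=0",
    "https://farm.ewg.org/addrsearch.php?stab2=NY&fullname=C&b=1&page=0",
]

with open('/Users/Luca/Desktop/test/farm_data.text', 'w') as outfile:
    for item in my_list:
        time.sleep(5)  # be polite between requests
        html = requests.get(item, headers={'User-Agent': 'Mozilla/5.0'}).text
        bsObj = BeautifulSoup(html, "html.parser")
        for name in bsObj.prettify().split('.'):
            print(name[2:])
            outfile.write(name[2:] + ',' + item + '\n')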
I am trying to get all the URLs on a website using Python. At the moment I am just copying the website's HTML into the Python program and then using code to extract all the URLs. Is there a way I could do this straight from the web without having to copy the entire HTML?
In Python 2, you can use urllib2.urlopen:
import urllib2
response = urllib2.urlopen('http://python.org/')
html = response.read()
In Python 3, you can use urllib.request.urlopen:
import urllib.request
with urllib.request.urlopen('http://python.org/') as response:
    html = response.read()
If you have to perform more complicated tasks like authentication or passing parameters, I suggest having a look at the requests library.
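For instance, a small sketch of what that looks like with requests (the endpoint and credentials are placeholders):
import requests

response = requests.get(
    "https://httpbin.org/get",          # placeholder endpoint
    params={"q": "python", "page": 1},  # query parameters are URL-encoded for you
    auth=("user", "passw0rd"),          # HTTP basic auth (placeholder credentials)
    timeout=10,
)
print(response.status_code)
print(response.url)  # the fully encoded URL that was actually requested
html = response.text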
The most straightforward would probably be urllib.urlopen if you're using Python 2, or urllib.request.urlopen if you're using Python 3 (you have to do import urllib or import urllib.request first, of course). That way you get a file-like object from which you can read (i.e. f.read()) the HTML document.
Example for Python 2:
import urllib

f = urllib.urlopen("http://stackoverflow.com")
http_document = f.read()
f.close()
The good news is that you seem to have done the hard part, which is analyzing the HTML document for links.
You might want to use the bs4 (BeautifulSoup) library.
Beautiful Soup is a Python library for pulling data out of HTML and XML files.
You can install bs4 with the following command at the command line: pip install BeautifulSoup4
import urllib2
import urlparse
from bs4 import BeautifulSoup
url = "http://www.google.com"
response = urllib2.urlopen(url)
content = response.read()
soup = BeautifulSoup(content, "html.parser")
for link in soup.find_all('a', href=True):
    print urlparse.urljoin(url, link['href'])
You can simply use the combination of requests and BeautifulSoup.
First make an HTTP request using requests to get the HTML content. You will get it as a Python string, which you can manipulate as you like.
Then take the HTML content string and supply it to BeautifulSoup, which does all the work of parsing the DOM, and get all the URLs, i.e. <a> elements.
Here is an example of how to fetch all links from StackOverflow:
import requests
from bs4 import BeautifulSoup, SoupStrainer
response = requests.get('http://stackoverflow.com')
html_str = response.text
bs = BeautifulSoup(html_str, 'html.parser', parse_only=SoupStrainer('a'))
for a_element in bs:
    if a_element.has_attr('href'):
        print(a_element['href'])
Sample output:
/questions/tagged/facebook-javascript-sdk
/questions/31743507/facebook-app-request-dialog-keep-loading-on-mobile-after-fb-login-called
/users/3545752/user3545752
/questions/31743506/get-nuspec-file-for-existing-nuget-package
/questions/tagged/nuget
...
I want to extract specific URLs from an HTML page.
from urllib2 import urlopen
import re
from bs4 import BeautifulSoup
url = "http://bassrx.tumblr.com/tagged/tt"  # nsfw link
page = urlopen(url)
html = page.read() # get the html from the url
# this works without BeautifulSoup, but it is slow:
image_links = re.findall("src.\"(\S*?media.tumblr\S*?tumblr_\S*?jpg)", html)
print image_links
The output of the above is exactly the URL, nothing else: http://38.media.tumblr.com/tumblr_ln5gwxHYei1qi02clo1_500.jpg
The only downside is it is very slow.
BeautifulSoup is extremely fast at parsing HTML, so that's why I want to use it.
The URLs that I want are actually the img src values. Here's a snippet from the HTML that contains the information I want.
<div class="media"><a href="http://bassrx.tumblr.com/image/85635265422">
<img src="http://38.media.tumblr.com/tumblr_ln5gwxHYei1qi02clo1_500.jpg"/>
</a></div>
So, my question is, how can I get BeautifulSoup to extract all of those 'img src' URLs cleanly without any other cruft?
I just want a list of matching URLs. I've been trying to use the soup.findAll() function, but cannot get any useful results.
from urllib2 import urlopen
from bs4 import BeautifulSoup
url = 'http://bassrx.tumblr.com/tagged/tt'
soup = BeautifulSoup(urlopen(url).read())
for element in soup.findAll('img'):
    print(element.get('src'))
You can use the div.media > a > img CSS selector to find img tags inside an a tag which is inside a div tag with the media class:
from urllib2 import urlopen
from bs4 import BeautifulSoup
url = "<url_here>"
soup = BeautifulSoup(urlopen(url))
images = soup.select('div.media > a > img')
print [image.get('src') for image in images]
In order to make the parsing faster you can use the lxml parser:
soup = BeautifulSoup(urlopen(url), "lxml")
You need to install the lxml module first, of course.
Also, you can make use of the SoupStrainer class for parsing only the relevant part of the document.
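For example, a minimal sketch where only the img tags are parsed at all:
from urllib2 import urlopen
from bs4 import BeautifulSoup, SoupStrainer

url = "<url_here>"
# Only <img> tags are turned into soup objects; everything else is skipped during parsing
only_img_tags = SoupStrainer("img")
soup = BeautifulSoup(urlopen(url), "lxml", parse_only=only_img_tags)
print [img.get('src') for img in soup.find_all('img')]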
Hope that helps.
Have a look at the BeautifulSoup.find_all with re.compile mix:
from urllib2 import urlopen
import re
from bs4 import BeautifulSoup
url = "http://bassrx.tumblr.com/tagged/tt" # nsfw link
page = urlopen(url)
html = page.read()
bs = BeautifulSoup(html)
a_tumblr = [a_element for a_element in bs.find_all(href=re.compile("media\.tumblr"))]
##[<link href="http://37.media.tumblr.com/avatar_df3a9e37c757_128.png" rel="shortcut icon"/>, <link href="http://37.media.tumblr.com/avatar_df3a9e37c757_128.png" rel="apple-touch-icon"/>]
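Since the question is after the img src values specifically, the same find_all plus re.compile idea can be pointed at the src attribute instead; a sketch under the same setup:
from urllib2 import urlopen
import re
from bs4 import BeautifulSoup

bs = BeautifulSoup(urlopen("http://bassrx.tumblr.com/tagged/tt").read())
# Filter on the src attribute of <img> tags instead of href
image_links = [img['src'] for img in bs.find_all('img', src=re.compile("media\.tumblr"))]
print image_links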
I can get the HTML page using urllib, and use BeautifulSoup to parse it, but it looks like I have to generate a file to be read by BeautifulSoup.
import urllib
sock = urllib.urlopen("http://SOMEWHERE")
htmlSource = sock.read()
sock.close()
--> write to file
Is there a way to call BeautifulSoup without generating a file from urllib?
from BeautifulSoup import BeautifulSoup
soup = BeautifulSoup(htmlSource)
No file writing needed: Just pass in the HTML string. You can also pass the object returned from urlopen directly:
f = urllib.urlopen("http://SOMEWHERE")
soup = BeautifulSoup(f)
You could open the URL, download the HTML, and make it parse-able in one shot with gazpacho:
from gazpacho import Soup
soup = Soup.get("https://www.example.com/")
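As a follow-up sketch (assuming a recent gazpacho where find accepts mode="all" and elements expose an attrs dict), pulling the links out looks like this:
from gazpacho import Soup

soup = Soup.get("https://www.example.com/")
# mode="all" forces a list even when only one <a> tag matches (assumed gazpacho API)
links = [a.attrs.get("href") for a in soup.find("a", mode="all")]
print(links)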