Error crawling wikipedia - python

According to the answer by @Jens Timmerman on this post: Extract the first paragraph from a Wikipedia article (Python)
I did this:
import urllib2

def getPage(url):
    opener = urllib2.build_opener()
    opener.addheaders = [('User-agent', 'Mozilla/5.0')]  # Wikipedia needs this
    resource = opener.open("http://en.wikipedia.org/wiki/" + url)
    data = resource.read()
    resource.close()
    return data

print getPage('Steve_Jobs')
Technically it should run properly and give me the source of the page, but here's what I get instead: a screenful of strange, unreadable characters.
Any help would be appreciated.

After checking with wget and curl, I saw that this isn't a problem specific to Python: they too got the "strange" characters. A quick check with file tells me that the response is simply gzip-compressed, so it seems that Wikipedia just sends gzipped data by default, without checking whether the client actually claims to support it in the request.
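For what it's worth, the same check can be done from Python itself. Here is a minimal diagnostic sketch (reusing the opener setup from the question) that looks at the Content-Encoding header and the gzip magic bytes:
import urllib2

# Minimal diagnostic sketch: check the Content-Encoding header and the
# gzip magic bytes 0x1f 0x8b to confirm the body is gzip-compressed.
opener = urllib2.build_opener()
opener.addheaders = [('User-agent', 'Mozilla/5.0')]
resource = opener.open("http://en.wikipedia.org/wiki/Steve_Jobs")
data = resource.read()
print resource.info().get('Content-Encoding')  # 'gzip' when the body is compressed
print data[:2] == '\x1f\x8b'                   # True for a gzip payload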
Fortunately, Python is capable of decompressing gzipped data; integrating your code with this answer, you get:
import urllib2
from StringIO import StringIO
import gzip

def getPage(url):
    opener = urllib2.build_opener()
    opener.addheaders = [('User-agent', 'MyTestScript/1.0 (contact at myscript#mysite.com)'),
                         ('Accept-encoding', 'gzip')]
    resource = opener.open("http://en.wikipedia.org/wiki/" + url)
    if resource.info().get('Content-Encoding') == 'gzip':
        buf = StringIO(resource.read())
        f = gzip.GzipFile(fileobj=buf)
        return f.read()
    else:
        return resource.read()

print getPage('Steve_Jobs')
which works just fine on my machine.
Still, as already pointed out in the comments, you should probably avoid this kind of brute-force crawling; if you want to access Wikipedia content programmatically, use their APIs.
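For example, here is a minimal sketch of pulling the plain-text intro of the article through the MediaWiki API (the action=query / prop=extracts endpoint); the user-agent string is just the placeholder used above:
import json
import urllib
import urllib2

# Minimal sketch: fetch the plain-text intro via the MediaWiki API
# (action=query, prop=extracts) instead of scraping the article HTML.
params = urllib.urlencode({
    'action': 'query', 'prop': 'extracts', 'exintro': 1, 'explaintext': 1,
    'titles': 'Steve_Jobs', 'format': 'json',
})
opener = urllib2.build_opener()
opener.addheaders = [('User-agent', 'MyTestScript/1.0 (contact at myscript#mysite.com)')]
resource = opener.open('http://en.wikipedia.org/w/api.php?' + params)
page = json.load(resource)['query']['pages'].values()[0]
print page['extract']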

Related

Get value from online xml

I want to get the value of the 'latest' version tag from here: https://papermc.io/repo/repository/maven-public/com/destroystokyo/paper/paper-api/maven-metadata.xml
I tried using this Python code:
import urllib.request
from xml.etree import ElementTree
opener = urllib.request.build_opener()
opener.addheaders = [('User-agent', 'Mozilla/5.0')]
data = opener.open('https://papermc.io/repo/repository/maven-public/com/destroystokyo/paper/paper-api/maven-metadata.xml').
root = ElementTree.fromstring(data)
versioning = root.find("versioning")
latest = versioning.find("latest")
snip.rv = latest.text
The problem is, using this inside of vim (I'm trying to make UltiSnips snippets with it) makes the whole of vim extremely slow after the code has finished running.
What's causing my program to slow down just when I add that ^^ code?
I don't know if this will solve the performance issue in vim, but the code was not running for me due to errors in it.
opener.open returns a file-like object, so you should read it using
ElementTree.parse instead of ElementTree.fromstring (there is actually a trailing dot after opener.open(...), so I don't know whether you meant to call read() there; in that case the return value would indeed be a string).
Apart from that, you could try closing the opener to see if that frees up some resources (or use a with statement).
I attach an example of the improved code:
import urllib.request
from xml.etree import ElementTree

opener = urllib.request.build_opener()
opener.addheaders = [('User-agent', 'Mozilla/5.0')]
with opener.open('https://papermc.io/repo/repository/maven-public/com/destroystokyo/paper/paper-api/maven-metadata.xml') as data:
    root = ElementTree.parse(data)
latest = root.find("./versioning/latest")
snip.rv = latest.text

How to modify Pandas's Read_html user-agent?

I'm trying to scrape English football stats from various HTML tables on the Transfermarkt website using the pandas.read_html() function.
Example:
import pandas as pd
url = r'http://www.transfermarkt.co.uk/en/premier-league/gegentorminuten/wettbewerb_GB1.html'
df = pd.read_html(url)
However, this code generates a "ValueError: Invalid URL" error.
I then attempted to parse the same website using the urllib2.urlopen() function. This time I got an "HTTPError: HTTP Error 404: Not Found". After the usual trial-and-error fault finding, it turns out that the urllib2 header presents a Python-like user agent to the web server, which I presume it doesn't recognize.
Now, if I modify urllib2's agent and read the contents using BeautifulSoup, I'm able to read the table without a problem.
Example:
from BeautifulSoup import BeautifulSoup
import urllib2
opener = urllib2.build_opener()
opener.addheaders = [('User-agent', 'Mozilla/5.0')]
url = r'http://www.transfermarkt.co.uk/en/premier-league/gegentorminuten/wettbewerb_GB1.html'
response = opener.open(url)
html = response.read()
soup = BeautifulSoup(html)
table = soup.find("table")
How do I modify pandas's urllib2 header to allow python to scrape this website?
Thanks
Currently you cannot. Relevant piece of code:
if _is_url(io):  # io is the url
    try:
        with urlopen(io) as url:
            raw_text = url.read()
    except urllib2.URLError:
        raise ValueError('Invalid URL: "{0}"'.format(io))
As you can see, it just passes the URL to urlopen and reads the data. You can file an issue requesting this feature, but I assume you don't have time to wait for it to be solved, so I would suggest using BeautifulSoup to parse the HTML data and then loading it into a DataFrame.
import urllib2
import pandas as pd

url = 'http://www.transfermarkt.co.uk/en/premier-league/gegentorminuten/wettbewerb_GB1.html'
opener = urllib2.build_opener()
opener.addheaders = [('User-agent', 'Mozilla/5.0')]
response = opener.open(url)
tables = pd.read_html(response.read(), attrs={"class": "tabelle_grafik"})[0]
Or, if you can use requests:
import requests

tables = pd.read_html(requests.get(url, headers={'User-agent': 'Mozilla/5.0'}).text,
                      attrs={"class": "tabelle_grafik"})[0]

Trouble Getting a clean text file from HTML

I have looked at these previous questions
I am trying to consolidate news and notes from websites.
Reputable news websites allow users to post comments and views.
I am trying to get only the news content, without the user comments. I tried working with BeautifulSoup and html2text, but the user comments keep ending up in the text file. I have even tried developing a custom program, but made no more useful progress than with those two.
Can anybody provide some clue how to proceed?
The code:
import urllib2
from bs4 import BeautifulSoup

URL = 'http://www.example.com'
print 'Following: ', URL
print "Loading..."

user_agent = 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)'
identify_as = {'User-Agent': user_agent}

print "Reading URL:" + str(URL)

def process(URL, identify_as):
    req = urllib2.Request(URL, data=None, headers=identify_as)
    response = urllib2.urlopen(req)
    _BSobj = BeautifulSoup(response).prettify(encoding='utf-8')
    return _BSobj  # return BeautifulSoup object

print 'Processing URL...'
new_string = process(URL, identify_as).split()

print 'Building requested text'
tagB = ['<title>', '<p>']
tagC = ['</title>', '</p>']
reqText = []

for num in xrange(len(new_string)):
    buffText = []  # initialize and reset
    if new_string[num] in tagB:
        tag = tagB.index(new_string[num])
        while new_string[num] != tagC[tag]:
            buffText.append(new_string[num])
            num += 1
        reqText.extend(buffText)

reqText = ''.join(reqText)
fileID = open('reqText.txt', 'w')
fileID.write(reqText)
fileID.close()
Here's a quick example I wrote using urllib that saves the contents of a page to a file:
import urllib.request

myurl = "http://www.mysite.com"
sock = urllib.request.urlopen(myurl)
pagedata = sock.read().decode('utf-8', errors='replace')
sock.close()

file = open("output.txt", "w")
file.write(pagedata)
file.close()
Then with a lot of string formatting you should be able to extract the parts of the html you want. This gives you something to get started from.
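For instance, here is a minimal sketch (assuming bs4 is available, as in the question) that skips the manual string formatting and pulls just the title and paragraph text into a file; the URL and output filename mirror the placeholders above:
import urllib.request
from bs4 import BeautifulSoup

# Minimal sketch: let BeautifulSoup pull only the <title> and <p> text,
# which is roughly what the hand-rolled tag loop in the question is after.
req = urllib.request.Request("http://www.mysite.com",
                             headers={"User-Agent": "Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)"})
with urllib.request.urlopen(req) as sock:
    soup = BeautifulSoup(sock.read(), "html.parser")
pieces = [soup.title.get_text()] + [p.get_text() for p in soup.find_all("p")]
with open("reqText.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(pieces))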

How to download image using requests

I'm trying to download and save an image from the web using python's requests module.
Here is the (working) code I used:
img = urllib2.urlopen(settings.STATICMAP_URL.format(**data))
with open(path, 'w') as f:
    f.write(img.read())
Here is the new (non-working) code using requests:
r = requests.get(settings.STATICMAP_URL.format(**data))
if r.status_code == 200:
    img = r.raw.read()
    with open(path, 'w') as f:
        f.write(img)
Can you help me figure out which attribute of the requests response I should use?
You can either use the response.raw file object, or iterate over the response.
The response.raw file-like object will not, by default, decode compressed responses (with GZIP or deflate). You can force it to decompress for you anyway by setting the decode_content attribute to True (requests sets it to False to control decoding itself). You can then use shutil.copyfileobj() to have Python stream the data to a file object:
import requests
import shutil

r = requests.get(settings.STATICMAP_URL.format(**data), stream=True)
if r.status_code == 200:
    with open(path, 'wb') as f:
        r.raw.decode_content = True
        shutil.copyfileobj(r.raw, f)
To iterate over the response use a loop; iterating like this ensures that data is decompressed by this stage:
r = requests.get(settings.STATICMAP_URL.format(**data), stream=True)
if r.status_code == 200:
    with open(path, 'wb') as f:
        for chunk in r:
            f.write(chunk)
This will read the data in 128-byte chunks; if you feel another chunk size works better, use the Response.iter_content() method with a custom chunk size:
r = requests.get(settings.STATICMAP_URL.format(**data), stream=True)
if r.status_code == 200:
    with open(path, 'wb') as f:
        for chunk in r.iter_content(1024):
            f.write(chunk)
Note that you need to open the destination file in binary mode to ensure Python doesn't try to translate newlines for you. We also set stream=True so that requests doesn't download the whole image into memory first.
Get a file-like object from the request and copy it to a file. This will also avoid reading the whole thing into memory at once.
import shutil
import requests
url = 'http://example.com/img.png'
response = requests.get(url, stream=True)
with open('img.png', 'wb') as out_file:
    shutil.copyfileobj(response.raw, out_file)
del response
How about this, a quick solution.
import requests

url = "http://craphound.com/images/1006884_2adf8fc7.jpg"
response = requests.get(url)
if response.status_code == 200:
    with open("/Users/apple/Desktop/sample.jpg", 'wb') as f:
        f.write(response.content)
I have the same need for downloading images using requests. I first tried the answer by Martijn Pieters, and it works well. But when I profiled this simple function, I found that it makes a huge number of function calls compared to urllib and urllib2.
I then tried the way recommended by the author of the requests module:
import requests
from PIL import Image
# Python 2.x: from StringIO import StringIO
# Python 3.x: use BytesIO, since r.content is bytes
from io import BytesIO

r = requests.get('https://example.com/image.jpg')
i = Image.open(BytesIO(r.content))
This reduced the number of function calls considerably and thus sped up my application.
Here is my profiling code and the results.
#!/usr/bin/python
import requests
from StringIO import StringIO
from PIL import Image
import profile

def testRequest():
    image_name = 'test1.jpg'
    url = 'http://example.com/image.jpg'
    r = requests.get(url, stream=True)
    with open(image_name, 'wb') as f:
        for chunk in r.iter_content():
            f.write(chunk)

def testRequest2():
    image_name = 'test2.jpg'
    url = 'http://example.com/image.jpg'
    r = requests.get(url)
    i = Image.open(StringIO(r.content))
    i.save(image_name)

if __name__ == '__main__':
    profile.run('testRequest()')
    profile.run('testRequest2()')
The result for testRequest:
343080 function calls (343068 primitive calls) in 2.580 seconds
And the result for testRequest2:
3129 function calls (3105 primitive calls) in 0.024 seconds
This might be easier than using requests. This is the only time I'll ever suggest not using requests to do HTTP stuff.
Two-liner using urllib:
>>> import urllib.request
>>> urllib.request.urlretrieve("http://www.example.com/songs/mp3.mp3", "mp3.mp3")
There is also a nice Python module named wget that is pretty easy to use. Found here.
This demonstrates the simplicity of the design:
>>> import wget
>>> url = 'http://www.futurecrew.com/skaven/song_files/mp3/razorback.mp3'
>>> filename = wget.download(url)
100% [................................................] 3841532 / 3841532
>>> filename
'razorback.mp3'
Enjoy.
Edit: You can also add an out parameter to specify a path.
>>> out_filepath = <output_filepath>
>>> filename = wget.download(url, out=out_filepath)
The following code snippet downloads a file.
The file is saved with the filename taken from the specified URL.
import requests

url = "http://example.com/image.jpg"
filename = url.split("/")[-1]
r = requests.get(url, timeout=0.5)
if r.status_code == 200:
    with open(filename, 'wb') as f:
        f.write(r.content)
There are 2 main ways:
Using .content (simplest/official) (see Zhenyi Zhang's answer):
import io  # Note: io.BytesIO is StringIO.StringIO on Python 2.
import requests
from PIL import Image

r = requests.get('http://lorempixel.com/400/200')
r.raise_for_status()
with io.BytesIO(r.content) as f:
    with Image.open(f) as img:
        img.show()
Using .raw (see Martijn Pieters's answer):
import requests
import PIL.Image

r = requests.get('http://lorempixel.com/400/200', stream=True)
r.raise_for_status()
r.raw.decode_content = True  # Required to decompress gzip/deflate compressed responses.
with PIL.Image.open(r.raw) as img:
    img.show()
r.close()  # Safety when stream=True: ensures the connection is released.
Timing both shows no noticeable difference.
It is as easy as importing Image and requests:
from PIL import Image
import requests
img = Image.open(requests.get(url, stream = True).raw)
img.save('img1.jpg')
This is how I did it
import requests
from PIL import Image
from io import BytesIO
url = 'your_url'
files = {'file': ("C:/Users/shadow/Downloads/black.jpeg", open('C:/Users/shadow/Downloads/black.jpeg', 'rb'),'image/jpg')}
response = requests.post(url, files=files)
img = Image.open(BytesIO(response.content))
img.show()
Here is a more user-friendly answer that still uses streaming.
Just define these functions and call getImage(). It will use the same file name as the url and write to the current directory by default, but both can be changed.
import requests
from StringIO import StringIO
from PIL import Image

def createFilename(url, name, folder):
    dotSplit = url.split('.')
    if name is None:
        # use the same name as in the url
        slashSplit = dotSplit[-2].split('/')
        name = slashSplit[-1]
    ext = dotSplit[-1]
    file = '{}{}.{}'.format(folder, name, ext)
    return file

def getImage(url, name=None, folder='./'):
    file = createFilename(url, name, folder)
    with open(file, 'wb') as f:
        r = requests.get(url, stream=True)
        for block in r.iter_content(1024):
            if not block:
                break
            f.write(block)

def getImageFast(url, name=None, folder='./'):
    file = createFilename(url, name, folder)
    r = requests.get(url)
    i = Image.open(StringIO(r.content))
    i.save(file)

if __name__ == '__main__':
    # Uses less memory
    getImage('http://www.example.com/image.jpg')
    # Faster
    getImageFast('http://www.example.com/image.jpg')
The request guts of getImage() are based on the answer here and the guts of getImageFast() are based on the answer above.
I'm going to post an answer as I don't have enough rep to make a comment, but with wget as posted by Blairg23, you can also provide an out parameter for the path.
wget.download(url, out=path)
This is the first result that comes up in a Google search for how to download a binary file with requests. In case you need to download an arbitrary file with requests, you can use:
import requests
url = 'https://s3.amazonaws.com/lab-data-collections/GoogleNews-vectors-negative300.bin.gz'
open('GoogleNews-vectors-negative300.bin.gz', 'wb').write(requests.get(url, allow_redirects=True).content)
My approach was to use response.content (the raw bytes) and save it to the file in binary mode:
import requests

img_blob = requests.get(url, timeout=5).content
with open(destination + '/' + title, 'wb') as img_file:
    img_file.write(img_blob)
Check out my python project that downloads images from unsplash.com based on keywords.
You can do something like this:
import requests
import random

url = "https://images.pexels.com/photos/1308881/pexels-photo-1308881.jpeg?auto=compress&cs=tinysrgb&dpr=1&w=500"
name = random.randrange(1, 1000)
filename = str(name) + ".jpg"
response = requests.get(url)
if response.ok:
    with open(filename, 'wb') as f:
        f.write(response.content)
Agree with Blairg23 that using urllib.request.urlretrieve is one of the easiest solutions.
One note I want to point out: sometimes it won't download anything because the request was sent by a script (bot). If you want to parse images from Google Images or other search engines, you need to pass a user-agent in the request headers first and then download the image; otherwise the request will be blocked and will throw an error.
Pass user-agent and download image:
import urllib.request

opener = urllib.request.build_opener()
opener.addheaders = [('User-Agent', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.102 Safari/537.36 Edge/18.19582')]
urllib.request.install_opener(opener)
urllib.request.urlretrieve(URL, 'image_name.jpg')  # URL is the image URL to download
Code in the online IDE that scrapes and downloads images from Google Images using requests, bs4, and urllib.request.
Alternatively, if your goal is to scrape images from search engines like Google, Bing, Yahoo!, DuckDuckGo (and other search engines), then you can use SerpApi. It's a paid API with a free plan.
The biggest difference is that there's no need to figure out how to bypass blocks from search engines or how to extract certain parts from the HTML or JavaScript since it's already done for the end-user.
Example code to integrate:
import os, json, urllib.request
from serpapi import GoogleSearch

params = {
    "api_key": os.getenv("API_KEY"),
    "engine": "google",
    "q": "pexels cat",
    "tbm": "isch"
}

search = GoogleSearch(params)
results = search.get_dict()

print(json.dumps(results['images_results'], indent=2, ensure_ascii=False))

# download images
for index, image in enumerate(results['images_results']):
    # print(f'Downloading {index} image...')
    opener = urllib.request.build_opener()
    opener.addheaders = [('User-Agent', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.102 Safari/537.36 Edge/18.19582')]
    urllib.request.install_opener(opener)
    # saves the original-resolution image to the SerpApi_Images folder, adding the index to the file name
    urllib.request.urlretrieve(image['original'], f'SerpApi_Images/original_size_img_{index}.jpg')
Part of the printed images_results output (truncated):
'''
  ...
  # other images
  {
    "position": 100,  # the 100th image result
    "thumbnail": "https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcQK62dIkDjNCvEgmGU6GGFZcpVWwX-p3FsYSg&usqp=CAU",
    "source": "homewardboundnj.org",
    "title": "pexels-helena-lopes-1931367 - Homeward Bound Pet Adoption Center",
    "link": "https://homewardboundnj.org/upcoming-event/black-cat-appreciation-day/pexels-helena-lopes-1931367/",
    "original": "https://homewardboundnj.org/wp-content/uploads/2020/07/pexels-helena-lopes-1931367.jpg",
    "is_product": false
  }
]
'''
Disclaimer: I work for SerpApi.
Here is some very simple code:
import requests

response = requests.get("https://i.imgur.com/ExdKOOz.png")  # fetch the image
file = open("sample_image.png", "wb")  # create the file for the image
file.write(response.content)  # save the file contents
file.close()
To download an image (writing to a hypothetical picture.jpg):
import requests
Picture_request = requests.get(url)
with open("picture.jpg", 'wb') as f:  # write the image bytes in binary mode
    f.write(Picture_request.content)

python urllib2.openurl doesn't work with specific URL (redirect)?

I need to download a CSV file, which works fine in browsers using:
http://www.ftse.com/objects/csv_to_csv.jsp?infoCode=100a&theseFilters=&csvAll=&theseColumns=Mw==&theseTitles=&tableTitle=FTSE%20100%20Index%20Constituents&dl=&p_encoded=1&e=.csv
The following code works for any other file (URL) with a fully qualified path; however, with the above URL it downloads 800 bytes of gibberish.
def getFile(self, URL):
    proxy_support = urllib2.ProxyHandler({'http': 'http://proxy.REMOVED.com:8080/'})
    opener = urllib2.build_opener(proxy_support)
    urllib2.install_opener(opener)
    response = urllib2.urlopen(URL)
    print response.geturl()
    newfile = response.read()
    output = open("testFile.csv", 'wb')
    output.write(newfile)
    output.close()
urllib2 uses httplib under the hood, so the best way to diagnose this is to turn on http connection debugging. Add this code before you access the url and you should get a nice summary of exactly what http traffic is being generated:
import httplib
httplib.HTTPConnection.debuglevel = 1
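If the trace shows a redirect that sets cookies along the way, one thing worth trying (a hedged sketch, not a confirmed fix for this particular URL) is to add cookie handling and a browser-like User-Agent to the opener:
import cookielib
import urllib2

# Hedged sketch: keep the original proxy handler, but add a cookie jar and a
# browser-like User-Agent so cookies set during the redirect are sent back.
URL = "http://www.ftse.com/objects/csv_to_csv.jsp?infoCode=100a&theseFilters=&csvAll=&theseColumns=Mw==&theseTitles=&tableTitle=FTSE%20100%20Index%20Constituents&dl=&p_encoded=1&e=.csv"
cookies = cookielib.CookieJar()
proxy_support = urllib2.ProxyHandler({'http': 'http://proxy.REMOVED.com:8080/'})
opener = urllib2.build_opener(proxy_support, urllib2.HTTPCookieProcessor(cookies))
opener.addheaders = [('User-agent', 'Mozilla/5.0')]
response = opener.open(URL)
with open("testFile.csv", 'wb') as output:
    output.write(response.read())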
