I have written a script that scrapes a URL. It works fine on Linux, but when I run it on Windows 7 I get an HTTP 503 error saying the URL has some issue.
I am using Python 2.7.11.
Please help.
Below is the script:
import sys      # Used to add the BeautifulSoup folder to the import path
import urllib2  # Used to read the html document

if __name__ == "__main__":
    ### Import Beautiful Soup
    ### Here, I have the BeautifulSoup folder at the level of this Python script
    ### so I need to tell Python where to look.
    sys.path.append("./BeautifulSoup")
    from bs4 import BeautifulSoup

    ### Create opener with Google-friendly user agent
    opener = urllib2.build_opener()
    opener.addheaders = [('User-agent', 'Mozilla/5.0')]

    ### Open page & generate soup
    ### the "start" variable will be used to iterate through 10 pages.
    for start in range(0, 1000):
        url = "http://www.google.com/search?q=site:theknot.com/us/&start=" + str(start*10)
        page = opener.open(url)
        soup = BeautifulSoup(page)

        ### Parse and find
        ### Looks like google contains URLs in <cite> tags.
        ### So for each cite tag on each page (10), print its contents (url)
        file = open("parseddata.txt", "wb")
        for cite in soup.findAll('cite'):
            print cite.text
            file.write(cite.text+"\n")
        # file.flush()
        # file.close()
When you run it on Windows 7, the cmd throws an HTTP 503 error stating the issue is with the URL. The same URL works fine on Linux. In case the URL is actually wrong, please suggest alternatives.
Apparently, with Python 2.7.2 on Windows, any time you set a custom User-agent header, urllib2 doesn't actually send that header (source: https://stackoverflow.com/a/8994498/6479294).
So you might want to consider using requests instead of urllib2 on Windows:
import requests
# ...
page = requests.get(url)
soup = BeautifulSoup(page.text)
# etc...
EDIT: It is also worth pointing out that Google may be blocking your IP; they don't really like bots making 100-odd requests sequentially.
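If you do switch to requests, a rough sketch of the full loop might look like the following; the header value and the delay between pages are just illustrative guesses to stay on Google's good side, not values that Google (or this answer) prescribes:
import time
import requests
from bs4 import BeautifulSoup

headers = {'User-Agent': 'Mozilla/5.0'}
for start in range(0, 10):
    url = "http://www.google.com/search?q=site:theknot.com/us/&start=" + str(start * 10)
    page = requests.get(url, headers=headers)
    soup = BeautifulSoup(page.text, "html.parser")
    for cite in soup.findAll('cite'):
        print cite.text
    time.sleep(5)   # pause between pages to avoid hammering Google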
Related
I have scraped the URL of the picture I want, but when I use the requests module to download it, the server responds with 403 Forbidden.
I have tried capturing the traffic with Chrome's F12 tools: the main page produces many JS responses, while the request for the picture URL just shows up with type Doc.
import requests

# NOTE: a 'headers' dict is used below but was not included in the posted code
lines = [
    'https://i.hamreus.com/ps4/0-9/9%E5%8F%B7%E6%9D%80%E6%89%8B%E6%B9%9B%E8%93%9D%E4%BB%BB%E5%8A%A1[%E9%AB%98%E6%A1%A5%E7%BE%8E%E7%94%B1%E7%BA%AA]/vol_02/seemh-001-a5f6.jpg.webp?cid=121333&md5=7dHbKv51JwzRC6jjd7p3oQ',
    'https://i.hamreus.com/ps4/0-9/9%E5%8F%B7%E6%9D%80%E6%89%8B%E6%B9%9B%E8%93%9D%E4%BB%BB%E5%8A%A1[%E9%AB%98%E6%A1%A5%E7%BE%8E%E7%94%B1%E7%BA%AA]/vol_02/seemh-002-c60d.jpg.webp?cid=121333&md5=7dHbKv51JwzRC6jjd7p3oQ',
    'https://i.hamreus.com/ps4/0-9/9%E5%8F%B7%E6%9D%80%E6%89%8B%E6%B9%9B%E8%93%9D%E4%BB%BB%E5%8A%A1[%E9%AB%98%E6%A1%A5%E7%BE%8E%E7%94%B1%E7%BA%AA]/vol_02/seemh-003-4b8a.jpg.webp?cid=121333&md5=7dHbKv51JwzRC6jjd7p3oQ',
    'https://i.hamreus.com/ps4/0-9/9%E5%8F%B7%E6%9D%80%E6%89%8B%E6%B9%9B%E8%93%9D%E4%BB%BB%E5%8A%A1[%E9%AB%98%E6%A1%A5%E7%BE%8E%E7%94%B1%E7%BA%AA]/vol_02/seemh-004-87ac.jpg.webp?cid=121333&md5=7dHbKv51JwzRC6jjd7p3oQ',
]

def download_pic(url, s):
    r = s.get(url, headers=headers)
    with open(url.split('/')[-1].split('.')[0] + '.jpg', 'wb') as fp:
        fp.write(r.content)   # r.content is a property, not a method

def main():
    s = requests.Session()
    main_url = 'https://www.manhuagui.com/comic/12087/121333.html'
    r = s.get(main_url, headers=headers)   # visit the main page first with the same session
    for each_url in lines:
        download_pic(each_url.strip(), s)

if __name__ == '__main__':
    main()
I can't download the picture I want
Some websites have security provisions against requests from external sources, particularly Python scripts. That is why you are getting the 403 error, and why you will not be able to use either the urllib or the requests module here.
My workaround was to call a shell script from Python and pass it the URL of the image. In the shell script, $1 holds the URL that was passed in, and wget downloads the image, as follows:
Python:
import subprocess

# 'filename' is the path to the shell script below; 'url' is the image URL
subprocess.call([filename, url])
Script (.sh)
wget "$1"
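If you would rather not keep a separate .sh file, the same idea can be sketched directly with subprocess; this still assumes wget is installed and on your PATH:
import subprocess

# call wget directly; 'url' is the image URL you want to download
subprocess.call(["wget", url])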
I am using a simple piece of Python code to fetch a URL and scrape out all the other URLs mentioned in every webpage (all HTML sub-pages, if any, under the home/root page) of that URL. Here is my code:
import urllib
import urllib2
import re
import socks
import socket

socks.setdefaultproxy(socks.PROXY_TYPE_SOCKS5, "127.0.0.1", 9050)
socket.socket = socks.socksocket

req = urllib2.Request('http://www.python.org')

#connect to a URL
try:
    website = urllib2.urlopen(req)
except urllib2.URLError as e:
    print "Error Reason:", e.reason
else:
    #read html code
    html = website.read()
    #use re.findall to get all the links
    links = re.findall('"((http|ftp)s?://.*?)"', html)
    print links
Right now I am getting a simple error where the socks module is not recognized. I figured out that I have to copy socks.py into the correct path under Python's lib/site-packages directory.
I've added the socks module to my code because my Python script was not otherwise able to connect to the URL http://www.python.org. My question is: am I using socks correctly?
Also, will my script take care of all the webpages under the root URL? I want to scrape all URLs from all such webpages under the root URL.
Also, how can I check which port to put in the setdefaultproxy line of my code?
I would suggest using BeautifulSoup for web scraping. Below is the code for a much simpler approach.
import requests
from bs4 import BeautifulSoup

r = requests.get("http://www.python.org")
c = r.content
soup = BeautifulSoup(c, "html.parser")
anchor_list = [a['href'] for a in soup.find_all('a', href=True) if a.text.strip()]
print(anchor_list)
Hope it helps!
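Regarding the sub-pages part of the question: the snippet above only looks at the root page. A rough sketch of one way to extend it, assuming you only want to follow links on the same host (the page limit is an arbitrary cap), could be:
import requests
from bs4 import BeautifulSoup
from urlparse import urljoin, urlparse   # Python 2; use urllib.parse on Python 3

def crawl(root, max_pages=50):
    # breadth-first walk over pages on the same host as 'root'
    seen, queue = set(), [root]
    while queue and len(seen) < max_pages:
        page_url = queue.pop(0)
        if page_url in seen:
            continue
        seen.add(page_url)
        soup = BeautifulSoup(requests.get(page_url).content, "html.parser")
        for a in soup.find_all('a', href=True):
            link = urljoin(page_url, a['href'])
            if urlparse(link).netloc == urlparse(root).netloc:
                queue.append(link)
    return seen

print(crawl("http://www.python.org"))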
This is probably a very simple task, but I cannot find any help. I have a website that takes the form www.xyz.com/somestuff/ID. I have a list of the IDs I need information from. I was hoping to have a simple script to go to the site and download the (complete) web page for each ID, saved in the form ID_whatever_the_default_save_name_is in a specific folder.
Can I run a simple Python script to do this for me? I could do it by hand, it is only 75 different pages, but I was hoping to use this to learn how to do things like this in the future.
Mechanize is a great package for crawling the web with Python. A simple example for your issue would be:
import mechanize

br = mechanize.Browser()
response = br.open("http://www.xyz.com/somestuff/ID")
print response
This simply grabs your URL and prints the response from the server.
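To cover the rest of the question (one saved page per ID), a rough sketch building on this could look like the following; the ID list and file names are placeholders:
import mechanize

ids = ["101", "102", "103"]   # replace with your list of 75 IDs
br = mechanize.Browser()
br.set_handle_robots(False)   # may be needed if the site serves a robots.txt

for page_id in ids:
    response = br.open("http://www.xyz.com/somestuff/" + page_id)
    # save the raw HTML under a file named after the ID
    with open(page_id + ".html", "w") as f:
        f.write(response.read())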
This can be done simply in Python using the urllib module. Here is a simple example in Python 3:
import urllib.request

url = 'http://www.xyz.com/somestuff/ID'
req = urllib.request.Request(url)
page = urllib.request.urlopen(req)
src = page.read()   # read() returns the whole document
print(src)
For more info on the urllib module -> http://docs.python.org/3.3/library/urllib.html
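For the actual task in the question (saving one file per ID), a short sketch along the same lines, with a placeholder ID list, might be:
import urllib.request

ids = ["101", "102", "103"]   # replace with your list of 75 IDs
for page_id in ids:
    page = urllib.request.urlopen("http://www.xyz.com/somestuff/" + page_id)
    # urlopen returns bytes, so write the file in binary mode
    with open(page_id + ".html", "wb") as f:
        f.write(page.read())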
Do you want just the HTML code for the website? If so, just create a url variable with the host site and add the page number as you go. I'll do this as an example with http://www.notalwaysright.com:
import urllib.request

url = "http://www.notalwaysright.com/page/"
for x in range(1, 71):
    newurl = url + str(x)                     # page numbers must be converted to str
    response = urllib.request.urlopen(newurl)
    with open("Page/" + str(x), "wb") as p:   # urlopen returns bytes, so write in binary mode
        p.write(response.read())
Let me start by saying that I know there are a few topics discussing problems similar to mine, but the suggested solutions do not seem to work for me for some reason.
Also, I am new to downloading files from the internet using scripts. Up until now I have mostly used Python as a MATLAB replacement (using numpy/scipy).
My goal:
I want to automatically download a lot of .csv files from an internet database (http://dna.korea.ac.kr/vhot/) using Python. I want to do this because it is too cumbersome to download the 1000+ csv files I require by hand. The database can only be accessed through a UI, where you have to select several options from drop-down menus to finally end up with links to .csv files after some steps.
I have figured out that the url you get after filling out the drop-down menus and pressing 'search' contains all the parameters of the drop-down menus. This means I can just change those instead of using the drop-down menus, which helps a lot.
An example url from this website is (let's call it url1):
url1 = http://dna.korea.ac.kr/vhot/search.php?species=Human&selector=drop&mirname=&mirname_drop=hbv-miR-B2RC&pita=on&set=and&miranda_th=-5&rh_th=-10&ts_th=0&mt_th=7.3&pt_th=99999&gene=
On this page I can select 5 csv files; one example directs me to the following url:
url2 = http://dna.korea.ac.kr/vhot/download.php?mirname=hbv-miR-B2RC&species_filter=species_id+%3D+9606&set=and&gene_filter=&method=pita&m_th=-5&rh_th=-10&ts_th=0&mt_th=7.3&pt_th=99999&targetscan=&miranda=&rnahybrid=µt=&pita=on
However, this doesn't contain the csv file directly; it appears to be a 'redirect' (a new term for me that I found by googling, so correct me if I am wrong).
One strange thing: I appear to have to load url1 in my browser before I can access url2 (I do not know if it has to be on the same day, or within the hour; url2 didn't work for me today even though it did yesterday, and only after accessing url1 did it work again...). If I do not access url1 before url2, I get "no results" instead of my csv file in my browser. Does anyone know what is going on here?
However, my main problem is that I cannot save the csv files from python.
I have tried using the packages urllib, urllib2 and requests, but I cannot get it to work.
From what I understand, the requests package should take care of redirects, but I haven't been able to make it work.
The solutions from the following web pages do not appear to work for me (or I am messing up):
stackoverflow.com/questions/7603044/how-to-download-a-file-returned-indirectly-from-html-form-submission-pyt
stackoverflow.com/questions/9419162/python-download-returned-zip-file-from-url
techniqal.com/blog/2008/07/31/python-file-read-write-with-urllib2/
Some of the things I have tried include:
import urllib2
import csv
import sys
url = 'http://dna.korea.ac.kr/vhot/download.php?mirname=hbv-miR-B2RC&species_filter=species_id+%3D+9606&set=or&gene_filter=&method=targetscan&m_th=-5&rh_th=-10&ts_th=0&mt_th=7.3&pt_th=-10&targetscan=on&miranda=&rnahybrid=µt=&pita='
#1
u = urllib2.urlopen(url)
localFile = open('file.csv', 'w')
localFile.write(u.read())
localFile.close()
#2
req = urllib2.Request(url)
res = urllib2.urlopen(req)
finalurl = res.geturl()
pass
# finalurl = 'http://dna.korea.ac.kr/vhot/download.php?mirname=hbv-miR-B2RC&species_filter=species_id+%3D+9606&set=or&gene_filter=&method=targetscan&m_th=-5&rh_th=-10&ts_th=0&mt_th=7.3&pt_th=-10&targetscan=on&miranda=&rnahybrid=µt=&pita='
#3
import requests
r = requests.get(url)
r.content
pass
# r.content = "<script>location.replace('download_send.php?name=qgN9Th&type=targetscan');</script>"
#4
import requests
r = requests.get(url,
                 allow_redirects=True,
                 data={'download_open': 'Download', 'format_open': '.csv'})
print r.content
# r.content = "
#5
import urllib
test1 = urllib.urlretrieve(url, "test.csv")
test2 = urllib.urlopen(url)
pass
For #2, #3 and #4 the outputs are shown as comments after the code.
For #1 and #5 I just get a .csv file containing </script>.
Option #3 just gives me a new redirect, I think; can this help me?
Can anybody help me with my problem?
The page does not send an HTTP redirect; instead the redirect is done via JavaScript.
urllib and requests do not process JavaScript, so they cannot follow it to the download URL.
You have to extract the final download URL yourself and then open it using any of the methods you tried.
You could extract the URL using the re module with a regex like r'location.replace\((.*?)\)'.
Based on the response from ch3ka, I think I got it to work. From the source code I get the JavaScript redirect, and from this redirect I can get the data.
import re
import requests

# Find the source code (it contains the JavaScript redirect)
redirect = requests.get(url).content

# Search for the JavaScript redirect in the source code
# --> based on the answer by ch3ka
m = re.search(r"location.replace\(\'(.*?)\'\)", redirect).group(1)

# Now build the real url from this redirect and use it to get the data
# (the redirect target is relative to the /vhot/ directory)
new_url = 'http://dna.korea.ac.kr/vhot/' + m
data = requests.get(new_url).content
I'm new to web scraping with Python, so I don't know if I'm doing this right.
I'm using a script that calls BeautifulSoup to parse the URLs from the first 10 pages of a Google search. Tested with stackoverflow.com, it worked just fine out of the box. I tested another site a few times, trying to see if the script really worked with higher Google page requests, and then it 503'd on me. I switched to another URL to test; it worked for a couple of low-page requests, then also 503'd. Now every URL I pass to it is 503'ing. Any suggestions?
import sys      # Used to add the BeautifulSoup folder to the import path
import urllib2  # Used to read the html document

if __name__ == "__main__":
    ### Import Beautiful Soup
    ### Here, I have the BeautifulSoup folder at the level of this Python script
    ### so I need to tell Python where to look.
    sys.path.append("./BeautifulSoup")
    from BeautifulSoup import BeautifulSoup

    ### Create opener with Google-friendly user agent
    opener = urllib2.build_opener()
    opener.addheaders = [('User-agent', 'Mozilla/5.0')]

    ### Open page & generate soup
    ### the "start" variable will be used to iterate through 10 pages.
    for start in range(0, 10):
        url = "http://www.google.com/search?q=site:stackoverflow.com&start=" + str(start*10)
        page = opener.open(url)
        soup = BeautifulSoup(page)

        ### Parse and find
        ### Looks like google contains URLs in <cite> tags.
        ### So for each cite tag on each page (10), print its contents (url)
        for cite in soup.findAll('cite'):
            print cite.text
Automated querying is not permitted by Google's Terms of Service.
See this article for information:
Unusual traffic from your computer
and also the Google Terms of Service.
As Ettore said, scraping the search results is against our ToS. However, check out the WebSearch API, specifically the bottom section of the documentation, which should give you a hint about how to access the API from non-JavaScript environments.
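For what it's worth, here is a very rough sketch of querying a JSON search API instead of scraping the results page. The endpoint and response fields below follow the old AJAX Web Search API as I remember it and may well have changed, so treat every URL and field name here as an assumption to be checked against the WebSearch documentation:
import json
import urllib
import urllib2

# NOTE: endpoint, parameters and response fields are assumptions based on the
# old AJAX Web Search API; verify them against the current documentation.
query = urllib.quote_plus("site:stackoverflow.com")
api_url = "https://ajax.googleapis.com/ajax/services/search/web?v=1.0&q=" + query
results = json.load(urllib2.urlopen(api_url))
for hit in results["responseData"]["results"]:
    print hit["url"]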