I want to read a URL in Python, but I get errors with different approaches:
import urllib.request

link = "http://data.europa.eu/esco/isco/C0110"
f = urllib.request.urlopen(link)
myfile = f.read()
print(myfile)
HTTPError: HTTP Error 406: Not Acceptable
import requests

link = "http://data.europa.eu/esco/isco/C0110"
f = requests.get(link)
print(f)
<Response [406]>
Any idea?
In this particular case you can overcome the HTTP 406 by providing appropriate headers, as follows:
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.150 Safari/537.36',
    'Accept-Encoding': '*',
    'Accept': 'text/html',
    'Accept-Language': '*'
}
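With those headers in place, the same request should succeed. A minimal sketch reusing the headers above (the 200 is expected only if the server accepts these headers, as it did for the original answer):

import requests

link = "http://data.europa.eu/esco/isco/C0110"
response = requests.get(link, headers=headers)  # headers as defined above
print(response.status_code)  # expect 200 rather than 406
print(response.text[:500])   # beginning of the returned document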
The link appears broken/invalid because, as per the site, http://data.europa.eu/esco/isco/C0110 is not a URL but a URI (an identifier).
It seems they have an API setup for the data.
You can either:
Check out the API and configure it: https://ec.europa.eu/esco/portal/api
OR
Use a module like BeautifulSoup4 to scrape the page you want the content from (a sketch follows below).
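For the scraping route, here is a minimal sketch with requests plus BeautifulSoup4. The headers are borrowed from the answer above, and dumping all links is just a placeholder; adapt the parsing to the elements you actually need:

import requests
from bs4 import BeautifulSoup

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.150 Safari/537.36',
    'Accept': 'text/html'
}
response = requests.get("http://data.europa.eu/esco/isco/C0110", headers=headers)
soup = BeautifulSoup(response.text, 'html.parser')
print(soup.title)             # sanity check that we received a real page
for a in soup.find_all('a'):  # placeholder: list every link on the page
    print(a.get('href'))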
I'm trying to make a GET request to an API using HTTPBasicAuth.
I've tested the following in Postman, and received the correct response
URL:"https://someapi.data.io"
username:"username"
password:"password"
And this returns me the data I expect, and all is well.
When I try this in Python, however, I get a 403 error back, along with:
{"error_type":"ACCESS DENIED","message":"Please confirm api-key, api-secret, and permission is correct."}
Below is my code:
import requests
from requests.auth import HTTPBasicAuth
URL = 'https://someapi.data.io'
authBasic = HTTPBasicAuth(username='username', password='password')
r = requests.get(URL, auth=authBasic)
print(r)
I honestly can't tell why this isn't working, since the same username and password pass in Postman using HTTPBasicAuth.
You have not sent all the required parameters, and Postman is adding them for you automatically.
To make this work with Python requests, just specify all the required headers yourself, for example:
import requests

headers = {
    'Host': 'sub.example.com',
    'User-Agent': 'Chrome v22.2 Linux Ubuntu',
    'Accept': '*/*',
    'Accept-Encoding': 'gzip, deflate, br',
    'Connection': 'keep-alive',
    'X-Requested-With': 'XMLHttpRequest'
}

url = 'https://sub.example.com'
response = requests.get(url, headers=headers)
It could also be due to the fact that the user agent is not defined. Try the following:
import requests
from requests.auth import HTTPBasicAuth

URL = 'https://someapi.data.io'
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.93 Safari/537.36'}
authBasic = HTTPBasicAuth(username='username', password='password')
r = requests.get(URL, auth=authBasic, headers=headers)
print(r)
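If it still fails, compare what requests actually sent with what Postman sent (Postman shows its outgoing headers in its console). A small sketch for inspecting the outgoing request and the API's answer, reusing the names defined above:

r = requests.get(URL, auth=authBasic, headers=headers)
print(r.status_code)      # e.g. 403
print(r.request.headers)  # headers actually sent, including the Authorization header
print(r.text)             # the error body the API returned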
I understand what Location does in HTTP headers.
Accessing a site with Chrome, I get Location in the response headers.
However, accessing it with Python requests does not return that info.
import requests

headers = {
    'user-agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.106 Safari/537.36',
    'accept': '*/*',
    'accept-language': 'en-US,en;q=0.9,ru-RU;q=0.8,ru;q=0.7,uk;q=0.6,en-GB;q=0.5',
}
response = requests.get('https://ec.ef.com.cn/partner/englishcenters', headers=headers)
print(response.headers)
Does this matter for Scrapy? How do I get that info? I'm asking because I guess it might be a flag the site could use for anti-scraping.
What you see in your screenshot is a response with HTTP status code 302, which usually makes clients (including Python Requests) automatically redirect to another URL, specified in the Location header.
If you enter the URL you shared (https://ec.ef.com.cn/partner/englishcenters) in your browser, you'll see that you get redirected to some other URL. The same behaviour can be observed in your Python code: if you print out response.url, it should return the URL you've been redirected to.
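You can observe the intermediate 302 and its Location header from requests as well, either by disabling redirects or by inspecting the redirect history:

import requests

headers = {'user-agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.106 Safari/537.36'}
url = 'https://ec.ef.com.cn/partner/englishcenters'

# Option 1: don't follow the redirect, so the 302 response itself comes back
response = requests.get(url, headers=headers, allow_redirects=False)
print(response.status_code)              # 302
print(response.headers.get('Location'))  # the redirect target

# Option 2: follow redirects, then inspect the hops that were taken
response = requests.get(url, headers=headers)
for hop in response.history:
    print(hop.status_code, hop.headers.get('Location'))
print(response.url)  # the final URL after redirection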
I'm trying to retrieve and process the results of a web search using requests and beautifulsoup.
I've written some simple code to do the job, and it returns successfully (status = 200), but the content of the response is just an error message: "We're sorry for any inconvenience, but the site is currently unavailable." It has been the same for the last several days. Searching within Firefox returns results without issue, however. I've run the code using a URL for the UK-based site and it works fine, so I wonder if the US site is set up to block attempts to scrape web searches.
Are there ways to mask the fact that I'm attempting to retrieve search results from within Python (e.g. masquerading as a standard search within Firefox), or some other workaround to allow access to the search results?
Code included for reference below:
import pandas as pd
from requests import get
import bs4 as bs
import re

# works
# baseURL = 'https://www.autotrader.co.uk/car-search?sort=sponsored&radius=1500&postcode=ky119sb&onesearchad=Used&onesearchad=Nearly%20New&onesearchad=New&make=TOYOTA&model=VERSO&year-from=1990&year-to=2017&minimum-mileage=0&maximum-mileage=200000&body-type=MPV&fuel-type=Diesel&minimum-badge-engine-size=1.6&maximum-badge-engine-size=4.5&maximum-seats=8'
# doesn't work
baseURL = 'https://www.autotrader.com/cars-for-sale/Certified+Cars/cars+under+50000/Jeep/Grand+Cherokee/Seattle+WA-98101?extColorsSimple=BURGUNDY%2CRED%2CWHITE&maxMileage=45000&makeCodeList=JEEP&listingTypes=CERTIFIED%2CUSED&interiorColorsSimple=BEIGE%2CBROWN%2CBURGUNDY%2CTAN&searchRadius=0&modelCodeList=JEEPGRAND&trimCodeList=JEEPGRAND%7CSRT%2CJEEPGRAND%7CSRT8&zip=98101&maxPrice=50000&startYear=2015&marketExtension=true&sortBy=derivedpriceDESC&numRecords=25&firstRecord=0'

a = get(baseURL)
soup = bs.BeautifulSoup(a.content, 'html.parser')
info = soup.find_all('div', class_='information-container')
price = soup.find_all('div', class_='vehicle-price')

d = []
for idx, i in enumerate(info):
    ii = i.find_next('ul').find_all('li')
    year_ = ii[0].text
    miles = re.sub(r"[^0-9.]", "", ii[2].text)
    engine = ii[3].text
    hp = re.sub(r"[^\d.]", "", ii[4].text)
    p = re.sub(r"[^\d.]", "", price[idx].text)
    d.append([year_, miles, engine, hp, p])

df = pd.DataFrame(d, columns=['year', 'miles', 'engine', 'hp', 'price'])
By default, Requests identifies itself with its own user agent when making requests.
>>> r = requests.get('https://google.com')
>>> r.request.headers
{'User-Agent': 'python-requests/2.22.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
It is possible that the website you are using is trying to avoid scrapers by denying any request with a user agent of python-requests.
To get around this, you can change your user agent when sending a request. Since it works in your browser, simply copy your browser's user agent (you can Google it, or record a request to a web page and copy it from there). For me, it's Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.142 Safari/537.36 (what a mouthful), so I'd set my user agent like this:
>>> headers = {
... 'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.142 Safari/537.36'
... }
and then send the request with the new headers (the new headers are merged into the default headers; they don't replace them unless they have the same name):
>>> r = requests.get('https://google.com', headers=headers) # Using the custom headers we defined above
>>> r.request.headers
{'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.142 Safari/537.36', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
Now we can see that the request was sent with our preferred headers, and hopefully the site won't be able to tell the difference between Requests and a browser.
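Applied to the code in the question, this only means passing the headers to the existing get call. A sketch, reusing the baseURL defined in the question:

from requests import get

headers = {
    'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.142 Safari/537.36'
}
a = get(baseURL, headers=headers)  # baseURL as defined in the question
print(a.status_code)               # check whether the block is gone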
As an exercise, I am trying to scrape data from a dynamic graph using Python. The graph can be found at this link (let's say I want the data from the first one).
Now, I was thinking of doing something like:
import json
import urllib.request

src = 'https://marketchameleon.com/Overview/WFT/IV/#_ABSTRACT_RENDERER_ID_11'

with urllib.request.urlopen(src) as url:
    data = url.read()
reply = json.loads(data)
However, I receive an error message on the last line of the code, saying:
JSONDecodeError: Expecting value
"data" is not empty, so I believe there is a problem with the format of the information within it. Does someone have an idea to solve this issue? Thanks!
I opened that link and see that the site loads data from another URL - https://marketchameleon.com/charts/histStockChartData?p=747&m=12&_=1534060722519
You can call json.loads() twice and do some tricks with headers (urllib2.Request is your friend on Python 2), since the server returns HTTP 500 when you don't imitate a browser:
import json
import urllib.request

src = 'https://marketchameleon.com/charts/histStockChartData?p=747&m=12'

user_agent = {
    'Host': 'marketchameleon.com',
    'Connection': 'keep-alive',
    'Pragma': 'no-cache',
    'Cache-Control': 'no-cache',
    'Upgrade-Insecure-Requests': '1',
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/68.0.3440.106 Safari/537.36',
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8',
    'Accept-Language': 'ru-RU,ru;q=0.9,en-US;q=0.8,en;q=0.7,kk;q=0.6'
}

request = urllib.request.Request(src, headers=user_agent)
data = urllib.request.urlopen(request).read()
print(data)

reply = json.loads(data)
table = json.loads(reply['GTable'])
print(table)
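For comparison, the same double decode is a little shorter with the requests library (this assumes, as above, that the endpoint returns JSON whose GTable field is itself a JSON string):

import json
import requests

src = 'https://marketchameleon.com/charts/histStockChartData?p=747&m=12'
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/68.0.3440.106 Safari/537.36'}

reply = requests.get(src, headers=headers).json()  # first decode: the response body
table = json.loads(reply['GTable'])                # second decode: GTable is a JSON string
print(table)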
I'm just trying to use a simple Python GET request to access JSON data from stats.nba.com. It seems pretty straightforward, as I can enter the URL into my browser and get the results I'm looking for. However, whenever I run this, the program just runs indefinitely. I'm wondering if I have to include some kind of headers information in my GET request.
The code is below:
import requests

url = 'http://stats.nba.com/stats/commonteamroster?LeagueID=00&Season=2017-18&TeamID=1610612756'
response = requests.get(url)
print(response.text)
I have tried to visit the URL you gave. You can add headers to your request to avoid this problem; the minimum information you need to provide is User-Agent, and you can add as much other header information as you like:
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.100 Safari/537.36'}
response = requests.get(url, headers=headers)
The stats.nba.com website needs your 'User-Agent' header information.
You can get your request header information from the Network tab in the browser.
Taking Chrome as an example: press F12, visit the URL, and you can find the relevant request information; the most useful part is the request headers.
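Putting that together into a runnable sketch; the timeout is worth adding here because the original symptom was a request that never returns:

import requests

url = 'http://stats.nba.com/stats/commonteamroster?LeagueID=00&Season=2017-18&TeamID=1610612756'
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.100 Safari/537.36'}

response = requests.get(url, headers=headers, timeout=10)  # fail fast instead of hanging
print(response.json())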
You need to use headers. Try copying from your browser's network tab. Here's what worked for me:
request_headers = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8',
    'Accept-Encoding': 'gzip, deflate',
    'Accept-Language': 'en-US,en;q=0.8',
    'Connection': 'keep-alive',
    'Host': 'stats.nba.com',
    'Upgrade-Insecure-Requests': '1',
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.100 Safari/537.36'
}
And here's the modified get:
response = requests.get(url, headers=request_headers)
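Once the request succeeds, the payload can be unpacked. A hedged sketch: it assumes the response follows the resultSets shape (a list of result sets, each with headers and rowSet) that stats.nba.com endpoints commonly use, so verify against the actual payload before relying on it:

import pandas as pd

payload = response.json()
roster = payload['resultSets'][0]  # assumed shape; confirm by printing payload.keys()
df = pd.DataFrame(roster['rowSet'], columns=roster['headers'])
print(df.head())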