I'm trying to scrape this website:
https://www.footpatrol.com/
However, the website seems to deny my scraping attempts. Using headers did not help.
from bs4 import BeautifulSoup
import requests
url = "https://www.footpatrol.com/"
headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36'}
r = requests.get(url, headers = headers)
data = r.text
soup = BeautifulSoup(data, 'lxml')
for a in soup.find_all():
    print(a)
This leads to me getting a ConnectionError. How can I fix my code so I can scrape the site?
I'm able to get a response by changing the User Agent to:
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.103 Safari/537.36'}
and the following User Agent also works:
headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.103 Safari/537.36'}
It seems that the Chrome version is the culprit in your User Agent.
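A quick way to verify this is to loop over candidate User-Agent strings and print the status code each one receives. A minimal sketch, reusing the strings from this thread:

import requests

url = "https://www.footpatrol.com/"
candidates = [
    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36',
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.103 Safari/537.36',
    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.103 Safari/537.36',
]
for ua in candidates:
    try:
        r = requests.get(url, headers={'User-Agent': ua}, timeout=10)
        print(r.status_code, ua)  # accepted agents should print 200
    except requests.exceptions.ConnectionError:
        print('connection refused:', ua)  # the server dropped this agent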
Related
I have looked at other questions on Stack Overflow regarding the HTTP 403 error; however, I have not found a solution there. I would like to turn the 403 error into a 200.
I'm trying to scrape this URL: https://angel.co/startups.
import requests
import random

my_session = requests.session()
for_cookies = my_session.get('https://angel.co/startups')
cookies = for_cookies.cookies

user_agents_list = [
    'Mozilla/5.0 (iPad; CPU OS 12_2 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Mobile/15E148',
    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/99.0.4844.83 Safari/537.36',
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/99.0.4844.51 Safari/537.36',
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36',
]

response = my_session.get('https://angel.co/startups', cookies=cookies,
                          headers={'User-Agent': random.choice(user_agents_list)})
print(response.text)
response.status_code  # 403
While running this code I am getting a 403 error instead of the whole HTML page.
Apart from that, I successfully managed to scrape the first page using cloudscraper; however, I have no idea how to scrape the other pages. The pages are numbered 1, 2, 3 ... 2500.
It may be due to Cloudflare or some similar bot protection.
So, use cloudscraper to bypass it:
import cloudscraper
url = "https://angel.co/startups"
scraper = cloudscraper.create_scraper()
response = scraper.get(url)
text = response.text
print(response.status_code)
Output
200
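On the pagination part of the question: a minimal sketch, assuming the listing pages are reachable through a page query parameter (the actual parameter name on angel.co is an assumption here and should be checked against the site's real URLs):

import cloudscraper

scraper = cloudscraper.create_scraper()
for page in range(1, 6):  # extend towards 2500 once this works, ideally with a delay between requests
    # NOTE: '?page=' is an assumed pagination scheme, not confirmed for this site
    response = scraper.get(f"https://angel.co/startups?page={page}")
    print(page, response.status_code, len(response.text))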
Hey everyone, I am trying to scrape this website but for some reason it's not scraping. I'd really appreciate it if someone could give me a hand with this problem. I have tried using a different user agent but it's not working for some reason. For the page content it prints b'' and the soup is empty.
Thanks in advance. Here's my code:
import requests
from bs4 import BeautifulSoup
url = "https://www.carrefourjordan.com/mafjor/en/c/deals?currentPage=1&filter=&nextPageOffset=0&pageSize=60&sortBy=relevance"
headers = {'User-Agent':'test'}
page = requests.get(url,headers=headers)
print(page.content)
soup = BeautifulSoup(page.content, "html.parser")
print(soup)
**These are the 3 different headers I used:**
```
headers = {'User-Agent':'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/36.0.1985.143 Safari/537.36'}

headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36'}

headers = {'User-Agent':'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.76 Safari/537.36', "Upgrade-Insecure-Requests": "1", "DNT": "1", "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8", "Accept-Language": "en-US,en;q=0.5", "Accept-Encoding": "gzip, deflate"}
```
You need to get the right cookies first, so you'll need to use a session:
import requests
from bs4 import BeautifulSoup

headers = {'User-Agent': 'Mozilla/5.0 (compatible; MSIE 8.0; Windows NT 6.2; Trident/5.1)'}
url = "https://www.carrefourjordan.com/mafjor/en/c/deals?currentPage=1&filter=&nextPageOffset=0&pageSize=60&sortBy=relevance"

with requests.session() as s:
    s.headers.update(headers)
    # get the cookies first
    s.get("https://www.carrefourjordan.com")
    page = s.get(url)
    soup = BeautifulSoup(page.text, "html.parser")
    print(soup)
I'm trying to scrape Carrefour website data with Python. I've used Scrapy, Beautiful Soup, and Selenium, but nothing seems to work. I'm getting an error saying I don't have permission to access the site. Is there any way to scrape this website? The code is attached below. NEED HELP!
from requests_html import HTMLSession
session = HTMLSession()
headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.77 Safari/537.36"}
resp = session.get("https://www.carrefour.pk/",headers=headers)
resp.html.render()
a=resp.html.html
print(a)
I think you are using the wrong headers. These headers work fine for me:
headers={'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_5) AppleWebKit/537.36 (KHTML, like Gecko) Cafari/537.36'}
Or full:
import requests
from bs4 import BeautifulSoup as bs
# Block cookies
from http import cookiejar # Python 2: import cookielib as cookiejar
class BlockAll(cookiejar.CookiePolicy):
    return_ok = set_ok = domain_return_ok = path_return_ok = lambda self, *args, **kwargs: False
    netscape = True
    rfc2965 = hide_cookie2 = False
s = requests.Session()
s.cookies.set_policy(BlockAll())
#Get URL
url = "https://www.carrefour.pk"
headers={'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_5) AppleWebKit/537.36 (KHTML, like Gecko) Cafari/537.36'}
r = s.get(url, headers=headers)
soup = bs(r.text, 'html.parser')
print(soup)
Hi, can anyone get this to work? I am trying to scrape sizes from an interactive dropdown selector but keep getting a timeout error.
import requests
from bs4 import BeautifulSoup
headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36'}
soup = BeautifulSoup(requests.get("https://www.asos.com/nike/nike-air-max-95-logo-leather-trainers-in-dark-navy-orange/prd/20750072?colourwayid=60085113", timeout=60.0).content)
print([size.text.strip() for size in soup.find(class_="colour-size select")])
It's because you forgot the headers parameter.
Try again:
headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36'}
soup = BeautifulSoup(requests.get("https://www.asos.com/nike/nike-air-max-95-logo-leather-trainers-in-dark-navy-orange/prd/20750072?colourwayid=60085113",
                                  timeout=60.0,
                                  headers=headers).content)
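With the headers in place, the size extraction from the original question should then work against the returned soup. A small follow-up, with a guard in case the element is missing (e.g. if the request still gets blocked):

sizes = soup.find(class_="colour-size select")
if sizes is not None:
    print([size.text.strip() for size in sizes])
else:
    print("size selector not found; the page may have been served differently")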
I've done a few small successful projects, but I have been struggling to get requests from this website working for ages. Any tips?
UPDATE: I would like to get the full Beautiful Soup response so I can start scraping the information from the tables.
from bs4 import BeautifulSoup
import requests
r = requests.get("http://www.transfermarkt.co.uk/championship/marktwerte/wettbewerb/GB2")
soup = BeautifulSoup(r.content,"html.parser")
print(soup)
returning
<html>
<head><title>404 Not Found</title></head>
<body bgcolor="white">
<center><h1>404 Not Found</h1></center>
<hr><center>nginx</center>
</hr></body>
</html>
You need to pretend to be a real user with a browser and provide a User-Agent header:
r = requests.get("http://www.transfermarkt.co.uk/championship/marktwerte/wettbewerb/GB2", headers={
"User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0.2526.106 Safari/537.36"
})
Demo:
>>> from bs4 import BeautifulSoup
>>> import requests
>>>
>>> r = requests.get("http://www.transfermarkt.co.uk/championship/marktwerte/wettbewerb/GB2", headers={
... "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0.2526.106 Safari/537.36"
... })
>>> soup = BeautifulSoup(r.content,"html.parser")
>>> print(soup.title.get_text())
Top market values 15/16 - Championship - Transfermarkt
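To get at the table data mentioned in the update, here is a minimal sketch that reuses the soup from the demo above, walks every <table> on the page, and prints the text of each row. It deliberately avoids assuming Transfermarkt's specific table classes:

for table in soup.find_all("table"):
    for row in table.find_all("tr"):
        cells = [cell.get_text(strip=True) for cell in row.find_all(["td", "th"])]
        if cells:
            print(cells)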
There are some sites where requests fails to get a response, as many of them track whether the requesting party is a browser or a bot.
So, let us look like a browser.
This can be done by modifying the headers as follows:
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.76 Safari/537.36', "Upgrade-Insecure-Requests": "1","DNT": "1","Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8","Accept-Language": "en-US,en;q=0.5","Accept-Encoding": "gzip, deflate"}
Then simply add this header to your GET request as follows:
response = requests.get("https://example.com",headers=headers)
In total, you will get:
import requests
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.76 Safari/537.36', "Upgrade-Insecure-Requests": "1","DNT": "1","Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8","Accept-Language": "en-US,en;q=0.5","Accept-Encoding": "gzip, deflate"}
response = requests.get("https://example.com",headers=headers)