I'm getting a 404 with Python Requests, but I can access the page without any problem through my browser. Other pages that are formatted exactly the same as this one also load fine.
I have already tried changing the headers, with no luck.
My Code:
import requests

string_page = str(page)  # page is the page number to request
with requests.Session() as s:
    resp = s.get('https://bscscan.com/token/generic-tokentxns2?m=normal&contractAddress=0x470862af0cf8d27ebfe0ff77b0649779c29186db&a=&sid=f58c1cdefacc680b799412c7645ed7f7&p=' + string_page)
    page_info = str(resp.text)
    print(page_info)
I have also tried with urllib, and the same thing happens.
I'm not sure if this will fix it, but try adding this User-Agent to the headers; it might work:
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.93 Safari/537.36'
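For example, applied to the session from the question (a minimal sketch; the URL and the page variable are taken from your code):

import requests

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.93 Safari/537.36'
}

string_page = str(page)  # page number, as in the question
with requests.Session() as s:
    s.headers.update(headers)  # every request in this session now sends the User-Agent
    resp = s.get('https://bscscan.com/token/generic-tokentxns2?m=normal&contractAddress=0x470862af0cf8d27ebfe0ff77b0649779c29186db&a=&sid=f58c1cdefacc680b799412c7645ed7f7&p=' + string_page)
    print(resp.status_code)  # check whether the 404 is gone before parsing resp.text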
Goal:
I am trying to scrape the HTML from this page: https://www.doherty.jobs/jobs/search?q=&l=&lat=&long=&d=.
(note - I will eventually want to paginate and scrape all job listings from this page)
My issue:
I get a 503 error when I try to scrape the page using Python and Requests. I am working out of Google Colab.
Initial Code:
import requests
url = 'https://www.doherty.jobs/jobs/search?q=&l=&lat=&long=&d='
response = requests.get(url)
print(response)
Attempted solutions:
Using 'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.131 Safari/537.36'
Implementing this code I found in another thread:
import requests

def getUrl(url):
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2228.0 Safari/537.36',
    }
    res = requests.get(url, headers=headers)
    res.raise_for_status()

getUrl('https://www.doherty.jobs/jobs/search?q=&l=&lat=&long=&d=')
I am able to access the website via my browser.
Is there anything else I can try?
Thank you
That page is protected by Cloudflare. There are some options for trying to bypass it; using cloudscraper seems to work:
import cloudscraper
scraper = cloudscraper.create_scraper()
url = 'https://www.doherty.jobs/jobs/search?q=&l=&lat=&long=&d='
response = scraper.get(url).text
print(response)
In order to use it, you'll need to install it:
pip install cloudscraper
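To confirm it worked, you can check the status code before using the HTML (a small sketch with the same URL; printing a 500-character preview is just for inspection):

import cloudscraper

scraper = cloudscraper.create_scraper()  # behaves like a requests session
url = 'https://www.doherty.jobs/jobs/search?q=&l=&lat=&long=&d='
response = scraper.get(url)
print(response.status_code)  # expect 200 instead of 503 once the Cloudflare check passes
print(response.text[:500])   # preview the first 500 characters of the HTML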
I am using the Python version of Selenium to capture comments on a Chinese website.
The website is https://v.douyu.com/show/kDe0W2q5bB2MA4Bz
I want to find this span element. In Chinese, it is called "弹幕列表" (the bullet-comment list).
I tried the absolute path like:
driver.find_elements_by_xpath('/body/demand-video-app/main/div[2]/demand-video-helper//div/div[1]/a[3]/span')
But it raises a NoSuchElementException. I thought that maybe this site has a protection mechanism, but I don't know much about Selenium and would like to ask for help. Thanks in advance.
I guess you're using Selenium because requests can't capture the value. If that's not what you're trying to do, feel free to skip my answer.
When you fetch requests.get(url='https://v.douyu.com/show/kDe0W2q5bB2MA4Bz'), the comment data isn't in the HTML you get back.
You need to find the API URL that actually serves the data, using the browser's developer tools (F12, Network tab).
In fact, the source of the data is
https://v.douyu.com/wgapi/vod/center/getBarrageListByPage + parameters
↓
https://v.douyu.com/wgapi/vod/center/getBarrageListByPage?vid=kDe0W2q5bB2MA4Bz&forward=0&offset=-1
Although I can't help you solve the Selenium problem, I would get the data with the following approach.
import requests

url = 'https://v.douyu.com/wgapi/vod/center/getBarrageListByPage?vid=kDe0W2q5bB2MA4Bz&forward=0&offset=-1'
headers = {'user-agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.111 Safari/537.36'}
res = requests.get(url=url, headers=headers).json()
print(res)
for i in res['data']['list']:
    print(i)
Get All Data
import requests

headers = {
    'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.88 Safari/537.36'}
url = 'https://v.douyu.com/wgapi/vod/center/getBarrageListByPage?vid=kDe0W2q5bB2MA4Bz&forward=0&offset=-1'
while True:
    res = requests.get(url=url, headers=headers).json()
    # print this page's comments before checking for the end,
    # so the final page is not skipped
    for i in res['data']['list']:
        print(i)
    next_json = res['data']['pre']
    if next_json == -1:  # 'pre' of -1 marks the last page
        break
    url = f'https://v.douyu.com/wgapi/vod/center/getBarrageListByPage?vid=kDe0W2q5bB2MA4Bz&forward=0&offset={next_json}'
Can someone help me solve the problem in this code? It raises an IndexError on the s.select line.
CODE:
from bs4 import BeautifulSoup
import requests
headers = {'User-Agent': "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.61 Safari/537.36"}
url = "https://www.amazon.com/RUNMUS-Surround-Canceling-Compatible-Controller/dp/B07GRM747Y"
resp = requests.get(url, headers=headers)
s = BeautifulSoup(resp.content, features='lxml')
product_title = s.select("#productTitle")[0].get_text().strip()
print(product_title)
If you print the raw response you get, you will see the cause of the error:
import requests
headers = {'User-Agent': "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.61 Safari/537.36"}
url = "https://www.amazon.com/RUNMUS-Surround-Canceling-Compatible-Controller/dp/B07GRM747Y"
resp = requests.get(url, headers=headers)
print(resp.content)
The output you are getting from this request:
b'<!--\n To discuss automated access to Amazon data please contact api-services-support@amazon.com.\n For information about migrating to our APIs refer to our Marketplace APIs...
The site you are sending requests to is not allowing you to access the content with the provided headers. So s.select("#productTitle") returns an empty list, and indexing it is what raises the IndexError.
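One way to fail more gracefully is to check the selector result before indexing it (a minimal sketch building on the code above; the fallback message is my own):

matches = s.select("#productTitle")
if matches:
    print(matches[0].get_text().strip())
else:
    # an empty result usually means the request was blocked or the page layout changed
    print("No #productTitle found; the response is probably a bot-check page")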
I have tried the code below, but it doesn't work.
The PDF is readable in the browser.
I want to GET the PDF file from the GET URL and POST it to another server.
import requests
response = requests.get(url='some url')
requests.post(url='my_url', files={'file': response.content})
Link: (Expired)
It is caused by a missing header, specifically the User-Agent; it looks like the site checks for it.
Without it, the call returns an HTTP 406 (see response.status_code). With the header, an HTTP 200 is returned.
Try this:
import requests
header = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.121 Safari/537.36"}
response = requests.get(url='some url', headers=header)
requests.post(url='my_url', files={'file': response.content})
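If the receiving server cares about the filename or MIME type, you can pass them explicitly; requests accepts a (filename, content, content_type) tuple for each file. A sketch ('some url' and 'my_url' are the placeholders from the question, and the filename is made up):

import requests

header = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.121 Safari/537.36"}
response = requests.get(url='some url', headers=header)
response.raise_for_status()  # stop early if the download failed
# (filename, content, content_type) - the filename here is only an example
requests.post(url='my_url', files={'file': ('document.pdf', response.content, 'application/pdf')})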
r = requests.get(url='https://www.zomato.com')
It gives a timeout error and the Python interpreter just does not respond. I have tried some other sites and they work, but this one does not. Why is that?
Usually, providing a User-Agent header helps a lot when trying to get a web page, because it makes the request look more like a browser visit:
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36 Edge/15.15063'
}
r = requests.get('https://www.zomato.com', headers=headers)
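You can also pass a timeout so the call cannot hang indefinitely when the server never answers (a minimal sketch; the 10-second value is an arbitrary choice):

import requests

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36 Edge/15.15063'
}
# timeout makes requests raise requests.exceptions.Timeout instead of blocking forever
r = requests.get('https://www.zomato.com', headers=headers, timeout=10)
print(r.status_code)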